2017-04-24 10:21 GMT+02:00 Pranith Kumar Karampuri <pkarampu at redhat.com>:
> Are you suggesting this process to be easier through commands, rather than
> for administrators to figure out how to place the data?
>
> [1] http://lists.gluster.org/pipermail/gluster-users/2016-July/027431.html

The admin should always have the ability to choose where to place data, but something easier should be added, as in any other SDS. Something like:

gluster volume add-brick gv0 new_brick

If gv0 is a replicated volume, add-brick should automatically add the new brick and rebalance the data, still keeping the required redundancy level. If the admin wants a custom data placement instead, they should have to specify a "force" argument or something similar.

tl;dr: by default, gluster should preserve data redundancy and allow users to add single bricks without having to think about how to place the data. This would make gluster far easier to manage and much less error prone, and thus increase the resiliency of the whole cluster. After all, if you have a replicated volume, it is obvious that you want your data replicated, and gluster should manage this on its own.

Is this something you are planning or considering for future implementation? I know that the lack of a metadata server (a HUGE advantage for gluster) means less flexibility, but since there is a manual workaround for adding single bricks, gluster should be able to handle this automatically.
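To make the comparison concrete, here is roughly what I mean (brick paths are invented and the exact syntax may differ between gluster releases):

# Today, on a replica 2 volume, bricks have to be added in multiples of 2,
# forming a new replica pair, and data only spreads after a manual rebalance:
gluster volume add-brick gv0 server3:/bricks/b1 server4:/bricks/b1
gluster volume rebalance gv0 start

# A single brick can only be added by raising the replica count,
# which adds another copy rather than more usable capacity:
gluster volume add-brick gv0 replica 3 server3:/bricks/b1

# Proposed behaviour (does not exist today): hand gluster one brick and let it
# work out placement and rebalance on its own while preserving replica 2:
gluster volume add-brick gv0 server3:/bricks/b1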
Anyway, the proposed workaround:
https://joejulian.name/blog/how-to-expand-glusterfs-replicated-clusters-by-one-server/
won't work with just a single volume made up of 2 replicated bricks.

If I have a replica 2 volume with server1:brick1 and server2:brick1, how can I add server3:brick1? I don't have any bricks to "replace" (more on this in the sketch below the quote).

This is something I would like to see implemented in gluster.

2017-04-29 16:08 GMT+02:00 Gandalf Corvotempesta <gandalf.corvotempesta at gmail.com>:
> 2017-04-24 10:21 GMT+02:00 Pranith Kumar Karampuri <pkarampu at redhat.com>:
>> Are you suggesting this process to be easier through commands, rather than
>> for administrators to figure out how to place the data?
>
> [...]
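To spell out why I think the workaround needs more than one brick per server (I may be misreading the post, so treat this only as a sketch; hostnames, paths and the exact replace-brick syntax vary by setup and gluster version):

# Starting point, replica 2 with two bricks per server:
#   pair 1: server1:/bricks/b1  server2:/bricks/b1
#   pair 2: server1:/bricks/b2  server2:/bricks/b2
gluster peer probe server3

# One brick of an existing pair can be moved onto the new server
# (self-heal then copies its data back up to two replicas):
gluster volume replace-brick gv0 server2:/bricks/b2 server3:/bricks/b1 commit force

# That frees capacity on server2 that can later join a new pair via add-brick,
# followed by a rebalance to spread the data.

# With only server1:brick1 and server2:brick1, both bricks are needed to keep
# two copies, so there is no spare brick to move and nothing to "replace".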
On Apr 29, 2017 4:12 PM, "Gandalf Corvotempesta" <gandalf.corvotempesta at gmail.com> wrote:
> Anyway, the proposed workaround:
> https://joejulian.name/blog/how-to-expand-glusterfs-replicated-clusters-by-one-server/
> won't work with just a single volume made up of 2 replicated bricks.
>
> If I have a replica 2 volume with server1:brick1 and server2:brick1, how can I add server3:brick1? I don't have any bricks to "replace".

Can someone confirm this? Is it possible to use the method described by Joe even with only 3 bricks? What if I would like to add a fourth? I'm really asking, not criticizing.
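(For comparison, going from two servers straight to four looks like the easy case, if I understand the docs correctly, since a whole replica pair can be added at once; hostnames and paths below are invented:)

gluster peer probe server3
gluster peer probe server4
gluster volume add-brick gv0 server3:/bricks/b1 server4:/bricks/b1
gluster volume rebalance gv0 start

The open question remains the odd step in between: going from 2 servers to 3 with a single new brick.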