On Tue, May 2, 2017 at 12:04 AM, Gandalf Corvotempesta
<gandalf.corvotempesta at gmail.com> wrote:

> 2017-05-01 20:30 GMT+02:00 Shyam <srangana at redhat.com>:
> > Yes, as a matter of fact, you can do this today using the CLI and
> > creating nx2 instead of 1x2. 'n' is best decided by you, depending on
> > the growth potential of your cluster, as at some point 'n' won't be
> > enough if you grow by some nodes.
> >
> > But, when a brick is replaced we will fail to address "(a) ability to
> > retain replication/availability levels" as we support only homogeneous
> > replication counts across all DHT subvols. (I could be corrected on
> > this when using replace-brick though)
>
> Yes, but this is error prone.

Why?

--
Pranith
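As a rough illustration of the nx2 layout Shyam mentions, a 3x2
distribute-replicate volume might be created along these lines (volume
name, hostnames, and brick paths below are hypothetical, not from the
thread):

  # Three replica pairs (3x2) instead of a single 1x2 pair;
  # consecutive bricks on the command line form a replica set.
  gluster volume create testvol replica 2 \
      server1:/data/brick1 server2:/data/brick1 \
      server3:/data/brick2 server4:/data/brick2 \
      server5:/data/brick3 server6:/data/brick3
  gluster volume start testvol

Growing beyond the chosen 'n' later means adding bricks in full replica
pairs, which is why Shyam notes that at some point 'n' won't be enough
if the cluster grows by some nodes.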
On 05/01/2017 11:36 AM, Pranith Kumar Karampuri wrote:

> On Tue, May 2, 2017 at 12:04 AM, Gandalf Corvotempesta
> <gandalf.corvotempesta at gmail.com> wrote:
>
> > 2017-05-01 20:30 GMT+02:00 Shyam <srangana at redhat.com>:
> > > Yes, as a matter of fact, you can do this today using the CLI and
> > > creating nx2 instead of 1x2. 'n' is best decided by you, depending
> > > on the growth potential of your cluster, as at some point 'n' won't
> > > be enough if you grow by some nodes.
> > >
> > > But, when a brick is replaced we will fail to address "(a) ability
> > > to retain replication/availability levels" as we support only
> > > homogeneous replication counts across all DHT subvols. (I could be
> > > corrected on this when using replace-brick though)
> >
> > Yes, but this is error prone.
>
> Why?

Because it's done by humans.

> > I'm still thinking that saving (I don't know where, I don't know how)
> > a mapping between files and bricks would solve many issues and add
> > much more flexibility.
>
> --
> Pranith
On 05/01/2017 02:36 PM, Pranith Kumar Karampuri wrote:

> On Tue, May 2, 2017 at 12:04 AM, Gandalf Corvotempesta
> <gandalf.corvotempesta at gmail.com> wrote:
>
> > 2017-05-01 20:30 GMT+02:00 Shyam <srangana at redhat.com>:
> > > Yes, as a matter of fact, you can do this today using the CLI and
> > > creating nx2 instead of 1x2. 'n' is best decided by you, depending
> > > on the growth potential of your cluster, as at some point 'n' won't
> > > be enough if you grow by some nodes.
> > >
> > > But, when a brick is replaced we will fail to address "(a) ability
> > > to retain replication/availability levels" as we support only
> > > homogeneous replication counts across all DHT subvols. (I could be
> > > corrected on this when using replace-brick though)
> >
> > Yes, but this is error prone.
>
> Why?

To add to Pranith's question (and to touch a raw nerve, my apologies):
there is no rebalance in this situation (yet), if you notice. I do agree
that for the duration a brick is being replaced its replication count is
down by 1; is that your concern? In that case I do note that without (a)
above, availability is at risk during the operation, which needs other
strategies/changes to ensure tolerance to errors/faults.

> > I'm still thinking that saving (I don't know where, I don't know how)
> > a mapping between files and bricks would solve many issues and add
> > much more flexibility.
>
> --
> Pranith
2017-05-01 20:36 GMT+02:00 Pranith Kumar Karampuri <pkarampu at redhat.com>:

> Why?

Because you have to manually replace the failed brick with the new one,
format the old one, and add it back. What happens if, by mistake, we
replace the old brick with another brick on the same disk? Currently you
only have to check for proper placement per server; with this workaround
you also have to check brick placement on each disk. You add a level, and
thus increase the number of moving parts and operations that may go wrong.
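As a sketch of the manual workflow being described (volume, host, and
brick names are hypothetical and match the earlier sketch), the
replacement itself is a single command, and nothing in it checks where
the new brick actually lives:

  # Replace the failed brick with a new one (hypothetical names).
  gluster volume replace-brick testvol \
      server2:/data/brick1 server2:/data/brick1-new commit force

  # Nothing above verifies that brick1-new is on a different disk from
  # the surviving replica on server1; picking the wrong mount point here
  # is exactly the human error being described. The old brick's disk
  # then has to be wiped by hand before it can be reused, e.g.:
  mkfs.xfs -f /dev/sdX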