Gandalf Corvotempesta
2016-Nov-09 06:53 UTC
[Gluster-users] Automation of single server addition to replica
On 9 Nov 2016 at 1:23 AM, "Joe Julian" <joe at julianfamily.org> wrote:
>
> Replicas are defined in the order bricks are listed in the volume create
> command. So gluster volume create myvol replica 2 server1:/data/brick1
> server2:/data/brick1 server3:/data/brick1 server4:/data/brick1 will
> replicate between server1 and server2 and replicate between server3 and
> server4.
>
> See also
> https://joejulian.name/blog/how-to-expand-glusterfs-replicated-clusters-by-one-server/
>

I really hope this could be automated in newer Gluster versions. There is
almost no sense in making a replica on the same server, so Gluster should
automatically move bricks to preserve data consistency when adding servers.

Ceph does this by moving objects around, and you don't have to add servers
in a multiple of the replica count.

The rebalance command could be used to rebalance newly added bricks while
preserving the replicas in a proper state.
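To make the approach from the blog post concrete, the building blocks look
roughly like this; the volume and brick names are hypothetical, this is one
possible sequence rather than the exact steps from the post:

    # Create a 2x2 distributed-replicated volume; bricks adjacent on the
    # command line form the replica pairs: (server1,server2), (server3,server4)
    gluster volume create myvol replica 2 \
        server1:/data/brick1 server2:/data/brick1 \
        server3:/data/brick1 server4:/data/brick1

    # One way to bring in a single server5 without breaking replication:
    # move one brick of an existing pair onto the new server, then form a
    # fresh pair from second bricks on the freed server and the new server.
    gluster volume replace-brick myvol server2:/data/brick1 \
        server5:/data/brick1 commit force
    gluster volume add-brick myvol \
        server2:/data/brick2 server5:/data/brick2

    # Finally, spread existing data onto the new replica pair:
    gluster volume rebalance myvol start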
Lopez, Dan-Joe
2016-Nov-09 18:21 UTC
[Gluster-users] Automation of single server addition to replica
Thanks Joe and Gandalf!

I've looked at the blog post that you wrote, Joe, but it seems to cover a
more complicated scenario than the one I am working with. We have a
`replica n` volume, and I want to make it a `replica n+1` volume. Is that
possible?

Dan-Joe Lopez

From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Gandalf Corvotempesta
Sent: Tuesday, November 8, 2016 10:54 PM
To: Joe Julian <joe at julianfamily.org>
Cc: gluster-users at gluster.org
Subject: Re: [Gluster-users] Automation of single server addition to replica

[quoted message snipped]
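In case it helps frame the question, what I am hoping for is something
along these lines, with hypothetical volume and brick names, going from
replica 2 on two servers to replica 3 by adding server3:

    # Raise the replica count while adding one new brick:
    gluster volume add-brick myvol replica 3 server3:/data/brick1

    # Then populate the new brick via self-heal:
    gluster volume heal myvol full

(For a distributed-replicated volume, one new brick per replica set would
presumably be needed.)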
Joe Julian
2016-Nov-09 18:32 UTC
[Gluster-users] Automation of single server addition to replica
On 11/08/2016 10:53 PM, Gandalf Corvotempesta wrote:
> [snip]
>
> I really hope this could be automated in newer Gluster versions. There
> is almost no sense in making a replica on the same server, so Gluster
> should automatically move bricks to preserve data consistency when
> adding servers.
>
> Ceph does this by moving objects around, and you don't have to add
> servers in a multiple of the replica count.
>

Yes, and Ceph has a metadata server to manage this, which breaks horribly
if you have a cascading failure where your SAS expanders start dropping
drives when the throughput reaches the max bandwidth (not that I've
/ever/ had that problem... <sigh>). The final straw in that failure
scenario was that the database could never converge between all the
monitors while the objects were moving around, and eventually all 5
monitors ran out of database space, losing the object map and all the
data.

I'm not blaming Ceph for that failure, but just pointing out that
Gluster's lack of a metadata server is part of its design philosophy,
which serves a specific engineering requirement that Ceph does not
fulfill. Luckily, we have both tools to use where they're each most
appropriate.

> The rebalance command could be used to rebalance newly added bricks
> while preserving the replicas in a proper state.
>