I have two peers with two bricks on each of them, and each pair of bricks (one on each peer) makes up a replicate volume; the two replicate volumes together make up a distribute volume. I mount the volume on a client and start a vm (virtual machine).

When a peer restarts (shutdown -r now), my questions are:

1. How does glusterfs deal with the data on that peer? Does the restart trigger a data-healing process, or some other process to keep the data (on the different peers of the same replicate volume) identical?
2. If it does, can you show me a function in the source code?

thanks,

yz
Joe Julian
2016-Mar-19 17:06 UTC
[Gluster-users] how to deal with data when a peer restart
Yes, gluster does handle this. /How/ it handles this I don't have time to write this morning. As for the source code, look at the afr translator (xlators/cluster/afr/src).

On 03/19/2016 09:26 AM, ?? wrote:
> I have two peers with two bricks on each of them, and each pair of
> bricks (one on each peer) makes up a replicate volume; the two
> replicate volumes together make up a distribute volume.
>
> I mount the volume on a client and start a vm (virtual machine).
>
> When a peer restarts (shutdown -r now), my questions are:
>
> 1. How does glusterfs deal with the data on that peer? Does the
> restart trigger a data-healing process, or some other process to
> keep the data (on the different peers of the same replicate volume)
> identical?
> 2. If it does, can you show me a function in the source code?
>
> thanks,
>
> yz
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
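As a practical complement to the afr source pointer: after the rebooted peer's brick processes come back, the self-heal daemon re-syncs files that changed while the brick was down, and the standard gluster CLI lets you observe and trigger that. A minimal sketch, assuming a volume named "myvol" (a placeholder; substitute your own volume name):

```shell
# List files still marked as needing heal on each brick of the
# replicate sets ("myvol" is a placeholder volume name).
gluster volume heal myvol info

# Trigger a heal of the files marked as needing it, without waiting
# for the self-heal daemon's next automatic crawl.
gluster volume heal myvol

# Force a full heal, which crawls and compares all files rather than
# only those already marked dirty.
gluster volume heal myvol full
```

Run these from any peer in the cluster; once "heal info" reports zero entries per brick, the replicas are identical again.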