Tomer Paretsky
2016-Apr-27 12:08 UTC
[Gluster-users] "gluster volume heal full" locking all files after adding a brick
Hi all,

I am currently running two replica 3 volumes acting as storage for VM images. Due to some issues with GlusterFS over an ext4 filesystem (kernel panics), I tried removing one of the bricks from each volume on a single server and then re-adding them after reformatting the underlying partition to XFS, on only one of the hosts for testing purposes.

The commands used were:

1) gluster volume remove-brick gv1 replica 2 <server1>:/storage/gv1/brk force
2) gluster volume remove-brick gv2 replica 2 <server1>:/storage/gv2/brk force

3) reformatted /storage/gv1 and /storage/gv2 to XFS (these are the local/physical mount points of the Gluster bricks)

4) gluster volume add-brick gv1 replica 3 <server1>:/storage/gv1/brk
5) gluster volume add-brick gv2 replica 3 <server1>:/storage/gv2/brk

So far, so good -- both bricks were successfully re-added to the volumes.

6) gluster volume heal gv1 full
7) gluster volume heal gv2 full

The heal operation started and I can see files being replicated onto the newly added bricks. BUT -- all the files on the two nodes which were not touched are now locked (read-only), I presume until the heal operation finishes and replicates all the files to the newly added bricks (which might take a while).

As far as I understood the documentation of the healing process, the files should not have been locked at all. Or am I missing something fundamental here?

Is there a way to prevent locking of the source files during a heal full operation?

Is there a better way to perform the process I just described?

Your help is enormously appreciated.

Cheers,
Tomer Paretsky
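For reference, the progress of a full heal started this way can be followed from any of the servers; a minimal sketch, assuming a GlusterFS 3.7-era CLI and the volume names used above:

    # list entries still pending heal on each brick of gv1
    gluster volume heal gv1 info

    # summary count of pending entries per brick
    gluster volume heal gv1 statistics heal-count

The same commands apply to gv2. Neither command changes any state; they only query the self-heal daemon.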
Alastair Neil
2016-Apr-27 16:25 UTC
[Gluster-users] "gluster volume heal full" locking all files after adding a brick
What are the quorum settings on the volumes?

On 27 April 2016 at 08:08, Tomer Paretsky <tomerp at wirexsystems.com> wrote:

> is there a way to prevent locking of the source files during a heal full
> operation?
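For anyone checking the same thing, the client- and server-side quorum options can be read per volume; a minimal sketch, assuming a GlusterFS release with the "volume get" subcommand (3.7 and later) and the volume names from the original post:

    # client-side quorum (controls when a client treats a replica as read-only)
    gluster volume get gv1 cluster.quorum-type
    gluster volume get gv1 cluster.quorum-count

    # server-side quorum (controls when glusterd stops bricks)
    gluster volume get gv1 cluster.server-quorum-type

    # or simply list any non-default options set on the volume
    gluster volume info gv1

With replica 3 volumes holding VM images, cluster.quorum-type is commonly set to auto, in which case the replica stays writable only while a majority of its bricks are up and healthy.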