No takers on this one?

On 22/06/15 14:37, John Gardeniers wrote:
> Until last weekend we had a simple 1x2 replicated volume, consisting
> of a single brick on each peer. After a drive failure screwed the
> brick on one peer, we decided to create a new peer and swap the
> bricks, running "gluster volume replace-brick gluster-rhev
> dead_peer:/gluster_brick_1 new_peer:/gluster_brick_1 commit force".
>
> After trying for some time and not wishing to rely on a single peer,
> we added kari as an additional replica with "gluster volume add-brick
> gluster-rhev replica 3 new_peer:/gluster_brick_1 force".
>
> Can we now *safely* remove the dead brick and revert back to replica 2?
>
> regards,
> John
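Before deciding anything it helps to see what state the volume is actually in. A minimal sketch, reusing the volume name from the thread (the brick list and output will of course depend on the actual setup):

    # Show the current brick list and replica count of the volume
    gluster volume info gluster-rhev

    # List entries still pending self-heal on each brick
    gluster volume heal gluster-rhev info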
On 06/25/2015 03:07 AM, John Gardeniers wrote:
> No takers on this one?
>
> On 22/06/15 14:37, John Gardeniers wrote:
>> Until last weekend we had a simple 1x2 replicated volume, consisting
>> of a single brick on each peer. After a drive failure screwed the
>> brick on one peer, we decided to create a new peer and swap the
>> bricks, running "gluster volume replace-brick gluster-rhev
>> dead_peer:/gluster_brick_1 new_peer:/gluster_brick_1 commit force".

Did the replace-brick succeed? Note that running "replace-brick commit
force" can result in data loss unless you explicitly take care of the
data yourself.

>> After trying for some time and not wishing to rely on a single peer,
>> we added kari as an additional replica with "gluster volume add-brick
>> gluster-rhev replica 3 new_peer:/gluster_brick_1 force".
>>
>> Can we now *safely* remove the dead brick and revert back to replica 2?

If the earlier replace-brick didn't go through, you can run
"remove-brick start" followed by "commit" once the status shows
completed. But double-check the data as well.

>> regards,
>> John

--
~Atin
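A sketch of the remove-brick sequence Atin describes, assuming the brick being dropped is the dead one from the original mail; note that when shrinking a replica set some gluster releases insist on "force" instead of the start/status/commit cycle, so treat this as an outline rather than a guaranteed recipe:

    # Start removing the dead brick while dropping the volume to replica 2
    gluster volume remove-brick gluster-rhev replica 2 dead_peer:/gluster_brick_1 start

    # Poll until the operation reports "completed"
    gluster volume remove-brick gluster-rhev replica 2 dead_peer:/gluster_brick_1 status

    # Finalise the removal once the status shows completed
    gluster volume remove-brick gluster-rhev replica 2 dead_peer:/gluster_brick_1 commit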
On 06/25/2015 03:07 AM, John Gardeniers wrote:
> No takers on this one?

Hi John,

If you either replace a brick of a replica or increase the replica
count by adding another brick, you will need to run `gluster volume
heal <volname> full` to sync the data onto the new/replaced brick.

If you are running glusterfs 3.6 or newer, there is a bug [1] (a fix is
under way) in `heal <volname> full` due to which the heal won't be
triggered in some scenarios. In that case you can trigger lookups from
the mount (find /mount-point | xargs stat); lookups also trigger heals.

You can then remove the replaced brick once there are no more pending
heals.

HTH,
Ravi

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1112158
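Putting Ravi's steps together as a rough sketch; the client mount point /mnt/gluster-rhev is only an example path, substitute your own:

    # Ask the self-heal daemon to sync everything onto the new/replaced brick
    gluster volume heal gluster-rhev full

    # If the full heal is not triggered (the bug tracked in [1]), walk the
    # volume from a client mount instead; the lookups queue heals as a side effect
    find /mnt/gluster-rhev | xargs stat > /dev/null

    # Only remove the old brick once nothing is left pending here
    gluster volume heal gluster-rhev info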