Daniel Berteaud
2017-Nov-15 07:24 UTC
[Gluster-users] Help with reconnecting a faulty brick
On 13/11/2017 at 21:07, Daniel Berteaud wrote:
>
> On 13/11/2017 at 10:04, Daniel Berteaud wrote:
>>
>> Could I just remove the content of the brick (including the
>> .glusterfs directory) and reconnect?
>>
>
> In fact, what would be the difference between reconnecting the brick
> with a wiped FS, and using
>
> gluster volume remove-brick vmstore replica 1 master1:/mnt/bricks/vmstore
> gluster volume add-brick vmstore replica 2 master1:/mnt/bricks/vmstore
> gluster volume heal vmstore full
>
> As explained here:
> http://lists.gluster.org/pipermail/gluster-users/2014-January/015533.html
>

No one can help?

Cheers,
Daniel

--
*Daniel Berteaud*
FIREWALL-SERVICES SAS.
Société de Services en Logiciels Libres
Tel : 05 56 64 15 32
Matrix: @dani:fws.fr
www.firewall-services.com
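PS: whichever route I end up taking, I suppose I can watch the resync
with something like this (just a sketch, using the volume name above):

  # summary count of entries still needing heal, per brick
  gluster volume heal vmstore statistics heal-count

  # detailed list of entries pending heal
  gluster volume heal vmstore info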
Ravishankar N
2017-Nov-15 08:45 UTC
[Gluster-users] Help with reconnecting a faulty brick
On 11/15/2017 12:54 PM, Daniel Berteaud wrote:
>
> On 13/11/2017 at 21:07, Daniel Berteaud wrote:
>>
>> On 13/11/2017 at 10:04, Daniel Berteaud wrote:
>>>
>>> Could I just remove the content of the brick (including the
>>> .glusterfs directory) and reconnect?
>>>

If it is only the brick that is faulty on the bad node, but everything
else is fine (glusterd running, the node being a part of the trusted
storage pool, etc.), you could just kill the brick first and do step 13
in "10.6.2. Replacing a Host Machine with the Same Hostname" (the mkdir
of a non-existent dir, followed by setfattr of a non-existent key) of
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/pdf/Administration_Guide/Red_Hat_Storage-3.1-Administration_Guide-en-US.pdf,
then restart the brick by restarting glusterd on that node. Read
sections 10.5 and 10.6 in the doc to get a better understanding of
replacing bricks.
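For reference, step 13 boils down to roughly the following, run on a
client that has the volume FUSE-mounted (the mount point and directory
name below are only examples, substitute your own):

  # trigger a dummy entry + metadata operation on the root of the volume,
  # through a FUSE mount (example mount point: /mnt/vmstore)
  mkdir /mnt/vmstore/some-nonexistent-dir
  rmdir /mnt/vmstore/some-nonexistent-dir
  setfattr -n trusted.non-existent-key -v abc /mnt/vmstore
  setfattr -x trusted.non-existent-key /mnt/vmstore

As the doc explains it, this marks pending changes on the good brick so
that it gets picked as the heal source once the wiped brick is back and
the heal runs.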
Daniel Berteaud
2017-Nov-16 07:24 UTC
[Gluster-users] Help with reconnecting a faulty brick
On 15/11/2017 at 09:45, Ravishankar N wrote:
> If it is only the brick that is faulty on the bad node, but everything
> else is fine (glusterd running, the node being a part of the trusted
> storage pool, etc.), you could just kill the brick first and do step 13
> in "10.6.2. Replacing a Host Machine with the Same Hostname" (the mkdir
> of a non-existent dir, followed by setfattr of a non-existent key) of
> https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/pdf/Administration_Guide/Red_Hat_Storage-3.1-Administration_Guide-en-US.pdf,
> then restart the brick by restarting glusterd on that node. Read
> sections 10.5 and 10.6 in the doc to get a better understanding of
> replacing bricks.

Thanks, I'll try that.

Is there any way, in this situation, to check which file will be healed
from which brick before reconnecting? Using some getfattr tricks?

Regards,
Daniel
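PS: I guess something along these lines, run directly against the brick
path on each node, would show the AFR changelog xattrs (the file path is
only an example):

  # dump all extended attributes of a file directly on the brick
  # (run as root; -m . is needed to include the trusted.* namespace)
  getfattr -d -m . -e hex /mnt/bricks/vmstore/path/to/some/file

If I read the docs right, non-zero trusted.afr.vmstore-client-<N> values
on the healthy brick should point at files pending heal towards the
other brick.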