Gandalf Corvotempesta
2017-Jun-11 11:05 UTC
[Gluster-users] How to remove dead peer, sorry urgent again :(
On 11 Jun 2017 at 1:00 PM, "Atin Mukherjee" <amukherj at redhat.com> wrote:

> Yes. And please ensure you do this after bringing down all the glusterd
> instances and then, once the peer file is removed from all the nodes,
> restart glusterd on all the nodes one after another.

If you have to bring down all gluster instances before removing the file,
you also bring down the whole gluster storage.
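To make sure I understand, the workaround would be something like this on
every node (using the Debian/Proxmox "glusterfs-server" service name from
this thread and a placeholder UUID for the dead peer):

  # stop the glusterd management daemon on this node
  systemctl stop glusterfs-server.service

  # remove the stale peer file for the dead node (UUID is a placeholder)
  rm /var/lib/glusterd/peers/<UUID-of-dead-peer>

  # once the file is gone from every node, restart glusterd node by node
  systemctl start glusterfs-server.service

  # confirm the dead peer is no longer listed
  gluster peer status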
Atin Mukherjee
2017-Jun-11 11:23 UTC
[Gluster-users] How to remove dead peer, sorry urgent again :(
On Sun, 11 Jun 2017 at 16:35, Gandalf Corvotempesta
<gandalf.corvotempesta at gmail.com> wrote:

> On 11 Jun 2017 at 1:00 PM, "Atin Mukherjee" <amukherj at redhat.com> wrote:
>
>> Yes. And please ensure you do this after bringing down all the glusterd
>> instances and then, once the peer file is removed from all the nodes,
>> restart glusterd on all the nodes one after another.
>
> If you have to bring down all gluster instances before removing the file,
> you also bring down the whole gluster storage.

Unless server-side quorum is enabled, that's not correct: the I/O path
should stay active even when the management plane is down. We could still
get this done one node after another, without bringing down all glusterd
instances in one go, but I just wanted to ensure the workaround is safe
and clean.

--
Atin
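P.S. If you would rather not stop everything at once, the same workaround
could also be applied node by node, roughly along these lines (a sketch
only, using the Debian/Proxmox service name and a placeholder UUID):

  # on each node in turn, one at a time:
  systemctl stop glusterfs-server.service         # stop glusterd on this node only
  rm /var/lib/glusterd/peers/<UUID-of-dead-peer>  # drop the stale peer file
  systemctl start glusterfs-server.service        # bring glusterd back up
  gluster peer status                             # check peers are connected before moving on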
Lindsay Mathieson
2017-Jun-11 11:44 UTC
[Gluster-users] How to remove dead peer, sorry urgent again :(
On 11/06/2017 9:23 PM, Atin Mukherjee wrote:
> Unless server-side quorum is enabled, that's not correct: the I/O path
> should stay active even when the management plane is down. We could still
> get this done one node after another, without bringing down all glusterd
> instances in one go, but I just wanted to ensure the workaround is safe
> and clean.

Not quite sure of your wording here, but what I did was:

* brought down glusterd with "systemctl stop glusterfs-server.service" on
  each node
* rm /var/lib/glusterd/peers/de673495-8cb2-4328-ba00-0419357c03d7 on each
  node
* "systemctl start glusterfs-server.service" on each node

Several hundred shards needed to be healed after that, but it's all done
now with no split-brain. And:

root at vng:~# gluster peer status
Number of Peers: 2

Hostname: vnh.proxmox.softlog
Uuid: 9eb54c33-7f79-4a75-bc2b-67111bf3eae7
State: Peer in Cluster (Connected)

Hostname: vnb.proxmox.softlog
Uuid: 43a1bf8c-3e69-4581-8e16-f2e1462cfc36
State: Peer in Cluster (Connected)

Which is good. I'm not in a position to test quorum by rebooting a node
right now though :) but I'm going to assume it's all good; I'll probably
test next weekend.

Thanks for all the help, much appreciated.

--
Lindsay Mathieson
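P.S. For anyone else doing this, the heal state can be checked with
something like the commands below (the volume name is a placeholder):

  # list entries still pending heal on each brick
  gluster volume heal <VOLNAME> info

  # list entries in split-brain; ideally this comes back empty
  gluster volume heal <VOLNAME> info split-brain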