Carl Sirotic
2019-Jul-03 19:39 UTC
[Gluster-users] What is the right way to bring down a Glusterfs server for maintenance?
I have a replica 3 cluster: 3 nodes with bricks and 2 "client" nodes that run the VMs through a mount of the data on the bricks.

Now, one of the brick nodes needs maintenance and I will need to shut it down for about 15 minutes. I didn't find any information on what I am supposed to do.

If I understand this right, am I supposed to remove the brick completely from the cluster and add it again when the maintenance is finished?

Carl
John Strunk
2019-Jul-03 19:56 UTC
[Gluster-users] What is the right way to bring down a Glusterfs server for maintenance?
Nope. Just:

* Ensure all volumes are fully healed so you don't run into split brain.
* Go ahead and shut down the server needing maintenance.
  ** If you just want gluster down on that server node: stop glusterd and kill the glusterfs bricks, then do what you need to do.
  ** If you just want to power off: then shutdown -h as usual (don't worry about stopping gluster).

When you bring the server back up, glusterd and the bricks should start, and the bricks should heal from the 2 replicas that remained up during maintenance.

-John
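For anyone following along, here is a rough command-line sketch of the sequence John describes. It assumes a volume named "vmstore" (substitute your own volume name) and a systemd-based distro where glusterd runs as a service; adapt as needed for your environment.

    # 1. Before maintenance: confirm there are no pending heals on any volume
    gluster volume heal vmstore info
    # (every brick should report 0 entries before you proceed)

    # 2a. If you only need gluster down on the node being serviced:
    systemctl stop glusterd     # stop the management daemon on this node
    pkill glusterfsd            # kill the brick processes on this node

    # 2b. If you are powering the node off anyway, a plain
    shutdown -h now
    # is enough; no need to stop gluster first.

    # 3. After the node is back up, watch the self-heal catch up
    gluster volume heal vmstore info

The volume name and the use of systemctl/pkill are assumptions about the environment; "gluster volume heal <VOLNAME> info" is the standard way to check for pending heals before and after the maintenance window.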