Udo Giacomozzi
2015-Oct-15 07:26 UTC
[Gluster-users] Unnecessary healing in 3-node replication setup on reboot
Hello everybody, I'm new to this list, apologies if I'm asking something stupid.. ;-)

I'm using GlusterFS on three nodes as the foundation for a 3-node high-availability Proxmox cluster. GlusterFS is mostly used to store the HDD images of a number of VMs and is accessed via NFS.

My problem is that every time I reboot one of the nodes, Gluster starts healing all of the files. Since they are quite big, it takes up to ~15-30 minutes to complete. It completes successfully, but I have to be extremely careful not to migrate VMs around while this is happening, because that results in corrupted files.

I've already posted this problem in the #gluster IRC channel: http://irclog.perlgeek.de/gluster/2015-10-01#i_11302365 and apparently it is a bug that *could* have been resolved in more recent releases of Gluster.

I'm currently running the most recent version from the Proxmox 3.4 repository (Gluster 3.5.2, based on Debian Wheezy). Upgrading Gluster means some work (building from source, probably) and potential risk, so I'd like to be sure that using Gluster 3.7 will solve this problem and not cause any other problems.

Does somebody have more detailed information about this bug? Or is there perhaps a way to work around it?

Thank you very much,
Udo
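For reference, the pending heal state can be inspected from the Gluster CLI on any of the nodes. A minimal sketch, assuming the volume is named "vm-storage" (the name is a placeholder, substitute your own):

    # List files each brick still considers in need of healing
    gluster volume heal vm-storage info

    # List only entries that are actually in split-brain, if any
    gluster volume heal vm-storage info split-brain

Watching the first command after a reboot shows when the self-heal backlog has drained, which is the point at which migrating VMs should be safe again.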
Lindsay Mathieson
2015-Oct-15 08:16 UTC
[Gluster-users] Unnecessary healing in 3-node replication setup on reboot
The gluster.org Debian Wheezy repo installs 3.6.6 safely on Proxmox 3.4; I use it myself.

Lindsay Mathieson
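Pulling 3.6.x from the gluster.org packages amounts to adding an apt source. A rough sketch, assuming the 3.6 Debian Wheezy path on download.gluster.org; the URL and key location are illustrative, so verify them against the download server before using them:

    # Add the GlusterFS release key and repository (paths are illustrative; check download.gluster.org)
    wget -O - http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/Debian/wheezy/pubkey.gpg | apt-key add -
    echo "deb http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/Debian/wheezy/apt wheezy main" \
        > /etc/apt/sources.list.d/gluster.list

    # Upgrade the Gluster packages, one node at a time, letting heals finish in between
    apt-get update
    apt-get install glusterfs-server glusterfs-client glusterfs-common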
Lindsay Mathieson
2015-Oct-16 00:25 UTC
[Gluster-users] Unnecessary healing in 3-node replication setup on reboot
On 15 October 2015 at 17:26, Udo Giacomozzi <udo.giacomozzi at indunet.it> wrote:

> My problem is, that every time I reboot one of the nodes, Gluster starts
> healing all of the files. Since they are quite big, it takes up to ~15-30
> minutes to complete. It completes successfully, but I have to be extremely
> careful not to migrate VMs around because that results in corrupted files.

Sorry, I meant to ask this earlier: when you reboot one node in a replica 3 Gluster volume, any files written to while that node is down will need to be healed. Given that your files are running VM images, that will be all of them. So healing all the files sounds like the correct behaviour.

--
Lindsay
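To see why every image ends up queued, you can look at the AFR changelog extended attributes that the surviving bricks record on a file while one replica is down. A minimal sketch, run on a node that stayed up, assuming a brick path of /data/brick1 and a Proxmox-style image path (both placeholders):

    # Dump the extended attributes of the file on the brick itself, not the mount
    getfattr -d -m . -e hex /data/brick1/images/100/vm-100-disk-1.raw

    # Non-zero trusted.afr.<volume>-client-N values indicate writes pending
    # against replica N; those files will be healed when that node returns.

Since a running VM writes to its image continuously, every image accumulates pending changelog entries during the reboot window, which is why the whole set gets healed.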