Udo Giacomozzi
2015-Oct-16 14:26 UTC
[Gluster-users] Unnecessary healing in 3-node replication setup on reboot
On 16.10.2015 at 15:52, Ivan Rossi wrote:
> Looks correct.
>
> During the reboot, if the VM writes anything, at the end the files on
> #1 and #2 will be different from those on #3 that was down. So healing
> is NECESSARY.
>
> Ivan

Ok, I see. :-/

To me this sounds like Gluster is not really suited for big files, such as the main storage for VMs, since they are being modified constantly.

Or am I missing something? Perhaps Gluster can be configured to heal only the modified parts of the files?

Thanks,
Udo
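(As a side note on Udo's question: GlusterFS does expose a checksum-based self-heal mode that copies only the blocks that differ between replicas rather than whole files. A minimal sketch, assuming a hypothetical replica volume named "gv0":

    # switch the self-heal daemon to the block-checksum ("diff")
    # algorithm; only mismatching blocks are copied during a heal
    gluster volume set gv0 cluster.data-self-heal-algorithm diff

Whether this shortens heals for multi-GB VM images depends on how much of the image was actually rewritten while the node was down.)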
Lindsay Mathieson
2015-Oct-16 14:41 UTC
[Gluster-users] Unnecessary healing in 3-node replication setup on reboot
On 17 October 2015 at 00:26, Udo Giacomozzi <udo.giacomozzi at indunet.it> wrote:
> To me this sounds like Gluster is not really suited for big files, such
> as the main storage for VMs, since they are being modified constantly.

Depends. :) Any replicated storage will have to heal its copies if they are written to while a node is down. As long as the files can still be read and written while being healed, and the resource usage (CPU/network) is not too high, it should be transparent - that's a major point of a replicated filesystem.

I'm guessing that, like me, you are running your Gluster storage on your VM hosts and, like me, you are a chronic tweaker, so you tend to reboot the hosts more than you should. In that case you might want to consider moving your Gluster storage to separate dedicated nodes that you can leave up.

> Or am I missing something? Perhaps Gluster can be configured to heal
> only the modified parts of the files?

Not that I know of. Ceph is pretty good at tracking changes and only transferring them - heals from a reboot generally take only a few minutes on my three-node setup. But it is a huge headache to set up and administer, and its I/O performance is pretty bad on small setups (< 6 nodes, < 24 disks). It scales really well and really shines when you get into the hundreds of nodes and disks, but I would not recommend it for small IT setups.

--
Lindsay
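(To see what a heal is actually doing on a setup like the ones discussed here, a small hedged sketch, again assuming a volume named "gv0": the heal queue can be inspected per brick, and GlusterFS 3.7 added an optional sharding feature aimed at the VM-image use case, splitting large files into fixed-size pieces so a heal only copies the pieces that changed:

    # list files/gfids still pending heal on each brick
    gluster volume heal gv0 info

    # sharding (new and still maturing in 3.7): store large files as
    # 64 MB shards; a heal then copies only the shards that changed.
    # It applies only to files created after it is enabled.
    gluster volume set gv0 features.shard on
    gluster volume set gv0 features.shard-block-size 64MB
)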