I created a distributed-replicated volume, applied the "virt" option
group, enabled sharding, and migrated a few VMs to the volume. After
that I added more bricks to the volume and started the rebalance; when
I checked the VMs afterwards, they were corrupted.
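To answer your question concretely, this is roughly the sequence I ran;
the volume name, hostnames, and brick paths below are placeholders, not
our actual layout (the add-brick and rebalance step is in my original
message quoted below):

    # 2x2 distributed-replicated volume; all names are hypothetical
    gluster volume create vmstore replica 2 \
        gfs01:/bricks/b1 gfs02:/bricks/b1 \
        gfs01:/bricks/b2 gfs02:/bricks/b2
    gluster volume set vmstore group virt    # apply the virt option group
    gluster volume set vmstore features.shard on
    gluster volume start vmstore
    # migrate a few VMs onto the volume, then add bricks and rebalance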
And yes, what you suggested about Gluster is on point; I think we need
more bug fixes and performance enhancements.
I'm going to deploy a test Gluster cluster soon just to test patches
and updates and report back any bugs and issues.
--
Respectfully
Mahdi A. Mahdi
________________________________
From: Gandalf Corvotempesta <gandalf.corvotempesta at gmail.com>
Sent: Sunday, February 26, 2017 11:07:04 AM
To: Mahdi Adnan
Cc: gluster-users at gluster.org
Subject: Re: [Gluster-users] Volume rebalance issue
How did you replicate the issue?
Next week I'll spin up a Gluster storage cluster and I would like to
try the same steps, to see the corruption and to test any patches from
Gluster.
On 25 Feb 2017, 4:31 PM, "Mahdi Adnan" <mahdi.adnan at outlook.com> wrote:
Hi,
We have a volume across 4 servers with 8x2 bricks (Distributed-Replicate)
hosting VMs for ESXi. I tried expanding the volume with 8 more bricks,
and after rebalancing the volume, the VMs got corrupted.
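In case it helps with reproducing this, the expansion was along these
lines; the volume name and brick paths are placeholders for our real
layout. With replica 2 the bricks have to be added in pairs, so 8 new
bricks take the volume from 8x2 to 12x2:

    gluster volume add-brick vmstore \
        gfs01:/bricks/b5 gfs02:/bricks/b5 \
        gfs03:/bricks/b5 gfs04:/bricks/b5 \
        gfs01:/bricks/b6 gfs02:/bricks/b6 \
        gfs03:/bricks/b6 gfs04:/bricks/b6
    gluster volume rebalance vmstore start
    # the VMs turned up corrupted after this reported "completed"
    gluster volume rebalance vmstore status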
Gluster version is 3.8.9, and the volume is using the default
parameters of the "virt" group plus sharding.
I created a new volume without sharding and got the same issue after the
rebalance.
I checked the reported bugs and the mailing list, and I noticed it's a
bug in Gluster.
Does it affect all Gluster versions? Is there any workaround, or a
volume setup that is not affected by this issue?
Thank you.
--
Respectfully
Mahdi A. Mahdi
_______________________________________________
Gluster-users mailing list
Gluster-users at gluster.org
http://lists.gluster.org/mailman/listinfo/gluster-users