Glad it worked for you. :) Thanks for the confirmation!
-Krutika
On Mon, Feb 27, 2017 at 2:25 PM, Mahdi Adnan <mahdi.adnan at outlook.com>
wrote:
> Many thanks, It worked great.
>
> I created a volume the same as before, created a VM, and played with it
> while I expanded the volume with two more bricks.
>
> After rebalancing, I rebooted the VM and it worked just fine without any
> issues.
>
>
>
> --
>
> Respectfully
> *Mahdi A. Mahdi*
>
> ------------------------------
> *From:* Krutika Dhananjay <kdhananj at redhat.com>
> *Sent:* Monday, February 27, 2017 8:11:31 AM
> *To:* Mahdi Adnan
> *Cc:* Gandalf Corvotempesta; gluster-users at gluster.org
>
> *Subject:* Re: [Gluster-users] Volume rebalance issue
>
> I've attached the src tarball with the patches that fix this issue,
> applied on top of the head of the release-3.8 branch.
>
> -Krutika
>
> On Sun, Feb 26, 2017 at 11:36 PM, Mahdi Adnan <mahdi.adnan at
> outlook.com> wrote:
>
>>
>> I created a distributed replicated volume, set the group "virt", enabled
>> sharding, and migrated a few VMs to the volume. After that I added more
>> bricks to the volume and started the rebalance; when I checked the VMs,
>> they were corrupted.
>>
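The reproduction steps described above map roughly onto the following Gluster CLI sequence. This is only a sketch: the volume name, hostnames, and brick paths are placeholders, not the ones from the original setup.

```shell
# Create a 2-way distributed-replicated volume
# (placeholder hosts and brick paths)
gluster volume create testvol replica 2 \
    server1:/bricks/b1 server2:/bricks/b1 \
    server1:/bricks/b2 server2:/bricks/b2

# Apply the "virt" option group and enable sharding
gluster volume set testvol group virt
gluster volume set testvol features.shard on
gluster volume start testvol

# Expand the volume with more bricks, then rebalance
gluster volume add-brick testvol server3:/bricks/b1 server4:/bricks/b1
gluster volume rebalance testvol start
gluster volume rebalance testvol status
```

With the affected 3.8.9 release, the corruption was reported to appear after the rebalance step while VMs on the volume were under I/O.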
>> And yes, what you suggested about Gluster is on point; I think we need
>> to have more bug fixes and performance enhancements.
>>
>> I'm going to deploy a test Gluster setup soon just to test patches and
>> updates and report back bugs and issues.
>>
>> --
>>
>> Respectfully
>> *Mahdi A. Mahdi*
>>
>> ------------------------------
>> *From:* Gandalf Corvotempesta <gandalf.corvotempesta at gmail.com>
>> *Sent:* Sunday, February 26, 2017 11:07:04 AM
>> *To:* Mahdi Adnan
>> *Cc:* gluster-users at gluster.org
>> *Subject:* Re: [Gluster-users] Volume rebalance issue
>>
>> How did you replicate the issue?
>> Next week I'll spin up a gluster storage and I would like to try the
>> same to see the corruption and to test any patches from gluster.
>>
>> On 25 Feb 2017 at 4:31 PM, "Mahdi Adnan" <mahdi.adnan at outlook.com>
>> wrote:
>>
>> Hi,
>>
>>
>> We have a volume of 4 servers with 8x2 bricks (Distributed-Replicate)
>> hosting VMs for ESXi. I tried expanding the volume with 8 more bricks,
>> and after rebalancing the volume, the VMs got corrupted.
>>
>> Gluster version is 3.8.9 and the volume is using the default parameters
>> of group "virt" plus sharding.
>>
>> I created a new volume without sharding and got the same issue after
>> the rebalance.
>>
>> I checked the reported bugs and the mailing list, and I noticed it's a
>> bug in Gluster.
>>
>> Does it affect all Gluster versions? Is there any workaround or a
>> volume setup that is not affected by this issue?
>>
>>
>> Thank you.
>>
>> --
>>
>> Respectfully
>> *Mahdi A. Mahdi*
>>
>>
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> http://lists.gluster.org/mailman/listinfo/gluster-users
>>
>
>