I would like to have 3.4.x fixed if possible. I have a plan to upgrade, but
I have to review the procedure first, covering questions such as:
1) the sequence - do I upgrade the clients first or the bricks first?
2) can one brick be taken down for upgrade and brought back up, so that
everything is in sync between 3.4.x and 3.5.x, before I upgrade the next
brick to 3.5.x? (see the sketch below)
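For what it's worth, here is a minimal sketch of the per-brick check I have
in mind for question 2), assuming the volume is named datastore1 as in the
commands later in this thread; the package names are illustrative and vary
by distribution:

    # Before taking a brick down, confirm no heals are pending
    gluster volume heal datastore1 info
    # Upgrade the packages on that one server only
    yum update glusterfs glusterfs-server glusterfs-fuse
    # After the brick is back up, wait until heal info shows no entries
    gluster volume heal datastore1 info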
Thanks,
Adrian
-----Original Message-----
From: Pranith Kumar Karampuri [mailto:pkarampu at redhat.com]
Sent: Sunday, November 23, 2014 12:11 AM
To: Adrian Kan; 'Lindsay Mathieson'; gluster-users at gluster.org
Subject: Re: [Gluster-users] Sparse Files and Heal
On 11/22/2014 09:31 PM, Adrian Kan wrote:
> I'm currently using 3.4.2
Do you mind upgrading to 3.5.x, or do you want to stay with 3.4.x?

Pranith
>
> Thanks,
> Adrian
>
> -----Original Message-----
> From: Pranith Kumar Karampuri [mailto:pkarampu at redhat.com]
> Sent: Saturday, November 22, 2014 11:57 PM
> To: Adrian Kan; 'Lindsay Mathieson'; gluster-users at gluster.org
> Subject: Re: [Gluster-users] Sparse Files and Heal
>
>
> On 11/22/2014 09:25 PM, Adrian Kan wrote:
>> Thanks a lot, Pranith. Could you CC me on the bug as well, because I am
>> very interested in its status.
>> I have been hitting the same issue since the middle of this year
>> (http://gluster.org/pipermail/gluster-users.old/2014-March/016322.html),
>> so I hope this can be fixed.
> Are you using 3.4.x or 3.5.x? There will be different bugs (clones) for
> the two releases. Based on that I will CC you.
>
> Pranith
>>
>> Thanks,
>> Adrian
>>
>> -----Original Message-----
>> From: Pranith Kumar Karampuri [mailto:pkarampu at redhat.com]
>> Sent: Saturday, November 22, 2014 11:49 PM
>> To: Adrian Kan; 'Lindsay Mathieson'; gluster-users at gluster.org
>> Subject: Re: [Gluster-users] Sparse Files and Heal
>>
>>
>> On 11/22/2014 01:17 PM, Adrian Kan wrote:
>>> Pranith,
>>>
>>> I'm wondering if this is a better method to take down a brick for
>>> maintenance purposes and re-heal it:
>>>
>>> 1) Detach the brick from the cluster - gluster volume remove-brick
>>> datastore1 replica 1 brick1:/mnt/datastore1
>>> 2) Take down brick1
>>> 3) Do whatever maintenance is needed on brick1
>>> 4) Turn brick1 back on
>>> 5) I'm pretty sure glusterfs would not allow brick1 to be
>>> re-attached to the cluster because of the extended attributes set on
>>> the brick. The only way is to remove everything on it.
>>> 6) Re-attach brick1 after emptying the directory on brick1 - gluster
>>> volume add-brick datastore1 replica 2 brick1:/mnt/datastore1
>>> 7) Initiate full heal
>> The best method is just steps 2), 3), and 4). The only bug preventing
>> that from working now is 'full' heal filling in sparse regions of the
>> file, which will be fixed shortly; we have even identified the fix.
>>
>> Pranith
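For reference, a minimal sketch of that simpler flow - steps 2), 3), and
4) plus a heal - assuming the replica-2 volume datastore1 and the brick
brick1:/mnt/datastore1 from the steps above; the process-management
commands are illustrative and vary by distribution:

    # On brick1: stop the brick process (or shut the node down)
    pkill glusterfsd
    # ... perform maintenance on brick1 ...
    # Restart any bricks that are down without touching the others
    gluster volume start datastore1 force
    # Trigger self-heal and monitor until no entries remain
    gluster volume heal datastore1
    gluster volume heal datastore1 info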
>>> Thanks,
>>> Adrian
>>>
>>> -----Original Message-----
>>> From: gluster-users-bounces at gluster.org
>>> [mailto:gluster-users-bounces at gluster.org] On Behalf Of Lindsay
>>> Mathieson
>>> Sent: Saturday, November 22, 2014 3:35 PM
>>> To: gluster-users at gluster.org
>>> Subject: Re: [Gluster-users] Sparse Files and Heal
>>>
>>> On Sat, 22 Nov 2014 12:54:48 PM you wrote:
>>>> Lindsay,
>>>> You said you restored it from some backup. How did you do that?
>>>> If you copy the VM image from backup to the location where you
>>>> deleted it from on the brick directly, then the VM hypervisor still
>>>> doesn't write to the new file that is copied. Basically we need to
>>>> make the mount close the old fd that was opened on the VM (now
>>>> deleted on one of the bricks).
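A quick way to confirm that stale-fd situation on the brick server is
generic Linux rather than a Gluster command; this sketch assumes the brick
process's command line contains the volume name:

    # Find the brick process for datastore1 and look for open fds that
    # point at deleted files
    BRICK_PID=$(pgrep -f 'glusterfsd.*datastore1')
    ls -l /proc/$BRICK_PID/fd | grep '(deleted)'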
>>>
>>>
>>> I stopped the VM, and the restore creates an image with a new
>>> name, so it should be fine.
>>>
>>> thanks,
>>> --
>>> Lindsay
>>>
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users at gluster.org
>>> http://supercolony.gluster.org/mailman/listinfo/gluster-users