Ravi,
Got it, thanks. I've kicked this off, and it seems to be doing OK.
I am a little concerned about a slow creep of memory usage:
* swap (64GB) completely filled up on server_1
* general memory usage creeping up slowly over time.
$ free -m
              total        used        free      shared  buff/cache   available
Mem:         128829       55596         614          53       72618       71783
Swap:         61034       61034           0
Similar issue on server_2, though with a lower starting memory usage.
The "available" number is slowly going down; at this rate, it will probably
hit 0 before the heal is done.
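If it helps, I'm tracking the trend with a one-liner along these lines (a
rough sketch; mem_available.log is just an arbitrary file name, and the awk
field assumes the "available" column layout shown above):

$ while true; do echo "$(date -Is) $(free -m | awk '/^Mem:/ {print $7}')" >> mem_available.log; sleep 300; done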
We are actually running 3.8.6. I'd like to try pausing the heal, upgrading to
3.8.7, and resuming. Is heal suspend/resume possible, and is it advisable?
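For context, here's roughly what I had in mind, assuming the heal
enable/disable sub-commands behave this way on 3.8.x (<volname> is a
placeholder):

$ gluster volume heal <volname> disable   # stop the self-heal daemon crawls
... upgrade packages and restart services ...
$ gluster volume heal <volname> enable    # bring the self-heal daemon back
$ gluster volume heal <volname> full      # kick off a full heal again if needed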
The upgrade idea came from this Bugzilla report (not 100% sure it will help my
leak):
https://bugzilla.redhat.com/show_bug.cgi?id=1400927
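If we do go ahead, I'm assuming the upgrade itself on xenial is just a package
bump along these lines (assuming the Gluster PPA is already configured; the
exact package set may differ):

$ sudo apt-get update
$ sudo apt-get install --only-upgrade glusterfs-server glusterfs-client glusterfs-common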
Even without doing the upgrade, I may need to restart glusterfs-server anyway to
reset memory usage.
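If it comes to that, I assume something like the following would do it on
xenial (glusterfs-server being the service name in the Ubuntu packaging),
followed by a sanity check on heal state:

$ sudo systemctl restart glusterfs-server
$ gluster volume heal <volname> info      # confirm heal picks up where it left off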
Thanks,
Jackie
> On Dec 28, 2016, at 9:40 PM, Ravishankar N <ravishankar at redhat.com> wrote:
>
> On 12/29/2016 10:46 AM, Jackie Tung wrote:
>> Thanks very much for the advice.
>>
>> Would you mind elaborating on the "no io" recommendation? It's somewhat
>> hard for me to guarantee this without a long maintenance window.
>>
>> What is the consequence of having IO at the point of add-brick, and for
>> the heal period afterwards?
>
> Sorry I wasn't clear. Since you're running 16 distribute legs (16x2), a lot
> of self-heals would be running and there is a chance that clients might
> experience slowness due to the self-heals. Other than that it should be
> fine.
> Thanks,
> Ravi
>
>>
>>
>>
>> On Dec 28, 2016 8:27 PM, "Ravishankar N" <ravishankar at redhat.com> wrote:
>> On 12/29/2016 07:30 AM, Jackie Tung wrote:
>>> Version is 3.8.7 on Ubuntu xenial.
>>>
>>> On Dec 28, 2016 5:56 PM, "Jackie Tung" <jackie at drive.ai> wrote:
>>> If someone has experience to share in this area, I'd be grateful. I have
>>> an existing distributed replicated volume, 2x16.
>>>
>>> We have a third server ready to go. Red Hat docs say just run add-brick
>>> replica 3, then run rebalance.
>>>
>>> The rebalance step feels a bit off to me. Isn't some kind of heal
>>> operation in order rather than rebalance?
>>>
>>> No additional usable space will be introduced; only the replica count
>>> increases from 2 to 3.
>>
>> You don't need to run rebalance to increase the replica count. Heals
>> should be triggered automatically when you run `gluster vol add-brick
>> <volname> replica 3 <list of bricks for the 3rd replica>`. It is advisable
>> to do this when there is no I/O happening on the volume. You can verify
>> that files are getting populated in the newly added bricks after running
>> the command.
>>
>> -Ravi
>>>
>>> Thanks
>>> Jackie
>>>
>>> _______________________________________________
>>> Gluster-users mailing list
>>> Gluster-users at gluster.org
>>> http://www.gluster.org/mailman/listinfo/gluster-users
>>
>