I'm curious whether this was ever resolved or commented on. Were you able to
finish the rebalance?
On Tue, Jan 31, 2012 at 4:37 AM, Emir Imamagic <eimamagi at srce.hr> wrote:
> Hello,
>
> we are using glusterfs 3.2.5 and have a distributed volume with over 10M
> directories. We recently added a new node and initiated a rebalance. After
> several days glusterfsd had consumed all the memory and was killed by the
> kernel. At that stage it was still doing the layout rebalance and had gotten
> through over 9M directories.
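>
> For context, the sequence used was roughly the following (the host name
> "newnode" and the brick path below are placeholders, not the actual ones):
>
> # gluster volume add-brick vol newnode:/export/brick1
> # gluster volume rebalance vol start
> # gluster volume rebalance vol status
>
> Memory usage of the glusterfsd processes was watched with something like:
>
> # watch -n 60 'ps -C glusterfsd -o pid,rss,vsz,cmd'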
>
> Rebalance reports failed status:
> # gluster volume rebalance vol status
> rebalance failed
>
> In the glusterd log however I found:
> I [glusterd-rebalance.c:473:glusterd_defrag_start] 0-rebalance: rebalance
> on /etc/glusterd/mount/vol complete
>
> I stumbled upon a patch on gluster-devel which mentions a memory leak
> related to rebalance:
> http://dev.gluster.com/pipermail/glusterfs/2011-June/003369.html. But I
> can't figure out if this was included in the 3.2.5 release.
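>
> One way I can think of to check, assuming a local checkout of the glusterfs
> git tree and that the release tags follow the usual vX.Y.Z naming, would be
> to list the changes that went into 3.2.5 and grep for the patch, e.g.:
>
> # git log --oneline v3.2.4..v3.2.5 | grep -i rebalance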
>
> Another question: is it safe to initiate a rebalance of the data?
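> (For clarity, by "rebalance of the data" I mean the separate data migration
> phase, which as far as I can tell 3.2.x exposes as its own subcommands:
>
> # gluster volume rebalance vol fix-layout start
> # gluster volume rebalance vol migrate-data start
>
> i.e. fixing the layout first and migrating the data afterwards.)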
>
> Thanks in advance
> --
> Emir Imamagic
> Sektor za racunalne sustave
> Sveuciliste u Zagrebu, Sveucilisni racunski centar (Srce), www.srce.unizg.hr
> Emir.Imamagic at srce.hr, tel: +385 1 616 5809, fax: +385 1 616 5559
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://gluster.org/cgi-bin/mailman/listinfo/gluster-users