Oh, my bad... could be this one? https://bugzilla.redhat.com/show_bug.cgi?id=1126831

Anyway, on oVirt+Gluster I experienced similar behavior...

On Thu, Sep 24, 2015 at 10:32 AM, Oleksandr Natalenko <oleksandr at natalenko.name> wrote:
> We use a bare GlusterFS installation with no oVirt involved.
>
> 24.09.2015 10:29, Gabi C wrote:
>> google vdsm memory leak... it's been discussed on the list last year and
>> earlier this one...
Oleksandr Natalenko
2015-Sep-24 08:14 UTC
[Gluster-users] Memory leak in GlusterFS FUSE client
I've checked the statedump of the volume in question and haven't found lots of iobufs as mentioned in that bug report. However, I've noticed that there are lots of LRU records like this:

==
[conn.1.bound_xl./bricks/r6sdLV07_vd0_mail/mail.lru.1]
gfid=c4b29310-a19d-451b-8dd1-b3ac2d86b595
nlookup=1
fd-count=0
ref=0
ia_type=1
==

In fact, there are 16383 of them. I've checked "gluster volume set help" in order to find something LRU-related and have found this:

==
Option: network.inode-lru-limit
Default Value: 16384
Description: Specifies the maximum megabytes of memory to be used in the inode cache.
==

Is there an error in the description stating "maximum megabytes of memory"? Shouldn't it mean "maximum number of LRU records"? If not, is it true that the inode cache could grow up to 16 GiB per client, and that one must lower the network.inode-lru-limit value?

Another thought: we've enabled write-behind, and the default write-behind-window-size value is 1 MiB. So one may conclude that with lots of small files written, the write-behind buffer could grow up to inode-lru-limit × write-behind-window-size = 16 GiB? Who could explain that to me? (A rough sketch of the arithmetic and the knobs involved follows the quote below.)

24.09.2015 10:42, Gabi C wrote:
> oh, my bad...
> could be this one?
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1126831
>
> Anyway, on oVirt+Gluster I experienced similar behavior...
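For completeness, here is a minimal sketch of the worst-case arithmetic and what one could try lowering. The volume name "mail" and the chosen values are only placeholders for illustration, not a confirmed fix for the leak:

==
# Worst case, assuming every cached inode held a full write-behind window
# (my assumption, not verified against the code):
#   16384 inodes x 1 MiB write-behind-window-size = 16 GiB
gluster volume set mail network.inode-lru-limit 4096
gluster volume set mail performance.write-behind-window-size 512KB

# Then take a fresh statedump and re-count the .lru entries:
gluster volume statedump mail
==

If lowering those limits brings client memory down, that would point at the inode cache / write-behind sizing rather than a leak proper; if memory keeps growing regardless, it looks more like the FUSE client leak discussed above.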