Oleksandr Natalenko
2016-Jan-23 21:30 UTC
[Gluster-users] [Gluster-devel] Memory leak in GlusterFS FUSE client
OK, now I'm re-performing tests with rsync + GlusterFS v3.7.6 + the following patches:

==
Kaleb S KEITHLEY (1):
      fuse: use-after-free fix in fuse-bridge, revisited

Pranith Kumar K (1):
      mount/fuse: Fix use-after-free crash

Soumya Koduri (3):
      gfapi: Fix inode nlookup counts
      inode: Retire the inodes from the lru list in inode_table_destroy
      upcall: free the xdr* allocations
==

I run rsync from one GlusterFS volume to another. While memory started from under 100 MiB, it stalled at around 600 MiB for the source volume and does not grow further. For the target volume it is ~730 MiB, which is why I'm going to do several rsync rounds to see whether it grows more (with no patches, bare 3.7.6 could consume more than 20 GiB).

No "kernel notifier loop terminated" message so far for either volume.

Will report more in several days. I hope the current patches will be incorporated into 3.7.7.

On Friday, 22 January 2016 12:53:36 EET Kaleb S. KEITHLEY wrote:
> On 01/22/2016 12:43 PM, Oleksandr Natalenko wrote:
> > On Friday, 22 January 2016 12:32:01 EET Kaleb S. KEITHLEY wrote:
> >> I presume by this you mean you're not seeing the "kernel notifier loop
> >> terminated" error in your logs.
> >
> > Correct, but only with simple traversing. Have to test under rsync.
>
> Without the patch I'd get "kernel notifier loop terminated" within a few
> minutes of starting I/O. With the patch I haven't seen it in 24 hours
> of beating on it.
>
> >> Hmmm. My system is not leaking. Last 24 hours the RSZ and VSZ are
> >> stable:
> >> http://download.gluster.org/pub/gluster/glusterfs/dynamic-analysis/longevity/client.out
> >
> > What ops do you perform on mounted volume? Read, write, stat? Is that
> > 3.7.6 + patches?
>
> I'm running an internally developed I/O load generator written by a guy
> on our perf team.
>
> It does create, write, read, rename, stat, delete, and more.
Mathieu Chateau
2016-Jan-24 08:33 UTC
[Gluster-users] [Gluster-devel] Memory leak in GlusterFS FUSE client
Thanks for all your tests and time, it looks promising :)

Regards,
Mathieu CHATEAU
http://www.lotp.fr

2016-01-23 22:30 GMT+01:00 Oleksandr Natalenko <oleksandr at natalenko.name>:
> [snip]