On 01/13/2016 04:08 PM, Soumya Koduri wrote:
>
> On 01/12/2016 12:46 PM, Oleksandr Natalenko wrote:
>> Just in case, here is Valgrind output on the FUSE client with 3.7.6 +
>> the API-related patches we discussed before:
>>
>> https://gist.github.com/cd6605ca19734c1496a4
>>
>
> Thanks for sharing the results. I made changes to fix one leak reported
> there wrt 'client_cbk_cache_invalidation':
>
> http://review.gluster.org/#/c/13232/
>
> The other inode-related memory reported as lost is probably due to the
> fuse client process not cleaning up its memory (not calling fini())
> while exiting; hence the majority of those allocations are listed as
> lost. But most of the inodes should have been purged when we dropped
> the vfs cache. Did you drop the vfs cache before exiting the process?
>
> I shall add some log statements and check that part.

Also, please take a statedump of the fuse mount process (after dropping
the vfs cache) when you see high memory usage, by issuing the following
command:

'kill -USR1 <pid-of-gluster-process>'

The statedump will be written to a 'glusterdump.<pid>.dump.timestamp'
file in /var/run/gluster or /usr/local/var/run/gluster. Please refer to
[1] for more information.

Thanks,
Soumya

[1] http://review.gluster.org/#/c/8288/1/doc/debugging/statedump.md

> Thanks,
> Soumya
>
>> 12.01.2016 08:24, Soumya Koduri wrote:
>>> For the fuse client, I tried vfs drop_caches as suggested by Vijay in
>>> an earlier mail. Though all the inodes get purged, I still don't see
>>> much of a drop in the memory footprint. Need to investigate what else
>>> is consuming so much memory here.
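For reference, the workflow described above can be scripted roughly as
follows. This is only a minimal sketch: the volume name, mount path and
dump directory are examples and depend on the local setup.

==
# Drop kernel VFS caches so cached inodes/dentries are released first.
sync
echo 3 > /proc/sys/vm/drop_caches

# Find the glusterfs FUSE client process for the mount in question
# ("somevolume" is a placeholder).
pid=$(pgrep -f 'glusterfs.*--volfile-id=somevolume')

# Ask it to write a statedump.
kill -USR1 "$pid"

# The dump appears shortly afterwards in the run directory
# (location depends on how GlusterFS was built/packaged).
ls -lt /var/run/gluster/glusterdump."$pid".dump.* 2>/dev/null || \
    ls -lt /usr/local/var/run/gluster/glusterdump."$pid".dump.*
==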
Oleksandr Natalenko
2016-Jan-13 12:30 UTC
[Gluster-users] Memory leak in GlusterFS FUSE client
I've applied the client_cbk_cache_invalidation leak patch, and here are
the results.

Launch:

==
valgrind --leak-check=full --show-leak-kinds=all --log-file="valgrind_fuse.log" /usr/bin/glusterfs -N --volfile-server=server.example.com --volfile-id=somevolume /mnt/somevolume
find /mnt/somevolume -type d
==

During the traversal, the RSS value of the glusterfs process went from
79M to 644M. Then I dropped the VFS cache (as I did in previous tests),
but the RSS value was not affected.

Then I took a statedump:

https://gist.github.com/11c7b11fc99ab123e6e2

Then I unmounted the volume and got the Valgrind log:

https://gist.github.com/99d2e3c5cb4ed50b091c

The leaks reported by Valgrind are far smaller than the overall runtime
memory consumption, so I believe the latest patch improves the cleanup
performed on exit (unmount), but at runtime some issues remain.

13.01.2016 12:56, Soumya Koduri wrote:
> On 01/13/2016 04:08 PM, Soumya Koduri wrote:
>>
>> On 01/12/2016 12:46 PM, Oleksandr Natalenko wrote:
>>> Just in case, here is Valgrind output on the FUSE client with 3.7.6 +
>>> the API-related patches we discussed before:
>>>
>>> https://gist.github.com/cd6605ca19734c1496a4
>>>
>>
>> Thanks for sharing the results. I made changes to fix one leak
>> reported there wrt 'client_cbk_cache_invalidation':
>>
>> http://review.gluster.org/#/c/13232/
>>
>> The other inode-related memory reported as lost is probably due to the
>> fuse client process not cleaning up its memory (not calling fini())
>> while exiting; hence the majority of those allocations are listed as
>> lost. But most of the inodes should have been purged when we dropped
>> the vfs cache. Did you drop the vfs cache before exiting the process?
>>
>> I shall add some log statements and check that part.
>
> Also, please take a statedump of the fuse mount process (after dropping
> the vfs cache) when you see high memory usage, by issuing the following
> command:
>
> 'kill -USR1 <pid-of-gluster-process>'
>
> The statedump will be written to a 'glusterdump.<pid>.dump.timestamp'
> file in /var/run/gluster or /usr/local/var/run/gluster. Please refer to
> [1] for more information.
>
> Thanks,
> Soumya
>
> [1] http://review.gluster.org/#/c/8288/1/doc/debugging/statedump.md
>
>> Thanks,
>> Soumya
>>
>>> 12.01.2016 08:24, Soumya Koduri wrote:
>>>> For the fuse client, I tried vfs drop_caches as suggested by Vijay in
>>>> an earlier mail. Though all the inodes get purged, I still don't see
>>>> much of a drop in the memory footprint. Need to investigate what else
>>>> is consuming so much memory here.
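When reading a statedump like the one linked above, it helps to sort the
per-type memory accounting sections to see which allocation types
dominate. A rough sketch, assuming the dump follows the format described
in doc/debugging/statedump.md (field names such as size=, num_allocs=,
pool-name= and hot-count= may differ slightly between releases; the dump
path is a placeholder):

==
dump=/var/run/gluster/glusterdump.PID.dump.TIMESTAMP   # substitute the real file

# Largest allocation types: print "size num_allocs section", sorted by size.
awk '/memusage\]/     { sec = $0 }
     /^size=/         { size = substr($0, 6) }
     /^num_allocs=/   { printf "%14d %10d  %s\n", size, substr($0, 12), sec }' "$dump" \
    | sort -rn | head -20

# Memory pools with the most in-use (hot) objects.
grep -E '^(pool-name|hot-count)=' "$dump" | paste - - | sort -t= -k3 -rn | head -20
==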
Oleksandr Natalenko
2016-Jan-19 23:13 UTC
[Gluster-users] Memory leak in GlusterFS FUSE client
Here are another set of RAM usage stats and a statedump of a GlusterFS
mount approaching yet another OOM:

==
root 32495  1.4 88.3 4943868 1697316 ? Ssl Jan13 129:18 /usr/sbin/glusterfs --volfile-server=server.example.com --volfile-id=volume /mnt/volume
==

https://gist.github.com/86198201c79e927b46bd

1.6G of RAM just for an almost idle mount (we occasionally store
Asterisk recordings there). Triple OOM for 69 days of uptime.

Any thoughts?

On Wednesday, 13 January 2016 16:26:59 EET Soumya Koduri wrote:
> kill -USR1
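To make growth like this easier to correlate with workload, one option is
to sample the client's RSS periodically and keep a log. A small sketch;
the sampling interval, log path and --volfile-id pattern are arbitrary
examples:

==
pid=$(pgrep -f 'glusterfs.*--volfile-id=volume')

# Append a timestamped RSS sample every 5 minutes.
while sleep 300; do
    printf '%s %s\n' "$(date '+%F %T')" \
        "$(awk '/^VmRSS:/ { print $2, $3 }' /proc/"$pid"/status)"
done >> /var/log/glusterfs-rss.log
==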
Oleksandr Natalenko
2016-Jan-20 00:11 UTC
[Gluster-users] Memory leak in GlusterFS FUSE client
And another statedump of a FUSE mount client consuming more than 7 GiB
of RAM:

https://gist.github.com/136d7c49193c798b3ade

A DHT-related leak?

On Wednesday, 13 January 2016 16:26:59 EET Soumya Koduri wrote:
> On 01/13/2016 04:08 PM, Soumya Koduri wrote:
> > On 01/12/2016 12:46 PM, Oleksandr Natalenko wrote:
> >> Just in case, here is Valgrind output on the FUSE client with 3.7.6 +
> >> the API-related patches we discussed before:
> >>
> >> https://gist.github.com/cd6605ca19734c1496a4
> >
> > Thanks for sharing the results. I made changes to fix one leak
> > reported there wrt 'client_cbk_cache_invalidation':
> >
> > http://review.gluster.org/#/c/13232/
> >
> > The other inode-related memory reported as lost is probably due to the
> > fuse client process not cleaning up its memory (not calling fini())
> > while exiting; hence the majority of those allocations are listed as
> > lost. But most of the inodes should have been purged when we dropped
> > the vfs cache. Did you drop the vfs cache before exiting the process?
> >
> > I shall add some log statements and check that part.
>
> Also, please take a statedump of the fuse mount process (after dropping
> the vfs cache) when you see high memory usage, by issuing the following
> command:
>
> 'kill -USR1 <pid-of-gluster-process>'
>
> The statedump will be written to a 'glusterdump.<pid>.dump.timestamp'
> file in /var/run/gluster or /usr/local/var/run/gluster. Please refer to
> [1] for more information.
>
> Thanks,
> Soumya
>
> [1] http://review.gluster.org/#/c/8288/1/doc/debugging/statedump.md
>
> > Thanks,
> > Soumya
> >
> >> 12.01.2016 08:24, Soumya Koduri wrote:
> >>> For the fuse client, I tried vfs drop_caches as suggested by Vijay
> >>> in an earlier mail. Though all the inodes get purged, I still don't
> >>> see much of a drop in the memory footprint. Need to investigate what
> >>> else is consuming so much memory here.
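One way to sanity-check the DHT suspicion is to pull the DHT translator's
memory-accounting sections out of the statedump and compare them with the
rest. A rough sketch, assuming DHT memory types carry the gf_dht_mt_
prefix used in the 3.7 sources (the dump path is a placeholder):

==
dump=/var/run/gluster/glusterdump.PID.dump.TIMESTAMP   # substitute the real file

# Show the size/num_allocs lines of every DHT memusage section.
grep -A2 'gf_dht_mt_.*memusage\]' "$dump" | grep -E 'memusage\]|^size=|^num_allocs='

# Compare against the biggest sections overall (see the awk one-liner
# earlier in the thread) to judge whether DHT actually dominates.
==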