Hmm. I just had to jump through lots of issues with a gluster 3.12.9 setup under Ovirt. The mounts are stock fuse.glusterfs. The RAM usage had been climbing, and I had to move VMs around, put hosts in maintenance mode, do updates, restart. When the VMs were moved back, the memory usage dropped back to normal. The new gluster is 3.12.11 and still using fuse in a replica 3 config. I'm blaming the fuse mount process for the leak (with no data to back it up yet).

A different gluster install also using fuse mounts does not show the memory consumption. It does not use virtualization at all, so the problem really is likely tied to kvm/qemu. On those systems, the fuse mounts get dropped by the OOM killer when computational memory use overloads things. Different issue entirely.

On Wed, 2018-08-01 at 19:57 +0100, lemonnierk at ulrar.net wrote:
> Hey,
> 
> Is there by any chance a known bug about a memory leak for the libgfapi
> in the latest 3.12 releases? I've migrated a lot of virtual machines
> from an old proxmox cluster to a new one, with a newer gluster
> (3.12.10), and ever since the virtual machines have been eating more
> and more RAM all the time, without ever stopping. I have 8 GB machines
> occupying 40 GB of RAM, which they weren't doing on the old cluster.
> 
> It could be a proxmox problem, maybe a leak in their qemu, but since
> no one seems to be reporting that problem I wonder if maybe the newer
> gluster might have a leak; I believe libgfapi isn't used much. I tried
> looking at the bug tracker but I don't see anything obvious; the only
> leak I found seems to be for distributed volumes, but we only use
> replica mode.
> 
> Is anyone aware of a way to know if libgfapi is responsible or not?
> Does it have any kind of reporting I could enable? Worst case I could
> always boot a VM through the fuse mount instead of libgfapi, but that's
> not ideal; it'd take a while to confirm.
> 
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
-- 
James P. Kinney III

Every time you stop a school, you will have to build a jail. What you
gain at one end you lose at the other. It's like feeding a dog on his
own tail. It won't fatten the dog.
- Speech 11/23/1900 Mark Twain

http://heretothereideas.blogspot.com/
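Jim's "fuse mount process is leaking" theory above can be tested without any gluster-specific tooling: just log the resident set size of the glusterfs client processes under a steady workload and see whether it only ever climbs. A minimal sketch (the log path, interval, and sample count are arbitrary choices for illustration):

```shell
#!/bin/sh
# Sample the RSS (KiB) of every glusterfs client process once a minute
# and append it to a log. A steadily climbing RSS under constant VM
# traffic points at a client-side leak; a flat line points back at qemu.
LOG=/var/tmp/glusterfs-rss.log   # hypothetical path
INTERVAL=60
COUNT=1440                       # roughly one day of samples

i=0
while [ "$i" -lt "$COUNT" ]; do
    # -C matches the exact command name; the fuse client runs as "glusterfs"
    ps -C glusterfs -o pid=,rss=,args= |
        while read -r pid rss args; do
            printf '%s pid=%s rss=%sKiB %s\n' \
                "$(date -u +%FT%TZ)" "$pid" "$rss" "$args"
        done >> "$LOG"
    i=$((i + 1))
    sleep "$INTERVAL"
done
```

Graphing the second column per PID over a day or two should either confirm or rule out the fuse client without touching the VMs.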
Darrell Budic
2018-Aug-02 15:02 UTC
[Gluster-users] Memory leak with the libgfapi in 3.12 ?
A couple of us have seen https://bugzilla.redhat.com/show_bug.cgi?id=1593826 on fuse mounts. It seems to be present in 3.12.9 and later, on the client side; the servers seem fine, so it looks like a client-side leak to me. Running client 3.12.8 or .6 against some 3.12.11 servers is showing no problems for me.

> From: Jim Kinney <jim.kinney at gmail.com>
> Subject: Re: [Gluster-users] Memory leak with the libgfapi in 3.12 ?
> Date: August 1, 2018 at 4:35:58 PM CDT
> To: lemonnierk at ulrar.net, gluster-users at gluster.org
> 
> Hmm. I just had to jump through lots of issues with a gluster 3.12.9 setup under Ovirt. The mounts are stock fuse.glusterfs. The RAM usage had been climbing, and I had to move VMs around, put hosts in maintenance mode, do updates, restart. When the VMs were moved back, the memory usage dropped back to normal. The new gluster is 3.12.11 and still using fuse in a replica 3 config. I'm blaming the fuse mount process for the leak (with no data to back it up yet).
> 
> A different gluster install also using fuse mounts does not show the memory consumption. It does not use virtualization at all, so the problem really is likely tied to kvm/qemu. On those systems, the fuse mounts get dropped by the OOM killer when computational memory use overloads things. Different issue entirely.
> 
> On Wed, 2018-08-01 at 19:57 +0100, lemonnierk at ulrar.net wrote:
>> Hey,
>> 
>> Is there by any chance a known bug about a memory leak for the libgfapi
>> in the latest 3.12 releases?
>> I've migrated a lot of virtual machines from an old proxmox cluster to a
>> new one, with a newer gluster (3.12.10), and ever since the virtual
>> machines have been eating more and more RAM all the time, without ever
>> stopping. I have 8 GB machines occupying 40 GB of RAM, which they
>> weren't doing on the old cluster.
>> 
>> It could be a proxmox problem, maybe a leak in their qemu, but since
>> no one seems to be reporting that problem I wonder if maybe the newer
>> gluster might have a leak; I believe libgfapi isn't used much.
>> I tried looking at the bug tracker but I don't see anything obvious; the
>> only leak I found seems to be for distributed volumes, but we only use
>> replica mode.
>> 
>> Is anyone aware of a way to know if libgfapi is responsible or not?
>> Does it have any kind of reporting I could enable? Worst case I could
>> always boot a VM through the fuse mount instead of libgfapi, but that's
>> not ideal; it'd take a while to confirm.
>> 
>> _______________________________________________
>> Gluster-users mailing list
>> Gluster-users at gluster.org
>> https://lists.gluster.org/mailman/listinfo/gluster-users
> -- 
> James P. Kinney III
> 
> Every time you stop a school, you will have to build a jail. What you
> gain at one end you lose at the other. It's like feeding a dog on his
> own tail. It won't fatten the dog.
> - Speech 11/23/1900 Mark Twain
> 
> http://heretothereideas.blogspot.com/
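On the "any kind of reporting I could enable?" question: for the fuse-mount side, at least, gluster clients can produce a statedump on demand, which includes per-translator memory accounting; two dumps taken hours apart show which allocation pools keep growing. A sketch, assuming a Linux host, the default statedump directory /var/run/gluster, and a single fuse client on the box (the process-matching and the dump filename pattern are assumptions to adapt locally):

```shell
# Ask a running glusterfs fuse client for a statedump. SIGUSR1 is the
# statedump trigger for gluster processes; it does not kill the client.
pid=$(pgrep -x glusterfs | head -1)   # narrow this if several mounts run
kill -USR1 "$pid"

# The dump lands in the statedump directory (default /var/run/gluster).
# Take a second dump later and diff the mem-pool / memusage sections.
sleep 1
ls -t /var/run/gluster/glusterdump."$pid".* 2>/dev/null | head -1
```

For a libgfapi client embedded in qemu this approach is riskier (qemu may have its own SIGUSR1 handler), which is another argument for lemonnierk's fallback of temporarily booting one VM over the fuse mount to compare.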