Mark Johnston
2018-Aug-04 19:47 UTC
All the memory eaten away by ZFS 'solaris' malloc - on 11.1-R amd64
On Sat, Aug 04, 2018 at 08:38:04PM +0200, Mark Martinec wrote:
> 2018-08-04 19:01, Mark Johnston wrote:
> > I think running "zpool list" is adding a lot of noise to the output.
> > Could you retry without doing that?
>
> No, like I said previously, the "zpool list" (with one defunct
> zfs pool) *is* the sole culprit of the zfs memory leak.
> With each invocation of "zpool list" the "solaris" malloc
> jumps up by the same amount, and never ever drops. Without
> running it (like repeatedly under 'telegraf' monitoring
> of zfs), the machine runs normally and never runs out of
> memory, the "solaris" malloc count no longer grows steadily.

Sorry, I missed that message. Given that information, it would be
useful to see the output of the following script instead:

# dtrace -c "zpool list -Hp" -x temporal=off -n '
dtmalloc::solaris:malloc
/pid == $target/{@allocs[stack(), args[3]] = count()}
dtmalloc::solaris:free
/pid == $target/{@frees[stack(), args[3]] = count();}'

This will record all allocations and frees from a single instance of
"zpool list".
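[Editor's note: the aggregation output such a script prints can be bulk-summarized with standard tools. Below is a minimal sketch of a hypothetical helper (not part of the thread), assuming the default dtrace aggregation layout in which each record ends with a line holding two integers: the allocation size (args[3]) and the event count.]

```shell
# summarize_allocs: hypothetical helper that sums per-size event counts
# from dtrace aggregation output on stdin.  It matches only lines that
# consist of exactly two integers (size and count), skipping stack frames.
summarize_allocs() {
  awk '/^[[:space:]]*[0-9]+[[:space:]]+[0-9]+$/ { events[$1] += $2 }
       END { for (s in events) printf "size %s: %d events\n", s, events[s] }'
}

# Example use against the collected file, busiest sizes first:
#   bzcat dtrace-cmd.out.bz2 | summarize_allocs | sort -t: -k2 -rn
```

Note that this lumps the @allocs and @frees aggregations together; to keep them apart, split the file at the boundary between the two printouts first.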
Mark Martinec
2018-Aug-07 12:58 UTC
All the memory eaten away by ZFS 'solaris' malloc - on 11.1-R amd64
> On Sat, Aug 04, 2018 at 08:38:04PM +0200, Mark Martinec wrote:
>> 2018-08-04 19:01, Mark Johnston wrote:
>> > I think running "zpool list" is adding a lot of noise to the output.
>> > Could you retry without doing that?
>> No, like I said previously, the "zpool list" (with one defunct
>> zfs pool) *is* the sole culprit of the zfs memory leak.
>> With each invocation of "zpool list" the "solaris" malloc
>> jumps up by the same amount, and never ever drops. Without
>> running it (like repeatedly under 'telegraf' monitoring
>> of zfs), the machine runs normally and never runs out of
>> memory, the "solaris" malloc count no longer grows steadily.

2018-08-04 21:47, Mark Johnston wrote:
> Sorry, I missed that message. Given that information, it would be
> useful to see the output of the following script instead:
>
> # dtrace -c "zpool list -Hp" -x temporal=off -n '
> dtmalloc::solaris:malloc
> /pid == $target/{@allocs[stack(), args[3]] = count()}
> dtmalloc::solaris:free
> /pid == $target/{@frees[stack(), args[3]] = count();}'
>
> This will record all allocations and frees from a single instance of
> "zpool list".

Collected, here it is:
https://www.ijs.si/usr/mark/tmp/dtrace-cmd.out.bz2

Kevin P. Neal wrote:
> Was there a mention of a defunct pool?

Indeed. I haven't tried yet to destroy it, so it is only my hypothesis
that a defunct pool plays a role in this leak.

> I've got a machine with 8GB RAM running 11.1-RELEASE-p4 with a single
> ZFS pool. It runs zfs list in a script multiple times a minute, and it
> has been doing so for 181 days with no reboot. I have not seen any
> memory issues.

I have jumped from 10.3 directly to 11.1-RELEASE-p11, so I'm not sure
with exactly which version / patch level the problem was introduced.

I tried to reproduce the problem on another host running 11.2-RELEASE,
using a memory disk (md): created a GPT partition on it and a ZFS pool
on top, then destroyed the disk, so the pool was left as UNAVAILABLE.
Unfortunately this did not reproduce the problem: "zpool list" on that
host does not cause ZFS to leak memory. It must be something specific
to that failed disk or pool that is causing the leak.

  Mark
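[Editor's note: the reproduction attempt described above could be sketched roughly as follows. This is a hypothetical sequence, not the exact commands used in the thread; the pool name, md size, and device numbers are made up, and the commands are FreeBSD-only and must run as root.]

```shell
# Sketch of the failed reproduction: build a pool on a memory disk,
# then yank the backing device so the pool goes UNAVAILABLE.
md=$(mdconfig -a -t swap -s 256m)     # attach a 256 MB memory disk, e.g. "md0"
gpart create -s gpt "$md"             # put a GPT scheme on it
gpart add -t freebsd-zfs "$md"        # one ZFS partition, e.g. md0p1
zpool create testleak "/dev/${md}p1"  # pool on top of that partition
mdconfig -d -u "$md" -o force         # destroy the backing disk out from under it
zpool list -Hp                        # pool now reported as UNAVAILABLE
```

On the 11.2-RELEASE host this sequence did not make "zpool list" leak "solaris" malloc memory, which suggests the leak depends on something particular to the original failed pool.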