12.02.2019 23:34, Mark Johnston wrote:

> I suspect that the "leaked" memory is simply being used to cache UMA
> items. Note that the values in the FREE column of vmstat -z output are
> quite large. The cached items are reclaimed only when the page daemon
> wakes up to reclaim memory; if there are no memory shortages, large
> amounts of memory may accumulate in UMA caches. In this case, the sum
> of the product of columns 2 and 5 gives a total of roughly 4GB cached.
>
>> as well as "sysctl hw": http://www.grosbein.net/freebsd/leak/sysctl-hw.txt
>> and "sysctl vm": http://www.grosbein.net/freebsd/leak/sysctl-vm.txt

It seems the page daemon is broken somehow, as it did not reclaim several
gigs of wired memory despite a long period of VM thrashing:

$ sed 's/:/,/' vmstat-z.txt | awk -F, '{printf "%10s %s\n", $2*$5/1024/1024, $1}' | sort -k1,1 -rn | head
      1892 abd_chunk
   454.629 dnode_t
    351.35 zio_buf_512
   228.391 zio_buf_16384
   173.968 dmu_buf_impl_t
    130.25 zio_data_buf_131072
   93.6887 VNODE
   81.6978 arc_buf_hdr_t_full
   74.9368 256
   57.4102 4096
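For reference, the same pipeline can be used to check the "roughly 4GB" figure by
summing SIZE*FREE over every zone instead of listing the top consumers (a minimal
sketch, assuming column 2 is SIZE and column 5 is FREE as above):

$ sed 's/:/,/' vmstat-z.txt | awk -F, '{sum += $2*$5} END {printf "%.1f MB cached on UMA free lists\n", sum/1024/1024}'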
On Wed, Feb 13, 2019 at 01:03:37AM +0700, Eugene Grosbein wrote:
> 12.02.2019 23:34, Mark Johnston wrote:
>
> > I suspect that the "leaked" memory is simply being used to cache UMA
> > items. Note that the values in the FREE column of vmstat -z output are
> > quite large. The cached items are reclaimed only when the page daemon
> > wakes up to reclaim memory; if there are no memory shortages, large
> > amounts of memory may accumulate in UMA caches. In this case, the sum
> > of the product of columns 2 and 5 gives a total of roughly 4GB cached.
> >
> >> as well as "sysctl hw": http://www.grosbein.net/freebsd/leak/sysctl-hw.txt
> >> and "sysctl vm": http://www.grosbein.net/freebsd/leak/sysctl-vm.txt
>
> It seems the page daemon is broken somehow, as it did not reclaim several
> gigs of wired memory despite a long period of VM thrashing:

Depending on the system's workload, it is possible for the caches to grow
quite quickly after a reclaim. If you are able to run kgdb on the kernel,
you can find the time of the last reclaim by comparing the values of
lowmem_uptime and time_uptime.

> $ sed 's/:/,/' vmstat-z.txt | awk -F, '{printf "%10s %s\n", $2*$5/1024/1024, $1}' | sort -k1,1 -rn | head
>       1892 abd_chunk
>    454.629 dnode_t
>     351.35 zio_buf_512
>    228.391 zio_buf_16384
>    173.968 dmu_buf_impl_t
>     130.25 zio_data_buf_131072
>    93.6887 VNODE
>    81.6978 arc_buf_hdr_t_full
>    74.9368 256
>    57.4102 4096
>
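A minimal sketch of that check, assuming kgdb and kernel debug symbols are
available (lowmem_uptime and time_uptime as named above):

# kgdb /boot/kernel/kernel /dev/mem
(kgdb) print time_uptime
(kgdb) print lowmem_uptime

If the difference between the two values is large, the page daemon has not
triggered a low-memory UMA reclaim for a long time, which would explain the
large FREE counts.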
On 2/12/2019 1:03 PM, Eugene Grosbein wrote:
> It seems the page daemon is broken somehow, as it did not reclaim several
> gigs of wired memory despite a long period of VM thrashing:
>
> $ sed 's/:/,/' vmstat-z.txt | awk -F, '{printf "%10s %s\n", $2*$5/1024/1024, $1}' | sort -k1,1 -rn | head
>       1892 abd_chunk
>    454.629 dnode_t
>     351.35 zio_buf_512
>    228.391 zio_buf_16384
>    173.968 dmu_buf_impl_t
>     130.25 zio_data_buf_131072
>    93.6887 VNODE
>    81.6978 arc_buf_hdr_t_full
>    74.9368 256
>    57.4102 4096

On an NFS server serving a few large files, my 32G box is showing:

vmstat -z | sed 's/:/,/' | awk -F, '{printf "%10s %s\n", $2*$5/1024/1024, $1}' | sort -k1,1 -rn | head
   11014.3 abd_chunk
    2090.5 zio_data_buf_131072
   1142.67 mbuf_jumbo_page
   1134.25 zio_buf_131072
    355.28 mbuf_jumbo_9k
    233.42 zio_cache
   163.099 arc_buf_hdr_t_full
   130.738 128
   97.2812 zio_buf_16384
   96.5099 UMA Slabs

CPU:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
Mem: 1348K Active, 98M Inact, 3316K Laundry, 30G Wired, 1022M Free
ARC: 11G Total, 7025M MFU, 3580M MRU, 11M Anon, 78M Header, 681M Other
     9328M Compressed, 28G Uncompressed, 3.05:1 Ratio
Swap: 64G Total, 13M Used, 64G Free

Right now it's OK, but prior to limiting the ARC, I had an issue with memory
and the disk thrashing due to swapping:

pid 643 (devd), uid 0, was killed: out of swap space

    ---Mike

--
-------------------
Mike Tancsa, tel +1 519 651 3400 x203
Sentex Communications, mike at sentex.net
Providing Internet services since 1994 www.sentex.net
Cambridge, Ontario Canada
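For anyone hitting the same swapping problem, the usual way to cap the ARC is
the vfs.zfs.arc_max loader tunable (a sketch only; the value below is a
hypothetical example, not necessarily the setting used above):

# /boot/loader.conf
vfs.zfs.arc_max="16G"   # example ceiling; leave enough RAM for the rest of the workload

On recent FreeBSD releases vfs.zfs.arc_max can also be adjusted at runtime
with sysctl, though the ARC shrinks to the new limit gradually.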