Hi,

This is with stable as of yesterday, but with an un-tuned ZFS box I was
still able to generate a kmem exhausted panic.  Hard panic, just 3 lines.

The box contains 12GB memory and runs on a 6-core (with HT) Xeon.
6 * 2TB WD Black Caviar in raidz2 with 2 * 512MB mirrored log.

The box died while rsyncing 5.8TB from its partnering system.
(That was the only activity on the box.)

So the obvious conclusion would be that auto-tuning for ZFS on
8.1-STABLE is not quite there yet.  So I guess we still need tuning
advice, even for 8.1, to prevent a hard panic.

At the moment I'm trying to 'zfs send | rsh zfs receive' the stuff,
which seems to run at about 40MB/sec and is a lot faster than the
rsync approach.

--WjW
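For reference, a send/receive pipeline of the kind described above looks
roughly like this (pool, dataset, snapshot and host names are
placeholders, not the actual ones used on this box):

    # Snapshot on the source, then stream it to the partner box over rsh.
    zfs snapshot tank/data@xfer
    zfs send tank/data@xfer | rsh partnerhost zfs receive tank/data-copy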
On Tue, Sep 28, 2010 at 01:24:28PM +0200, Willem Jan Withagen wrote:
> This is with stable as of yesterday, but with an un-tuned ZFS box I
> was still able to generate a kmem exhausted panic.
> Hard panic, just 3 lines.
>
> The box contains 12GB memory and runs on a 6-core (with HT) Xeon.
> 6 * 2TB WD Black Caviar in raidz2 with 2 * 512MB mirrored log.
>
> The box died while rsyncing 5.8TB from its partnering system.
> (That was the only activity on the box.)

It would help if you could provide output from the following commands
(even after the box has rebooted):

$ sysctl -a | egrep ^vm.kmem
$ sysctl -a | egrep ^vfs.zfs.arc
$ sysctl kstat.zfs.misc.arcstats

> So the obvious conclusion would be that auto-tuning for ZFS on
> 8.1-STABLE is not quite there yet.
>
> So I guess we still need tuning advice, even for 8.1, to prevent a
> hard panic.

Andriy Gapon provides this general recommendation:

http://lists.freebsd.org/pipermail/freebsd-stable/2010-September/059114.html

The advice I've given for RELENG_8 (as of the time of this writing),
8.1-STABLE, and 8.1-RELEASE, is that for amd64 you'll need to tune:

vm.kmem_size
vfs.zfs.arc_max

An example machine: amd64, with 4GB physical RAM installed (3916MB
available for use, verified via dmesg) uses values:

vm.kmem_size="4096M"
vfs.zfs.arc_max="3584M"

Another example machine: amd64, with 8GB physical RAM installed (7875MB
available for use) uses values:

vm.kmem_size="8192M"
vfs.zfs.arc_max="6144M"

I believe the trick -- Andriy, please correct me if I'm wrong -- is the
tuning of vfs.zfs.arc_max, which is now a hard limit rather than a
"high watermark".

However, I believe there have been occasional reports of exhaustion
panics despite both of these being set [1].  Those reports are being
investigated on an individual basis.

I set some other ZFS-related parameters as well (disabling prefetch,
adjusting txg.timeout, etc.), but those shouldn't be necessary to gain
stability at this point in time.

I can't provide tuning advice for i386.

[1]: http://lists.freebsd.org/pipermail/freebsd-stable/2010-September/059109.html

-- 
| Jeremy Chadwick                                   jdc@parodius.com |
| Parodius Networking                       http://www.parodius.com/ |
| UNIX Systems Administrator                  Mountain View, CA, USA |
| Making life hard for others since 1977.              PGP: 4BD6C0CB |
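To make the examples above concrete, the two tunables would go into
/boot/loader.conf roughly as follows (a sketch based on the 8GB machine;
the commented-out lines stand in for the "other ZFS-related parameters"
mentioned and are illustrative, not required):

    # /boot/loader.conf (sketch for the 8GB amd64 example)
    vm.kmem_size="8192M"
    vfs.zfs.arc_max="6144M"

    # Optional extras mentioned above; values shown are examples only
    #vfs.zfs.prefetch_disable="1"
    #vfs.zfs.txg.timeout="5"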
On 28/09/2010 14:50 Jeremy Chadwick said the following:
> I believe the trick -- Andriy, please correct me if I'm wrong -- is the

Wouldn't hurt to CC me, so that I could do it :-)

> tuning of vfs.zfs.arc_max, which is now a hard limit rather than a "high
> watermark".

Not sure what you mean here.  What is a hard limit, what is a high
watermark, what is the difference, and when is "now"? :-)

I believe that "the trick" is to set vm.kmem_size high enough, either
using this tunable or vm.kmem_size_scale.

> However, I believe there have been occasional reports of exhaustion
> panics despite both of these being set [1].  Those reports are being
> investigated on an individual basis.

I don't believe that the report you quote actually demonstrates what
you say it does.  Two quotes from it:

"During these panics no tuning or /boot/loader.conf values where present."

"Only after hitting this behaviour yesterday i created boot/loader.conf"

> [1]: http://lists.freebsd.org/pipermail/freebsd-stable/2010-September/059109.html

-- 
Andriy Gapon
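The settings being discussed can be checked on a running system, for
example (read-only at runtime, since vm.kmem_size and
vm.kmem_size_scale are boot-time tunables):

    sysctl vm.kmem_size vm.kmem_size_scale vm.kmem_size_max vfs.zfs.arc_max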
>> Thanks for the clarification.  I just wish I knew how vm.kmem_size_scale
>> fit into the picture (meaning what it does, etc.).  The sysctl
>> description isn't very helpful.  Again, my lack of VM knowledge...
>>
> Roughly, vm.kmem_size would get set to <available memory> divided by
> vm.kmem_size_scale.

http://lists.freebsd.org/pipermail/freebsd-stable/2010-September/059114.html

Thanks again for the explanation; I was confused after reading the post
above.  So increasing kmem_size_scale will reduce the resulting
kmem_size.

/* correct me if I'm wrong - "divided by" triggered this post */
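A worked example of that division, using hypothetical numbers rather
than anything reported in this thread (the default scale shown is an
assumption for amd64 of that era; verify with 'sysctl
vm.kmem_size_scale'):

    # available memory        ~= 12288 MB   (hypothetical)
    # vm.kmem_size_scale       = 3          (assumed default, amd64 8.x)
    # auto-tuned vm.kmem_size ~= 12288 / 3  = 4096 MB
    #
    # So a larger scale yields a smaller kmem_size; lowering the scale,
    # or setting vm.kmem_size directly in /boot/loader.conf, raises it.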