As has been pointed out on wikis, mailing lists, and even on IRC, ZFS
requires a bit of tuning -- specifically with regard to vm.kmem_size and
vm.kmem_size_max. The consensus is that ZFS is memory-hungry.
On my home RELENG_7 amd64 box (2GB RAM), I could panic the system under
heavy I/O because kmem_size was too small, until I used the following
values:
vm.kmem_size="1536M"
vm.kmem_size_max="1536M"
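For completeness, these are boot-time loader tunables, so they're set in
/boot/loader.conf; roughly what the relevant block looks like, assuming
ZFS is loaded as a module rather than compiled into the kernel:

# zfs_load is only needed when ZFS is built as a module
zfs_load="YES"
vm.kmem_size="1536M"
vm.kmem_size_max="1536M"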
I decided to upgrade the box to 4GB of RAM, since I was worried about
memory exhaustion under even higher loads (during heavy I/O with ZFS,
I'd often see the "Wired" value in top reach 1.3-1.4GB).
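In case anyone wants to reproduce that observation, this is roughly how I
keep an eye on it (assuming a stock RELENG_7 userland; the sysctl names
should be visible once the tunables are set):

# current kmem settings as the kernel sees them
sysctl vm.kmem_size vm.kmem_size_max
# one-shot batch run of top; the Wired figure is on the Mem: line
top -b | grep Mem: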
I received the RAM today and installed it; it works fine. I then adjusted
the vm.kmem_size and vm.kmem_size_max settings to something larger, which
seemed like the logical choice. I went with:
vm.kmem_size="3584M"
vm.kmem_size_max="3584M"
Upon reboot, the kernel immediately panicked with the following message:
kmem_suballoc(): bad status return of 3.
I then tried a smaller value (2048M); same panic.
Can someone shed some light on this? I'm guessing it's intentional; from
what I've found online, it seems that when kmem_size is set too large,
there isn't enough kernel address space left for other parts of the
kernel to allocate from, hence the panic.
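If it helps with the diagnosis: after booting with a value that does work,
I can dump everything kmem-related and compare it against what was set in
loader.conf (assuming the relevant OIDs all live under the usual names):

# show every kmem-related sysctl the kernel exports
sysctl -a | grep -i kmem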
I'm worried that there's a limit of some sort being hit, and that on
systems with heavy ZFS usage (multiple zpools come to mind), one will
inadvertently be unable to increase kmem_size past ~1.5GB, regardless of
how much memory is physically installed.
--
| Jeremy Chadwick jdc at parodius.com |
| Parodius Networking http://www.parodius.com/ |
| UNIX Systems Administrator Mountain View, CA, USA |
| Making life hard for others since 1977. PGP: 4BD6C0CB |