Hello,

Quick and stupid question: I'm breaking my head over how to tune
zfs_arc_min on a running system. There must be some magic word to pipe
into mdb -kw but I forgot it. I tried /etc/system but it's still at the
old value after reboot:

ZFS Tunables (/etc/system):
        set zfs:zfs_arc_min = 0x200000
        set zfs:zfs_arc_meta_limit = 0x100000000

ARC Size:
        Current Size:             1314 MB (arcsize)
        Target Size (Adaptive):   5102 MB (c)
        Min Size (Hard Limit):    2048 MB (zfs_arc_min)
        Max Size (Hard Limit):    5102 MB (zfs_arc_max)

I could use the memory now since I'm running out of it, trying to delete
a large snapshot :-/

-- 
No part of this copyright message may be reproduced, read or seen,
dead or alive or by any means, including but not limited to telepathy
without the benevolence of the author.
The value of zfs_arc_min specified in /etc/system must be over 64 MB
(0x4000000). Otherwise the setting is ignored. The value is in bytes,
not pages.

Jim

On 10/6/11 05:19 AM, Frank Van Damme wrote:
> Hello,
>
> Quick and stupid question: I'm breaking my head over how to tune
> zfs_arc_min on a running system. There must be some magic word to pipe
> into mdb -kw but I forgot it. I tried /etc/system but it's still at the
> old value after reboot:
>
> ZFS Tunables (/etc/system):
>         set zfs:zfs_arc_min = 0x200000
>         set zfs:zfs_arc_meta_limit = 0x100000000
>
> ARC Size:
>         Current Size:             1314 MB (arcsize)
>         Target Size (Adaptive):   5102 MB (c)
>         Min Size (Hard Limit):    2048 MB (zfs_arc_min)
>         Max Size (Hard Limit):    5102 MB (zfs_arc_max)
>
> I could use the memory now since I'm running out of it, trying to delete
> a large snapshot :-/
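[A quick sanity check of the original setting against the floor Jim
describes; the 64 MB (0x4000000) threshold is taken from his note, the
rest is just arithmetic:]

```shell
# Check a candidate zfs_arc_min (in bytes) against the 64 MiB floor
# that /etc/system silently enforces, per the note above.
floor=$((0x4000000))      # 64 MiB in bytes
candidate=$((0x200000))   # the original setting: only 2 MiB
if [ "$candidate" -lt "$floor" ]; then
  echo "0x200000 is $((candidate / 1024 / 1024)) MiB -- below the floor, so it is ignored"
fi
```

[This explains why the running system still showed the 2048 MB default:
the 2 MiB value never took effect at all.]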
2011/10/8 James Litchfield <jim.litchfield at oracle.com>:
> The value of zfs_arc_min specified in /etc/system must be over 64 MB
> (0x4000000). Otherwise the setting is ignored. The value is in bytes,
> not pages.

Well, I've now set it to 0x8000000 and it stubbornly stays at 2048 MB...

-- 
Frank Van Damme
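[As for the "magic word to pipe into mdb -kw" from the first message: a
sketch of the usual live-tuning incantation, for Solaris only and run as
root. The /Z format writes an 8-byte value; the 0x10000000 (256 MiB)
target is an assumed example, not a recommendation:]

```shell
# Sketch: write arc_c_min on the live kernel with mdb -kw (Solaris, root).
# 0x10000000 = 256 MiB is a placeholder target; pick your own value.
target=0x10000000
printf 'echo "arc_c_min/Z %s" | mdb -kw\n' "$target"   # the command to run
# On an actual Solaris box, uncomment:
# echo "arc_c_min/Z $target" | mdb -kw
```

[Note this writes arc_c_min, the in-kernel variable, not the zfs_arc_min
tunable, which is only consulted at boot.]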
On Oct 6, 2011, at 5:19 AM, Frank Van Damme <frank.vandamme at gmail.com> wrote:
> Quick and stupid question: I'm breaking my head over how to tune
> zfs_arc_min on a running system. There must be some magic word to pipe
> into mdb -kw but I forgot it. I tried /etc/system but it's still at the
> old value after reboot:
>
> ZFS Tunables (/etc/system):
>         set zfs:zfs_arc_min = 0x200000
>         set zfs:zfs_arc_meta_limit = 0x100000000

It is not uncommon to tune the arc meta limit. But I've not seen a case
where tuning arc min is justified, especially for a storage server. Can
you explain your reasoning?
 -- richard
2011/10/11 Richard Elling <richard.elling at gmail.com>:
>> ZFS Tunables (/etc/system):
>>         set zfs:zfs_arc_min = 0x200000
>>         set zfs:zfs_arc_meta_limit = 0x100000000
>
> It is not uncommon to tune the arc meta limit. But I've not seen a case
> where tuning arc min is justified, especially for a storage server. Can
> you explain your reasoning?

Honestly? I don't remember. It might be a "leftover" setting from a year
ago. By now, I have figured out that I need to "update the boot archive"
in order for the new setting to take effect at boot time, which
apparently involves booting in safe mode.

-- 
Frank Van Damme
On Oct 11, 2011, at 2:03 PM, Frank Van Damme wrote:
> Honestly? I don't remember. It might be a "leftover" setting from a year
> ago. By now, I have figured out that I need to "update the boot archive"
> in order for the new setting to take effect at boot time, which
> apparently involves booting in safe mode.

The archive should be updated when you reboot. Or you can run
    bootadm update-archive
at any time.

At boot, zfs_arc_min is copied into arc_c_min, overriding the default
setting. You can see the current value via kstat:

    kstat -p zfs:0:arcstats:c_min
    zfs:0:arcstats:c_min    389202432

This is the smallest size that the ARC will shrink to, when asked to
shrink because other applications need memory.
 -- richard

-- 
ZFS and performance consulting
http://www.RichardElling.com
VMworld Copenhagen, October 17-20
OpenStorage Summit, San Jose, CA, October 24-27
LISA '11, Boston, MA, December 4-9
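[The kstat value is in bytes; a one-liner sketch converting Richard's
sample value to MiB for comparison with the arc_summary-style output
earlier in the thread:]

```shell
# Convert the sample c_min kstat value above from bytes to MiB.
c_min=389202432   # from: kstat -p zfs:0:arcstats:c_min
echo "$((c_min / 1024 / 1024)) MiB"   # -> 371 MiB
```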
On 12-10-11 02:27, Richard Elling wrote:
> The archive should be updated when you reboot. Or you can run
>     bootadm update-archive
> at any time.
>
> At boot, zfs_arc_min is copied into arc_c_min, overriding the default
> setting. You can see the current value via kstat:
>     kstat -p zfs:0:arcstats:c_min
>     zfs:0:arcstats:c_min    389202432

The root of the problem seems to be that that process never completes:

      9 /lib/svc/bin/svc.startd
    332 /sbin/sh /lib/svc/method/boot-archive-update
    347 /sbin/bootadm update-archive

I can't kill it and run it from the command line either; it simply
ignores SIGKILL (which shouldn't even be possible).

-- 
Frank Van Damme
On 2011-10-12 11:56, Frank Van Damme wrote:
> The root of the problem seems to be that that process never completes:
>
>       9 /lib/svc/bin/svc.startd
>     332 /sbin/sh /lib/svc/method/boot-archive-update
>     347 /sbin/bootadm update-archive
>
> I can't kill it and run it from the command line either; it simply
> ignores SIGKILL (which shouldn't even be possible).

I guess it is possible when things lock up in kernel calls, waiting for
them to complete. It has happened to me a number of times, usually when
a ZFS pool was too busy working or repairing to do anything else, and
this per se often led to the system crashing (see e.g. my adventures
this spring, reported on the forums). I have hit a number of problems
that generally led to the whole ZFS subsystem "running away to a happy
place". As an indication of this, you can try running something as
simple as "zpool list" in the background (otherwise your shell locks up
too) and see if it ever completes:

    # zpool list &

Earlier there were bugs related to inaccessible snapshots (marked for
deletion, but not actually deletable until you mount and unmount the
parent dataset) - these mostly fired during zfs-auto-snap
auto-deletions, but also happened to affect bootadm. I am not sure in
what way bootadm relies on zfs/zpool, but empirically it does.

You might work around the problem by:

* exporting "data" zfs pools before updating the boot archive (bootadm
  update-archive); if you're rebooting the system anyway, stop the
  zones and services manually and give this a try.

* booting from other media such as a Failsafe Boot (SXCE, Sol10) or a
  LiveCD (Indiana), importing your root pool at "/a", and then running:

      # bootadm update-archive -R /a

* booting into single-user mode, making the root filesystem writable if
  needed, and updating the archive.

  ** You're likely to go this way anyway if your boot is interrupted
     due to an outdated boot archive (an SMF failure that requires a
     repair shell interaction). When the archive is updated, you need
     to clear the service (svcadm clear boot-archive) and exit the
     repair shell in order to continue booting the OS.

* brute force - updating the boot archives
  (/platform/i86pc/boot_archive and
  /platform/i86pc/amd64/boot_archive) manually as FS images, with the
  files listed in /boot/solaris/filelist.ramdisk. Usually a failure on
  boot is related to the updating of some config files in /etc...

//Jim
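[The second workaround above can be sketched as a small guarded script.
This is a sketch only: it assumes the root pool has already been
imported at /a from the failsafe or LiveCD environment, and it checks
for the boot archive tree before calling bootadm:]

```shell
# Sketch: refresh the boot archive from a repair environment, assuming
# the root pool is imported at /a (per the workaround list above).
ALTROOT=/a
if [ -d "$ALTROOT/platform/i86pc" ]; then
  bootadm update-archive -R "$ALTROOT"
else
  echo "no boot archive tree under $ALTROOT; are you in the repair environment?"
fi
# After rebooting, if SMF halted at the boot-archive service:
#   svcadm clear boot-archive
```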