We have a production server which does nothing but NFS from ZFS. This particular machine has plenty of free memory. Blogs and documentation state that ZFS will use as much memory as is "necessary", but how is "necessary" calculated? If the memory is free and unused, would it not be beneficial to increase the relative "necessary" size calculation of the ARC, even if the extra cache isn't likely to get hit often? When an L2ARC is attached, does it get used if there is no memory pressure?

Thanks,
Chris
> zfs will use as much memory as is "necessary" but how is "necessary" calculated?

Using arc_summary.pl from http://www.cuddletech.com/blog/pivot/entry.php?id=979
my tiny system shows:

        Current Size:             4206 MB (arcsize)
        Target Size (Adaptive):   4207 MB (c)
        Min Size (Hard Limit):     894 MB (zfs_arc_min)
        Max Size (Hard Limit):    7158 MB (zfs_arc_max)

so arcsize is close to the desired c; no pressure here, but it would be nice to know how c is calculated, as it's much smaller than zfs_arc_max on a system like yours with nothing else on it.

> When an L2ARC is attached does it get used if there is no memory pressure?

My guess is no, for the same reason an L2ARC takes so long to fill. arc_summary.pl from the same system shows:

        Most Recently Used Ghost:    0%   9367837 (mru_ghost)  [ Return Customer Evicted, Now Back ]
        Most Frequently Used Ghost:  0%  11138758 (mfu_ghost)  [ Frequent Customer Evicted, Now Back ]

so with no ghosts, this system wouldn't benefit from an L2ARC even if one were added.

In review (audit welcome):

  if arcsize = c and is much less than zfs_arc_max,
  there is no point in adding system RAM in hopes of increasing the ARC.

  if m?u_ghost is a small %, there is no point in adding an L2ARC.

  if you do add an L2ARC, one must have RAM between c and zfs_arc_max
  for its pointers.

Rob
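Rob's review checklist can be sketched as a small script over the same counters arc_summary.pl reads (from kstat -n arcstats). The 0.95 and 0.75 thresholds and the 1% ghost-hit cutoff are my own rough assumptions, not values from this thread:

```python
def arc_advice(s):
    """Return tuning hints from a dict of arcstats counters."""
    advice = []
    # arcsize has caught up with the adaptive target c, yet c sits well
    # below zfs_arc_max: demand, not RAM, is what limits the ARC.
    if s["size"] >= 0.95 * s["c"] and s["c"] < 0.75 * s["c_max"]:
        advice.append("more RAM will not grow the ARC; demand is the limit")
    # Ghost hits count lookups for data whose header survived eviction;
    # few ghosts means an L2ARC would rarely be hit.
    ghosts = s["mru_ghost_hits"] + s["mfu_ghost_hits"]
    lookups = s["hits"] + s["misses"]
    if lookups and ghosts / lookups < 0.01:
        advice.append("ghost hits are rare; an L2ARC is unlikely to help")
    return advice

# Numbers shaped like the system above; hits/misses are invented
# for illustration.
stats = {"size": 4206 << 20, "c": 4207 << 20, "c_max": 7158 << 20,
         "mru_ghost_hits": 9_367_837, "mfu_ghost_hits": 11_138_758,
         "hits": 9_000_000_000, "misses": 1_000_000_000}
for line in arc_advice(stats):
    print(line)
```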
On Fri, Oct 2, 2009 at 1:45 PM, Rob Logan <Rob at logan.com> wrote:
>> zfs will use as much memory as is "necessary" but how is "necessary"
>> calculated?
>
> using arc_summary.pl from
> http://www.cuddletech.com/blog/pivot/entry.php?id=979
> my tiny system shows:
>         Current Size:             4206 MB (arcsize)
>         Target Size (Adaptive):   4207 MB (c)

That looks a lot like ~ 4 * 1024 MB. Is this a 64-bit capable system that you have booted from a 32-bit kernel?

-- 
Mike Gerdts
http://mgerdts.blogspot.com/
On Oct 2, 2009, at 11:45 AM, Rob Logan wrote:
>> zfs will use as much memory as is "necessary" but how is
>> "necessary" calculated?
>
> using arc_summary.pl from http://www.cuddletech.com/blog/pivot/entry.php?id=979
> my tiny system shows:
>         Current Size:             4206 MB (arcsize)
>         Target Size (Adaptive):   4207 MB (c)
>         Min Size (Hard Limit):     894 MB (zfs_arc_min)
>         Max Size (Hard Limit):    7158 MB (zfs_arc_max)
>
> so arcsize is close to the desired c, no pressure here but it would
> be nice to know how c is calculated as it's much smaller than
> zfs_arc_max on a system like yours with nothing else on it.

c is the target size of the ARC. c will change dynamically, as memory pressure and demand change.

>> When an L2ARC is attached does it get used if there is no memory
>> pressure?
>
> My guess is no, for the same reason an L2ARC takes so long to fill.
> arc_summary.pl from the same system shows:

You want to cache stuff closer to where it is being used. Expect the L2ARC to contain ARC evictions.

> Most Recently Used Ghost:    0%   9367837 (mru_ghost)  [ Return Customer Evicted, Now Back ]
> Most Frequently Used Ghost:  0%  11138758 (mfu_ghost)  [ Frequent Customer Evicted, Now Back ]
>
> so with no ghosts, this system wouldn't benefit from an L2ARC even
> if added
>
> In review: (audit welcome)
>
> if arcsize = c and is much less than zfs_arc_max,
> there is no point in adding system ram in hopes of increasing the arc.

If you add RAM, arc_c_max will change unless you limit it by setting zfs_arc_max. In other words, c will change dynamically between the limits: arc_c_min <= c <= arc_c_max. By default, for 64-bit machines, arc_c_max is the greater of 3/4 of physical memory or all but 1 GB. If zfs_arc_max is set, is less than arc_c_max, and is greater than 64 MB, then arc_c_max is set to zfs_arc_max. This allows you to reasonably cap arc_c_max.
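That default sizing rule can be written out as a short sketch. The constants (3/4, 1 GB, 64 MB) are the ones quoted above; the function name and argument handling are mine:

```python
# Sketch of the default arc_c_max rule for 64-bit kernels: the greater
# of 3/4 of physical memory or all-but-1GB, optionally capped by
# zfs_arc_max when that tunable is sane (> 64 MB and below the default).

MB = 1 << 20
GB = 1 << 30

def arc_c_max(physmem, zfs_arc_max=0):
    c_max = max(physmem * 3 // 4, physmem - GB)
    if 64 * MB < zfs_arc_max < c_max:
        c_max = zfs_arc_max  # reasonable cap requested by the admin
    return c_max
```

So on an 8 GB machine the "all but 1 GB" branch wins (7 GB), while on small-memory machines the 3/4 branch dominates.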
Note: if you pick an unreasonable value for zfs_arc_max, you will not be notified -- check the current values with kstat -n arcstats.

> if m?u_ghost is a small %, there is no point in adding an L2ARC.

Yes, to the first order. Ghosts are those whose data is evicted, but whose pointer remains.

> if you do add an L2ARC, one must have RAM between c and zfs_arc_max
> for its pointers.

No. The pointers are part of c. Herein lies the rub: if you have a very large L2ARC and limited RAM, then you could waste L2ARC space because the pointers run out of space. SWAG the pointers at 200 bytes each per record. For example, suppose you use a Seagate 2 TB disk for L2ARC:

+ Disk size = 3,907,029,168 512-byte sectors - 4.5 MB for labels and reserve
+ workload uses 8 KB fixed record size (eg Oracle OLTP database)
+ RAM needed to support this L2ARC on this workload is approximately:
      1 GB + Application space + ((3,907,029,168 - 9,232) * 200 / 16)
  or at least 48 GBytes, practically speaking

Do not underestimate the amount of RAM needed to address lots of stuff :-)
-- richard
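The arithmetic above is easy to reproduce; the 200-bytes-per-record header size is the same SWAG as in the example, and the function name is mine:

```python
# Rough RAM cost of the ARC headers that point at L2ARC contents,
# per the worked example above: one ~200-byte header per cached record.
# The 200-byte figure is a SWAG, as in the thread, not an exact size.

SECTOR = 512

def l2arc_header_ram(sectors, recordsize=8192, header=200):
    records = sectors * SECTOR // recordsize
    return records * header

# Seagate 2 TB disk: 3,907,029,168 sectors minus 9,232 for labels and
# reserve, with an 8 KB fixed record size (e.g. Oracle OLTP).
need = l2arc_header_ram(3_907_029_168 - 9_232)
print(need)  # on the order of 48.8e9 bytes -- the "at least 48 GBytes"
```

Note the 512/8192 ratio is where the "/ 16" in the original expression comes from.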
On Fri, Oct 2, 2009 at 10:57 PM, Richard Elling <richard.elling at gmail.com> wrote:
> c is the target size of the ARC. c will change dynamically, as memory
> pressure and demand change.

How is the relative greediness of c determined? Is there a way to make it more greedy on systems with lots of free memory?

>> When an L2ARC is attached does it get used if there is no memory
>> pressure?
>>
>> My guess is no, for the same reason an L2ARC takes so long to fill.
>> arc_summary.pl from the same system shows:
>
> You want to cache stuff closer to where it is being used. Expect the
> L2ARC to contain ARC evictions.

If c is much smaller than zfs_arc_max and there is no memory pressure, can we reasonably expect that the L2ARC is not likely to be used often? Do items get evicted from the L2ARC before the L2ARC is full?

Thanks,
Chris
On Oct 3, 2009, at 10:26 AM, Chris Banal wrote:
> On Fri, Oct 2, 2009 at 10:57 PM, Richard Elling <richard.elling at gmail.com> wrote:
>> c is the target size of the ARC. c will change dynamically, as memory
>> pressure and demand change.
>
> How is the relative greediness of c determined? Is there a way to
> make it more greedy on systems with lots of free memory?

AFAIK, there is no throttle on the ARC, so c will increase as the I/O demand dictates. The L2ARC has a fill throttle because those IOPS can compete with the other devices on the system.

>>> When an L2ARC is attached does it get used if there is no memory
>>> pressure?
>>
>> My guess is no, for the same reason an L2ARC takes so long to fill.
>> arc_summary.pl from the same system shows:
>
> You want to cache stuff closer to where it is being used. Expect the
> L2ARC to contain ARC evictions.
>
> If c is much smaller than zfs_arc_max and there is no memory
> pressure can we reasonably expect that the L2ARC is not likely to be
> used often? Do items get evicted from the L2ARC before the L2ARC is
> full?

Yes, but I'm not exactly sure when this arrived. Rather than repeat, the description is fairly well documented in the source (note: the line number is subject to change; look for the comments describing the L2ARC).
http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/arc.c#3638
-- richard
On Sat, Oct 3, 2009 at 11:33 AM, Richard Elling <richard.elling at gmail.com> wrote:
> On Oct 3, 2009, at 10:26 AM, Chris Banal wrote:
>
>> How is the relative greediness of c determined? Is there a way to make it
>> more greedy on systems with lots of free memory?
>
> AFAIK, there is no throttle on the ARC, so c will increase as the I/O
> demand dictates. The L2ARC has a fill throttle because those IOPS can
> compete with the other devices on the system.

Other than memory pressure, what would cause c to decrease? On a system that does nightly backups which are many times the amount of physical memory, and does nothing but NFS, why would we see c well below zfs_arc_max and plenty of free memory?

Thanks,
Chris