I've got an OpenSolaris snv_118 machine that does nothing except serve up NFS and iSCSI.

The machine has 8G of RAM, and I've got an 80G SSD as L2ARC. The ARC on this machine is currently sitting at around 2G, the kernel is using around 5G, and I've got about 1G free. I've pulled this from a combination of arc_summary.pl and 'echo "::memstat" | mdb -k'.

It's my understanding that the kernel will use a certain amount of RAM for managing the L2ARC, and that how much is needed is dependent on the size of the L2ARC and the recordsize of the ZFS filesystems.

I have some questions that I'm hoping the group can answer...

Given that I don't believe there is any other memory pressure on the system, why isn't the ARC using that last 1G of RAM?

Is there some way to see how much RAM is used for L2ARC management? Is that what the l2_hdr_size kstat measures?

Is it possible to see it via 'echo "::kmastat" | mdb -k'?

Thanks everyone,
Tristan
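As a back-of-the-envelope sketch of the L2ARC bookkeeping cost the question raises: each record cached on the L2ARC device keeps an ARC header in RAM, so the overhead scales with device size divided by recordsize. The ~200-byte per-header figure below is an assumption (the actual arc_buf_hdr_t size varies by build), so treat this as an order-of-magnitude estimate only:

```python
# Rough estimate of RAM needed to track L2ARC contents.
# ASSUMPTION: ~200 bytes of in-RAM header per cached record; the real
# arc_buf_hdr_t size differs between builds.
l2arc_size = 80 * 2**30      # 80 GiB SSD
recordsize = 128 * 2**10     # default ZFS recordsize, 128 KiB
hdr_bytes = 200              # assumed per-record header overhead

records = l2arc_size // recordsize
overhead_mib = records * hdr_bytes / 2**20
print(f"~{records} records -> ~{overhead_mib:.0f} MiB of headers")
# With these assumptions: 655360 records, ~125 MiB of RAM
```

Smaller recordsizes (e.g. an 8K zvol for iSCSI) multiply the record count, and therefore the header overhead, accordingly.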
On Dec 20, 2009, at 12:25 PM, Tristan Ball wrote:

> I've got an OpenSolaris snv_118 machine that does nothing except
> serve up NFS and iSCSI.
>
> The machine has 8G of RAM, and I've got an 80G SSD as L2ARC.
> The ARC on this machine is currently sitting at around 2G, the
> kernel is using around 5G, and I've got about 1G free.

Yes, the ARC max is set by default to 3/4 of memory or memory - 1GB, whichever is greater.
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/arc.c#3426

> I've pulled this from a combination of arc_summary.pl, and 'echo
> "::memstat" | mdb -k'

IMHO, it is easier to look at the c_max in kstat.
	kstat -n arcstats -s c_max

> It's my understanding that the kernel will use a certain amount of
> RAM for managing the L2ARC, and that how much is needed is
> dependent on the size of the L2ARC and the recordsize of the ZFS
> filesystems

Yes.

> I have some questions that I'm hoping the group can answer...
>
> Given that I don't believe there is any other memory pressure on the
> system, why isn't the ARC using that last 1G of RAM?

Simon says, "don't do that"? ;-)

> Is there some way to see how much RAM is used for L2ARC management?
> Is that what the l2_hdr_size kstat measures?
>
> Is it possible to see it via 'echo "::kmastat" | mdb -k'?

I don't think so. OK, so why are you interested in tracking this? Capacity planning? From what I can tell so far, DDT is a much more difficult beast to measure and has a more direct impact on performance :-(
 -- richard
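The default rule Richard cites from arc.c can be sketched in a few lines (the helper name is illustrative, not from the ZFS source). On the poster's 8G box it explains the numbers seen: the cap is 7G, so the kernel's ~5G plus a ~2G ARC fits under it while 1G stays free:

```python
# Sketch of the default arc_c_max rule: 3/4 of physical memory, or
# physical memory minus 1 GiB, whichever is greater.
GiB = 2**30

def default_arc_max(physmem_bytes):
    # illustrative helper, not an actual ZFS function name
    return max(physmem_bytes * 3 // 4, physmem_bytes - GiB)

print(default_arc_max(8 * GiB) // GiB)  # 8 GiB box -> cap of 7 GiB
```

Note the two branches cross over at 4 GiB of RAM: below that, the 3/4 rule wins; above it, the memory-minus-1GiB rule does.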
On Sun, 20 Dec 2009, Richard Elling wrote:

>> Given that I don't believe there is any other memory pressure on the
>> system, why isn't the ARC using that last 1G of RAM?
>
> Simon says, "don't do that"? ;-)

Yes, primarily since if there is no more memory immediately available, performance when starting new processes would suck. You need to reserve some working space for processes and short term requirements.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
Bob Friesenhahn wrote:

> On Sun, 20 Dec 2009, Richard Elling wrote:
>>> Given that I don't believe there is any other memory pressure on the
>>> system, why isn't the ARC using that last 1G of RAM?
>>
>> Simon says, "don't do that"? ;-)
>
> Yes, primarily since if there is no more memory immediately available,
> performance when starting new processes would suck. You need to
> reserve some working space for processes and short term requirements.

Why is that a given? There are several systems that steal from cache under memory pressure. Earlier versions of Solaris that I've dealt with managed with quite a bit less than 1G free. On this system, "lotsfree" is sitting at 127MB, which seems reasonable, and isn't it "lotsfree" and the related variables and page-reclaim logic that maintain that pool of free memory for new allocations?

Regards,
Tristan.
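For context on the lotsfree figure mentioned above: Solaris defaults lotsfree to 1/64 of physical memory, which matches the ~127MB observed on this 8G box (the small difference comes from pages the kernel reserves before physmem is computed). A quick check of that arithmetic:

```python
# lotsfree defaults to physmem / 64 on Solaris.
# On an 8 GiB machine that works out to about 128 MiB, consistent
# with the ~127 MB reported in the thread.
physmem_mib = 8 * 1024
lotsfree_mib = physmem_mib // 64
print(lotsfree_mib)  # 128
```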
On Mon, 21 Dec 2009, Tristan Ball wrote:

>> Yes, primarily since if there is no more memory immediately available,
>> performance when starting new processes would suck. You need to reserve
>> some working space for processes and short term requirements.
>
> Why is that a given? There are several systems that steal from cache under
> memory pressure. Earlier versions of Solaris that I've dealt with managed
> with quite a bit less than 1G free. On this system, "lotsfree" is
> sitting at 127MB, which seems reasonable, and isn't it "lotsfree" and the
> related variables and page-reclaim logic that maintain that pool of free
> memory for new allocations?

It ain't necessarily so, but any time you need to run "reclaim" logic, there is CPU time expended and the CPU caches tend to get thrashed. Without constraints, the cache would expand to the total amount of file data encountered. It is much better to avoid any thrashing.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
I am interested in this as well. My machine has 5 GB of RAM and will soon have an 80 GB SSD. My free memory hovers around 750 MB, and the ARC around 3 GB. This machine doesn't do anything other than iSCSI/CIFS, and I wouldn't mind using an extra 500 MB for caching. This becomes especially important if the kernel will need to consume such large amounts of memory for managing the L2ARC.

CPU cache thrashing, although an important topic, is of no importance in such cases IMO. I.e., I don't mind my CPU caches being thrashed if I fire up a GNOME desktop occasionally. But I do mind having 750 MB of RAM sitting unused.
--
This message posted from opensolaris.org