Oops, should have sent to the list...
Richard Elling wrote:
> On Dec 20, 2009, at 12:25 PM, Tristan Ball wrote:
>
>> I've got an OpenSolaris snv_118 machine that does nothing except
>> serve up NFS and iSCSI.
>>
>> The machine has 8G of RAM, and I've got an 80G SSD as L2ARC.
>> The ARC on this machine is currently sitting at around 2G, the kernel
>> is using around 5G, and I've got about 1G free.
>
> Yes, the ARC max is set by default to 3/4 of memory or memory - 1GB,
> whichever
> is greater.
>
> http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/arc.c#3426
So I've read, but the ARC is using considerably less than 3/4 of memory,
and 1G free is less than 1/4! On this box, c_max is about 7G (which is
more than 3/4 anyway)?
>
>> I've pulled this from a combination of arc_summary.pl and
>> 'echo "::memstat" | mdb -k'.
>
> IMHO, it is easier to look at the c_max in kstat.
> kstat -n arcstats -s c_max
You're probably right. I've been looking at those too - actually, I've
just started graphing them in munin through some slightly modified munin
plugins that someone wrote for BSD. :-)
>
>> It's my understanding that the kernel will use a certain amount of
>> RAM for managing the L2ARC, and that how much is needed is dependent
>> on the size of the L2ARC and the recordsize of the ZFS filesystems.
>
> Yes.
>
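A back-of-the-envelope sketch of that dependency (my numbers, not
Richard's): assuming roughly 200 bytes of ARC header per cached L2ARC
record - an assumed ballpark, since the real per-header size varies
between builds - the current 80G device at 8K recordsize works out to
about 2G of headers:

```shell
# Rough L2ARC header overhead: (device size / recordsize) * header size.
# The 200 bytes/header figure is an assumed ballpark, not an exact value.
l2arc_bytes=$(( 80 * 1024 * 1024 * 1024 ))   # 80 GiB cache device
recordsize=8192                              # zfs recordsize of 8K
hdr_bytes=200                                # assumed per-record header
echo "$(( l2arc_bytes / recordsize )) headers"
echo "$(( l2arc_bytes / recordsize * hdr_bytes / 1048576 )) MiB of header overhead"
```

If that ballpark is anywhere near right, the headers alone would account
for a sizeable slice of an 8G machine.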
>> I have some questions that I'm hoping the group can answer...
>>
>> Given that I don't believe there is any other memory pressure on the
>> system, why isn't the ARC using that last 1G of RAM?
> Simon says, "don't do that"? ;-)
Simon Says lots of things. :-) It strikes me that 1G sitting free is
quite a lot.
I guess what I'm really asking is: given that 1G free doesn't appear to
be the 1/4 of RAM that the ARC will never touch, and that "c" is so much
less than "c_max", why is "c" so small? :-)
>
>> Is there some way to see how much RAM is used for L2ARC management?
>> Is that what the l2_hdr_size kstat measures?
>
>> Is it possible to see it via 'echo "::kmastat" | mdb -k'?
>
> I don't think so.
>
> OK, so why are you interested in tracking this? Capacity planning?
> From what I can tell so far, DDT is a much more difficult beast to
> measure
> and has a more direct impact on performance :-(
> -- richard
Firstly, what's DDT? :-)
Secondly, it's because I'm replacing the system. The existing one was a
proof of concept, essentially built with decommissioned parts. I've got
a new box with 32G of RAM, and a little bit of money left in the budget.
For that money, I could get an extra 80-200G of SSD for L2ARC, or an
extra 12G of RAM, or perhaps both would be a waste of money. Given the
box will be awkward to touch once it's in, I'm going to err on the side
of adding hardware now.
What I'm trying to find out is: is my ARC relatively small because...
1) ZFS has decided that that's all it needs (the workload is fairly
random), and adding more won't gain me anything?
2) The system is using so much RAM for tracking the L2ARC that the ARC
is being shrunk (we've got an 8K recordsize)?
3) There's some other memory pressure on the system that I'm not aware
of that is periodically chewing up then freeing the RAM?
4) There's some other memory management feature that's insisting on
that 1G free?
Actually, because it'll be easier to add SSDs later than to add RAM
later, I might just add the RAM now and be done with it. :-) It's not
very scientific, but I don't think I've ever had a system where, 2 or 3
years into its life, I've not wished that I'd put more RAM in to start
with!
But I really am still interested in figuring out how much RAM is used
for L2ARC management in our system, because while our workload is
fairly random, there are some moderately well-defined hotter spots - so
it might be that in 12 months a feasible upgrade to the system is to
add 4 x 256G SSDs as L2ARC. It would take a while for the L2ARC to warm
up, but once it was, most of those hotter areas would come from cache.
However, it may be too much L2 for the system to track efficiently.
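To put a hedged number on that worry: with an assumed ~200 bytes of
header per cached record (a ballpark only - the real figure depends on
the build), 4 x 256G of L2ARC at 8K recordsize implies something like
25G of RAM just for headers, which is most of the new box's 32G:

```shell
# Sketch of header overhead for a hypothetical 4 x 256 GiB L2ARC at 8K
# recordsize; 200 bytes/header is an assumed ballpark, not a measured value.
l2arc_bytes=$(( 4 * 256 * 1024 * 1024 * 1024 ))
recordsize=8192
hdr_bytes=200
echo "$(( l2arc_bytes / recordsize * hdr_bytes / 1073741824 )) GiB of header overhead"
```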
Regards,
Tristan