Do you have the zfs primarycache property on this release?
If so, you could set it to 'metadata' or 'none'.
primarycache=all | none | metadata
Controls what is cached in the primary cache (ARC). If
this property is set to "all", then both user data and
metadata is cached. If this property is set to "none",
then neither user data nor metadata is cached. If this
property is set to "metadata", then only metadata is
cached. The default value is "all".
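For example, a sketch of how you could apply it (the dataset name "tank/data" is a placeholder; substitute your own):

```shell
# Cache only metadata in the ARC for this dataset; user data is not cached.
# "tank/data" is a placeholder dataset name.
zfs set primarycache=metadata tank/data

# Verify the property took effect.
zfs get primarycache tank/data

# To keep the ARC from caching anything for this dataset:
# zfs set primarycache=none tank/data
```

Note this is per-dataset, so you would have to set it on each dataset involved in the directory traversal.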
-r
Udo Grabowski writes:
> Hi,
> we've capped the ARC size via set zfs:zfs_arc_max = 0x20000000 in
> /etc/system to 512 MB, since the ARC still does not release memory when
> applications need it (this is another bug). But this hard limit is not
> obeyed: when traversing all files in a large and deep directory, we see
> the values below (the ARC started at 300 MB). After a while the machine
> (an Ultra 20 M2 with 6 GB) swaps and then, hours later, freezes
> completely (no reaction even to a quick push of the power button, no
> ping, no mouse; we have to hard reset). The arc_summary output shows
> clearly that the limits are not what they're supposed to be. If this is
> working as intended, then the intention must be changed. As poorly as
> the ARC is working now, it's absolutely necessary that a hard limit is
> indeed a hard limit for the ARC. Please fix this. Is there anything I
> can do to really limit or switch off the ARC completely? It's breaking
> our production work often since we've installed OSol (we came from
> SXDE 1.08, which worked better); we must find a way to stop this
> problem as fast as possible!
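
As a sanity check on the tunable quoted above: 0x20000000 bytes is indeed 512 MB, so the setting itself is correct and the overshoot is not a units mistake. Shell arithmetic confirms the conversion (nothing ZFS-specific assumed):

```shell
# Convert the zfs_arc_max value from hex bytes to megabytes.
# 0x20000000 = 536870912 bytes; divide by 1024*1024 for MB.
echo $((0x20000000 / 1024 / 1024))
# prints 512
```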
>
> arcstat:
> Time read miss miss% dmis dm% pmis pm% mmis mm% arcsz c
> 13:22:16 95M 23M 24 10M 14 12M 64 22M 24 963M 536M
> 13:22:17 2K 256 10 79 6 177 15 222 9 965M 536M
> 13:22:18 2K 490 22 119 10 371 38 482 22 970M 536M
> 13:22:19 4K 214 4 150 6 64 3 140 3 971M 536M
> 13:22:20 2K 427 19 57 4 370 37 419 19 971M 536M
> 13:22:21 1K 208 19 103 17 105 21 202 19 971M 536M
> ....
> 13:23:16 1K 481 27 80 8 401 47 478 27 1G 536M
> 13:23:17 2K 255 11 125 10 130 13 218 10 1G 536M
> and counting...
> arc_summary:
> System Memory:
> Physical RAM: 6134 MB
> Free Memory : 1739 MB
> LotsFree: 95 MB
>
> ZFS Tunables (/etc/system):
> set zfs:zfs_arc_max = 0x20000000
>
> ARC Size:
> Current Size: 1357 MB (arcsize)
> Target Size (Adaptive): 512 MB (c)
> Min Size (Hard Limit): 191 MB (zfs_arc_min)
> Max Size (Hard Limit): 512 MB (zfs_arc_max)
>
> ARC Size Breakdown:
> Most Recently Used Cache Size: 93% 479 MB (p)
> Most Frequently Used Cache Size: 6% 32 MB (c-p)
>
> ARC Efficiency:
> Cache Access Total: 97131108
> Cache Hit Ratio: 75% 73244441 [Defined State for Buffer]
> Cache Miss Ratio: 24% 23886667 [Undefined State for Buffer]
> REAL Hit Ratio: 67% 65874421 [MRU/MFU Hits Only]
>
> Data Demand Efficiency: 66%
> Data Prefetch Efficiency: 8%
>
> CACHE HITS BY CACHE LIST:
> Anon: --% Counter Rolled.
> Most Recently Used: 15% 11463028 (mru) [ Return Customer ]
> Most Frequently Used: 74% 54411393 (mfu) [ Frequent Customer ]
> Most Recently Used Ghost: 10% 7537123 (mru_ghost) [ Return Customer Evicted, Now Back ]
> Most Frequently Used Ghost: 19% 14619417 (mfu_ghost) [ Frequent Customer Evicted, Now Back ]
> CACHE HITS BY DATA TYPE:
> Demand Data: 3% 2716192
> Prefetch Data: 0% 3506
> Demand Metadata: 86% 63089419
> Prefetch Metadata: 10% 7435324
> CACHE MISSES BY DATA TYPE:
> Demand Data: 5% 1365132
> Prefetch Data: 0% 36544
> Demand Metadata: 40% 9664064
> Prefetch Metadata: 53% 12820927
> --
> This message posted from opensolaris.org
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss