I'm not seeing any data being populated in the L2ARC or ZIL SSDs on
a J4500 (48 x 500GB SATA drives).
# zpool iostat -v
                              capacity     operations    bandwidth
pool                         used  avail   read  write   read  write
--------------------------  -----  -----  -----  -----  -----  -----
POOL                        2.71T  4.08T     35    492  1.06M  5.67M
  mirror                     185G   279G      2     30  72.5K   327K
    c10t5000C5001A3B6695d0      -      -      0      4  24.5K   327K
    c10t5000C5001A3CED7Fd0      -      -      0      4  24.5K   327K
    c10t5000C5001A5A45C1d0      -      -      0      5  24.5K   327K
  mirror                     185G   279G      2     30  72.8K   327K
    c10t5000C5001A6B2300d0      -      -      0      5  24.6K   327K
    c10t5000C5001A6BC6C6d0      -      -      0      5  24.6K   327K
    c10t5000C5001A6C3439d0      -      -      0      5  24.6K   327K
  mirror                     185G   279G      2     30  72.6K   327K
    c10t5000C5001A6F177Bd0      -      -      0      4  24.4K   327K
    c10t5000C5001A6FDB0Bd0      -      -      0      4  24.7K   327K
    c10t5000C5001A6FFF86d0      -      -      0      5  24.5K   327K
  mirror                     185G   279G      2     30  72.4K   327K
    c10t5000C5001A39D7BEd0      -      -      0      4  24.6K   327K
    c10t5000C5001A60BED0d0      -      -      0      4  24.6K   327K
    c10t5000C5001A70D8AAd0      -      -      0      4  24.4K   327K
  mirror                     185G   279G      2     30  72.5K   327K
    c10t5000C5001A70D9B0d0      -      -      0      5  24.6K   327K
    c10t5000C5001A70D89Ed0      -      -      0      5  24.6K   327K
    c10t5000C5001A70D719d0      -      -      0      5  24.5K   327K
  mirror                     185G   279G      2     30  72.5K   327K
    c10t5000C5001A700E07d0      -      -      0      4  24.7K   327K
    c10t5000C5001A701A12d0      -      -      0      5  24.5K   327K
    c10t5000C5001A701CD0d0      -      -      0      5  24.4K   327K
  mirror                     185G   279G      2     30  72.4K   327K
    c10t5000C5001A702c10Ed0     -      -      0      4  24.4K   327K
    c10t5000C5001A702C8Ed0      -      -      0      4  24.5K   327K
    c10t5000C5001A703D23d0      -      -      0      4  24.6K   327K
  mirror                     185G   279G      2     30  72.4K   327K
    c10t5000C5001A703FADd0      -      -      0      4  24.4K   327K
    c10t5000C5001A707D86d0      -      -      0      4  24.5K   327K
    c10t5000C5001A707EDCd0      -      -      0      4  24.5K   327K
  mirror                     185G   279G      2     30  72.7K   327K
    c10t5000C5001A7013D4d0      -      -      0      4  24.5K   327K
    c10t5000C5001A7013E6d0      -      -      0      4  24.6K   327K
    c10t5000C5001A7013FDd0      -      -      0      4  24.5K   327K
  mirror                     185G   279G      2     30  72.6K   327K
    c10t5000C5001A7021ADd0      -      -      0      4  24.6K   327K
    c10t5000C5001A7028B6d0      -      -      0      4  24.5K   327K
    c10t5000C5001A7029A2d0      -      -      0      4  24.5K   327K
  mirror                     185G   279G      2     30  72.6K   327K
    c10t5000C5001A7036F4d0      -      -      0      4  24.5K   327K
    c10t5000C5001A7053ADd0      -      -      0      5  24.5K   327K
    c10t5000C5001A7069CAd0      -      -      0      5  24.6K   327K
  mirror                     185G   279G      2     30  72.5K   327K
    c10t5000C5001A70104Dd0      -      -      0      4  24.6K   327K
    c10t5000C5001A70126Fd0      -      -      0      4  24.5K   327K
    c10t5000C5001A70183Cd0      -      -      0      5  24.5K   327K
  mirror                     185G   279G      2     30  72.7K   327K
    c10t5000C5001A70296Cd0      -      -      0      4  24.6K   327K
    c10t5000C5001A70395Ed0      -      -      0      5  24.5K   327K
    c10t5000C5001A70587Dd0      -      -      0      5  24.7K   327K
  mirror                     186G   278G      2     30  72.2K   327K
    c10t5000C5001A70704Ad0      -      -      0      4  24.4K   327K
    c10t5000C5001A70830Ed0      -      -      0      4  24.5K   327K
    c10t5000C5001A701563d0      -      -      0      5  24.3K   327K
  mirror                     185G   279G      2     30  72.2K   327K
    c10t5000C5001A702542d0      -      -      0      4  24.5K   327K
    c10t5000C5001A702625d0      -      -      0      4  24.4K   327K
    c10t5000C5001A703374d0      -      -      0      4  24.4K   327K
  mirror                     236K  29.5G      0     37      0   909K
    c1t3d0                      -      -      0     37      0   909K
    c1t4d0                      -      -      0     37      0   909K
cache                           -      -      -      -      -      -
    c1t1d0                  29.7G     8M      6     21   175K  1.13M
    c1t2d0                  29.7G     8M      6     21   175K  1.13M
--------------------------  -----  -----  -----  -----  -----  -----
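One caveat when reading the output above: without an interval argument,
zpool iostat reports averages accumulated since the pool was imported, so
brief bursts of cache or log traffic get diluted into the totals. A minimal
way to watch current activity instead (a sketch only; the 5-second interval
and the sample count are arbitrary choices, not anything from the original
post):

# zpool iostat -v POOL 5 3

The first sample repeats the since-import averages; the second and third
show live per-vdev traffic, including the c1t1d0/c1t2d0 cache devices and
the c1t3d0/c1t4d0 log mirror.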
The test box has 48GB of memory. Currently the ARC is capped at 32GB and
its size is sitting at about 19GB:
System Memory:
        Physical RAM:  49143 MB
        Free Memory :  5857 MB
        LotsFree:      747 MB

ZFS Tunables (/etc/system):
        set zfs:zfs_arc_max = 33554432000
        set zfs:zfs_prefetch_disable = 1

ARC Size:
        Current Size:             19672 MB (arcsize)
        Target Size (Adaptive):   19672 MB (c)
        Min Size (Hard Limit):    4000 MB (zfs_arc_min)
        Max Size (Hard Limit):    32000 MB (zfs_arc_max)

ARC Size Breakdown:
        Most Recently Used Cache Size:    63%  12439 MB (p)
        Most Frequently Used Cache Size:  36%  7232 MB (c-p)

ARC Efficency:
        Cache Access Total:   166912183
        Cache Hit Ratio:      76%  127512355  [Defined State for buffer]
        Cache Miss Ratio:     23%  39399828   [Undefined State for Buffer]
        REAL Hit Ratio:       76%  127512355  [MRU/MFU Hits Only]

        Data Demand   Efficiency:  75%
        Data Prefetch Efficiency:  DISABLED (zfs_prefetch_disable)

        CACHE HITS BY CACHE LIST:
          Anon:                        --%  Counter Rolled.
          Most Recently Used:          27%  35206970 (mru)        [ Return Customer ]
          Most Frequently Used:        72%  92305385 (mfu)        [ Frequent Customer ]
          Most Recently Used Ghost:     1%  1736417 (mru_ghost)   [ Return Customer Evicted, Now Back ]
          Most Frequently Used Ghost:   5%  7274209 (mfu_ghost)   [ Frequent Customer Evicted, Now Back ]
        CACHE HITS BY DATA TYPE:
          Demand Data:                 94%  120867086
          Prefetch Data:                0%  0
          Demand Metadata:              5%  6645269
          Prefetch Metadata:            0%  0
        CACHE MISSES BY DATA TYPE:
          Demand Data:                 99%  39119029
          Prefetch Data:                0%  0
          Demand Metadata:              0%  280799
          Prefetch Metadata:            0%  0
---------------------------------------------
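For what it's worth, the summary above can be cross-checked against the
kernel's own counters (a sketch, assuming the stock Solaris 10 arcstats
kstat names; the kstat values are in bytes rather than MB):

# kstat -p zfs:0:arcstats:size
# kstat -p zfs:0:arcstats:c_max
# kstat -p zfs:0:arcstats:p

size should track the 19672 MB "Current Size", c_max the zfs_arc_max
setting from /etc/system, and p the MRU target shown in the size breakdown.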
We're running Solaris 10 10/09 with Oracle 10g. In our previous configs
data was clearly shown in the L2ARC and ZIL, but then again we didn't have
48GB of RAM (16GB in the previous tests) or a JBOD. Thoughts?
On Sat, 24 Apr 2010, Brad wrote:

> We're running Solaris 10 10/09 with Oracle 10g. In our previous configs
> data was clearly shown in the L2ARC and ZIL, but then again we didn't
> have 48GB of RAM (16GB in the previous tests) or a JBOD. Thoughts?

Clearly this is a read-optimized system. Sweet! My primary thought is that
your working set is currently smaller than the available RAM. Notice that
this particular 'zpool iostat' is not showing any reads.

Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
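A rough way to confirm Bob's point (a sketch, not a definitive test; the
5-second interval is arbitrary) is to watch the physical disks alongside
the ARC demand-read counters:

# iostat -xn 5
# kstat -p zfs:0:arcstats:demand_data_hits
# kstat -p zfs:0:arcstats:demand_data_misses

If the working set fits in RAM, r/s on the pool disks stays near zero while
demand_data_hits climbs much faster than demand_data_misses, consistent
with the 75-76% hit ratios reported above.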
Hmm, so that means read requests are being fulfilled by the ARC?

Am I correct in assuming that because the ARC is fulfilling read requests,
the zpool and L2ARC are barely touched?
On Sat, 24 Apr 2010, Brad wrote:

> Hmm, so that means read requests are being fulfilled by the ARC?
>
> Am I correct in assuming that because the ARC is fulfilling read
> requests, the zpool and L2ARC are barely touched?

That is the state of nirvana you are searching for, no?

Bob
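If you do want to see the L2ARC earn its keep, the counters to watch are
the ones that only move once the ARC starts evicting buffers (a sketch,
assuming the same arcstats kstat names as above; forcing this would mean a
working set larger than zfs_arc_max, which this box does not currently
have):

# kstat -p zfs:0:arcstats | grep l2_

l2_size shows how much data has been placed on the cache devices, and
l2_hits only climbs on reads that miss the ARC but are answered from the
SSDs; run it a few minutes apart and compare.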