Hi all
I ran a workload that reads and writes within 10 files, each 256 MB in
size, i.e. 10 * 256 MB = 2.5 GB total dataset size.
I have set the ARC max size to 1 GB in the /etc/system file.
In the worst case, let us assume the whole dataset is hot, meaning my
working set size = 2.5 GB.
My SSD flash device is 8 GB and is used as L2ARC.
No slog is used in the pool
My file system record size is 8 KB, which means roughly 2.5% of the 8 GB
device (about 204.8 MB) is consumed inside the ARC by the L2ARC directory
(the in-memory headers that track L2ARC buffers). That would leave
1024 MB - 204.8 MB = 819.2 MB of usable ARC. Am I right?
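For reference, here is the same back-of-the-envelope arithmetic as a small
Python sketch. The ~200 bytes of in-ARC header per cached L2ARC record is
an assumption (it is roughly what the 2.5%-of-device figure above implies;
the real per-header cost varies between ZFS releases), not a measured value:

    # L2ARC directory overhead estimate -- a sketch, not a measurement.
    # Assumes ~200 bytes of ARC header per cached 8 KB L2ARC record, which
    # is approximately what the "2.5% of the device" rule of thumb implies.
    KIB, MIB = 1024, 1024 ** 2

    l2arc_device = 8 * 1024 * MIB   # 8 GB cache SSD
    recordsize   = 8 * KIB          # zfs recordsize of the dataset
    header_bytes = 200              # assumed ARC header bytes per record
    arc_max      = 1024 * MIB       # zfs_arc_max set in /etc/system

    records  = l2arc_device // recordsize    # 1,048,576 records
    overhead = records * header_bytes        # ~200 MiB of ARC for headers
    print("L2ARC directory overhead: %.1f MiB" % (overhead / float(MIB)))
    print("ARC left for data       : %.1f MiB" % ((arc_max - overhead) / float(MIB)))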
Now the question:

After running the workload for 75 minutes, I noticed that the L2ARC device
has grown to 6 GB!

What is in the L2ARC beyond my 2.5 GB working set? Something else must have
been added to it.
Here is zpool iostat at 5-minute intervals:
capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
hyb 2.52G 925G 3 61 197K 366K
c10t0d0 2.52G 925G 3 61 197K 366K
cache - - - - - -
c9t0d0 394K 7.45G 2 8 75.2K 231K
---------- ----- ----- ----- ----- ----- -----
capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
hyb 2.52G 925G 91 5 985K 210K
c10t0d0 2.52G 925G 91 5 985K 210K
cache - - - - - -
c9t0d0 235M 7.22G 0 10 0 802K
---------- ----- ----- ----- ----- ----- -----
capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
hyb 2.52G 925G 73 15 590K 671K
c10t0d0 2.52G 925G 73 15 590K 671K
cache - - - - - -
c9t0d0 495M 6.97G 0 10 0 848K
---------- ----- ----- ----- ----- ----- -----
capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
hyb 2.52G 925G 50 29 417K 1.24M
c10t0d0 2.52G 925G 50 29 417K 1.24M
cache - - - - - -
c9t0d0 774M 6.70G 0 10 0 902K
---------- ----- ----- ----- ----- ----- -----
capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
hyb 2.52G 925G 83 24 680K 2.39M
c10t0d0 2.52G 925G 83 24 680K 2.39M
cache - - - - - -
c9t0d0 1.24G 6.21G 6 16 49.6K 1.62M
---------- ----- ----- ----- ----- ----- -----
capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
hyb 2.52G 925G 80 30 668K 3.05M
c10t0d0 2.52G 925G 80 30 668K 3.05M
cache - - - - - -
c9t0d0 1.84G 5.61G 29 19 236K 1.93M
---------- ----- ----- ----- ----- ----- -----
capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
hyb 2.52G 925G 67 38 557K 3.48M
c10t0d0 2.52G 925G 67 38 557K 3.48M
cache - - - - - -
c9t0d0 2.44G 5.01G 57 19 462K 2.00M
---------- ----- ----- ----- ----- ----- -----
capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
hyb 2.52G 925G 34 43 284K 2.68M
c10t0d0 2.52G 925G 34 43 284K 2.68M
cache - - - - - -
c9t0d0 2.87G 4.58G 52 13 419K 1.37M
---------- ----- ----- ----- ----- ----- -----
capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
hyb 2.52G 925G 26 42 214K 2.47M
c10t0d0 2.52G 925G 26 42 214K 2.47M
cache - - - - - -
c9t0d0 3.22G 4.23G 57 12 464K 1.15M
---------- ----- ----- ----- ----- ----- -----
capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
hyb 2.52G 925G 44 53 374K 5.02M
c10t0d0 2.52G 925G 44 53 374K 5.02M
cache - - - - - -
c9t0d0 3.92G 3.54G 137 22 1.08M 2.33M
---------- ----- ----- ----- ----- ----- -----
capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
hyb 2.52G 925G 31 55 261K 4.36M
c10t0d0 2.52G 925G 31 55 261K 4.36M
cache - - - - - -
c9t0d0 4.53G 2.93G 115 18 925K 2.00M
---------- ----- ----- ----- ----- ----- -----
capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
hyb 2.52G 925G 10 48 84.6K 1.92M
c10t0d0 2.52G 925G 10 48 84.6K 1.92M
cache - - - - - -
c9t0d0 4.76G 2.69G 53 8 430K 797K
---------- ----- ----- ----- ----- ----- -----
capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
hyb 2.53G 925G 13 57 124K 3.53M
c10t0d0 2.53G 925G 13 57 124K 3.53M
cache - - - - - -
c9t0d0 5.18G 2.27G 104 13 842K 1.40M
---------- ----- ----- ----- ----- ----- -----
capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
hyb 2.53G 925G 9 48 84.4K 2.40M
c10t0d0 2.53G 925G 9 48 84.4K 2.40M
cache - - - - - -
c9t0d0 5.47G 1.98G 63 11 514K 994K
---------- ----- ----- ----- ----- ----- -----
capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
hyb 2.53G 925G 6 53 56.0K 2.02M
c10t0d0 2.53G 925G 6 53 56.0K 2.02M
cache - - - - - -
c9t0d0 5.71G 1.75G 58 9 473K 797K
---------- ----- ----- ----- ----- ----- -----
capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
hyb 2.53G 925G 6 53 64.5K 2.03M
c10t0d0 2.53G 925G 6 53 64.5K 2.03M
cache - - - - - -
c9t0d0 5.94G 1.51G 53 9 430K 782K
---------- ----- ----- ----- ----- ----- -----
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Also, full ZFS kstat output at 5-minute intervals:
module: zfs instance: 0
name: arcstats class: misc
c 1073741824
c_max 1073741824
c_min 134217728
crtime 28.083178473
data_size 39272960
deleted 337691
demand_data_hits 95260
demand_data_misses 5493
demand_metadata_hits 58667
demand_metadata_misses 2940
evict_skip 1835
hash_chain_max 8
hash_chains 318
hash_collisions 267022
hash_elements 6602
hash_elements_max 222651
hdr_size 47073960
hits 177390
l2_abort_lowmem 0
l2_cksum_bad 0
l2_evict_lock_retry 0
l2_evict_reading 0
l2_feeds 2
l2_free_on_write 1
l2_hdr_size 0
l2_hits 0
l2_io_error 0
l2_misses 40
l2_read_bytes 0
l2_rw_clash 0
l2_size 244224
l2_write_bytes 403968
l2_writes_done 2
l2_writes_error 0
l2_writes_hdr_miss 0
l2_writes_sent 2
memory_throttle_count 0
mfu_ghost_hits 323
mfu_hits 90820
misses 11661
mru_ghost_hits 776
mru_hits 63201
mutex_miss 1
other_size 35203416
p 989022720
prefetch_data_hits 8481
prefetch_data_misses 919
prefetch_metadata_hits 14982
prefetch_metadata_misses 2309
recycle_miss 3406
size 121550336
snaptime 506.187774733
module: zfs instance: 0
name: vdev_cache_stats class: misc
crtime 28.08322032
delegations 2675
hits 2646
misses 1894
snaptime 506.188702309
module: zfs instance: 0
name: arcstats class: misc
c 1073741824
c_max 1073741824
c_min 134217728
crtime 28.083178473
data_size 365630464
deleted 337813
demand_data_hits 98040
demand_data_misses 31673
demand_metadata_hits 61463
demand_metadata_misses 5543
evict_skip 1835
hash_chain_max 8
hash_chains 7722
hash_collisions 277056
hash_elements 39099
hash_elements_max 222651
hdr_size 47073960
hits 183182
l2_abort_lowmem 0
l2_cksum_bad 0
l2_evict_lock_retry 0
l2_evict_reading 0
l2_feeds 302
l2_free_on_write 1
l2_hdr_size 0
l2_hits 0
l2_io_error 0
l2_misses 29616
l2_read_bytes 0
l2_rw_clash 0
l2_size 238528000
l2_write_bytes 246908416
l2_writes_done 280
l2_writes_error 0
l2_writes_hdr_miss 0
l2_writes_sent 280
memory_throttle_count 0
mfu_ghost_hits 326
mfu_hits 93076
misses 41237
mru_ghost_hits 776
mru_hits 66521
mutex_miss 1
other_size 41414768
p 989015040
prefetch_data_hits 8697
prefetch_data_misses 1712
prefetch_metadata_hits 14982
prefetch_metadata_misses 2309
recycle_miss 3406
size 454119192
snaptime 806.19131607
module: zfs instance: 0
name: vdev_cache_stats class: misc
crtime 28.08322032
delegations 3133
hits 3180
misses 3086
snaptime 806.19133724
module: zfs instance: 0
name: arcstats class: misc
c 1073741824
c_max 1073741824
c_min 134217728
crtime 28.083178473
data_size 623777792
deleted 337963
demand_data_hits 109134
demand_data_misses 54356
demand_metadata_hits 61912
demand_metadata_misses 5547
evict_skip 1835
hash_chain_max 8
hash_chains 18970
hash_collisions 301150
hash_elements 69930
hash_elements_max 222651
hdr_size 47073960
hits 195308
l2_abort_lowmem 0
l2_cksum_bad 0
l2_evict_lock_retry 0
l2_evict_reading 0
l2_feeds 602
l2_free_on_write 1
l2_hdr_size 0
l2_hits 0
l2_io_error 0
l2_misses 52376
l2_read_bytes 0
l2_rw_clash 0
l2_size 488949248
l2_write_bytes 511075840
l2_writes_done 563
l2_writes_error 0
l2_writes_hdr_miss 0
l2_writes_sent 563
memory_throttle_count 0
mfu_ghost_hits 328
mfu_hits 99557
misses 63997
mru_ghost_hits 777
mru_hits 71583
mutex_miss 1
other_size 47466456
p 989095936
prefetch_data_hits 9280
prefetch_data_misses 1785
prefetch_metadata_hits 14982
prefetch_metadata_misses 2309
recycle_miss 3406
size 718318208
snaptime 1106.201091971
module: zfs instance: 0
name: vdev_cache_stats class: misc
crtime 28.08322032
delegations 3133
hits 3180
misses 3090
snaptime 1106.201114171
module: zfs instance: 0
name: arcstats class: misc
c 1073741824
c_max 1073741824
c_min 134217728
crtime 28.083178473
data_size 863521280
deleted 338350
demand_data_hits 134430
demand_data_misses 68636
demand_metadata_hits 62726
demand_metadata_misses 5549
evict_skip 1835
hash_chain_max 8
hash_chains 29297
hash_collisions 338374
hash_elements 98925
hash_elements_max 222651
hdr_size 47073960
hits 222523
l2_abort_lowmem 0
l2_cksum_bad 0
l2_evict_lock_retry 0
l2_evict_reading 0
l2_feeds 902
l2_free_on_write 2
l2_hdr_size 0
l2_hits 0
l2_io_error 0
l2_misses 67139
l2_read_bytes 0
l2_rw_clash 0
l2_size 722830848
l2_write_bytes 778118656
l2_writes_done 798
l2_writes_error 0
l2_writes_hdr_miss 0
l2_writes_sent 798
memory_throttle_count 0
mfu_ghost_hits 328
mfu_hits 117209
misses 78760
mru_ghost_hits 777
mru_hits 80042
mutex_miss 1
other_size 53033696
p 989095936
prefetch_data_hits 10385
prefetch_data_misses 2266
prefetch_metadata_hits 14982
prefetch_metadata_misses 2309
recycle_miss 3406
size 963628936
snaptime 1406.211262945
module: zfs instance: 0
name: vdev_cache_stats class: misc
crtime 28.08322032
delegations 3133
hits 3181
misses 3091
snaptime 1406.211286198
module: zfs instance: 0
name: arcstats class: misc
c 1073741824
c_max 1073741824
c_min 134217728
crtime 28.083178473
data_size 970283008
deleted 342148
demand_data_hits 176341
demand_data_misses 94505
demand_metadata_hits 63392
demand_metadata_misses 5560
evict_skip 1835
hash_chain_max 10
hash_chains 42505
hash_collisions 408410
hash_elements 145566
hash_elements_max 222651
hdr_size 47073960
hits 269706
l2_abort_lowmem 0
l2_cksum_bad 0
l2_evict_lock_retry 0
l2_evict_reading 0
l2_feeds 1197
l2_free_on_write 19
l2_hdr_size 0
l2_hits 1320
l2_io_error 0
l2_misses 91971
l2_read_bytes 10813440
l2_rw_clash 0
l2_size 1106384384
l2_write_bytes 1255269888
l2_writes_done 1088
l2_writes_error 0
l2_writes_hdr_miss 0
l2_writes_sent 1088
memory_throttle_count 0
mfu_ghost_hits 1542
mfu_hits 136775
misses 104912
mru_ghost_hits 1236
mru_hits 103054
mutex_miss 10
other_size 56400720
p 984877056
prefetch_data_hits 14991
prefetch_data_misses 2537
prefetch_metadata_hits 14982
prefetch_metadata_misses 2310
recycle_miss 3816
size 1073757688
snaptime 1706.221407655
module: zfs instance: 0
name: vdev_cache_stats class: misc
crtime 28.08322032
delegations 3133
hits 3183
misses 3101
snaptime 1706.221430719
module: zfs instance: 0
name: arcstats class: misc
c 1073741824
c_max 1073741824
c_min 134217728
crtime 28.083178473
data_size 970248192
deleted 363491
demand_data_hits 211638
demand_data_misses 127211
demand_metadata_hits 63624
demand_metadata_misses 5563
evict_skip 1835
hash_chain_max 13
hash_chains 50940
hash_collisions 506611
hash_elements 193958
hash_elements_max 222651
hdr_size 47073960
hits 308050
l2_abort_lowmem 0
l2_cksum_bad 0
l2_evict_lock_retry 0
l2_evict_reading 0
l2_feeds 1486
l2_free_on_write 67
l2_hdr_size 0
l2_hits 9060
l2_io_error 0
l2_misses 117396
l2_read_bytes 74227712
l2_rw_clash 0
l2_size 1516054016
l2_write_bytes 1871398400
l2_writes_done 1375
l2_writes_error 0
l2_writes_hdr_miss 21
l2_writes_sent 1376
memory_throttle_count 0
mfu_ghost_hits 5392
mfu_hits 141614
misses 138077
mru_ghost_hits 6535
mru_hits 133744
mutex_miss 18
other_size 56480312
p 996531200
prefetch_data_hits 17806
prefetch_data_misses 2993
prefetch_metadata_hits 14982
prefetch_metadata_misses 2310
recycle_miss 3961
size 1073802464
snaptime 2006.231252288
module: zfs instance: 0
name: vdev_cache_stats class: misc
crtime 28.08322032
delegations 3133
hits 3185
misses 3101
snaptime 2006.231275969
module: zfs instance: 0
name: arcstats class: misc
c 1073741824
c_max 1073741824
c_min 134217728
crtime 28.083178473
data_size 985795584
deleted 436075
demand_data_hits 253333
demand_data_misses 164707
demand_metadata_hits 63850
demand_metadata_misses 5563
evict_skip 52989
hash_chain_max 16
hash_chains 55942
hash_collisions 615507
hash_elements 235937
hash_elements_max 236911
hdr_size 41424288
hits 353831
l2_abort_lowmem 0
l2_cksum_bad 0
l2_evict_lock_retry 0
l2_evict_reading 0
l2_feeds 1774
l2_free_on_write 94
l2_hdr_size 6510288
l2_hits 25186
l2_io_error 0
l2_misses 139255
l2_read_bytes 206331904
l2_rw_clash 0
l2_size 1851565568
l2_write_bytes 2501834240
l2_writes_done 1659
l2_writes_error 0
l2_writes_hdr_miss 21
l2_writes_sent 1660
memory_throttle_count 0
mfu_ghost_hits 13020
mfu_hits 147772
misses 176062
mru_ghost_hits 14415
mru_hits 169507
mutex_miss 110
other_size 56863704
p 1006622720
prefetch_data_hits 21666
prefetch_data_misses 3482
prefetch_metadata_hits 14982
prefetch_metadata_misses 2310
recycle_miss 6801
size 1090027752
snaptime 2306.241155215
module: zfs instance: 0
name: vdev_cache_stats class: misc
crtime 28.08322032
delegations 3133
hits 3185
misses 3101
snaptime 2306.241177787
module: zfs instance: 0
name: arcstats class: misc
c 1073741824
c_max 1073741824
c_min 134217728
crtime 28.083178473
data_size 966387200
deleted 499104
demand_data_hits 287230
demand_data_misses 192347
demand_metadata_hits 64210
demand_metadata_misses 5566
evict_skip 57346
hash_chain_max 16
hash_chains 59266
hash_collisions 706581
hash_elements 261247
hash_elements_max 261250
hdr_size 40895088
hits 390573
l2_abort_lowmem 0
l2_cksum_bad 0
l2_evict_lock_retry 0
l2_evict_reading 0
l2_feeds 2069
l2_free_on_write 106
l2_hdr_size 10880656
l2_hits 41096
l2_io_error 0
l2_misses 151470
l2_read_bytes 336666624
l2_rw_clash 0
l2_size 2061747712
l2_write_bytes 2982704640
l2_writes_done 1900
l2_writes_error 0
l2_writes_hdr_miss 21
l2_writes_sent 1901
memory_throttle_count 0
mfu_ghost_hits 20890
mfu_hits 153878
misses 204187
mru_ghost_hits 19167
mru_hits 197659
mutex_miss 242
other_size 56475856
p 988060160
prefetch_data_hits 24151
prefetch_data_misses 3964
prefetch_metadata_hits 14982
prefetch_metadata_misses 2310
recycle_miss 7749
size 1073692656
snaptime 2606.256473626
module: zfs instance: 0
name: vdev_cache_stats class: misc
crtime 28.08322032
delegations 3133
hits 3185
misses 3104
snaptime 2606.256498112
module: zfs instance: 0
name: arcstats class: misc
c 1073741824
c_max 1073741824
c_min 134217728
crtime 28.083178473
data_size 969561600
deleted 523802
demand_data_hits 319132
demand_data_misses 217592
demand_metadata_hits 64618
demand_metadata_misses 5622
evict_skip 63194
hash_chain_max 16
hash_chains 58796
hash_collisions 781517
hash_elements 276604
hash_elements_max 276609
hdr_size 45421848
hits 425945
l2_abort_lowmem 0
l2_cksum_bad 0
l2_evict_lock_retry 0
l2_evict_reading 0
l2_feeds 2369
l2_free_on_write 108
l2_hdr_size 8680200
l2_hits 58252
l2_io_error 0
l2_misses 160057
l2_read_bytes 477200384
l2_rw_clash 1
l2_size 2187392512
l2_write_bytes 3342788096
l2_writes_done 2125
l2_writes_error 0
l2_writes_hdr_miss 21
l2_writes_sent 2125
memory_throttle_count 0
mfu_ghost_hits 29766
mfu_hits 161929
misses 229930
mru_ghost_hits 22201
mru_hits 221918
mutex_miss 244
other_size 56540208
p 983820800
prefetch_data_hits 27213
prefetch_data_misses 4375
prefetch_metadata_hits 14982
prefetch_metadata_misses 2341
recycle_miss 9336
size 1079449056
snaptime 2906.261081145
module: zfs instance: 0
name: vdev_cache_stats class: misc
crtime 28.08322032
delegations 3137
hits 3203
misses 3139
snaptime 2906.261103724
module: zfs instance: 0
name: arcstats class: misc
c 1073741824
c_max 1073741824
c_min 134217728
crtime 28.083178473
data_size 960409600
deleted 607114
demand_data_hits 378326
demand_data_misses 260850
demand_metadata_hits 65085
demand_metadata_misses 5649
evict_skip 80882
hash_chain_max 18
hash_chains 60599
hash_collisions 894802
hash_elements 294406
hash_elements_max 294408
hdr_size 46252944
hits 489364
l2_abort_lowmem 0
l2_cksum_bad 0
l2_evict_lock_retry 0
l2_evict_reading 0
l2_feeds 2662
l2_free_on_write 144
l2_hdr_size 11836904
l2_hits 90085
l2_io_error 0
l2_misses 172134
l2_read_bytes 737972224
l2_rw_clash 6
l2_size 2335032832
l2_write_bytes 3922236928
l2_writes_done 2384
l2_writes_error 0
l2_writes_hdr_miss 21
l2_writes_sent 2384
memory_throttle_count 0
mfu_ghost_hits 47945
mfu_hits 179546
misses 273840
mru_ghost_hits 26115
mru_hits 263962
mutex_miss 248
other_size 56372496
p 960315904
prefetch_data_hits 30971
prefetch_data_misses 4857
prefetch_metadata_hits 14982
prefetch_metadata_misses 2484
recycle_miss 12099
size 1073842648
snaptime 3206.271142034
module: zfs instance: 0
name: vdev_cache_stats class: misc
crtime 28.08322032
delegations 3153
hits 3217
misses 3264
snaptime 3206.271164764
module: zfs instance: 0
name: arcstats class: misc
c 1073741824
c_max 1073741824
c_min 134217728
crtime 28.083178473
data_size 957937152
deleted 710464
demand_data_hits 475999
demand_data_misses 318851
demand_metadata_hits 65439
demand_metadata_misses 5668
evict_skip 82383
hash_chain_max 18
hash_chains 62119
hash_collisions 1043707
hash_elements 307039
hash_elements_max 310457
hdr_size 47195088
hits 593854
l2_abort_lowmem 0
l2_cksum_bad 0
l2_evict_lock_retry 0
l2_evict_reading 0
l2_feeds 2943
l2_free_on_write 164
l2_hdr_size 13516640
l2_hits 135794
l2_io_error 0
l2_misses 185251
l2_read_bytes 1112424448
l2_rw_clash 7
l2_size 2460792320
l2_write_bytes 4712674816
l2_writes_done 2641
l2_writes_error 0
l2_writes_hdr_miss 21
l2_writes_sent 2641
memory_throttle_count 0
mfu_ghost_hits 71494
mfu_hits 229015
misses 332666
mru_ghost_hits 34556
mru_hits 312534
mutex_miss 412
other_size 56242464
p 899623936
prefetch_data_hits 37434
prefetch_data_misses 5663
prefetch_metadata_hits 14982
prefetch_metadata_misses 2484
recycle_miss 12541
size 1073715984
snaptime 3506.281084851
module: zfs instance: 0
name: vdev_cache_stats class: misc
crtime 28.08322032
delegations 3153
hits 3226
misses 3272
snaptime 3506.28110707
module: zfs instance: 0
name: arcstats class: misc
c 1073741824
c_max 1073741824
c_min 134217728
crtime 28.083178473
data_size 917709312
deleted 758071
demand_data_hits 529622
demand_data_misses 339150
demand_metadata_hits 65835
demand_metadata_misses 5684
evict_skip 82546
hash_chain_max 18
hash_chains 62648
hash_collisions 1110329
hash_elements 316006
hash_elements_max 316011
hdr_size 45408072
hits 649495
l2_abort_lowmem 0
l2_cksum_bad 0
l2_evict_lock_retry 0
l2_evict_reading 0
l2_feeds 3242
l2_free_on_write 166
l2_hdr_size 15789224
l2_hits 152398
l2_io_error 0
l2_misses 189204
l2_read_bytes 1248456704
l2_rw_clash 7
l2_size 2522699264
l2_write_bytes 5016077824
l2_writes_done 2829
l2_writes_error 0
l2_writes_hdr_miss 21
l2_writes_sent 2829
memory_throttle_count 0
mfu_ghost_hits 80057
mfu_hits 266116
misses 353223
mru_ghost_hits 37508
mru_hits 329458
mutex_miss 420
other_size 55364800
p 880790528
prefetch_data_hits 39056
prefetch_data_misses 5905
prefetch_metadata_hits 14982
prefetch_metadata_misses 2484
recycle_miss 12647
size 1032898432
snaptime 3806.291102494
module: zfs instance: 0
name: vdev_cache_stats class: misc
crtime 28.08322032
delegations 3153
hits 3234
misses 3277
snaptime 3806.291125385
module: zfs instance: 0
name: arcstats class: misc
c 1073741824
c_max 1073741824
c_min 134217728
crtime 28.083178473
data_size 956804608
deleted 789689
demand_data_hits 596620
demand_data_misses 362084
demand_metadata_hits 66318
demand_metadata_misses 5684
evict_skip 82546
hash_chain_max 18
hash_chains 62950
hash_collisions 1177673
hash_elements 319764
hash_elements_max 319764
hdr_size 47277744
hits 719259
l2_abort_lowmem 0
l2_cksum_bad 0
l2_evict_lock_retry 0
l2_evict_reading 0
l2_feeds 3542
l2_free_on_write 168
l2_hdr_size 14652288
l2_hits 172320
l2_io_error 0
l2_misses 192543
l2_read_bytes 1411657728
l2_rw_clash 9
l2_size 2552419840
l2_write_bytes 5310313984
l2_writes_done 3035
l2_writes_error 0
l2_writes_hdr_miss 21
l2_writes_sent 3035
memory_throttle_count 0
mfu_ghost_hits 88631
mfu_hits 315416
misses 376484
mru_ghost_hits 41905
mru_hits 347657
mutex_miss 421
other_size 56420944
p 861406208
prefetch_data_hits 41339
prefetch_data_misses 6232
prefetch_metadata_hits 14982
prefetch_metadata_misses 2484
recycle_miss 12721
size 1073881472
snaptime 4106.301259124
module: zfs instance: 0
name: vdev_cache_stats class: misc
crtime 28.08322032
delegations 3153
hits 3234
misses 3277
snaptime 4106.301282485
module: zfs instance: 0
name: arcstats class: misc
c 1073741824
c_max 1073741824
c_min 134217728
crtime 28.083178473
data_size 956094464
deleted 854387
demand_data_hits 670111
demand_data_misses 395034
demand_metadata_hits 67029
demand_metadata_misses 5689
evict_skip 82546
hash_chain_max 18
hash_chains 62589
hash_collisions 1271045
hash_elements 324311
hash_elements_max 324325
hdr_size 47095464
hits 797015
l2_abort_lowmem 0
l2_cksum_bad 0
l2_evict_lock_retry 0
l2_evict_reading 0
l2_feeds 3838
l2_free_on_write 176
l2_hdr_size 15611296
l2_hits 201267
l2_io_error 0
l2_misses 197066
l2_read_bytes 1648807936
l2_rw_clash 12
l2_size 2590553600
l2_write_bytes 5744576000
l2_writes_done 3280
l2_writes_error 0
l2_writes_hdr_miss 21
l2_writes_sent 3280
memory_throttle_count 0
mfu_ghost_hits 100756
mfu_hits 365365
misses 409954
mru_ghost_hits 48384
mru_hits 371923
mutex_miss 423
other_size 56269408
p 830906880
prefetch_data_hits 44893
prefetch_data_misses 6747
prefetch_metadata_hits 14982
prefetch_metadata_misses 2484
recycle_miss 12770
size 1073713128
snaptime 4406.311084432
module: zfs instance: 0
name: vdev_cache_stats class: misc
crtime 28.08322032
delegations 3153
hits 3236
misses 3278
snaptime 4406.311107391
module: zfs instance: 0
name: arcstats class: misc
c 1073741824
c_max 1073741824
c_min 134217728
crtime 28.083178473
data_size 955840000
deleted 893802
demand_data_hits 730379
demand_data_misses 415223
demand_metadata_hits 67618
demand_metadata_misses 5700
evict_skip 82548
hash_chain_max 18
hash_chains 61871
hash_collisions 1331850
hash_elements 326446
hash_elements_max 326446
hdr_size 46844304
hits 860329
l2_abort_lowmem 0
l2_cksum_bad 0
l2_evict_lock_retry 0
l2_evict_reading 0
l2_feeds 4138
l2_free_on_write 179
l2_hdr_size 16175440
l2_hits 219250
l2_io_error 0
l2_misses 199647
l2_read_bytes 1796190208
l2_rw_clash 13
l2_size 2607597056
l2_write_bytes 6005036544
l2_writes_done 3554
l2_writes_error 0
l2_writes_hdr_miss 21
l2_writes_sent 3554
memory_throttle_count 0
mfu_ghost_hits 107910
mfu_hits 411730
misses 430518
mru_ghost_hits 52701
mru_hits 386424
mutex_miss 487
other_size 56393056
p 807624192
prefetch_data_hits 47350
prefetch_data_misses 7111
prefetch_metadata_hits 14982
prefetch_metadata_misses 2484
recycle_miss 12822
size 1073846240
snaptime 4706.321330987
module: zfs instance: 0
name: vdev_cache_stats class: misc
crtime 28.08322032
delegations 3153
hits 3238
misses 3279
snaptime 4706.321354194
module: zfs instance: 0
name: arcstats class: misc
c 1073741824
c_max 1073741824
c_min 134217728
crtime 28.083178473
data_size 859694080
deleted 939934
demand_data_hits 788460
demand_data_misses 435410
demand_metadata_hits 68083
demand_metadata_misses 5729
evict_skip 82548
hash_chain_max 18
hash_chains 62235
hash_collisions 1388868
hash_elements 328023
hash_elements_max 328075
hdr_size 45208320
hits 920449
l2_abort_lowmem 0
l2_cksum_bad 0
l2_evict_lock_retry 0
l2_evict_reading 0
l2_feeds 4438
l2_free_on_write 180
l2_hdr_size 18279480
l2_hits 237554
l2_io_error 0
l2_misses 201823
l2_read_bytes 1946243072
l2_rw_clash 13
l2_size 2620732928
l2_write_bytes 6248830464
l2_writes_done 3845
l2_writes_error 0
l2_writes_hdr_miss 21
l2_writes_sent 3845
memory_throttle_count 0
mfu_ghost_hits 115204
mfu_hits 456382
misses 450998
mru_ghost_hits 57083
mru_hits 400328
mutex_miss 490
other_size 54053472
p 783745024
prefetch_data_hits 48924
prefetch_data_misses 7375
prefetch_metadata_hits 14982
prefetch_metadata_misses 2484
recycle_miss 13055
size 975645832
snaptime 5006.331242426
module: zfs instance: 0
name: vdev_cache_stats class: misc
crtime 28.08322032
delegations 3153
hits 3244
misses 3289
snaptime 5006.331266366
module: zfs instance: 0
name: arcstats class: misc
c 1073741824
c_max 1073741824
c_min 134217728
crtime 28.083178473
data_size 955407360
deleted 966956
demand_data_hits 843880
demand_data_misses 452182
demand_metadata_hits 68572
demand_metadata_misses 5737
evict_skip 82548
hash_chain_max 18
hash_chains 61732
hash_collisions 1444874
hash_elements 329553
hash_elements_max 329561
hdr_size 46553328
hits 978241
l2_abort_lowmem 0
l2_cksum_bad 0
l2_evict_lock_retry 0
l2_evict_reading 0
l2_feeds 4738
l2_free_on_write 184
l2_hdr_size 17024784
l2_hits 252839
l2_io_error 0
l2_misses 203767
l2_read_bytes 2071482368
l2_rw_clash 13
l2_size 2632226304
l2_write_bytes 6486009344
l2_writes_done 4127
l2_writes_error 0
l2_writes_hdr_miss 21
l2_writes_sent 4127
memory_throttle_count 0
mfu_ghost_hits 120524
mfu_hits 500516
misses 468227
mru_ghost_hits 61398
mru_hits 412112
mutex_miss 511
other_size 56325712
p 775528448
prefetch_data_hits 50804
prefetch_data_misses 7819
prefetch_metadata_hits 14985
prefetch_metadata_misses 2489
recycle_miss 13096
size 1073830768
snaptime 5306.341302079
module: zfs instance: 0
name: vdev_cache_stats class: misc
crtime 28.08322032
delegations 3153
hits 3246
misses 3297
snaptime 5306.341328862
--
Abdullah Al-Dahlawi
PhD Candidate
George Washington University
Department of Electrical & Computer Engineering
----
Check The Fastest 500 Super Computers Worldwide
http://www.top500.org/list/2009/11/100
On 02 April, 2010 - Abdullah Al-Dahlawi sent me these 128K bytes:

> I ran a workload that reads and writes within 10 files, each 256 MB in
> size, i.e. 10 * 256 MB = 2.5 GB total dataset size.
> [...]
> My file system record size is 8 KB, which means roughly 2.5% of the 8 GB
> device (about 204.8 MB) is consumed inside the ARC by the L2ARC
> directory. That would leave 1024 MB - 204.8 MB = 819.2 MB of usable ARC.
> Am I right?

Seems about right.

> After running the workload for 75 minutes, I noticed that the L2ARC
> device has grown to 6 GB!

No, 6 GB of the area has been touched by copy-on-write; not all of it is
still in use, though.

> What is in the L2ARC beyond my 2.5 GB working set? Something else must
> have been added to it.

[ snip lots of data ]

This is your last one:

> module: zfs      instance: 0
> name:   arcstats class:    misc
>   c              1073741824
>   c_max          1073741824
>   c_min          134217728
[...]
>   l2_size        2632226304
>   l2_write_bytes 6486009344

Roughly 6 GB has been written to the device, and slightly less than
2.5 GB is actually in use.

>   p              775528448

/Tomas
--
Tomas Ögren, stric at acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
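To put numbers on that, here is a quick Python check of the two counters
quoted above. The counter values are copied from the last arcstats sample
in this thread; only the GiB conversion is added:

    # l2_size is the *current* L2ARC footprint; l2_write_bytes is a
    # cumulative counter of everything ever written to the cache device
    # since boot. Values copied from the last arcstats sample above.
    GIB = 1024.0 ** 3

    l2_size        = 2632226304   # bytes currently held in the L2ARC
    l2_write_bytes = 6486009344   # bytes ever written to the L2ARC device

    print("in use now        : %.2f GiB" % (l2_size / GIB))           # ~2.45 GiB
    print("written since boot: %.2f GiB" % (l2_write_bytes / GIB))    # ~6.04 GiB
    print("rewritten or no longer referenced: %.2f GiB"
          % ((l2_write_bytes - l2_size) / GIB))                       # ~3.59 GiB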
Hi Tomas,

Thanks for the clarification. If I understood you right, you mean that
6 GB (including my 2.5 GB of files) has been written to the device and
still occupies space on it. That is fair enough in this case, since most
of my files ended up in the L2ARC. Great. But this brings up two related
questions:

1. What is really in the L2ARC? Is it old copies of my working-set data
   that have since been updated but are still sitting in the L2ARC, or
   something else? Metadata?

2. More importantly, what if my working set were larger than 2.5 GB (say
   5 GB)? I guess my L2ARC device would fill up completely before my
   whole working set made it onto the L2ARC device.

Thanks.

On Sat, Apr 3, 2010 at 4:31 PM, Tomas Ögren <stric at acc.umu.se> wrote:

> No, 6 GB of the area has been touched by copy-on-write; not all of it
> is still in use, though.
>
> Roughly 6 GB has been written to the device, and slightly less than
> 2.5 GB is actually in use.
>
> /Tomas

--
Abdullah Al-Dahlawi
PhD Candidate
George Washington University
Department of Electrical & Computer Engineering
----
Check The Fastest 500 Super Computers Worldwide
http://www.top500.org/list/2009/11/100
On Apr 1, 2010, at 9:41 PM, Abdullah Al-Dahlawi wrote:

> My file system record size is 8 KB, which means roughly 2.5% of the 8 GB
> device (about 204.8 MB) is consumed inside the ARC by the L2ARC
> directory. That would leave 1024 MB - 204.8 MB = 819.2 MB of usable ARC.
> Am I right?

This is the worst case.

> After running the workload for 75 minutes, I noticed that the L2ARC
> device has grown to 6 GB!

You're not interpreting the values properly, see below.

> What is in the L2ARC beyond my 2.5 GB working set? Something else must
> have been added to it.

ZFS is COW, so modified data is written to disk and to the L2ARC.

> Here is zpool iostat at 5-minute intervals:

[snip]

> Also, full ZFS kstat output at 5-minute intervals:

[snip]

> module: zfs      instance: 0
> name:   arcstats class:    misc
>   c              1073741824
>   c_max          1073741824

Max ARC size is limited to 1 GB.

>   c_min          134217728
[...]
>   l2_hdr_size    17024784

Size of the L2ARC headers is approximately 17 MB.

[...]
>   l2_size        2632226304

Currently, there is approximately 2.5 GB in the L2ARC.

>   l2_write_bytes 6486009344

Total amount of data written to the L2ARC since boot is 6+ GB.

[...]
>   size           1073830768

ARC size is 1 GB.

The best way to understand these in detail is to look at the source,
which is nicely commented. The L2ARC design is described near
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/arc.c#3590

-- richard

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
Las Vegas, April 29-30, 2010 http://nexenta-vegas.eventbrite.com
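For anyone who wants to watch these counters move while a workload runs,
here is a small polling sketch. It shells out to the standard kstat(1M)
command and assumes its parseable output format (module:instance:name:statistic,
a tab, then the value); the field names are the same ones shown in the
kstat dumps above:

    #!/usr/bin/env python
    # Poll zfs:0:arcstats and print the L2ARC counters discussed above.
    # A sketch for Solaris/OpenSolaris; relies on "kstat -p" parseable output.
    import subprocess
    import time

    FIELDS = ("size", "l2_size", "l2_write_bytes", "l2_hdr_size",
              "l2_hits", "l2_misses")
    MIB, GIB = 1024.0 ** 2, 1024.0 ** 3

    def arcstats():
        out = subprocess.check_output(["kstat", "-p", "zfs:0:arcstats"]).decode()
        stats = {}
        for line in out.splitlines():
            key, _, value = line.partition("\t")
            name = key.rsplit(":", 1)[-1]    # zfs:0:arcstats:l2_size -> l2_size
            if name in FIELDS:
                stats[name] = int(value)
        return stats

    while True:
        s = arcstats()
        print("ARC %.2f GiB | L2 in use %.2f GiB | L2 written %.2f GiB | "
              "L2 headers %.1f MiB | L2 hits/misses %d/%d"
              % (s["size"] / GIB, s["l2_size"] / GIB, s["l2_write_bytes"] / GIB,
                 s["l2_hdr_size"] / MIB, s["l2_hits"], s["l2_misses"]))
        time.sleep(300)    # 5-minute interval, matching the samples above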
Hi Richard,

Thanks for your comments. OK, ZFS is COW, I understand, but this also
means a waste of valuable space on my L2ARC SSD device: more than 60% of
the space is consumed by COW. I do not get it?

On Sat, Apr 3, 2010 at 11:35 PM, Richard Elling <richard.elling at gmail.com> wrote:

> ZFS is COW, so modified data is written to disk and to the L2ARC.
> [...]
> Currently, there is approximately 2.5 GB in the L2ARC.
> [...]
> Total amount of data written to the L2ARC since boot is 6+ GB.

--
Abdullah Al-Dahlawi
PhD Candidate
George Washington University
Department of Electrical & Computer Engineering
----
Check The Fastest 500 Super Computers Worldwide
http://www.top500.org/list/2009/11/100
On 08 April, 2010 - Abdullah Al-Dahlawi sent me these 12K bytes:

> Thanks for your comments. OK, ZFS is COW, I understand, but this also
> means a waste of valuable space on my L2ARC SSD device: more than 60%
> of the space is consumed by COW. I do not get it?

The rest can and will be used if the L2ARC needs it. It's not wasted;
it's just a number that doesn't match what you think it should be.

/Tomas
--
Tomas Ögren, stric at acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
On Apr 8, 2010, at 3:23 PM, Tomas Ögren wrote:

> On 08 April, 2010 - Abdullah Al-Dahlawi sent me these 12K bytes:
>
>> Thanks for your comments. OK, ZFS is COW, I understand, but this also
>> means a waste of valuable space on my L2ARC SSD device: more than 60%
>> of the space is consumed by COW. I do not get it?
>
> The rest can and will be used if the L2ARC needs it. It's not wasted;
> it's just a number that doesn't match what you think it should be.

Another way to look at it is: all cache space is "wasted" by design. If
the backing store for the cache were performant, there wouldn't be a
cache. So caches waste space to gain performance. Space, dependability,
performance: pick two.

-- richard

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
Las Vegas, April 29-30, 2010 http://nexenta-vegas.eventbrite.com