Hi all,

I ran a Filebench OLTP workload with the following configuration:

ARC max size = 2 GB
L2ARC SSD device size = 32 GB
Working set (dataset) = 10 GB: 10 files, 1 GB each
After running the workload for 6 hours and monitoring kstat, I noticed
that l2_size reached 10 GB, which is great. However, l2_size then
started to drop, all the way down to 7 GB, which means the workload will
go back to the HDD to retrieve data that is no longer on the L2ARC
device.

I understand that the L2ARC size reported by zpool iostat is much larger
because of COW, and that l2_size from kstat is the actual size of the
data held in the L2ARC.

So can anyone tell me why I am losing part of my working set from the
actual l2_size data?
Here is a copy of my kstat and zpool output.
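Output like the samples below can be collected with something along
these lines (the 60-second interval and the <pool> placeholder are
illustrative, not taken from this run):

    # sample the L2ARC payload size from the ARC kstats every 60 seconds
    kstat -p zfs:0:arcstats:l2_size 60

    # in parallel, watch the cache device on the pool under test
    zpool iostat -v <pool> 60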
kstat output (l2_size, in bytes):
l2_size 56832
l2_size 328063488
l2_size 779794944
l2_size 1354787328
l2_size 1930713600
l2_size 2455841280
l2_size 2968873472
l2_size 3490916864
l2_size 3973593600
l2_size 4464867840
l2_size 4936317440
l2_size 5397862912
l2_size 5798283776
l2_size 6284609536
l2_size 6719334400
l2_size 7115446784
l2_size 7478888960
l2_size 7824894464
l2_size 8199109120
l2_size 8547932672
l2_size 8882055680
l2_size 9143912960
l2_size 9405434368
l2_size 9589115392
l2_size 9793055232
l2_size 9947593216
l2_size 10077579776
l2_size 10177542656
l2_size 10236250624
l2_size 10363714048
l2_size 10405505536
l2_size 9461303808
l2_size 9211787776
l2_size 8871764480
l2_size 8693268992
l2_size 8734097920
l2_size 8538903040
l2_size 8259551744
l2_size 7984349696
l2_size 7858135552
l2_size 7729111552
l2_size 7832486400
l2_size 7676416512
l2_size 7613940224
l2_size 7503409664
l2_size 7400632832
l2_size 7296352768
l2_size 7234888192
l2_size 7274947072
l2_size 7197770240
l2_size 7367848448
l2_size 7386595840
l2_size 7368700416
l2_size 7402328576
l2_size 7281926656
l2_size 7201276416
l2_size 7230919168
l2_size 7558078976
l2_size 7546552832
l2_size 7368802816
l2_size 7312437248
l2_size 7202963456
l2_size 7373578240
l2_size 7438184448
l2_size 7240036352
l2_size 7408721920
l2_size 7306350592
l2_size 7216246784
l2_size 7517110272
l2_size 7336427520
l2_size 7386693632
l2_size 7367741440
l2_size 7457832960
l2_size 7296126976
l2_size 7176265728
l2_size 6986084352
l2_size 7133356032
l2_size 7126814720
l2_size 7047786496
l2_size 7396147200
l2_size 7543431168
l2_size 7586426880
l2_size 7466901504
l2_size 7337201664
l2_size 7446921216
l2_size 7418322944
l2_size 7399378944
l2_size 7741894656
l2_size 7599542272
l2_size 7580000256
l2_size 7774928896
l2_size 7613992960
l2_size 7509766144
l2_size 7370416128
l2_size 7292846080
l2_size 7182528512
l2_size 7496609792
l2_size 7328550912
l2_size 7348113408
l2_size 7259160576
l2_size 7421779968
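Converted to decimal gigabytes, the samples above peak at roughly
10.4 GB and then settle around 7.2-7.4 GB. If the captured lines are
saved to a file (l2_size.log is an assumed name), a short awk pass
confirms the peak and the latest value:

    # print the peak and most recent l2_size in GB (1 GB = 10^9 bytes)
    awk '$1 ~ /l2_size/ { if ($2 > max) max = $2; last = $2 }
         END { printf "peak %.1f GB  last %.1f GB\n", max/1e9, last/1e9 }' l2_size.log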
zpool iostat output for the cache device only (columns: alloc, free,
read ops, write ops, read bandwidth, write bandwidth):
c8t1d0 216K 29.8G 10 25 353K 622K
c8t1d0 325M 29.5G 0 16 0 1.50M
c8t1d0 834M 29.0G 0 23 0 2.26M
c8t1d0 1.39G 28.4G 0 26 0 2.65M
c8t1d0 1.99G 27.8G 2 27 19.5K 2.72M
c8t1d0 2.57G 27.2G 12 26 101K 2.66M
c8t1d0 3.19G 26.6G 22 27 180K 2.83M
c8t1d0 3.85G 25.9G 36 29 291K 3.02M
c8t1d0 4.41G 25.4G 60 25 483K 2.58M
c8t1d0 5.05G 24.7G 71 28 567K 2.95M
c8t1d0 5.74G 24.1G 85 30 677K 3.20M
c8t1d0 6.41G 23.4G 102 29 812K 3.04M
c8t1d0 7.22G 22.6G 140 35 1.09M 3.70M
c8t1d0 7.99G 21.8G 150 34 1.16M 3.51M
c8t1d0 8.78G 21.0G 170 35 1.32M 3.64M
c8t1d0 9.61G 20.2G 233 39 1.82M 3.89M
c8t1d0 10.5G 19.3G 253 41 1.97M 4.29M
c8t1d0 11.5G 18.3G 304 45 2.37M 4.42M
c8t1d0 12.6G 17.2G 334 49 2.58M 4.97M
c8t1d0 13.7G 16.1G 380 49 2.94M 4.99M
c8t1d0 14.9G 14.9G 455 52 3.53M 5.43M
c8t1d0 16.2G 13.6G 464 55 3.58M 5.83M
c8t1d0 17.5G 12.3G 532 58 4.12M 6.04M
c8t1d0 18.8G 11.0G 530 55 4.11M 5.82M
c8t1d0 19.9G 9.85G 499 50 3.85M 5.49M
c8t1d0 21.2G 8.58G 574 54 4.45M 5.96M
c8t1d0 22.5G 7.31G 561 53 4.36M 5.86M
c8t1d0 23.9G 5.90G 619 59 4.81M 6.50M
c8t1d0 25.4G 4.39G 733 63 5.71M 7.03M
c8t1d0 27.0G 2.81G 721 65 5.61M 7.25M
c8t1d0 27.9G 1.86G 809 40 6.30M 4.38M
c8t1d0 29.1G 653M 857 57 6.68M 5.76M
c8t1d0 29.8G 7.49M 660 34 5.14M 3.41M
c8t1d0 29.8G 7.99M 580 42 4.52M 4.31M
c8t1d0 29.8G 8M 534 47 4.16M 4.88M
c8t1d0 29.8G 7.99M 536 40 4.17M 4.14M
c8t1d0 29.8G 8M 459 28 3.58M 2.75M
c8t1d0 29.8G 7.94M 396 26 3.08M 2.54M
c8t1d0 29.8G 8M 389 28 3.03M 2.73M
c8t1d0 29.8G 6.67M 379 27 2.95M 2.61M
c8t1d0 29.8G 8M 372 37 2.90M 3.71M
c8t1d0 29.8G 6.97M 348 26 2.72M 2.53M
c8t1d0 29.8G 2.88M 374 30 2.91M 3.00M
c8t1d0 29.8G 7.95M 343 25 2.66M 2.40M
c8t1d0 29.8G 5.40M 345 27 2.68M 2.70M
c8t1d0 29.8G 7.42M 319 23 2.48M 2.26M
c8t1d0 29.8G 6.95M 317 23 2.47M 2.20M
c8t1d0 29.8G 7.64M 302 32 2.35M 3.17M
c8t1d0 29.8G 6.92M 306 26 2.38M 2.38M
c8t1d0 29.8G 4.16M 339 32 2.64M 3.19M
c8t1d0 29.8G 7.75M 333 37 2.59M 3.69M
c8t1d0 29.8G 7.98M 320 24 2.49M 2.31M
c8t1d0 29.8G 6.48M 340 29 2.65M 2.90M
c8t1d0 29.8G 7.27M 320 23 2.50M 2.23M
c8t1d0 29.8G 7.99M 298 22 2.32M 2.12M
c8t1d0 29.8G 8M 315 36 2.45M 3.56M
c8t1d0 29.8G 7.98M 358 48 2.78M 4.81M
c8t1d0 29.8G 7.99M 333 23 2.60M 2.20M
c8t1d0 29.8G 7.95M 334 24 2.60M 2.29M
c8t1d0 29.8G 7.54M 312 24 2.43M 2.31M
c8t1d0 29.8G 4.28M 305 23 2.38M 2.19M
c8t1d0 29.8G 3.43M 334 42 2.60M 4.17M
c8t1d0 29.8G 1.25M 328 25 2.56M 2.43M
c8t1d0 29.8G 5.91M 296 21 2.31M 2.00M
c8t1d0 29.8G 4.95M 312 27 2.43M 2.66M
c8t1d0 29.8G 7.95M 320 28 2.49M 2.70M
c8t1d0 29.8G 7.23M 332 33 2.58M 3.27M
c8t1d0 29.8G 5.53M 323 27 2.51M 2.63M
c8t1d0 29.8G 8M 345 40 2.69M 3.97M
c8t1d0 29.8G 8M 345 26 2.69M 2.48M
c8t1d0 29.8G 8M 314 23 2.44M 2.22M
c8t1d0 29.8G 8M 328 26 2.55M 2.55M
c8t1d0 29.8G 7.93M 305 22 2.38M 2.12M
c8t1d0 29.8G 7.99M 293 17 2.28M 1.54M
c8t1d0 29.8G 4.10M 290 34 2.26M 3.36M
c8t1d0 29.8G 8M 286 22 2.23M 2.02M
c8t1d0 29.8G 8M 296 26 2.31M 2.47M
c8t1d0 29.8G 7.73M 304 39 2.37M 3.92M
c8t1d0 29.8G 8M 358 44 2.79M 4.36M
c8t1d0 29.8G 6.31M 336 25 2.61M 2.42M
c8t1d0 29.8G 7.33M 344 27 2.68M 2.66M
c8t1d0 29.8G 7.01M 319 24 2.49M 2.25M
c8t1d0 29.8G 7.98M 333 34 2.58M 3.35M
c8t1d0 29.8G 8M 330 28 2.57M 2.75M
c8t1d0 29.8G 8M 318 24 2.47M 2.35M
c8t1d0 29.8G 7.57M 362 46 2.81M 4.69M
c8t1d0 29.8G 7.84M 374 28 2.91M 2.71M
c8t1d0 29.8G 7.74M 370 30 2.88M 3.02M
c8t1d0 29.8G 6.82M 356 38 2.77M 3.80M
c8t1d0 29.8G 7.79M 351 24 2.73M 2.36M
c8t1d0 29.8G 8M 342 26 2.66M 2.48M
c8t1d0 29.8G 7.99M 332 25 2.59M 2.37M
c8t1d0 29.8G 5.20M 307 23 2.39M 2.18M
c8t1d0 29.8G 4.72M 308 23 2.40M 2.17M
c8t1d0 29.8G 8M 325 37 2.52M 3.79M
c8t1d0 29.8G 0 316 27 2.46M 2.63M
c8t1d0 29.8G 832K 330 28 2.57M 2.75M
c8t1d0 29.8G 4.75M 327 25 2.55M 2.44M
c8t1d0 29.8G 7.98M 333 39 2.59M 3.88M
Thanks
--
Abdullah Al-Dahlawi
PhD Candidate
George Washington University
Department of Electrical & Computer Engineering
----
Check The Fastest 500 Super Computers Worldwide
http://www.top500.org/list/2009/11/100

On 09 April, 2010 - Abdullah Al-Dahlawi sent me these 27K bytes:

> So can anyone tell me why I am losing part of my working set from the
> actual l2_size data?

Maybe the data in the L2ARC was invalidated because the original data
was rewritten?

/Tomas
--
Tomas Ögren, stric at acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
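
One way to test that theory is to watch the L2ARC invalidation counters
next to l2_size. The OpenSolaris arcstats include a counter for buffers
whose L2ARC copy is dropped because the primary copy was rewritten; the
counter names below are from memory, so check kstat -p zfs:0:arcstats
for the exact set on your build:

    # l2_free_on_write should climb while l2_size sags if rewrites are
    # invalidating cached buffers faster than the feed thread replaces them
    kstat -p zfs:0:arcstats:l2_size \
          zfs:0:arcstats:l2_free_on_write \
          zfs:0:arcstats:l2_hits \
          zfs:0:arcstats:l2_misses 60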

Hi Tomas,

I understand from a previous post
(http://www.mail-archive.com/zfs-discuss at opensolaris.org/msg36914.html)
that when data gets invalidated, it is the L2ARC size shown by zpool
iostat that keeps changing (always growing, because of COW), not the
actual size shown by kstat, which represents the size of the up-to-date
data in the L2ARC.

My only conclusion for this fluctuation in the kstat l2_size is that the
data has indeed been invalidated and did not make it back to the L2ARC
from the tail of the ARC. Am I right?

On Fri, Apr 9, 2010 at 4:33 PM, Tomas Ögren <stric at acc.umu.se> wrote:

> Maybe the data in the L2ARC was invalidated because the original data
> was rewritten?
>
> /Tomas

--
Abdullah Al-Dahlawi
PhD Candidate
George Washington University
Department of Electrical & Computer Engineering
----
Check The Fastest 500 Super Computers Worldwide
http://www.top500.org/list/2009/11/100
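
If invalidated blocks are simply not being re-fed fast enough, the feed
thread's defaults are worth a look: the write bandwidth column for
c8t1d0 above rarely exceeds about 8 MB/s, which is consistent with the
default l2arc_write_max of 8 MB per feed cycle. A minimal sketch of
raising it on OpenSolaris, assuming the usual tunable name (verify
against your build before relying on it):

    # /etc/system -- takes effect after a reboot; lets the L2ARC feed
    # thread write up to 64 MB per cycle instead of the default 8 MB
    set zfs:l2arc_write_max = 0x4000000

    # or patch the running kernel with mdb (same caveat about the name)
    echo "l2arc_write_max/Z 0x4000000" | mdb -kw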

On 09 April, 2010 - Abdullah Al-Dahlawi sent me these 5,3K bytes:

> My only conclusion for this fluctuation in the kstat l2_size is that
> the data has indeed been invalidated and did not make it back to the
> L2ARC from the tail of the ARC. Am I right?

Sounds plausible.

/Tomas
--
Tomas Ögren, stric at acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se