>
> What does vmstat look like?
> Also zpool iostat 1.
>
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
tank         291M  9.65G      0     11   110K   694K
tank         301M  9.64G      0     32      0  87.9K
tank         301M  9.64G      0      0      0      0
tank         301M  9.64G     31      0  3.96M      0
tank         301M  9.64G      0     88      0  4.91M
tank         311M  9.63G     16     77  2.05M  2.64M
tank         311M  9.63G     31      0  3.88M      0
tank         311M  9.63G      0      0      0      0
tank         311M  9.63G     31     62  3.96M  3.88M
tank         321M  9.62G     15    101  1.90M  3.08M
tank         321M  9.62G      0      0      0      0
tank         321M  9.62G     31      0  3.96M      0
tank         321M  9.62G      0     88      0  4.47M
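For what it's worth, the write-bandwidth column of those 1-second samples can be averaged with a throwaway bit of awk that normalizes the K/M suffixes (the `bw.out` file below is just the column values copied out of the output above, treating K and M as binary units):

```shell
# Write-bandwidth values copied from the zpool iostat samples above.
cat <<'EOF' > bw.out
694K 87.9K 0 0 4.91M 2.64M 0 0 3.88M 3.08M 0 0 4.47M
EOF
# One value per line, convert K -> MB, then average. awk's numeric
# coercion strips the trailing suffix ("694K" -> 694).
tr ' ' '\n' < bw.out | awk '
  NF == 0 { next }
  /K$/    { sum += $1 / 1024; n++; next }
  /M$/    { sum += $1; n++; next }
          { sum += $1; n++ }
  END     { printf "avg write: %.2f MB/s\n", sum / n }'
```

Over these thirteen samples that works out to roughly 1.5 MB/s sustained, bursting to ~4–5 MB/s.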
 kthr      memory           page             disk        faults       cpu
 r b w     swap   free re  mf  pi po fr de sr  dd s1 -- --  in   sy   cs us sy id
 0 0 0  8395576  67320  0  69 224  0  0  0  0 104  0  0  0 578 3463 2210 16 17 67
13 0 0  8395456  67192  1 109  16  0  0  0  0  70  0  0  0 466 1176 1055  7 73 20
 0 0 0  8395416  67112  0  21  16  0  0  0  0   2  0  0  0 327  809  452  2  2 96
 0 0 0  8395416  67112  0   3   0  0  0  0  0   0  0  0  0 370 1947  818  6  4 90
 0 0 0  8395416  67112  0   2   0  0  0  0  0   0  0  0  0 306 1358  672  8  3 89
 0 0 0  8395416  67112  0   4   0  0  0  0  0   0  0  0  0 338  822  409  1  1 98
 1 0 0  8395416  67112  0  10   0  0  0  0  0   0  0  0  0 320 3152 1415 20  8 72
 0 0 0  8396568  68200  0  16   0  0  0  0  0  12  0  0  0 381 1273  633  5  5 90
 0 0 0  8396568  68200  0   6   8  0  0  0  0   1  0  0  0 320 1613  620  4  3 93
 0 0 0  8396568  68192  0   0   0  0  0  0  0   0  0  0  0 352 1198  595  5  2 93
 0 0 0  8396568  68192  0   1   0  0  0  0  0   0  0  0  0 292  843  413  2  2 96
 0 0 0  8396568  68192  0   0   0  0  0  0  0   0  0  0  0 343  818  405  1  1 98
 0 0 0  8396568  68192  0   0   0  0  0  0  0   0  0  0  0 308  803  412  1  1 98
 0 0 0  8396568  68192  0   0   0  0  0  0  0   0  0  0  0 345 1236  471  2  3 95
 0 0 0  8396568  68192  0   0   0  0  0  0  0   0  0  0  0 296 1570  709  6  2 92
 0 0 0  8396568  68192 13 142   0  0  0  0  0   0  0  0  0 380 3134 1182 14  6 80
 0 0 0  8396568  68192  0   4   8  0  0  0  0   1  0  0  0 301 1034  536  5  4 91
 0 0 0  8396568  68184  0   0   0  0  0  0  0   0  0  0  0 343  811  417  1  2 97
 0 0 0  8396568  68184  0   0   0  0  0  0  0   0  0  0  0 310 1220  452  1  2 97
 kthr      memory           page             disk        faults       cpu
 r b w     swap   free re  mf  pi po fr de sr  dd s1 -- --  in   sy   cs us sy id
 0 0 0  8396568  68176  0   0   0  0  0  0  0   1  0  0  0 373 1715  651  4  2 94
 0 0 0  8396568  68176  0   0   0  0  0  0  0   0  0  0  0 336 1739  647  3  2 95
 0 0 0  8396160  67272 51 334 565  0  0  0  0  60  0  0  0 558 4029 1651 10 14 76
 0 0 0  8396776  68184  3  99   0  0  0  0  0   0  0  0  0 357 1204  577  4  3 93
 0 0 0  8396776  68184  0   8   8  0  0  0  0   1  0  0  0 356 3497 1353 16  7 77
 0 0 0  8396776  68176  0   0   0  0  0  0  0   0  0  0  0 311 1128  477  2  1 97
 0 0 0  8396776  68176  0   6   0  0  0  0  0   0  0  0  0 357 1259  518  3  2 95
 0 0 0  8396776  68176  0   1   0  0  0  0  0   0  0  0  0 312 1166  495  2  1 97
 0 0 0  8396776  68176  0  50  71  0  0  0  0   9  0  0  0 366 1207  540 25  3 72
> Do you have any disk-based swap?
>
Yes, there is an 8GB swap partition on the system and 2GB of RAM.
> One best practice we probably will be coming out with is to
> configure at least physmem of swap with ZFS (at least as of
> this release).
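As a quick sanity check of that recommendation, configured swap can be compared against physical memory with `swap -l` (sizes are in 512-byte blocks) and `prtconf`. A sketch over a hypothetical saved `swap -l` listing — the device name and block count below are made up for illustration; on a live box you would pipe `swap -l` straight into the awk:

```shell
# Hypothetical `swap -l` output (device name and sizes invented for the
# example). Blocks are 512 bytes, so MB = blocks / 2048.
cat <<'EOF' > swap.out
swapfile             dev  swaplo   blocks     free
/dev/dsk/c0d0s1     102,1      8 16776952 16776952
EOF
# Sum the blocks column (skipping the header) and convert to megabytes.
awk 'NR > 1 { blocks += $4 } END { printf "swap: %d MB\n", blocks / 2048 }' swap.out
```

With `prtconf | grep Memory` reporting physical memory, you can then see at a glance whether swap >= physmem.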
>
> The partly hung system could be this :
>
> http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6429205
> 6429205 each zpool needs to monitor its throughput and throttle heavy writers
>
> The fix state is "in-progress".
>
I will look at this.
> What throughput do you get for the full untar
> (untarred size / elapsed time)?
# tar xf thunderbird-1.5.0.4-source.tar  2.77s user 35.36s system 33% cpu 1:54.19
260M / 114s =~ 2.28 MB/s on this IDE disk.
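That arithmetic checks out: 1:54.19 elapsed is about 114 seconds, so for the 260 MB tree:

```shell
# Untarred size / elapsed time: 260 MB over 1:54.19 (= 114.19 s).
awk 'BEGIN { printf "%.2f MB/s\n", 260 / (1 * 60 + 54.19) }'
```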
This message posted from opensolaris.org