Having recently upgraded from snv_57 to snv_73, I've noticed some
strange behaviour with the -v option to zpool iostat.
Without the -v option, things look reasonable on an idle pool:
bash-3.00# zpool iostat 1
            capacity   operations   bandwidth
pool        used avail  read write  read write
---------- ----- ----- ----- ----- ----- -----
raidpool    165G 2.92T   109    36 12.5M 3.74M
raidpool    165G 2.92T     0     0     0     0
raidpool    165G 2.92T     0     0     0     0
raidpool    165G 2.92T     0     0     0     0
raidpool    165G 2.92T     0     0     0     0
raidpool    165G 2.92T     0     0     0     0
raidpool    165G 2.92T     0     0     0     0
bash-3.00# iostat -x 1
                 extended device statistics
device     r/s    w/s   kr/s    kw/s wait actv  svc_t  %w  %b
cmdk0      0.0    0.0    0.0     0.0  0.0  0.0    0.0   0   0
sd1        0.0    0.0    0.0     0.0  0.0  0.0    0.0   0   0
sd2        0.0    0.0    0.0     0.0  0.0  0.0    0.0   0   0
sd3        0.0    0.0    0.0     0.0  0.0  0.0    0.0   0   0
sd4        0.0    0.0    0.0     0.0  0.0  0.0    0.0   0   0
sd7        0.0    0.0    0.0     0.0  0.0  0.0    0.0   0   0
sd8        0.0    0.0    0.0     0.0  0.0  0.0    0.0   0   0
sd9        0.0    0.0    0.0     0.0  0.0  0.0    0.0   0   0
sd10       0.0    0.0    0.0     0.0  0.0  0.0    0.0   0   0
sd11       0.0    0.0    0.0     0.0  0.0  0.0    0.0   0   0
sd12       0.0    0.0    0.0     0.0  0.0  0.0    0.0   0   0
sd13       0.0    0.0    0.0     0.0  0.0  0.0    0.0   0   0
sd14       0.0    0.0    0.0     0.0  0.0  0.0    0.0   0   0
But with the -v option, the same idle pool appears to generate a constant
mix of reads and writes:
bash-3.00# zpool iostat -v 1
            capacity   operations   bandwidth
pool        used avail  read write  read write
---------- ----- ----- ----- ----- ----- -----
raidpool    165G 2.92T   111    35 12.7M 3.80M
  raidz1   55.4G 1.03T    37    11 4.27M 1.27M
    c1d0       -     -    20     7 1.41M  448K
    c2d0       -     -    20     7 1.40M  448K
    c3d0       -     -    20     7 1.41M  448K
    c4d0       -     -    20     6 1.40M  448K
  raidz1   54.9G 1.03T    37    11 4.23M 1.27M
    c7t0d0     -     -    19     6 1.39M  434K
    c7t1d0     -     -    19     6 1.39M  434K
    c7t6d0     -     -    20     7 1.40M  434K
    c7t7d0     -     -    20     6 1.40M  434K
  raidz1   54.8G  873G    37    11 4.22M 1.27M
    c7t2d0     -     -    19     7 1.39M  434K
    c7t3d0     -     -    20     8 1.39M  434K
    c7t4d0     -     -    20     8 1.40M  434K
    c7t5d0     -     -    20     6 1.40M  433K
---------- ----- ----- ----- ----- ----- -----
            capacity   operations   bandwidth
pool        used avail  read write  read write
---------- ----- ----- ----- ----- ----- -----
raidpool    165G 2.92T   108    35 12.3M 3.68M
  raidz1   55.4G 1.03T    36    11 4.13M 1.23M
    c1d0       -     -    19     7 1.36M  437K
    c2d0       -     -    19     7 1.36M  437K
    c3d0       -     -    19     7 1.36M  437K
    c4d0       -     -    20     6 1.36M  437K
  raidz1   54.9G 1.03T    36    11 4.09M 1.23M
    c7t0d0     -     -    19     6 1.35M  421K
    c7t1d0     -     -    19     6 1.35M  421K
    c7t6d0     -     -    19     7 1.36M  421K
    c7t7d0     -     -    19     6 1.36M  420K
  raidz1   54.8G  873G    36    11 4.09M 1.23M
    c7t2d0     -     -    19     7 1.35M  421K
    c7t3d0     -     -    19     8 1.35M  421K
    c7t4d0     -     -    19     8 1.35M  421K
    c7t5d0     -     -    20     6 1.35M  420K
---------- ----- ----- ----- ----- ----- -----
and so on. This also shows up in the standard iostat output, and there is
certainly audible disk activity on the machine:
                 extended device statistics
device     r/s    w/s   kr/s    kw/s wait actv  svc_t  %w  %b
cmdk0      0.0   10.0    0.0    79.6  0.0  0.0    0.2   0   0
sd1        0.0  175.1    0.0   265.7  0.0  0.0    0.2   0   3
sd2        0.0  175.1    0.0   265.7  0.0  0.0    0.2   0   3
sd3        0.0  175.1    0.0   265.7  0.0  0.0    0.2   0   2
sd4        0.0  173.1    0.0   265.7  0.0  0.0    0.2   0   3
sd7        0.0  178.1    0.0   267.7  0.0  0.1    0.6   0   5
sd8        0.0  128.1    0.0   214.7  0.0  0.0    0.2   0   2
sd9        0.0  188.1    0.0   267.7  0.0  0.0    0.3   0   3
sd10       0.0  128.1    0.0   214.7  0.0  0.0    0.2   0   2
sd11       0.0  196.1    0.0  1605.7  0.0  0.0    0.2   1   3
sd12       4.0  197.1   17.0  1605.7  0.0  0.0    0.2   1   3
sd13       4.0  203.2   17.0  1606.7  0.1  0.1    0.8   4   7
sd14       4.0  143.1   17.0  1554.2  0.1  0.1    1.0   5   7
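I haven't yet tracked down what is actually issuing this I/O. In case it
helps anyone reproduce or narrow this down, something along the lines of
the following DTrace one-liner (run while zpool iostat -v 1 is going)
should show which processes are driving I/O to which devices; the choice
of aggregation keys here is just a guess at what would be useful:

bash-3.00# dtrace -n 'io:::start { @[execname, args[1]->dev_statname] = count(); }'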
The zpool iostat -v 1 output is also very slow, especially when I/O is
being done to the pool, and the values of the statistics don't change.
For example, during a big write from /dev/zero (a representative command
is shown after the output below):
            capacity   operations   bandwidth
pool        used avail  read write  read write
---------- ----- ----- ----- ----- ----- -----
raidpool    169G 2.91T   106    40 12.1M 3.81M
  raidz1   56.7G 1.03T    35    13 4.04M 1.27M
    c1d0       -     -    19     8 1.33M  464K
    c2d0       -     -    19     8 1.33M  464K
    c3d0       -     -    19     8 1.33M  464K
    c4d0       -     -    19     7 1.33M  463K
  raidz1   56.2G 1.03T    35    13 4.01M 1.27M
    c7t0d0     -     -    18     8 1.32M  436K
    c7t1d0     -     -    18     8 1.32M  436K
    c7t6d0     -     -    19     8 1.33M  436K
    c7t7d0     -     -    19     7 1.33M  435K
  raidz1   56.1G  872G    35    13 4.00M 1.27M
    c7t2d0     -     -    18     8 1.32M  437K
    c7t3d0     -     -    18    10 1.32M  437K
    c7t4d0     -     -    19    10 1.33M  436K
    c7t5d0     -     -    19     7 1.32M  435K
---------- ----- ----- ----- ----- ----- -----
whereas zpool iostat 1 shows the writes nicely:
bash-3.00# zpool iostat 1
            capacity   operations   bandwidth
pool        used avail  read write  read write
---------- ----- ----- ----- ----- ----- -----
raidpool    167G 2.92T   105    53 12.0M 5.32M
raidpool    167G 2.92T     0 2.62K     0  331M
raidpool    167G 2.92T     0 3.97K     0  496M
raidpool    167G 2.92T     0 3.73K     0  474M
raidpool    169G 2.91T     0   531     0 51.2M
raidpool    169G 2.91T     0 3.94K     0  499M
raidpool    169G 2.91T     0 3.88K     0  491M
raidpool    169G 2.91T     3 2.71K 8.85K  325M
raidpool    170G 2.91T     0 1.24K     0  157M
raidpool    170G 2.91T     0 3.91K     0  496M
raidpool    170G 2.91T     0 3.67K     0  464M
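For reference, the "big write" mentioned above was just a streaming write
from /dev/zero into a file on the pool, something like the following (the
file name and size are only illustrative):

bash-3.00# dd if=/dev/zero of=/raidpool/bigfile bs=1024k count=10240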
This is certainly different from the snv_57 behaviour, and it was the same
after I had upgraded the pool to version 8. Has anyone else seen this on
their systems?
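For what it's worth, the pool upgrade itself was just the standard command,
i.e. something like:

bash-3.00# zpool upgrade raidpool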
Cheers,
Alan