I have a 5-drive raidz2 pool with an iSCSI share on it. While
backing up a MacOS drive to it I noticed some very strange access
patterns, and wanted to know whether what I am seeing is normal or not.
There are times when all five drives are accessed equally, and there
are times when only three of them are seeing any load.
Is this normal? A side effect of iSCSI? A problem?
charles at home-sun:~$ uname -a
SunOS home-sun 5.11 snv_101a i86pc i386 i86pc Solaris
charles at home-sun:~$ zfs get all main_pool/iscsi/cdm_mac
NAME PROPERTY VALUE SOURCE
main_pool/iscsi/cdm_mac type volume -
main_pool/iscsi/cdm_mac creation Sun Nov 16 10:54 2008 -
main_pool/iscsi/cdm_mac used 245G -
main_pool/iscsi/cdm_mac available 2.16T -
main_pool/iscsi/cdm_mac referenced 245G -
main_pool/iscsi/cdm_mac compressratio 1.04x -
main_pool/iscsi/cdm_mac reservation none default
main_pool/iscsi/cdm_mac volsize 1T -
main_pool/iscsi/cdm_mac volblocksize 8K -
main_pool/iscsi/cdm_mac checksum on default
main_pool/iscsi/cdm_mac compression on local
main_pool/iscsi/cdm_mac readonly off default
main_pool/iscsi/cdm_mac shareiscsi on inherited from main_pool/iscsi
main_pool/iscsi/cdm_mac copies 1 default
main_pool/iscsi/cdm_mac refreservation none default
main_pool/iscsi/cdm_mac primarycache all default
main_pool/iscsi/cdm_mac secondarycache all default
main_pool/iscsi/cdm_mac usedbysnapshots 0 -
main_pool/iscsi/cdm_mac usedbydataset 245G -
main_pool/iscsi/cdm_mac usedbychildren 0 -
main_pool/iscsi/cdm_mac usedbyrefreservation 0 -
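(For reference, a zvol with the properties above would have been created with roughly the following commands; this is a reconstruction from the property listing, not a transcript of what was actually run:)

zfs create -V 1T -o volblocksize=8k main_pool/iscsi/cdm_mac
zfs set compression=on main_pool/iscsi/cdm_mac
zfs set shareiscsi=on main_pool/iscsi   # cdm_mac inherits shareiscsi from the parent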
charles at home-sun:~$ zpool iostat -v main_pool 3
capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
main_pool 852G 3.70T 280 967 2.93M 7.73M
raidz2 852G 3.70T 280 967 2.93M 7.73M
c5t5d0 - - 142 377 712K 2.69M
c5t3d0 - - 142 377 714K 2.69M
c5t4d0 - - 143 376 714K 2.69M
c5t2d0 - - 145 377 712K 2.69M
c5t1d0 - - 144 378 713K 2.69M
---------- ----- ----- ----- ----- ----- -----
capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
main_pool 852G 3.70T 361 1.30K 2.78M 10.1M
raidz2 852G 3.70T 361 1.30K 2.78M 10.1M
c5t5d0 - - 180 502 1.25M 3.57M
c5t3d0 - - 205 330 1.30M 2.73M
c5t4d0 - - 239 489 1.43M 2.81M
c5t2d0 - - 205 17 1.25M 26.1K
c5t1d0 - - 248 13 1.41M 25.1K
---------- ----- ----- ----- ----- ----- -----
capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
main_pool 852G 3.70T 10 2.02K 77.7K 15.8M
raidz2 852G 3.70T 10 2.02K 77.7K 15.8M
c5t5d0 - - 2 921 109K 6.52M
c5t3d0 - - 9 691 108K 5.63M
c5t4d0 - - 9 962 105K 5.97M
c5t2d0 - - 9 1.30K 167K 8.50M
c5t1d0 - - 2 1.23K 150K 8.54M
---------- ----- ----- ----- ----- ----- -----
capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
main_pool 852G 3.70T 306 1.11K 2.36M 8.61M
raidz2 852G 3.70T 306 1.11K 2.36M 8.61M
c5t5d0 - - 216 332 1.27M 2.06M
c5t3d0 - - 198 562 1.21M 2.94M
c5t4d0 - - 173 437 1.18M 2.52M
c5t2d0 - - 160 13 1.06M 26.1K
c5t1d0 - - 168 15 1.08M 26.9K
---------- ----- ----- ----- ----- ----- -----
charles at home-sun:~$ iostat -Xn 3
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c4t0d0
0.0 0.0 0.2 0.0 0.0 0.0 0.0 2.2 0 0 c5t0d0
142.8 373.3 704.1 2719.5 4.1 1.4 8.0 2.8 19 21 c5t1d0
143.9 373.0 703.6 2719.7 3.9 1.6 7.6 3.0 18 21 c5t2d0
140.9 372.4 705.1 2719.7 12.2 6.0 23.8 11.8 56 60 c5t3d0
141.4 371.8 705.4 2720.0 12.1 6.3 23.6 12.4 57 61 c5t4d0
140.6 372.4 702.8 2719.7 12.1 6.5 23.5 12.6 57 61 c5t5d0
5.0 0.9 355.5 3.7 0.1 0.0 16.0 1.8 1 1 c3t0d0
2.3 2.6 176.4 110.4 0.1 0.0 18.4 1.8 1 1 c3t1d0
1.8 2.2 140.0 109.9 0.1 0.0 31.7 1.9 1 1 c3t2d0
1.4 1.9 112.4 109.4 0.2 0.0 47.7 2.0 1 1 c3t3d0
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c4t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c5t0d0
1.0 1071.5 43.9 8101.0 14.4 0.4 13.4 0.4 43 43 c5t1d0
1.3 1219.6 44.6 8144.5 15.6 0.5 12.8 0.4 47 47 c5t2d0
0.0 962.5 0.0 6174.6 34.0 1.0 35.3 1.0 100 100 c5t3d0
0.0 591.4 0.0 3460.8 34.0 1.0 57.5 1.7 100 100 c5t4d0
0.0 846.8 0.0 5818.8 32.0 3.0 37.8 3.5 100 100 c5t5d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c3t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c3t1d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c3t2d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c3t3d0
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c4t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c5t0d0
13.0 0.0 39.0 0.0 0.0 0.0 0.0 0.8 0 1 c5t1d0
0.3 0.0 0.8 0.0 0.0 0.0 0.0 0.1 0 0 c5t2d0
15.3 300.7 100.3 2311.1 10.7 0.4 33.9 1.1 34 35 c5t3d0
0.0 514.4 0.0 3572.0 34.0 1.0 66.1 1.9 100 100 c5t4d0
14.0 360.4 76.0 2705.1 16.8 1.6 44.9 4.4 53 55 c5t5d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c3t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c3t1d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c3t2d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c3t3d0
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c4t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c5t0d0
205.2 20.2 1295.9 56.7 0.4 0.1 1.7 0.7 14 15 c5t1d0
161.9 20.9 1186.6 58.1 0.2 0.2 1.1 1.3 9 13 c5t2d0
159.3 18.3 1080.3 45.8 0.2 0.1 1.1 0.8 9 15 c5t3d0
161.5 301.9 1167.0 1477.7 17.4 0.7 37.5 1.4 64 65 c5t4d0
201.4 18.9 1245.0 46.2 0.1 0.2 0.6 0.9 8 17 c5t5d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c3t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c3t1d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c3t2d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c3t3d0
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c4t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c5t0d0
0.7 1300.0 23.0 8263.1 16.7 0.5 12.9 0.4 50 50 c5t1d0
1.0 1124.4 45.2 8353.2 14.6 0.4 13.0 0.4 44 44 c5t2d0
0.0 1021.1 0.0 6676.0 33.9 1.0 33.2 1.0 100 100 c5t3d0
0.0 1017.3 0.0 6769.4 33.9 1.0 33.4 1.0 100 100 c5t4d0
0.0 769.6 0.0 5308.2 33.9 1.0 44.1 1.3 100 100 c5t5d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c3t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c3t1d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c3t2d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 c3t3d0
Thanks,
Charles
On Fri, Nov 21, 2008 at 14:35, Charles Menser <charles.menser at gmail.com> wrote:
> I have a 5-drive raidz2 pool with an iSCSI share on it. While
> backing up a MacOS drive to it I noticed some very strange access
> patterns, and wanted to know whether what I am seeing is normal or not.
>
> There are times when all five drives are accessed equally, and there
> are times when only three of them are seeing any load.

What does "zpool status" say? How are the drives connected? To what
controller(s)?

This could just be some degree of asynchronicity showing up. Take a
look at these two:

capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
main_pool 852G 3.70T 361 1.30K 2.78M 10.1M
raidz2 852G 3.70T 361 1.30K 2.78M 10.1M
c5t5d0 - - 180 502 1.25M 3.57M
c5t3d0 - - 205 330 1.30M 2.73M
c5t4d0 - - 239 489 1.43M 2.81M
c5t2d0 - - 205 17 1.25M 26.1K
c5t1d0 - - 248 13 1.41M 25.1K
---------- ----- ----- ----- ----- ----- -----

capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
main_pool 852G 3.70T 10 2.02K 77.7K 15.8M
raidz2 852G 3.70T 10 2.02K 77.7K 15.8M
c5t5d0 - - 2 921 109K 6.52M
c5t3d0 - - 9 691 108K 5.63M
c5t4d0 - - 9 962 105K 5.97M
c5t2d0 - - 9 1.30K 167K 8.50M
c5t1d0 - - 2 1.23K 150K 8.54M
---------- ----- ----- ----- ----- ----- -----

For c5t5d0, a total of 3.57+6.52 MB of IO happens: 10.09 MB;
for c5t3d0, a total of 2.73+5.63 MB of IO happens: 8.36 MB;
for c5t4d0, a total of 2.81+5.97 MB of IO happens: 8.78 MB;
for c5t2d0, a total of (~0)+8.50 MB of IO happens: 8.50 MB;
and for c5t1d0, a total of (~0)+8.54 MB of IO happens: 8.54 MB.

So over time, the amount written to each drive is approximately the
same. This being the case, I don't think I'd worry about it too
much... but a scrub is a fairly cheap way to get peace of mind.

Will
The drives are all connected to the motherboard's (Intel S3210SHLX)
SATA ports.
I've scrubbed the pool several times in the last two days with no errors:
charles at home-sun:~# zpool status -v
pool: main_pool
state: ONLINE
scrub: none requested
config:
NAME STATE READ WRITE CKSUM
main_pool ONLINE 0 0 0
raidz2 ONLINE 0 0 0
c5t5d0 ONLINE 0 0 0
c5t3d0 ONLINE 0 0 0
c5t4d0 ONLINE 0 0 0
c5t2d0 ONLINE 0 0 0
c5t1d0 ONLINE 0 0 0
errors: No known data errors
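(The scrubs themselves were started and checked with the usual commands, roughly:)

zpool scrub main_pool
zpool status -v main_pool   # shows scrub progress/results and any data errors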
I appreciate your feedback; I had not thought to aggregate the stats
across intervals and compare the totals.
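If anyone wants to automate that kind of check, something along these lines should work (a rough sketch only: the awk field positions assume the "iostat -xn" column layout shown above, the first sample is a since-boot average so it skews the totals slightly, and the c5t[1-5]d0 pattern matches only this pool's disks):

iostat -xn 3 20 | awk '
    # sum the kw/s column (field 4) for each disk in this pool
    /c5t[1-5]d0$/ { kw[$NF] += $4 }
    # print per-disk totals so the relative write load can be compared
    END { for (d in kw) printf "%s %.1f\n", d, kw[d] }'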
Thanks,
Charles
On Fri, Nov 21, 2008 at 3:24 PM, Will Murnane <will.murnane at gmail.com> wrote:
> On Fri, Nov 21, 2008 at 14:35, Charles Menser <charles.menser at gmail.com> wrote:
>> I have a 5-drive raidz2 pool with an iSCSI share on it. While
>> backing up a MacOS drive to it I noticed some very strange access
>> patterns, and wanted to know whether what I am seeing is normal or not.
>>
>> There are times when all five drives are accessed equally, and there
>> are times when only three of them are seeing any load.
> What does "zpool status" say? How are the drives connected? To
what
> controller(s)?
>
> This could just be some degree of asynchronicity showing up. Take a
> look at these two:
> capacity operations bandwidth
> pool used avail read write read write
> ---------- ----- ----- ----- ----- ----- -----
> main_pool 852G 3.70T 361 1.30K 2.78M 10.1M
> raidz2 852G 3.70T 361 1.30K 2.78M 10.1M
> c5t5d0 - - 180 502 1.25M 3.57M
> c5t3d0 - - 205 330 1.30M 2.73M
> c5t4d0 - - 239 489 1.43M 2.81M
> c5t2d0 - - 205 17 1.25M 26.1K
> c5t1d0 - - 248 13 1.41M 25.1K
> ---------- ----- ----- ----- ----- ----- -----
>
> capacity operations bandwidth
> pool used avail read write read write
> ---------- ----- ----- ----- ----- ----- -----
> main_pool 852G 3.70T 10 2.02K 77.7K 15.8M
> raidz2 852G 3.70T 10 2.02K 77.7K 15.8M
> c5t5d0 - - 2 921 109K 6.52M
> c5t3d0 - - 9 691 108K 5.63M
> c5t4d0 - - 9 962 105K 5.97M
> c5t2d0 - - 9 1.30K 167K 8.50M
> c5t1d0 - - 2 1.23K 150K 8.54M
> ---------- ----- ----- ----- ----- ----- -----
>
> For c5t5d0, a total of 3.57+6.52 MB of IO happens: 10.09 MB;
> for c5t3d0, a total of 2.73+5.63 MB of IO happens: 8.36 MB;
> for c5t4d0, a total of 2.81+5.97 MB of IO happens: 8.78 MB;
> for c5t2d0, a total of (~0)+8.50 MB of IO happens: 8.50 MB;
> and for c5t1d0, a total of (~0)+8.54 MB of IO happens: 8.54 MB.
>
> So over time, the amount written to each drive is approximately the
> same. This being the case, I don't think I'd worry about it too
> much... but a scrub is a fairly cheap way to get peace of mind.
>
> Will
>