While looking over iostats from various programs, I see that my OS HDD is busy writing, a roughly 2Mb/sec stream all the time (at least while the "dcpool" import/recovery attempts are underway, but also now during a mere zdb walk).

According to "iostat" this load stands out greatly:

                      extended device statistics
    r/s    w/s   kr/s    kw/s wait actv wsvc_t asvc_t  %w  %b device
   25.0    0.0  100.0     0.0  0.0  0.3    0.0   11.6   0  29 c7t0d0
   10.0    0.0   40.0     0.0  0.0  0.1    0.0    8.4   0   8 c7t1d0
    2.0    0.0    8.0     0.0  0.0  0.0    0.0   13.6   0   3 c7t2d0
   32.0    0.0  188.0     0.0  0.0  0.3    0.0    9.8   0  31 c7t3d0
   14.0    0.0  116.0     0.0  0.0  0.1    0.0   10.3   0  14 c7t4d0
    2.0    0.0    8.0     0.0  0.0  0.0    0.0   19.0   0   4 c7t5d0
    0.0  327.0    0.0  2947.6  0.2  0.1    0.5    0.2   5   5 c4t1d0
   59.0    0.0  125.5     0.0  0.0  0.7    0.0   12.4   0  73 c0t600144F09844CF0000004D8376AE0002d0

"zpool iostat" confirmed it is rpool and not some other partition:

   ----------  -----  -----  -----  -----  -----  -----
   rpool       17.1G  2.77G      0    271  1.08K  2.20M
     c4t1d0s0  17.1G  2.77G      0    271  1.08K  2.20M
   ----------  -----  -----  -----  -----  -----  -----

For a while I thought it might be some swapping IO (despite the fact that vmstat and top show no swap-area usage at the moment, and no PI/PO operations in vmstat). I disabled swap on the rpool volumes, but the 2Mb/s stream is still there, both with TXG sync times bumped to 30 sec and reduced to 1 sec.

So far I have not found a DTraceToolkit-0.99 utility which would show me what that write stream is:

# /export/home/jim/DTraceToolkit-0.99/rwsnoop | egrep -v '/proc|/dev|<unkn'
  UID    PID CMD          D   BYTES FILE
    0   1251 freeram-watc W      78 /var/log/freeram-watchdog.log.1307796483
    0   1251 freeram-watc W      78 /var/log/freeram-watchdog.log.1307796483
    0   1251 freeram-watc W      78 /var/log/freeram-watchdog.log.1307796483
    0    394 nscd         R   13492 /etc/security/prof_attr
    0   1251 freeram-watc W      78 /var/log/freeram-watchdog.log.1307796483
    0   1251 freeram-watc W      78 /var/log/freeram-watchdog.log.1307796483
    0    677 utmpd        R       4 /var/adm/wtmpx
    0   1251 freeram-watc W     156 /var/log/freeram-watchdog.log.1307796483

These lines appear about once per second, so I gather that file-based IO is not heavy. There are many "<unknown>" entries which seem to be associated with "grep" and other pipes I have running, but those shouldn't go through the disk, should they?

"iosnoop" also shows little detail, except that it too points out that all writes on the system at this moment go to rpool:

    0     6 W 32081558    16384 zpool-rpool <none>
    0     6 W 32092916    16384 zpool-rpool <none>
    0     6 W 32116495    16384 zpool-rpool <none>
    0     6 W 32117832    16384 zpool-rpool <none>
    0     6 W 32125173    16384 zpool-rpool <none>
    0     0 R 3025883400   4096 sched <none>
    0     6 W 32156861    16384 zpool-rpool <none>
    0     6 W 32206221     8192 zpool-rpool <none>
    0     6 W 16701011    16384 zpool-rpool <none>
    0     6 W 16702547     4096 zpool-rpool <none>
    0     6 W 5281714     16384 zpool-rpool <none>
    0     6 W 5462106      4096 zpool-rpool <none>
    0     6 W 5251672     16384 zpool-rpool <none>
    0     6 W 5253790      4096 zpool-rpool <none>
    0     6 W 5257100     16384 zpool-rpool <none>
    0     6 W 5408779     16384 zpool-rpool <none>
    0     6 W 5431113     16384 zpool-rpool <none>
    0     6 W 5433800     16384 zpool-rpool <none>
    0     6 W 5438181     16384 zpool-rpool <none>
    0     6 W 5447201     16384 zpool-rpool <none>
    0     6 W 5462114      4096 zpool-rpool <none>
    0     6 W 5503260      2048 zpool-rpool <none>
    0     6 W 5510618     16384 zpool-rpool <none>
    0     6 W 16633532     2048 zpool-rpool <none>
    0     6 W 16640398    16384 zpool-rpool <none>
    0     6 W 16648096    16384 zpool-rpool <none>
    0     6 W 16650717    16384 zpool-rpool <none>
    0     6 W 16651864    16384 zpool-rpool <none>
    0     6 W 16658841    16384 zpool-rpool <none>
    0     6 W 16658883    16384 zpool-rpool <none>
    0     6 W 16662945    16384 zpool-rpool <none>

I have little idea which dataset it could be landing on, or which process/task generates such a stream. As I wrote above, the system is busy trying to import or zdb-walk "dcpool", which resides in a volume on another, separate pool ("pool"); the system is otherwise idle, and all other processes which write to files amount to a few bytes per second on average...

I had a hunch this may be related to having dedup(=verify) on the root pool as well, but disabling it and waiting about 5 minutes did not change "iostat" substantially. True, at some seconds the writes went down to about 80 write IOPS ~ 300-500k/sec, but they went back up to 200-300 wIOPS ~ 2-3Mb/sec just afterwards.

So I'm still wondering... what is being written at such a rate, yet does not deplete my pool's free space? ;)

Thanks for hints,
//Jim
Does this reveal anything?

dtrace -n 'syscall::*write:entry /fds[arg0].fi_fs == "zfs"/ { @[execname,fds[arg0].fi_pathname]=count(); }'

On Jun 11, 2011, at 9:32 AM, Jim Klimov wrote:

> While looking over iostats from various programs, I see that
> my OS HDD is busy writing, about 2Mb/sec stream all the time
> (at least while the "dcpool" import/recovery attempts are
> underway, but also now during a mere zdb walk).
> [...]
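A byte-counting variant of the same one-liner may also be worth keeping at hand; this is just a sketch, relying on the fact that for the matched write(2)/pwrite(2) probes arg2 is the byte count, so it separates "many tiny writes" from "a few large ones", which count() alone cannot:

dtrace -n 'syscall::*write:entry /fds[arg0].fi_fs == "zfs"/ { @[execname,fds[arg0].fi_pathname] = sum(arg2); }'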
2011-06-11 19:16, Jim Mauro wrote:
> Does this reveal anything?
>
> dtrace -n 'syscall::*write:entry /fds[arg0].fi_fs == "zfs"/ { @[execname,fds[arg0].fi_pathname]=count(); }'

Alas, not much.

# time dtrace -n 'syscall::*write:entry /fds[arg0].fi_fs == "zfs"/ { @[execname,fds[arg0].fi_pathname]=count(); }'
dtrace: description 'syscall::*write:entry ' matched 2 probes
^C

  freeram-watchdog  /var/log/freeram-watchdog.log.1307796483    57

real    1m0.635s
user    0m1.436s
sys     0m0.361s

So during a minute of running I had about 3 seconds of DTrace script init, plus the appends to the watchdog log file every second (as well as TXG syncs every second). Strangely, no other files showed up, though previous rwsnoop runs showed regular IOs (once a minute or more often) to a few other files.

Thanks,
//Jim
Well, we may have missed something, because that dtrace will only capture write(2) and pwrite(2) - whatever is generating the writes may be using another interface (writev(2), for example).

What about taking it down a layer:

dtrace -n 'fsinfo:::write /args[0]->fi_fs == "zfs"/ { @[execname,args[0]->fi_pathname] = count(); }'

On Jun 11, 2011, at 12:34 PM, Jim Klimov wrote:

> Alas, not much.
> [...]
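If the counts stay small, a byte-summing flavour of the same probe can tell whether a few calls are nevertheless moving a lot of data; a sketch, assuming arg1 of the fsinfo read/write probes carries the byte count:

dtrace -n 'fsinfo:::write /args[0]->fi_fs == "zfs"/ { @[execname,args[0]->fi_pathname] = sum(arg1); }'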
2011-06-11 20:34, Jim Klimov wrote:
> time dtrace -n 'syscall::*write:entry /fds[arg0].fi_fs == "zfs"/ {
> @[execname,fds[arg0].fi_pathname]=count(); }'

This time I gave it more time and used the system a bit - this dtrace does work, but it still shows too few file accesses:

# time dtrace -n 'syscall::*write:entry /fds[arg0].fi_fs == "zfs"/ { @[execname,fds[arg0].fi_pathname]=count(); }'
dtrace: description 'syscall::*write:entry ' matched 2 probes
^C

  svc.startd        /var/svc/log/network-iscsi-initiator-dcpool:default.log      3
  svc.startd        /var/svc/log/network-iscsi-target-dcpool:default.log         3
  bash              /root/.bash_history                                          4
  sshd              /var/adm/utmpx                                               4
  syslogd           /var/adm/messages                                            6
  sshd              /var/adm/lastlog                                             8
  sshd              /var/adm/wtmpx                                               8
  fmd               /var/fm/fmd/infolog_hival                                   14
  svc.configd       /etc/svc/repository.db                                      17
  svc.configd       /etc/svc/repository.db-journal                              22
  freeram-watchdog  /var/log/freeram-watchdog.log.1307810438                   177

real    2m59.175s
user    0m1.474s
sys     0m0.349s

However, I've also rebooted the system and did not import the iSCSI device nor the "dcpool" I'm trying to repair - to rule this out - and these 2Mb/s writes were still there. Then I played with TXG sync times, and it seems evident now that one TXG sync on my rpool takes 2-3Mb of written data at whatever frequency I've set. Quite a lot just to update a couple of blocks or so!..

//Jim
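To cross-check the "2-3Mb per TXG" estimate from the block-layer side, the stock io provider can total write bytes per disk each second, for comparison against iostat; a sketch (a write here is any buf without B_READ set in b_flags):

-------------- diskwr.d ------------------------
#!/usr/sbin/dtrace -s

#pragma D option quiet

/* sum the bytes of write I/O issued to each disk */
io:::start
/!(args[0]->b_flags & B_READ)/
{
        @wr[args[1]->dev_statname] = sum(args[0]->b_bcount);
}

/* print and reset the per-device totals once per second */
tick-1sec
{
        printa("%-28s %@d\n", @wr);
        trunc(@wr);
        printf("\n");
}

If these per-device totals agree with iostat while the syscall- and fsinfo-level views stay tiny, then whatever is being written is not ordinary file data going through the VFS.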
2011-06-11 20:42, Jim Mauro wrote:
> What about taking it down a layer:
>
> dtrace -n 'fsinfo:::write /args[0]->fi_fs == "zfs"/ { @[execname,args[0]->fi_pathname] = count(); }'

Seems similar (to my successful last run) - when I do something with the system, a few files are shown:

# time dtrace -n 'fsinfo:::write /args[0]->fi_fs == "zfs"/ { @[execname,args[0]->fi_pathname] = count(); }'
dtrace: description 'fsinfo:::write ' matched 1 probe
^C

  grep              /var/svc/log/network-iscsi-target-dcpool:default.log         1
  fmd               /etc/devices/snapshot_cache.tmp                              2
  svc.startd        /var/svc/log/network-iscsi-initiator-dcpool:default.log      3
  svc.startd        /var/svc/log/network-iscsi-target-dcpool:default.log         3
  syslogd           /var/adm/messages                                            7
  sbdadm            /var/svc/log/network-iscsi-target-dcpool:default.log        22
  fmd               /var/fm/fmd/infolog_hival                                    24
  Xvnc              /root/.vnc/bofh-sol:64.log                                   93
  freeram-watchdog  /var/log/freeram-watchdog.log.1307810438                    104
  svc.configd       /etc/svc/repository.db                                      232
  svc.configd       /etc/svc/repository.db-journal                              352

real    1m45.585s
user    0m1.439s
sys     0m0.330s
Hmmm... so coming back around to the problem we're trying to solve -

You have iostat data and "zpool iostat" data that show a steady stream of writes to one or more of your zpools, correct? You wish to identify the source of those writes, correct?

Try saving this as a file and running it, and please run it in conjunction with iostat or zpool iostat data, so we can determine what the disparity is in what is being reported.

***** WARNING ******
I am NOT a ZFS expert. I'm fumbling my way around. I'm not sure if my use of uio->resid in the script below is a reasonable way to answer the "how many bytes of write IO am I generating" question. I'm still chasing that.

-------------- zfsw.d ------------------------
#!/usr/sbin/dtrace -s

#pragma D option quiet

fbt:zfs:zfs_write:entry
{
        @[execname, stringof(args[0]->v_path), args[1]->uio_resid] = count();
}

tick-1sec
{
        printf("%-16s %-40s %-8s %-8s\n", "EXEC", "PATH", "BYTES", "COUNT");
        printa("%-16s %-40s %-8d %-@8d\n", @);
        trunc(@);
        printf("\n");
}

tick-120sec
{
        exit(0);
}

When this is executed in parallel with the other utilities that are reporting writes, what do the numbers look like? What is the disparity in IOPS and/or bytes being written?

On Jun 11, 2011, at 1:00 PM, Jim Klimov wrote:

> Seems similar (to my successful last run) - when I do
> something with the system, a few files are shown:
> [...]
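Usage would be roughly along these lines (the output file names are just placeholders), capturing the DTrace view and the device view over the same two minutes:

# chmod +x zfsw.d
# ./zfsw.d > zfsw.out &
# iostat -xnz 1 120 > iostat.out
# zpool iostat rpool 1 120 > zpool-iostat.out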
This may be interesting also (still fumbling...):

dtrace -n 'fbt:zfs:zio_write:entry, fbt:zfs:zio_rewrite:entry, fbt:zfs:zio_write_override:entry { @[probefunc,stack()] = count(); }'

On Jun 11, 2011, at 1:00 PM, Jim Klimov wrote:

> Seems similar (to my successful last run) - when I do
> something with the system, a few files are shown:
> [...]
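A possible refinement would show which pool and how many bytes each write zio carries rather than just the call stacks. Strictly a sketch: it assumes that on this build zio_write() still takes the spa as its second argument and the write size as its sixth, and that spa_t keeps the pool name in spa_name (zio_rewrite() and zio_write_override() would need their own clauses if they matter here):

dtrace -n 'fbt:zfs:zio_write:entry { @[stringof(args[1]->spa_name)] = sum(args[5]); }'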
2011-06-11 22:40, Jim Mauro ?????:> dtrace -n ''fbt:zfs:zio_write:entry, fbt:zfs:zio_rewrite:entry,fbt:zfs:zio_write_override:entry { @[probefunc,stack()] = count(); }''This one got me a number of printed stacks :) Sorry to other list readers for a lengthy post with several once-per-second stats for a minute-long run: root at bofh-sol:~# time dtrace -n ''fbt:zfs:zio_write:entry, fbt:zfs:zio_rewrite:entry,fbt:zfs:zio_write_override:entry { @[probefunc,stack()] = count(); }'' dtrace: description ''fbt:zfs:zio_write:entry, fbt:zfs:zio_rewrite:entry,fbt:zfs:zio_write_override:entry '' matched 3 probes ^C zio_write zfs`zio_ddt_write+0x311 zfs`zio_execute+0x8d genunix`taskq_thread+0x248 unix`thread_start+0x8 10 zio_write zfs`arc_write+0xb4 zfs`dmu_objset_sync+0x109 zfs`dsl_dataset_sync+0x5d zfs`dsl_pool_sync+0x16b zfs`spa_sync+0x38d zfs`txg_sync_thread+0x247 unix`thread_start+0x8 10 zio_write zfs`arc_write+0xb4 zfs`dmu_objset_sync+0x109 zfs`dsl_dataset_sync+0x5d zfs`dsl_pool_sync+0xc0 zfs`spa_sync+0x38d zfs`txg_sync_thread+0x247 unix`thread_start+0x8 10 zio_write zfs`arc_write+0xb4 zfs`dbuf_write+0x1ae zfs`dbuf_sync_indirect+0x95 zfs`dbuf_sync_list+0x65 zfs`dbuf_sync_indirect+0xb1 zfs`dbuf_sync_list+0x65 zfs`dbuf_sync_indirect+0xb1 zfs`dbuf_sync_list+0x65 zfs`dbuf_sync_indirect+0xb1 zfs`dbuf_sync_list+0x65 zfs`dbuf_sync_indirect+0xb1 zfs`dbuf_sync_list+0x65 zfs`dbuf_sync_indirect+0xb1 zfs`dbuf_sync_list+0x65 zfs`dnode_sync+0x377 zfs`dmu_objset_sync+0x127 zfs`dsl_dataset_sync+0x5d zfs`dsl_pool_sync+0xc0 zfs`spa_sync+0x38d zfs`txg_sync_thread+0x247 10 zio_write zfs`arc_write+0xb4 zfs`dbuf_write+0x1ae zfs`dbuf_sync_indirect+0x95 zfs`dbuf_sync_list+0x65 zfs`dbuf_sync_indirect+0xb1 zfs`dbuf_sync_list+0x65 zfs`dbuf_sync_indirect+0xb1 zfs`dbuf_sync_list+0x65 zfs`dbuf_sync_indirect+0xb1 zfs`dbuf_sync_list+0x65 zfs`dbuf_sync_indirect+0xb1 zfs`dbuf_sync_list+0x65 zfs`dnode_sync+0x377 zfs`dmu_objset_sync+0x127 zfs`dsl_dataset_sync+0x5d zfs`dsl_pool_sync+0xc0 zfs`spa_sync+0x38d zfs`txg_sync_thread+0x247 unix`thread_start+0x8 10 zio_write zfs`arc_write+0xb4 zfs`dbuf_write+0x1ae zfs`dbuf_sync_indirect+0x95 zfs`dbuf_sync_list+0x65 zfs`dbuf_sync_indirect+0xb1 zfs`dbuf_sync_list+0x65 zfs`dbuf_sync_indirect+0xb1 zfs`dbuf_sync_list+0x65 zfs`dbuf_sync_indirect+0xb1 zfs`dbuf_sync_list+0x65 zfs`dnode_sync+0x377 zfs`dmu_objset_sync+0x127 zfs`dsl_dataset_sync+0x5d zfs`dsl_pool_sync+0xc0 zfs`spa_sync+0x38d zfs`txg_sync_thread+0x247 unix`thread_start+0x8 10 zio_write zfs`arc_write+0xb4 zfs`dbuf_write+0x1ae zfs`dbuf_sync_indirect+0x95 zfs`dbuf_sync_list+0x65 zfs`dbuf_sync_indirect+0xb1 zfs`dbuf_sync_list+0x65 zfs`dbuf_sync_indirect+0xb1 zfs`dbuf_sync_list+0x65 zfs`dnode_sync+0x377 zfs`dmu_objset_sync+0x127 zfs`dsl_dataset_sync+0x5d zfs`dsl_pool_sync+0xc0 zfs`spa_sync+0x38d zfs`txg_sync_thread+0x247 unix`thread_start+0x8 10 zio_write zfs`arc_write+0xb4 zfs`dbuf_write+0x1ae zfs`dbuf_sync_indirect+0x95 zfs`dbuf_sync_list+0x65 zfs`dbuf_sync_indirect+0xb1 zfs`dbuf_sync_list+0x65 zfs`dnode_sync+0x377 zfs`dmu_objset_sync+0x127 zfs`dsl_dataset_sync+0x5d zfs`dsl_pool_sync+0xc0 zfs`spa_sync+0x38d zfs`txg_sync_thread+0x247 unix`thread_start+0x8 10 zio_write zfs`arc_write+0xb4 zfs`dbuf_write+0x1ae zfs`dbuf_sync_indirect+0x95 zfs`dbuf_sync_list+0x65 zfs`dnode_sync+0x377 zfs`dmu_objset_sync_dnodes+0x80 zfs`dmu_objset_sync+0x1e3 zfs`dsl_dataset_sync+0x5d zfs`dsl_pool_sync+0xc0 zfs`spa_sync+0x38d zfs`txg_sync_thread+0x247 unix`thread_start+0x8 10 zio_write zfs`arc_write+0xb4 zfs`dbuf_write+0x1ae zfs`dbuf_sync_indirect+0x95 
zfs`dbuf_sync_list+0x65 zfs`dnode_sync+0x377 zfs`dmu_objset_sync+0x127 zfs`dsl_dataset_sync+0x5d zfs`dsl_pool_sync+0xc0 zfs`spa_sync+0x38d zfs`txg_sync_thread+0x247 unix`thread_start+0x8 10 zio_write zfs`arc_write+0xb4 zfs`dbuf_write+0x1ae zfs`dbuf_sync_leaf+0x14a zfs`dbuf_sync_list+0x58 zfs`dbuf_sync_indirect+0xb1 zfs`dbuf_sync_list+0x65 zfs`dbuf_sync_indirect+0xb1 zfs`dbuf_sync_list+0x65 zfs`dbuf_sync_indirect+0xb1 zfs`dbuf_sync_list+0x65 zfs`dbuf_sync_indirect+0xb1 zfs`dbuf_sync_list+0x65 zfs`dbuf_sync_indirect+0xb1 zfs`dbuf_sync_list+0x65 zfs`dbuf_sync_indirect+0xb1 zfs`dbuf_sync_list+0x65 zfs`dnode_sync+0x377 zfs`dmu_objset_sync+0x127 zfs`dsl_dataset_sync+0x5d zfs`dsl_pool_sync+0xc0 10 zio_write zfs`arc_write+0xb4 zfs`dbuf_write+0x1ae zfs`dbuf_sync_leaf+0x14a zfs`dbuf_sync_list+0x58 zfs`dbuf_sync_indirect+0xb1 zfs`dbuf_sync_list+0x65 zfs`dnode_sync+0x377 zfs`dmu_objset_sync_dnodes+0x80 zfs`dmu_objset_sync+0x1e3 zfs`dsl_dataset_sync+0x5d zfs`dsl_pool_sync+0xc0 zfs`spa_sync+0x38d zfs`txg_sync_thread+0x247 unix`thread_start+0x8 10 zio_write zfs`arc_write+0xb4 zfs`dbuf_write+0x1ae zfs`dbuf_sync_leaf+0x14a zfs`dbuf_sync_list+0x58 zfs`dnode_sync+0x377 zfs`dmu_objset_sync+0x15c zfs`dsl_dataset_sync+0x5d zfs`dsl_pool_sync+0x16b zfs`spa_sync+0x38d zfs`txg_sync_thread+0x247 unix`thread_start+0x8 10 zio_write zfs`arc_write+0xb4 zfs`dbuf_write+0x1ae zfs`dbuf_sync_leaf+0x14a zfs`dbuf_sync_list+0x58 zfs`dnode_sync+0x377 zfs`dmu_objset_sync+0x173 zfs`dsl_dataset_sync+0x5d zfs`dsl_pool_sync+0x16b zfs`spa_sync+0x38d zfs`txg_sync_thread+0x247 unix`thread_start+0x8 10 zio_write zfs`arc_write+0xb4 zfs`dbuf_write+0x1ae zfs`dbuf_sync_indirect+0x95 zfs`dbuf_sync_list+0x65 zfs`dbuf_sync_indirect+0xb1 zfs`dbuf_sync_list+0x65 zfs`dnode_sync+0x377 zfs`dmu_objset_sync_dnodes+0x80 zfs`dmu_objset_sync+0x1e3 zfs`dsl_pool_sync+0x2b4 zfs`spa_sync+0x38d zfs`txg_sync_thread+0x247 unix`thread_start+0x8 23 zio_write zfs`arc_write+0xb4 zfs`dbuf_write+0x1ae zfs`dbuf_sync_indirect+0x95 zfs`dbuf_sync_list+0x65 zfs`dbuf_sync_indirect+0xb1 zfs`dbuf_sync_list+0x65 zfs`dbuf_sync_indirect+0xb1 zfs`dbuf_sync_list+0x65 zfs`dnode_sync+0x377 zfs`dmu_objset_sync_dnodes+0x80 zfs`dmu_objset_sync+0x1e3 zfs`dsl_pool_sync+0x2b4 zfs`spa_sync+0x38d zfs`txg_sync_thread+0x247 unix`thread_start+0x8 30 zio_write zfs`arc_write+0xb4 zfs`dbuf_write+0x1ae zfs`dbuf_sync_leaf+0x14a zfs`dbuf_sync_list+0x58 zfs`dbuf_sync_indirect+0xb1 zfs`dbuf_sync_list+0x65 zfs`dbuf_sync_indirect+0xb1 zfs`dbuf_sync_list+0x65 zfs`dbuf_sync_indirect+0xb1 zfs`dbuf_sync_list+0x65 zfs`dnode_sync+0x377 zfs`dmu_objset_sync_dnodes+0x80 zfs`dmu_objset_sync+0x1e3 zfs`dsl_pool_sync+0x2b4 zfs`spa_sync+0x38d zfs`txg_sync_thread+0x247 unix`thread_start+0x8 30 zio_write zfs`arc_write+0xb4 zfs`dbuf_write+0x1ae zfs`dbuf_sync_leaf+0x14a zfs`dbuf_sync_list+0x58 zfs`dnode_sync+0x377 zfs`dmu_objset_sync_dnodes+0x80 zfs`dmu_objset_sync+0x1e3 zfs`dsl_pool_sync+0x2b4 zfs`spa_sync+0x38d zfs`txg_sync_thread+0x247 unix`thread_start+0x8 45 zio_write zfs`arc_write+0xb4 zfs`dmu_objset_sync+0x109 zfs`dsl_pool_sync+0x2b4 zfs`spa_sync+0x38d zfs`txg_sync_thread+0x247 unix`thread_start+0x8 73 zio_write zfs`arc_write+0xb4 zfs`dbuf_write+0x1ae zfs`dbuf_sync_indirect+0x95 zfs`dbuf_sync_list+0x65 zfs`dnode_sync+0x377 zfs`dmu_objset_sync+0x127 zfs`dsl_pool_sync+0x2b4 zfs`spa_sync+0x38d zfs`txg_sync_thread+0x247 unix`thread_start+0x8 73 zio_write zfs`arc_write+0xb4 zfs`dbuf_write+0x1ae zfs`dbuf_sync_leaf+0x14a zfs`dbuf_sync_list+0x58 zfs`dbuf_sync_indirect+0xb1 zfs`dbuf_sync_list+0x65 zfs`dnode_sync+0x377 
zfs`dmu_objset_sync+0x127 zfs`dsl_pool_sync+0x2b4 zfs`spa_sync+0x38d zfs`txg_sync_thread+0x247 unix`thread_start+0x8 322 zio_write zfs`arc_write+0xb4 zfs`dbuf_write+0x1ae zfs`dbuf_sync_leaf+0x14a zfs`dbuf_sync_list+0x58 zfs`dbuf_sync_indirect+0xb1 zfs`dbuf_sync_list+0x65 zfs`dnode_sync+0x377 zfs`dmu_objset_sync_dnodes+0x80 zfs`dmu_objset_sync+0x1e3 zfs`dsl_pool_sync+0x2b4 zfs`spa_sync+0x38d zfs`txg_sync_thread+0x247 unix`thread_start+0x8 379 zio_write zfs`arc_write+0xb4 zfs`dbuf_write+0x1ae zfs`dbuf_sync_indirect+0x95 zfs`dbuf_sync_list+0x65 zfs`dnode_sync+0x377 zfs`dmu_objset_sync_dnodes+0x80 zfs`dmu_objset_sync+0x1e3 zfs`dsl_pool_sync+0x2b4 zfs`spa_sync+0x38d zfs`txg_sync_thread+0x247 unix`thread_start+0x8 383 real 0m55.362s user 0m1.505s sys 0m0.658s ============================================================ iostat -xn 1: Sat Jun 11 22:55:53 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 33.0 0.0 371.5 0.0 0.2 0.3 5.9 8.4 13 28 c7t0d0 20.0 0.0 319.6 0.0 0.0 0.4 0.0 19.6 0 20 c7t1d0 77.9 0.0 579.3 0.0 0.0 0.8 0.0 10.0 0 42 c7t2d0 31.0 0.0 487.4 0.0 0.2 0.3 8.0 10.3 12 32 c7t3d0 27.0 0.0 347.6 0.0 0.0 0.4 0.0 15.5 0 23 c7t4d0 71.9 0.0 423.5 0.0 0.2 0.5 2.8 7.5 12 35 c7t5d0 27.0 0.0 1667.9 0.0 0.0 1.0 0.0 35.8 0 97 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:55:54 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 24.0 0.0 336.1 0.0 0.1 0.2 2.2 10.1 3 24 c7t0d0 26.0 0.0 412.2 0.0 0.0 0.4 0.0 16.0 0 20 c7t1d0 70.0 0.0 524.2 0.0 0.0 0.5 0.0 7.8 0 32 c7t2d0 18.0 0.0 312.1 0.0 0.1 0.2 6.8 11.5 5 21 c7t3d0 26.0 0.0 352.1 0.0 0.0 0.4 0.0 14.3 0 21 c7t4d0 73.0 0.0 612.3 0.0 0.2 0.4 2.1 5.8 10 31 c7t5d0 28.0 0.0 1670.2 0.0 0.0 1.0 0.0 34.6 0 97 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:55:55 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 23.3 0.0 852.0 0.0 0.0 0.3 0.4 12.8 0 30 c7t0d0 14.6 0.0 408.5 0.0 0.0 0.2 0.0 17.1 0 20 c7t1d0 78.8 0.0 556.4 0.0 0.0 0.5 0.0 6.0 0 19 c7t2d0 20.4 0.0 606.9 0.0 0.0 0.2 0.0 12.1 0 25 c7t3d0 11.7 0.0 396.8 0.0 0.0 0.2 0.0 15.8 0 13 c7t4d0 72.9 0.0 587.5 0.0 0.2 0.4 2.9 5.4 12 25 c7t5d0 0.0 248.0 0.0 1908.3 0.4 0.1 1.6 0.5 12 12 c4t1d0 27.2 0.0 1681.7 0.0 0.0 1.0 0.0 35.5 0 97 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:55:56 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 9.3 4.1 39.1 0.0 0.0 0.0 0.0 1.6 0 2 c7t0d0 14.4 4.1 121.5 0.0 0.0 0.1 0.0 3.9 0 4 c7t1d0 21.6 4.1 212.1 0.0 0.0 0.2 0.0 6.0 0 13 c7t2d0 15.4 4.1 249.2 0.0 0.0 0.1 1.0 3.8 2 7 c7t3d0 15.4 4.1 125.6 0.0 0.0 0.1 2.4 3.6 3 4 c7t4d0 15.4 4.1 182.3 0.0 0.1 0.2 4.0 8.5 4 14 c7t5d0 7.2 8.2 3.1 4.1 0.0 0.0 0.0 0.0 0 0 c4t1d0 6.2 0.0 276.0 0.0 0.0 1.0 0.0 160.7 0 99 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:55:57 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 36.6 0.0 332.6 0.0 0.4 0.3 10.4 9.4 18 34 c7t0d0 19.8 0.0 316.7 0.0 0.0 0.3 0.0 15.5 0 18 c7t1d0 53.4 0.0 300.9 0.0 0.0 0.4 0.0 8.4 0 24 c7t2d0 43.5 0.0 601.8 0.0 0.4 0.4 9.8 8.2 18 36 c7t3d0 23.8 0.0 391.9 0.0 0.3 0.3 10.7 12.2 12 29 c7t4d0 50.5 0.0 408.8 0.0 0.2 0.3 4.4 5.3 12 27 c7t5d0 6.9 2.0 3.0 0.0 0.0 0.0 0.0 0.0 0 0 c4t1d0 24.7 0.0 1410.9 0.0 0.0 1.0 0.0 39.3 0 97 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:55:58 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 26.3 0.0 408.2 0.0 0.2 0.3 9.1 11.9 11 31 c7t0d0 33.3 0.0 497.1 0.0 0.0 0.6 0.0 16.7 
0 31 c7t1d0 93.0 0.0 428.4 0.0 0.0 0.6 0.0 6.8 0 27 c7t2d0 20.2 0.0 323.3 0.0 0.2 0.2 9.4 8.3 10 17 c7t3d0 25.3 0.0 250.6 0.0 0.3 0.2 11.8 8.0 13 20 c7t4d0 101.0 0.0 658.8 0.0 0.4 0.3 3.9 3.0 20 31 c7t5d0 30.3 0.0 1879.4 0.0 0.0 0.9 0.1 30.8 0 93 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:55:59 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 34.0 0.0 324.0 0.0 0.3 0.3 7.4 7.6 11 26 c7t0d0 25.0 0.0 400.0 0.0 0.0 0.4 0.0 14.5 0 23 c7t1d0 61.0 0.0 328.0 0.0 0.0 0.6 0.0 10.4 0 34 c7t2d0 36.0 0.0 460.0 0.0 0.3 0.3 7.0 8.0 12 29 c7t3d0 27.0 0.0 408.0 0.0 0.1 0.2 4.8 7.9 7 21 c7t4d0 61.0 0.0 540.0 0.0 0.4 0.4 5.9 5.8 16 35 c7t5d0 27.0 0.0 1670.1 0.0 0.0 1.0 0.0 36.0 0 97 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:00 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 24.0 0.0 228.0 0.0 0.2 0.2 6.3 7.3 8 17 c7t0d0 26.0 0.0 404.0 0.0 0.0 0.4 0.0 15.1 0 22 c7t1d0 60.0 0.0 312.0 0.0 0.0 0.6 0.0 10.3 0 30 c7t2d0 33.0 0.0 372.0 0.0 0.1 0.2 2.8 7.4 6 24 c7t3d0 31.0 0.0 424.0 0.0 0.2 0.2 6.6 6.8 11 21 c7t4d0 63.0 0.0 436.0 0.0 0.3 0.4 5.0 5.8 20 37 c7t5d0 0.0 370.0 0.0 1968.4 0.1 0.1 0.4 0.2 5 6 c4t1d0 29.0 0.0 1738.9 0.0 0.0 0.9 0.0 32.4 0 94 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:01 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 15.8 0.0 398.0 0.0 0.1 0.2 4.8 13.4 6 21 c7t0d0 20.5 0.0 364.6 0.0 0.0 0.3 0.0 14.3 0 18 c7t1d0 80.9 0.0 468.7 0.0 0.0 0.9 0.0 11.6 0 41 c7t2d0 22.3 0.0 312.5 0.0 0.0 0.2 2.1 10.0 3 22 c7t3d0 18.6 0.0 297.6 0.0 0.1 0.2 5.1 9.3 6 17 c7t4d0 77.2 0.0 621.2 0.0 0.4 0.4 5.5 5.0 21 38 c7t5d0 0.0 5.6 0.0 3.7 0.0 0.0 0.0 0.1 0 0 c4t1d0 26.0 0.0 1558.7 0.0 0.0 1.0 0.0 37.1 0 97 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:02 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 16.2 0.0 194.6 0.0 0.1 0.2 6.9 15.1 9 25 c7t0d0 26.0 0.0 233.6 0.0 0.0 0.4 0.0 15.7 0 18 c7t1d0 76.8 0.0 328.7 0.0 0.0 0.9 0.0 11.5 0 42 c7t2d0 17.3 0.0 263.9 0.0 0.1 0.2 4.5 11.7 8 20 c7t3d0 24.9 0.0 488.8 0.0 0.2 0.2 7.4 10.0 9 25 c7t4d0 72.5 0.0 436.9 0.0 0.4 0.3 5.7 4.4 22 32 c7t5d0 26.0 0.0 1596.1 0.0 0.0 1.0 0.1 36.7 0 95 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:03 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 25.0 0.0 159.8 0.0 0.2 0.2 6.9 7.0 9 17 c7t0d0 33.0 0.0 375.6 0.0 0.0 0.6 0.0 17.6 0 29 c7t1d0 98.9 0.0 547.5 0.0 0.0 0.9 0.0 8.8 0 40 c7t2d0 28.0 0.0 295.7 0.0 0.2 0.2 7.1 7.6 10 21 c7t3d0 35.0 0.0 447.6 0.0 0.1 0.2 3.8 5.7 6 20 c7t4d0 110.9 0.0 583.4 0.0 0.7 0.5 6.0 4.5 27 50 c7t5d0 34.0 0.0 2111.0 0.0 0.0 0.9 0.0 26.3 0 89 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:04 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 23.0 0.0 336.3 0.0 0.1 0.3 5.1 12.8 9 30 c7t0d0 28.0 0.0 296.3 0.0 0.0 0.5 0.0 16.7 0 25 c7t1d0 79.1 0.0 564.6 0.0 0.0 0.8 0.0 9.5 0 41 c7t2d0 24.0 0.0 520.5 0.0 0.1 0.2 4.2 7.9 7 19 c7t3d0 24.0 0.0 216.2 0.0 0.1 0.2 5.2 8.1 6 19 c7t4d0 75.1 0.0 300.3 0.0 0.5 0.4 6.6 4.7 23 35 c7t5d0 25.0 0.0 1422.9 0.0 0.0 0.9 0.0 37.2 0 93 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:05 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 28.0 0.0 120.0 0.0 0.4 0.2 12.6 8.5 18 24 c7t0d0 35.0 0.0 272.0 0.0 0.0 0.8 0.0 21.5 0 32 c7t1d0 82.0 0.0 612.0 0.0 0.0 0.7 0.0 8.9 0 33 c7t2d0 27.0 0.0 288.0 0.0 0.3 0.2 11.3 8.4 15 23 c7t3d0 42.0 0.0 
544.0 0.0 0.3 0.3 8.0 8.0 13 34 c7t4d0 78.0 0.0 464.0 0.0 0.6 0.4 7.4 4.8 25 37 c7t5d0 0.0 372.0 0.0 2394.4 0.6 0.2 1.6 0.6 18 19 c4t1d0 28.0 0.0 1792.0 0.0 0.0 0.9 0.1 33.8 0 95 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:06 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 20.8 0.0 380.2 0.0 0.1 0.2 6.3 8.7 7 18 c7t0d0 14.9 0.0 297.0 0.0 0.0 0.2 0.0 11.8 0 14 c7t1d0 105.9 0.0 701.0 0.0 0.0 1.1 0.0 10.7 0 48 c7t2d0 23.8 0.0 510.9 0.0 0.1 0.2 5.2 9.8 5 23 c7t3d0 20.8 0.0 261.4 0.0 0.1 0.2 7.0 9.1 8 19 c7t4d0 110.9 0.0 625.7 0.0 0.6 0.4 5.8 3.8 28 42 c7t5d0 0.0 5.9 0.0 4.0 0.0 0.0 0.0 0.1 0 0 c4t1d0 29.7 0.0 1901.0 0.0 0.0 0.9 0.1 31.7 0 94 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:07 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 20.2 4.0 216.2 0.0 0.1 0.1 5.3 5.5 5 13 c7t0d0 19.2 4.0 381.8 0.0 0.0 0.1 0.5 5.6 1 13 c7t1d0 73.7 4.0 365.6 0.0 0.0 0.5 0.0 6.3 0 19 c7t2d0 22.2 4.0 276.8 0.0 0.1 0.2 5.2 6.4 5 17 c7t3d0 20.2 4.0 325.2 0.0 0.0 0.1 0.1 4.6 0 11 c7t4d0 68.7 4.0 402.0 0.0 0.4 0.3 5.7 3.6 20 26 c7t5d0 14.1 4.0 6.1 0.0 0.0 0.0 0.0 0.0 0 0 c4t1d0 19.2 0.0 1165.1 0.0 0.0 0.7 0.0 34.6 0 66 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:08 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 16.0 0.0 68.0 0.0 0.2 0.2 11.8 9.7 8 16 c7t0d0 16.0 0.0 243.9 0.0 0.1 0.1 5.3 8.1 5 13 c7t1d0 83.0 0.0 443.8 0.0 0.0 0.7 0.0 8.6 0 27 c7t2d0 14.0 0.0 359.9 0.0 0.0 0.1 2.2 8.3 2 12 c7t3d0 21.0 0.0 443.8 0.0 0.1 0.2 2.9 10.5 3 22 c7t4d0 76.0 0.0 343.9 0.0 0.4 0.2 5.4 2.9 20 22 c7t5d0 22.0 0.0 1347.5 0.0 0.0 0.7 0.0 31.8 0 70 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:09 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 21.8 0.0 407.6 0.0 0.1 0.2 5.5 9.3 4 20 c7t0d0 13.9 0.0 292.8 0.0 0.1 0.1 4.5 8.7 2 12 c7t1d0 105.9 0.0 732.1 0.0 0.0 1.1 0.0 10.2 0 45 c7t2d0 25.7 0.0 482.8 0.0 0.1 0.2 4.6 8.0 4 21 c7t3d0 13.9 0.0 292.8 0.0 0.2 0.2 12.7 11.7 7 16 c7t4d0 102.9 0.0 807.3 0.0 0.6 0.4 6.1 4.3 26 44 c7t5d0 28.7 0.0 1477.1 0.0 0.0 1.0 0.0 33.3 0 96 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:10 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 18.0 0.0 552.4 0.0 0.1 0.3 3.7 15.1 7 27 c7t0d0 13.0 0.0 300.2 0.0 0.1 0.1 9.3 11.4 3 15 c7t1d0 116.1 0.0 932.7 0.0 0.0 1.1 0.0 9.5 0 44 c7t2d0 16.0 0.0 304.2 0.0 0.0 0.2 2.8 12.7 4 20 c7t3d0 14.0 0.0 240.2 0.0 0.2 0.1 11.6 8.3 5 12 c7t4d0 117.1 0.0 812.6 0.0 0.8 0.5 7.1 4.1 31 47 c7t5d0 1.0 266.2 3.0 1934.0 0.1 0.1 0.4 0.2 4 6 c4t1d0 26.0 0.0 1544.7 0.0 0.0 0.9 0.1 36.4 0 95 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:11 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 27.3 0.0 610.2 0.0 0.3 0.3 9.5 10.7 12 29 c7t0d0 19.2 0.0 379.9 0.0 0.1 0.2 5.8 8.9 4 17 c7t1d0 96.0 0.0 719.3 0.0 0.0 0.9 0.0 9.1 0 38 c7t2d0 24.2 0.0 169.7 0.0 0.2 0.2 8.0 8.4 11 20 c7t3d0 16.2 0.0 246.5 0.0 0.1 0.1 4.8 7.5 3 12 c7t4d0 106.1 0.0 804.2 0.0 0.6 0.3 5.8 3.2 23 34 c7t5d0 0.0 6.1 0.0 4.0 0.0 0.0 0.0 0.1 0 0 c4t1d0 25.3 0.0 1616.5 0.0 0.0 0.9 0.1 34.1 0 86 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:12 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 28.7 0.0 649.1 0.0 0.2 0.3 5.8 10.5 7 30 c7t0d0 14.8 0.0 237.5 0.0 0.1 0.2 5.4 11.6 4 17 c7t1d0 101.9 0.0 803.5 0.0 0.0 0.7 0.0 6.9 0 35 c7t2d0 19.8 0.0 257.3 0.0 0.1 0.1 4.6 4.9 5 10 
c7t3d0 10.9 0.0 43.5 0.0 0.1 0.1 8.1 7.5 5 8 c7t4d0 95.0 0.0 775.8 0.0 0.5 0.4 5.0 3.8 21 36 c7t5d0 27.7 0.0 1654.5 0.0 0.0 0.9 0.1 33.4 0 93 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:13 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 3.0 0.0 194.1 0.0 0.0 0.1 0.0 19.6 0 6 c7t0d0 2.0 0.0 129.4 0.0 0.0 0.0 0.0 16.1 0 3 c7t1d0 32.3 0.0 218.3 0.0 0.0 0.4 0.0 13.1 0 18 c7t2d0 2.0 0.0 68.7 0.0 0.0 0.0 0.0 19.8 0 4 c7t3d0 1.0 0.0 4.0 0.0 0.0 0.0 0.0 16.5 0 2 c7t4d0 30.3 0.0 161.7 0.0 0.2 0.1 6.8 4.9 11 15 c7t5d0 7.1 0.0 452.9 0.0 0.0 1.0 0.0 137.5 0 97 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:14 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 18.8 0.0 435.6 0.0 0.1 0.3 7.0 14.3 5 27 c7t0d0 12.9 0.0 289.1 0.0 0.1 0.2 5.7 12.7 4 16 c7t1d0 61.4 0.0 344.5 0.0 0.0 0.7 0.0 11.7 0 30 c7t2d0 20.8 0.0 261.4 0.0 0.2 0.2 9.1 11.7 9 24 c7t3d0 14.9 0.0 245.5 0.0 0.1 0.1 6.4 8.8 5 13 c7t4d0 61.4 0.0 396.0 0.0 0.5 0.3 8.1 5.3 23 33 c7t5d0 22.8 0.0 1397.5 0.0 0.0 0.9 0.0 41.2 0 94 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:15 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 28.3 0.0 355.5 0.0 0.2 0.3 7.3 10.5 11 30 c7t0d0 25.3 0.0 343.4 0.0 0.1 0.2 5.4 6.7 7 17 c7t1d0 81.8 0.0 448.4 0.0 0.0 0.6 0.0 7.1 0 26 c7t2d0 21.2 0.0 145.4 0.0 0.2 0.2 11.1 9.2 11 20 c7t3d0 21.2 0.0 387.8 0.0 0.1 0.2 3.6 9.4 6 20 c7t4d0 88.9 0.0 610.0 0.0 0.6 0.4 6.8 4.6 27 41 c7t5d0 0.0 284.8 0.0 2470.0 0.6 0.2 2.3 0.7 18 20 c4t1d0 24.2 0.0 1551.4 0.0 0.0 0.9 0.1 36.8 0 89 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:16 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 28.7 0.0 486.8 0.0 0.3 0.3 10.3 9.0 12 26 c7t0d0 17.8 0.0 308.7 0.0 0.0 0.2 2.6 9.0 3 16 c7t1d0 109.8 0.0 660.9 0.0 0.0 0.8 0.0 7.2 0 31 c7t2d0 31.7 0.0 542.2 0.0 0.2 0.3 7.8 8.5 10 27 c7t3d0 22.8 0.0 447.2 0.0 0.2 0.2 7.3 7.6 7 17 c7t4d0 111.8 0.0 866.8 0.0 0.5 0.4 4.1 3.4 20 38 c7t5d0 0.0 5.9 0.0 4.0 0.0 0.0 0.0 0.1 0 0 c4t1d0 31.7 0.0 1905.2 0.0 0.0 0.9 0.0 28.2 0 89 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:17 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 33.0 0.0 372.3 0.0 0.2 0.2 7.0 7.0 9 23 c7t0d0 29.0 0.0 328.3 0.0 0.2 0.2 6.5 6.5 9 19 c7t1d0 116.1 0.0 696.5 0.0 0.0 1.0 0.0 8.4 0 46 c7t2d0 33.0 0.0 436.3 0.0 0.2 0.2 5.4 6.6 9 22 c7t3d0 27.0 0.0 260.2 0.0 0.2 0.2 6.2 6.5 7 18 c7t4d0 113.1 0.0 680.5 0.0 0.5 0.4 4.7 3.2 22 36 c7t5d0 36.0 0.0 2121.6 0.0 0.0 0.9 0.0 25.5 0 92 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:18 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 14.0 4.0 178.0 0.0 0.0 0.1 0.0 5.5 0 10 c7t0d0 26.0 4.0 346.0 0.0 0.1 0.2 2.3 5.3 3 16 c7t1d0 92.0 4.0 694.0 0.0 0.1 0.7 1.6 7.7 9 40 c7t2d0 17.0 4.0 190.0 0.0 0.0 0.1 0.9 5.2 1 11 c7t3d0 23.0 4.0 154.0 0.0 0.1 0.1 3.8 3.3 3 9 c7t4d0 90.0 4.0 646.0 0.0 0.5 0.4 5.1 3.7 22 35 c7t5d0 14.0 4.0 6.0 0.0 0.0 0.0 0.0 0.0 0 0 c4t1d0 23.0 0.0 1285.9 0.0 0.0 0.7 0.1 32.3 0 74 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:19 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 19.2 0.0 383.9 0.0 0.1 0.1 5.8 7.4 5 14 c7t0d0 14.1 0.0 177.8 0.0 0.1 0.1 5.5 10.5 4 15 c7t1d0 82.8 0.0 440.5 0.0 0.4 0.3 5.1 3.9 21 33 c7t2d0 18.2 0.0 198.0 0.0 0.1 0.1 5.5 8.2 4 15 c7t3d0 14.1 0.0 303.1 0.0 0.0 0.1 2.9 8.0 1 11 c7t4d0 83.8 0.0 460.7 0.0 0.4 0.3 4.6 3.5 18 
29 c7t5d0 25.3 0.0 1552.7 0.0 0.0 0.7 0.1 26.2 0 66 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:20 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 30.0 0.0 484.0 0.0 0.2 0.3 6.9 10.4 10 31 c7t0d0 10.0 0.0 220.0 0.0 0.1 0.1 5.8 11.2 3 11 c7t1d0 85.0 0.0 516.0 0.0 0.5 0.4 5.4 4.3 19 36 c7t2d0 29.0 0.0 356.0 0.0 0.2 0.3 6.6 10.0 10 29 c7t3d0 13.0 0.0 232.0 0.0 0.0 0.1 3.4 10.2 3 13 c7t4d0 87.0 0.0 560.0 0.0 0.6 0.4 7.2 5.0 27 41 c7t5d0 1.0 265.0 2.0 1965.9 0.2 0.1 0.6 0.3 6 8 c4t1d0 28.0 0.0 1612.9 0.0 0.0 1.0 0.0 34.0 0 95 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:21 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 26.7 0.0 819.9 0.0 0.1 0.4 5.0 14.0 7 37 c7t0d0 16.8 0.0 368.4 0.0 0.1 0.2 7.7 11.2 5 19 c7t1d0 84.2 0.0 475.3 0.0 0.5 0.3 5.7 3.7 19 31 c7t2d0 34.7 0.0 732.7 0.0 0.1 0.3 2.6 9.0 3 31 c7t3d0 18.8 0.0 554.5 0.0 0.0 0.2 1.3 9.7 1 18 c7t4d0 80.2 0.0 653.5 0.0 0.4 0.3 5.1 3.9 17 32 c7t5d0 0.0 5.9 0.0 4.0 0.0 0.0 0.1 0.1 0 0 c4t1d0 23.8 0.0 1348.6 0.0 0.0 1.0 0.0 41.0 0 97 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:22 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 18.2 0.0 254.4 0.0 0.1 0.2 4.2 10.9 3 20 c7t0d0 12.1 0.0 411.8 0.0 0.0 0.2 1.8 15.0 2 18 c7t1d0 85.8 0.0 480.5 0.0 0.5 0.4 6.0 4.2 25 36 c7t2d0 12.1 0.0 230.1 0.0 0.0 0.1 2.5 9.8 2 12 c7t3d0 7.1 0.0 209.9 0.0 0.0 0.1 2.3 11.5 2 8 c7t4d0 97.9 0.0 646.0 0.0 0.6 0.5 6.1 4.7 28 46 c7t5d0 27.3 0.0 1744.2 0.0 0.0 0.9 0.1 32.6 0 89 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:23 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 19.0 0.0 260.2 0.0 0.2 0.2 11.0 8.9 8 17 c7t0d0 20.0 0.0 324.2 0.0 0.2 0.2 10.6 10.8 8 22 c7t1d0 84.1 0.0 780.5 0.0 0.5 0.4 5.7 5.1 22 43 c7t2d0 20.0 0.0 268.2 0.0 0.1 0.2 5.9 8.3 6 17 c7t3d0 18.0 0.0 252.2 0.0 0.1 0.2 6.0 10.7 5 19 c7t4d0 82.1 0.0 576.4 0.0 0.5 0.4 6.6 4.7 23 39 c7t5d0 24.0 0.0 1476.5 0.0 0.0 0.9 0.1 39.4 0 95 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:24 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 23.0 0.0 392.0 0.0 0.1 0.2 4.9 9.5 5 22 c7t0d0 14.0 0.0 124.0 0.0 0.1 0.1 4.6 8.7 2 12 c7t1d0 85.0 0.0 448.0 0.0 0.6 0.4 7.0 4.7 26 40 c7t2d0 28.0 0.0 592.0 0.0 0.1 0.2 4.7 6.9 9 19 c7t3d0 11.0 0.0 56.0 0.0 0.1 0.1 5.2 10.0 2 11 c7t4d0 85.0 0.0 572.0 0.0 0.6 0.4 6.7 4.2 25 36 c7t5d0 29.0 0.0 1856.0 0.0 0.0 0.9 0.1 31.6 0 92 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:25 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 12.0 0.0 116.0 0.0 0.1 0.1 6.6 7.5 3 9 c7t0d0 8.0 0.0 152.0 0.0 0.0 0.1 5.2 9.9 3 8 c7t1d0 36.0 0.0 168.0 0.0 0.2 0.1 5.5 3.6 8 13 c7t2d0 15.0 0.0 120.0 0.0 0.1 0.1 6.7 4.7 3 7 c7t3d0 9.0 0.0 36.0 0.0 0.1 0.1 6.4 8.8 4 8 c7t4d0 36.0 0.0 184.0 0.0 0.2 0.1 5.5 2.6 7 10 c7t5d0 0.0 291.9 0.0 2424.9 0.5 0.1 1.6 0.5 13 13 c4t1d0 12.0 0.0 767.8 0.0 0.0 0.7 0.0 56.7 0 68 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:26 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 9.0 0.0 156.0 0.0 0.0 0.1 0.0 9.5 0 9 c7t0d0 23.0 0.0 468.0 0.0 0.2 0.2 7.6 7.5 7 17 c7t1d0 117.0 0.0 1032.1 0.0 0.7 0.5 5.7 4.3 26 51 c7t2d0 12.0 0.0 348.0 0.0 0.0 0.2 2.4 12.7 3 15 c7t3d0 23.0 0.0 224.0 0.0 0.2 0.2 8.5 7.0 7 16 c7t4d0 114.0 0.0 604.1 0.0 0.7 0.4 6.0 3.9 28 45 c7t5d0 0.0 6.0 0.0 4.0 0.0 0.0 0.0 0.1 0 0 c4t1d0 27.0 0.0 1665.7 0.0 0.0 1.0 
0.0 35.2 0 95 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:27 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 22.8 0.0 510.8 0.0 0.1 0.3 5.4 11.4 5 26 c7t0d0 22.8 0.0 328.7 0.0 0.1 0.2 6.0 8.8 6 20 c7t1d0 86.1 0.0 475.2 0.0 0.5 0.3 5.6 3.6 22 31 c7t2d0 24.7 0.0 281.1 0.0 0.2 0.2 7.5 7.4 9 18 c7t3d0 22.8 0.0 388.1 0.0 0.1 0.2 4.5 8.4 4 19 c7t4d0 74.2 0.0 594.0 0.0 0.5 0.4 6.6 4.9 20 36 c7t5d0 24.7 0.0 1465.1 0.0 0.0 0.9 0.0 38.3 0 95 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:28 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 48.5 2.0 385.0 0.0 0.4 0.3 7.0 6.2 17 31 c7t0d0 26.3 2.0 106.1 0.0 0.2 0.2 6.2 6.9 7 19 c7t1d0 105.1 2.0 599.2 0.0 0.6 0.4 5.5 3.3 26 36 c7t2d0 41.4 2.0 421.4 0.0 0.4 0.3 8.4 6.9 15 30 c7t3d0 24.3 2.0 166.7 0.0 0.1 0.2 5.6 7.2 8 19 c7t4d0 104.1 0.0 634.6 0.0 0.6 0.4 6.2 3.9 27 41 c7t5d0 29.3 0.0 1875.4 0.0 0.0 0.9 0.1 30.7 0 90 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:29 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 38.0 2.0 409.0 0.0 0.4 0.4 8.9 9.3 16 37 c7t0d0 20.0 2.0 381.0 0.0 0.1 0.2 4.4 7.4 7 16 c7t1d0 114.0 2.0 793.0 0.0 0.6 0.4 4.8 3.4 22 40 c7t2d0 30.0 2.0 609.0 0.0 0.2 0.2 6.5 6.8 11 22 c7t3d0 21.0 2.0 209.0 0.0 0.1 0.1 4.5 5.3 5 12 c7t4d0 114.0 4.0 570.0 0.0 0.7 0.4 6.0 3.7 30 43 c7t5d0 14.0 4.0 6.0 0.0 0.0 0.0 0.0 0.0 0 0 c4t1d0 29.0 0.0 1796.0 0.0 0.0 0.9 0.0 32.7 0 95 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:30 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 15.0 0.0 180.0 0.0 0.1 0.2 6.8 10.9 5 16 c7t0d0 19.0 0.0 136.0 0.0 0.1 0.1 5.4 7.7 5 15 c7t1d0 70.0 0.0 452.0 0.0 0.3 0.2 3.9 3.2 12 22 c7t2d0 23.0 0.0 512.0 0.0 0.3 0.3 13.0 12.2 11 28 c7t3d0 14.0 0.0 176.0 0.0 0.1 0.1 9.3 8.4 5 12 c7t4d0 73.0 0.0 528.0 0.0 0.3 0.3 4.5 3.7 15 27 c7t5d0 0.0 36.0 0.0 76.5 0.0 0.0 0.2 0.1 0 0 c4t1d0 22.0 0.0 1407.9 0.0 0.0 1.0 0.0 44.2 0 97 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:31 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 16.8 0.0 304.9 0.0 0.0 0.1 2.0 7.6 2 13 c7t0d0 18.8 0.0 134.6 0.0 0.2 0.2 11.3 10.3 10 19 c7t1d0 30.7 0.0 134.6 0.0 0.2 0.1 5.7 4.4 7 14 c7t2d0 10.9 0.0 43.6 0.0 0.1 0.1 6.5 11.3 7 12 c7t3d0 9.9 0.0 158.4 0.0 0.1 0.1 8.4 10.4 5 10 c7t4d0 37.6 0.0 225.7 0.0 0.2 0.2 6.5 5.5 11 21 c7t5d0 0.0 258.4 0.0 2299.9 0.3 0.1 1.0 0.3 8 8 c4t1d0 10.9 0.0 639.6 0.0 0.0 1.0 0.0 90.4 0 98 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:32 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 9.1 0.0 218.2 0.0 0.0 0.1 1.8 14.2 2 13 c7t0d0 27.3 0.0 351.5 0.0 0.2 0.3 6.3 10.8 10 29 c7t1d0 110.1 0.0 759.6 0.0 0.5 0.4 4.9 4.0 25 44 c7t2d0 17.2 0.0 553.5 0.0 0.0 0.2 0.6 9.7 1 17 c7t3d0 30.3 0.0 606.0 0.0 0.2 0.3 5.5 8.6 8 26 c7t4d0 104.0 0.0 553.5 0.0 0.5 0.4 5.2 3.6 26 37 c7t5d0 30.3 0.0 1939.3 0.0 0.0 1.0 0.0 31.9 0 97 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:33 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 23.8 0.0 395.9 0.0 0.1 0.2 2.8 8.7 3 21 c7t0d0 13.9 0.0 293.0 0.0 0.1 0.2 7.7 12.1 4 17 c7t1d0 139.6 0.0 795.8 0.0 0.8 0.5 5.9 3.4 34 48 c7t2d0 19.8 0.0 197.9 0.0 0.1 0.1 2.5 5.8 2 12 c7t3d0 16.8 0.0 245.5 0.0 0.1 0.1 5.4 7.9 4 13 c7t4d0 143.5 0.0 720.5 0.0 0.8 0.4 5.3 2.9 33 42 c7t5d0 38.6 0.0 2411.5 0.0 0.0 0.9 0.0 24.3 0 94 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:34 
MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 25.3 0.0 480.9 0.0 0.2 0.3 6.3 11.2 6 28 c7t0d0 24.2 0.0 230.3 0.0 0.3 0.2 11.3 8.2 13 20 c7t1d0 98.0 0.0 699.1 0.0 0.6 0.4 6.4 3.9 28 38 c7t2d0 25.3 0.0 238.4 0.0 0.2 0.2 7.2 7.8 8 20 c7t3d0 21.2 0.0 165.7 0.0 0.3 0.2 12.4 8.6 11 18 c7t4d0 102.0 0.0 662.8 0.0 0.7 0.4 6.9 3.9 32 39 c7t5d0 29.3 0.0 1750.8 0.0 0.0 1.0 0.0 32.8 0 96 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:35 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 30.0 0.0 428.1 0.0 0.2 0.3 8.2 10.3 10 31 c7t0d0 16.0 0.0 424.1 0.0 0.0 0.2 0.5 12.6 1 20 c7t1d0 93.0 0.0 676.2 0.0 0.5 0.4 5.0 4.0 19 37 c7t2d0 31.0 0.0 308.1 0.0 0.3 0.3 8.3 9.3 12 29 c7t3d0 12.0 0.0 228.1 0.0 0.0 0.1 1.2 11.3 1 14 c7t4d0 98.0 0.0 632.2 0.0 0.6 0.4 5.7 3.8 23 37 c7t5d0 1.0 271.1 2.0 2035.1 0.3 0.1 1.2 0.4 9 11 c4t1d0 31.0 0.0 1922.5 0.0 0.0 1.0 0.0 30.9 0 96 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:36 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 21.0 0.0 324.0 0.0 0.2 0.2 8.0 8.7 9 18 c7t0d0 10.0 0.0 280.0 0.0 0.0 0.1 3.0 13.2 3 13 c7t1d0 35.0 0.0 148.0 0.0 0.2 0.2 5.2 5.0 10 18 c7t2d0 16.0 0.0 184.0 0.0 0.1 0.2 6.5 10.7 5 17 c7t3d0 12.0 0.0 288.0 0.0 0.0 0.1 2.1 11.7 1 14 c7t4d0 44.0 0.0 540.0 0.0 0.3 0.3 6.6 6.9 12 30 c7t5d0 0.0 6.0 0.0 4.0 0.0 0.0 0.0 0.1 0 0 c4t1d0 16.0 0.0 966.0 0.0 0.0 1.0 0.0 61.4 0 98 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:37 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 18.8 0.0 669.0 0.0 0.1 0.2 4.6 12.5 4 24 c7t0d0 12.9 0.0 407.8 0.0 0.0 0.2 3.8 13.6 5 17 c7t1d0 56.4 0.0 328.6 0.0 0.3 0.3 5.9 4.6 16 26 c7t2d0 10.9 0.0 162.3 0.0 0.0 0.1 4.6 12.9 3 14 c7t3d0 9.9 0.0 99.0 0.0 0.1 0.1 5.1 9.3 3 9 c7t4d0 50.5 0.0 419.6 0.0 0.3 0.2 6.7 4.1 15 21 c7t5d0 14.8 0.0 950.1 0.0 0.0 1.0 0.0 65.8 0 98 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:38 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 22.2 0.0 149.6 0.0 0.2 0.3 9.3 12.3 13 27 c7t0d0 17.2 0.0 190.0 0.0 0.1 0.1 7.7 8.3 5 14 c7t1d0 71.8 0.0 440.6 0.0 0.4 0.4 6.0 5.3 19 38 c7t2d0 33.3 0.0 315.3 0.0 0.3 0.3 8.6 9.3 16 31 c7t3d0 21.2 0.0 388.1 0.0 0.0 0.1 2.2 4.9 3 10 c7t4d0 68.7 0.0 473.0 0.0 0.4 0.3 5.6 4.2 14 29 c7t5d0 32.3 0.0 2006.0 0.0 0.0 1.0 0.0 29.7 0 96 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:39 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 53.4 4.0 516.6 0.0 0.5 0.4 9.0 6.4 24 37 c7t0d0 22.8 4.0 279.1 0.0 0.1 0.2 2.5 6.2 2 17 c7t1d0 106.9 4.0 591.8 0.0 0.5 0.4 4.5 3.3 21 37 c7t2d0 53.4 4.0 524.5 0.0 0.3 0.3 5.5 4.5 14 26 c7t3d0 17.8 4.0 148.4 0.0 0.1 0.1 2.8 4.2 2 9 c7t4d0 99.0 4.0 631.4 0.0 0.3 0.2 3.3 2.0 13 20 c7t5d0 13.9 4.0 5.9 0.0 0.0 0.0 0.0 0.0 0 0 c4t1d0 30.7 0.0 1963.5 0.0 0.0 1.0 0.0 31.1 0 95 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:40 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 28.3 0.0 420.0 0.0 0.3 0.4 11.0 12.8 18 36 c7t0d0 21.2 0.0 206.0 0.0 0.1 0.2 5.6 9.4 7 20 c7t1d0 83.8 0.0 557.3 0.0 0.5 0.4 6.5 4.7 24 39 c7t2d0 29.3 0.0 420.0 0.0 0.2 0.3 7.1 9.4 13 27 c7t3d0 19.2 0.0 197.9 0.0 0.1 0.2 7.7 9.8 8 19 c7t4d0 84.8 0.0 500.8 0.0 0.6 0.4 7.0 5.1 26 43 c7t5d0 1.0 236.2 3.0 1602.7 0.0 0.4 0.0 1.7 0 11 c4t1d0 24.2 0.0 1489.7 0.0 0.0 0.9 0.1 39.1 0 95 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:41 MSD 2011 extended 
device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 31.0 0.0 368.1 0.0 0.3 0.4 10.7 12.4 16 38 c7t0d0 13.0 0.0 232.1 0.0 0.1 0.1 4.9 8.9 3 12 c7t1d0 83.0 0.0 504.1 0.0 0.6 0.4 7.4 5.1 27 43 c7t2d0 30.0 0.0 540.1 0.0 0.2 0.3 6.5 8.8 12 26 c7t3d0 10.0 0.0 100.0 0.0 0.0 0.1 3.8 8.8 3 9 c7t4d0 87.0 0.0 656.1 0.0 0.6 0.5 6.6 5.2 27 45 c7t5d0 0.0 6.0 0.0 4.0 0.0 0.0 0.0 0.1 0 0 c4t1d0 25.0 0.0 1542.3 0.0 0.0 1.0 0.0 38.2 0 95 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:42 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 8.0 0.0 219.8 0.0 0.1 0.1 9.7 12.6 3 10 c7t0d0 4.0 0.0 75.9 0.0 0.0 0.0 10.2 7.3 2 3 c7t1d0 42.0 0.0 275.8 0.0 0.3 0.1 6.6 3.3 11 14 c7t2d0 9.0 0.0 163.9 0.0 0.1 0.1 14.3 9.4 4 8 c7t3d0 4.0 0.0 75.9 0.0 0.0 0.1 4.8 13.1 2 5 c7t4d0 38.0 0.0 183.9 0.0 0.2 0.1 6.4 3.6 9 14 c7t5d0 9.0 0.0 575.6 0.0 0.0 1.0 0.1 108.8 0 98 c0t600144F09844CF0000004D8376AE0002d0 Sat Jun 11 22:56:43 MSD 2011 extended device statistics r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device 23.0 0.0 452.5 0.0 0.1 0.2 5.4 10.4 7 24 c7t0d0 11.0 0.0 284.3 0.0 0.1 0.2 4.9 15.2 5 17 c7t1d0 76.1 0.0 416.5 0.0 0.5 0.3 6.6 4.5 21 34 c7t2d0 28.0 0.0 352.4 0.0 0.2 0.2 7.8 7.5 12 21 c7t3d0 8.0 0.0 212.3 0.0 0.0 0.1 2.3 12.2 2 10 c7t4d0 75.1 0.0 424.5 0.0 0.4 0.3 5.6 3.9 16 29 c7t5d0 23.0 0.0 1350.6 0.0 0.0 1.0 0.1 41.6 0 96 c0t600144F09844CF0000004D8376AE0002d0 ================================================== zpool iostat -v 10: capacity operations bandwidth pool alloc free read write read write ---------- ----- ----- ----- ----- ----- ----- pool 9.40T 1.48T 220 0 913K 0 raidz2 9.40T 1.48T 220 0 913K 0 c7t0d0 - - 20 0 400K 0 c7t1d0 - - 17 0 263K 0 c7t2d0 - - 66 0 456K 0 c7t3d0 - - 18 0 375K 0 c7t4d0 - - 18 0 284K 0 c7t5d0 - - 65 0 485K 0 ---------- ----- ----- ----- ----- ----- ----- rpool 17.1G 2.76G 0 55 0 433K c4t1d0s0 17.1G 2.76G 0 55 0 433K ---------- ----- ----- ----- ----- ----- ----- capacity operations bandwidth pool alloc free read write read write ---------- ----- ----- ----- ----- ----- ----- pool 9.40T 1.48T 279 0 1.12M 0 raidz2 9.40T 1.48T 279 0 1.12M 0 c7t0d0 - - 24 0 278K 0 c7t1d0 - - 25 0 340K 0 c7t2d0 - - 78 0 465K 0 c7t3d0 - - 27 0 405K 0 c7t4d0 - - 26 0 365K 0 c7t5d0 - - 79 0 508K 0 ---------- ----- ----- ----- ----- ----- ----- rpool 17.1G 2.76G 0 71 0 418K c4t1d0s0 17.1G 2.76G 0 71 0 418K ---------- ----- ----- ----- ----- ----- ----- capacity operations bandwidth pool alloc free read write read write ---------- ----- ----- ----- ----- ----- ----- pool 9.40T 1.48T 278 0 1.11M 0 raidz2 9.40T 1.48T 278 0 1.11M 0 c7t0d0 - - 21 0 385K 0 c7t1d0 - - 15 0 291K 0 c7t2d0 - - 86 0 574K 0 c7t3d0 - - 20 0 293K 0 c7t4d0 - - 15 0 258K 0 c7t5d0 - - 86 0 601K 0 ---------- ----- ----- ----- ----- ----- ----- rpool 17.1G 2.76G 0 49 273 391K c4t1d0s0 17.1G 2.76G 0 49 273 391K ---------- ----- ----- ----- ----- ----- ----- capacity operations bandwidth pool alloc free read write read write ---------- ----- ----- ----- ----- ----- ----- pool 9.40T 1.48T 271 0 1.08M 0 raidz2 9.40T 1.48T 271 0 1.08M 0 c7t0d0 - - 20 0 362K 0 c7t1d0 - - 16 0 274K 0 c7t2d0 - - 84 0 551K 0 c7t3d0 - - 22 0 343K 0 c7t4d0 - - 15 0 236K 0 c7t5d0 - - 84 0 556K 0 ---------- ----- ----- ----- ----- ----- ----- rpool 17.1G 2.76G 0 54 195 421K c4t1d0s0 17.1G 2.76G 0 54 195 421K ---------- ----- ----- ----- ----- ----- ----- capacity operations bandwidth pool alloc free read write read write ---------- ----- ----- ----- ----- ----- ----- pool 9.40T 1.48T 273 0 1.09M 0 raidz2 
9.40T 1.48T 273 0 1.09M 0 c7t0d0 - - 23 0 352K 0 c7t1d0 - - 18 0 286K 0 c7t2d0 - - 81 0 517K 0 c7t3d0 - - 22 0 310K 0 c7t4d0 - - 17 0 249K 0 c7t5d0 - - 81 0 516K 0 ---------- ----- ----- ----- ----- ----- ----- rpool 17.1G 2.76G 0 53 193 419K c4t1d0s0 17.1G 2.76G 0 53 193 419K ---------- ----- ----- ----- ----- ----- -----
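One more way to tie the rpool numbers above back to the TXG train is to count spa_sync() calls per pool over the same interval; a sketch, assuming spa_sync()'s first argument is the spa_t and that the pool name lives in its spa_name member:

dtrace -n 'fbt:zfs:spa_sync:entry { @[stringof(args[0]->spa_name)] = count(); } tick-10sec { printa("%-12s %@d syncs\n", @); trunc(@); }'

Seeing how the sync counts per interval line up with the periodic ~2Mb write bursts on c4t1d0 in the iostat output would confirm (or refute) the earlier estimate of a couple of Mb written per rpool TXG.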