Michael
2007-Oct-09 18:11 UTC
[zfs-discuss] zpool status: backwards scrub progress when using iostat
I am using an x4500 with a single "4*(raidz2 9+2) + 2 spare" pool. I
have some bad blocks on one of the disks:
Oct 9 13:36:01 zeta1 scsi: [ID 107833 kern.warning] WARNING: /pci@2,0/pci1022,7458@8/pci11ab,11ab@1/disk@2,0 (sd13):
Oct 9 13:36:01 zeta1 Error for Command: read    Error Level: Retryable
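To see how many errors that drive has accumulated so far, iostat's error report is handy; just a rough sketch here (assuming sd13 is accepted as a device operand, otherwise run it with no argument and pick out the device):

# iostat -E sd13

That prints the running Soft/Hard/Transport error totals for the device since boot.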
I have been running a zpool scrub for about 20 hours so far (UK time):
2007-10-08.21:51:54 zpool scrub zeta
The progress seems to go backwards when I run zpool iostat.
scrub: scrub in progress, 2.28% done, 5h19m to go
scrub: scrub in progress, 2.45% done, 5h18m to go
scrub: scrub in progress, 2.70% done, 5h15m to go
#
# zpool iostat 5
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zeta        10.4T  9.64T    692    177  74.9M  15.3M
zeta        10.4T  9.64T  3.28K     54   395M   238K
zeta        10.4T  9.64T  1.69K      0  8.96M      0
zeta        10.4T  9.64T    981     42  6.82M   356K
zeta        10.4T  9.64T    693    177  74.9M  15.3M
zeta        10.4T  9.64T  4.75K      0   594M      0
zeta        10.4T  9.64T  4.51K      0   564M      0
zeta        10.4T  9.64T  4.62K     75   578M   402K
scrub: scrub in progress, 0.54% done, 4h49m to go
And the estimated time to go is not counting down.
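To check whether the counter really is being reset rather than just misreported, one rough sketch (the interval and log file are arbitrary) is to log the scrub line every few minutes and look for the jumps:

# while true; do date; zpool status zeta | grep scrub; sleep 300; done >> /tmp/scrub.log

If the percentage falls back towards zero at particular times, the log should show when the resets happen.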
Here is the full status:
# zpool status -v
pool: zeta
state: ONLINE
scrub: scrub in progress, 0.32% done, 5h14m to go
config:
NAME          STATE     READ WRITE CKSUM
gsazeta       ONLINE       0     0     0
  raidz2      ONLINE       0     0     0
    c4t0d0    ONLINE       0     0     0
    c4t4d0    ONLINE       0     0     0
    c7t0d0    ONLINE       0     0     0
    c7t4d0    ONLINE       0     0     0
    c6t0d0    ONLINE       0     0     0
    c6t4d0    ONLINE       0     0     0
    c1t0d0    ONLINE       0     0     0
    c1t4d0    ONLINE       0     0     0
    c0t0d0    ONLINE       0     0     0
    c0t4d0    ONLINE       0     0     0
    c5t1d0    ONLINE       0     0     0
  raidz2      ONLINE       0     0     0
    c5t5d0    ONLINE       0     0     0
    c4t1d0    ONLINE       0     0     0
    c4t5d0    ONLINE       0     0     0
    c7t1d0    ONLINE       0     0     0
    c7t5d0    ONLINE       0     0     0
    c6t1d0    ONLINE       0     0     0
    c6t5d0    ONLINE       0     0     0
    c1t1d0    ONLINE       0     0     0
    c1t5d0    ONLINE       0     0     0
    c0t1d0    ONLINE       0     0     0
    c0t5d0    ONLINE       0     0     0
  raidz2      ONLINE       0     0     0
    c0t2d0    ONLINE       0     0     0
    c0t6d0    ONLINE       0     0     0
    c1t2d0    ONLINE       0     0     0
    c1t6d0    ONLINE       0     0     0
    c4t2d0    ONLINE       0     0     0
    c4t6d0    ONLINE       0     0     0
    c6t2d0    ONLINE       0     0     0
    c6t6d0    ONLINE       0     0     0
    c7t2d0    ONLINE       0     0     0
    c7t6d0    ONLINE       0     0     0
    c5t6d0    ONLINE       0     0     0
  raidz2      ONLINE       0     0     0
    c0t3d0    ONLINE       0     0     0
    c0t7d0    ONLINE       0     0     0
    c1t3d0    ONLINE       0     0     0
    c1t7d0    ONLINE       0     0     0
    c4t3d0    ONLINE       0     0     0
    c4t7d0    ONLINE       0     0     0
    c6t3d0    ONLINE       0     0     0
    c6t7d0    ONLINE       0     0     0
    c7t3d0    ONLINE       0     0     0
    c7t7d0    ONLINE       0     0     0
    c5t7d0    ONLINE       0     0     0
spares
  c5t2d0      AVAIL
  c5t3d0      AVAIL
errors: No known data errors
Wade.Stuart at fallon.com
2007-Oct-09 18:24 UTC
[zfs-discuss] zpool status: backwards scrub progress when using iostat
zfs-discuss-bounces at opensolaris.org wrote on 10/09/2007 01:11:16 PM:

> I am using an x4500 with a single "4*(raidz2 9+2) + 2 spare" pool. I
> have some bad blocks on one of the disks.
>
> I have been running a zpool scrub for about 20 hours so far (UK time):
> 2007-10-08.21:51:54 zpool scrub zeta
>
> The progress seems to go backwards when I run zpool iostat.
> scrub: scrub in progress, 2.28% done, 5h19m to go
> scrub: scrub in progress, 2.45% done, 5h18m to go
> scrub: scrub in progress, 2.70% done, 5h15m to go

Have you created any snapshots while the scrub was running? There is a
bug that resets the scrub/resilver every time you make a new snapshot.
The workaround: stop making snapshots while scrubbing or resilvering. It
really sucks if the sole purpose of the machine is for snaps.

More info: bug id 6343667

Matthew Ahrens also recently said that this should be fixed sometime
around the new year; the bug id itself does not show any useful
information about the status.

-Wade
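P.S. A quick way to check is to compare snapshot creation times with the time the scrub was started; a rough sketch (zeta is your pool name from above):

# zfs list -r -t snapshot -o name,creation -s creation zeta
# zpool history zeta | grep snapshot

Anything created after 2007-10-08.21:51:54 would have kicked the scrub back to the beginning.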