Wade.Stuart at fallon.com
2008-Oct-13 15:49 UTC
[zfs-discuss] scrub restart patch status..
Any news on whether the scrub/resilver/snap reset patch will make it into the 10/08 update?

Thanks!

Wade Stuart
we are fallon
P: 612.758.2660
C: 612.877.0385
I'm also very interested in this. I'm having a lot of pain with status
requests killing my resilvers. In the example below I was trying to test
whether timf's auto-snapshot service was killing my resilver, only to
find that calling zpool status seems to be the issue:

[root@filer1 ~]# env LC_ALL=C zpool status $POOL | grep " in progress"
 scrub: resilver in progress, 0.26% done, 35h4m to go

[root@filer1 ~]# env LC_ALL=C zpool status $POOL
  pool: pit
 state: DEGRADED
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-D3
 scrub: resilver in progress, 0.00% done, 484h39m to go
config:

        NAME          STATE     READ WRITE CKSUM
        pit           DEGRADED     0     0     0
          raidz2      DEGRADED     0     0     0
            c2t0d0    ONLINE       0     0     0
            c3t0d0    ONLINE       0     0     0
            c3t1d0    ONLINE       0     0     0
            c2t1d0    ONLINE       0     0     0
            spare     DEGRADED     0     0     0
              c3t3d0  UNAVAIL      0     0     0  cannot open
              c3t7d0  ONLINE       0     0     0
            c2t2d0    ONLINE       0     0     0
            c3t5d0    ONLINE       0     0     0
            c3t6d0    ONLINE       0     0     0
        spares
          c3t7d0      INUSE     currently in use

errors: No known data errors

Note that the second call shows the resilver knocked back to 0.00% done
(it was at 0.26% just before), with the estimate jumping from 35h4m to
484h39m.
Blake Irvin wrote:
> I'm also very interested in this. I'm having a lot of pain with status
> requests killing my resilvers. In the example below I was trying to test
> whether timf's auto-snapshot service was killing my resilver, only to
> find that calling zpool status seems to be the issue:

workaround: don't run zpool status as root.
 -- richard
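[One way to do that without switching users, as an untested sketch: run
the query with only the basic privilege set via ppriv(1). Whether that
is enough to stop zpool status from resetting the resilver is an
assumption, not something verified in this thread.]

  # query pool status with the basic privilege set instead of full
  # root privileges; "pit" is the pool from the example above
  ppriv -e -s EIP=basic /usr/sbin/zpool status pit | grep " in progress"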
Correct, that is a workaround. The fact that I use the beta (alpha?) zfs
auto-snapshot service means that when the service checks for active
scrubs, it kills the resilver. I think I will talk to Tim about modifying
his method script to run the scrub check with least privilege (i.e., not
as root).

On 10/13/08, Richard Elling <Richard.Elling at sun.com> wrote:
> Blake Irvin wrote:
>> I'm also very interested in this. I'm having a lot of pain with status
>> requests killing my resilvers. In the example below I was trying to
>> test whether timf's auto-snapshot service was killing my resilver,
>> only to find that calling zpool status seems to be the issue:
>
> workaround: don't run zpool status as root.
>  -- richard
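Something along these lines might work in the method script (a sketch
only, not timf's actual code; the pool_busy helper and the use of su to
an unprivileged account are my own assumptions):

  # check for an active scrub/resilver as an unprivileged user, so the
  # status query itself cannot restart the resilver ("daemon" is just
  # an example account with no special privileges)
  pool_busy() {
          /usr/bin/su daemon -c "/usr/sbin/zpool status $POOL" 2>/dev/null |
              grep " in progress" > /dev/null
  }

  if pool_busy ; then
          echo "scrub or resilver in progress on $POOL, skipping snapshot"
          exit 0
  fi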