Stuart Anderson
2007-Sep-08 03:55 UTC
[zfs-discuss] zpool degraded status after resilver completed
I am curious why zpool status reports a pool to be in the DEGRADED state
after a drive in a raidz2 vdev has been successfully replaced. In this
particular case drive c0t6d0 was failing, so I ran

	zpool offline home c0t6d0
	zpool replace home c0t6d0 c8t1d0

and after the resilvering finished the pool still reports a degraded state.
Hopefully this is incorrect. At this point, does the vdev in question now
have full raidz2 protection even though it is listed as "DEGRADED"?

P.S. This is on a pool created on S10U3 and upgraded to ZFS version 4
after upgrading the host to S10U4.

Thanks.

# zpool status
  pool: home
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in
        a degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
 scrub: resilver completed with 0 errors on Fri Sep  7 18:39:03 2007
config:

        NAME          STATE     READ WRITE CKSUM
        home          DEGRADED     0     0     0
          raidz2      ONLINE       0     0     0
            c0t0d0    ONLINE       0     0     0
            c1t0d0    ONLINE       0     0     0
            c5t0d0    ONLINE       0     0     0
            c7t0d0    ONLINE       0     0     0
            c8t0d0    ONLINE       0     0     0
            c0t1d0    ONLINE       0     0     0
            c1t1d0    ONLINE       0     0     0
            c5t1d0    ONLINE       0     0     0
            c6t1d0    ONLINE       0     0     0
            c7t1d0    ONLINE       0     0     0
            c0t2d0    ONLINE       0     0     0
          raidz2      ONLINE       0     0     0
            c1t2d0    ONLINE       0     0     0
            c5t2d0    ONLINE       0     0     0
            c6t2d0    ONLINE       0     0     0
            c7t2d0    ONLINE       0     0     0
            c8t2d0    ONLINE       0     0     0
            c0t3d0    ONLINE       0     0     0
            c1t3d0    ONLINE       0     0     0
            c5t3d0    ONLINE       0     0     0
            c6t3d0    ONLINE       0     0     0
            c7t3d0    ONLINE       0     0     0
            c8t3d0    ONLINE       0     0     0
          raidz2      ONLINE       0     0     0
            c0t4d0    ONLINE       0     0     0
            c1t4d0    ONLINE       0     0     0
            c5t4d0    ONLINE       0     0     0
            c7t4d0    ONLINE       0     0     0
            c8t4d0    ONLINE       0     0     0
            c0t5d0    ONLINE       0     0     0
            c1t5d0    ONLINE       0     0     0
            c5t5d0    ONLINE       0     0     0
            c6t5d0    ONLINE       0     0     0
            c7t5d0    ONLINE       0     0     0
            c8t5d0    ONLINE       0     0     0
          raidz2      DEGRADED     0     0     0
            spare     DEGRADED     0     0     0
              c0t6d0  OFFLINE      0     0     0
              c8t1d0  ONLINE       0     0     0
            c1t6d0    ONLINE       0     0     0
            c5t6d0    ONLINE       0     0     0
            c6t6d0    ONLINE       0     0     0
            c7t6d0    ONLINE       0     0     0
            c8t6d0    ONLINE       0     0     0
            c0t7d0    ONLINE       0     0     0
            c1t7d0    ONLINE       0     0     0
            c5t7d0    ONLINE       0     0     0
            c6t7d0    ONLINE       0     0     0
            c7t7d0    ONLINE       0     0     0
            c8t7d0    ONLINE       0     0     0
        spares
          c8t1d0      INUSE     currently in use

errors: No known data errors

--
Stuart Anderson                                 anderson at ligo.caltech.edu
http://www.ligo.caltech.edu/~anderson
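A hedged sketch of the likely fix, assuming the usual hot-spare semantics of
this ZFS version rather than anything confirmed in this thread: the pool
probably stays DEGRADED because the offlined c0t6d0 is still attached under
the "spare" vdev, not because redundancy is actually reduced after the
resilver. Detaching the old disk should promote c8t1d0 to a permanent raidz2
member and return the pool to ONLINE:

	# zpool detach home c0t6d0	(drop the offlined disk from the spare vdev)
	# zpool status home		(state should now report ONLINE)

If the pool still shows DEGRADED afterwards, 'zpool clear home' to reset the
error counters is the other standard, low-risk zpool subcommand to try.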
Stuart Anderson
2007-Sep-08 19:27 UTC
[zfs-discuss] zpool degraded status after resilver completed
Possibly related is the fact that fmd is now in a CPU spin loop constantly
checking the time, even though there are no reported faults, i.e.,

# fmdump -v
TIME                 UUID                                 SUNW-MSG-ID
fmdump: /var/fm/fmd/fltlog is empty

# svcs fmd
STATE          STIME    FMRI
online         13:11:43 svc:/system/fmd:default

# prstat
   PID USERNAME  SIZE   RSS STATE  PRI NICE      TIME  CPU PROCESS/NLWP
   422 root       17M   13M run     11    0  20:42:51  19% fmd/22

# truss -p 422 |& head -20
/13:    time()                                          = 1189279453
/13:    time()                                          = 1189279453
/13:    time()                                          = 1189279453
/13:    time()                                          = 1189279453
/13:    time()                                          = 1189279453
/13:    time()                                          = 1189279453
/13:    time()                                          = 1189279453
/13:    time()                                          = 1189279453
/13:    time()                                          = 1189279453
/13:    time()                                          = 1189279453
/13:    lwp_park(0xFDB7DF40, 0)                         Err#62 ETIME
/13:    time()                                          = 1189279453
/13:    time()                                          = 1189279453
/13:    time()                                          = 1189279453
/13:    time()                                          = 1189279453
/13:    time()                                          = 1189279453
/13:    time()                                          = 1189279453
/13:    time()                                          = 1189279453
/13:    time()                                          = 1189279453
/13:    time()                                          = 1189279453

Is this a known bug with fmd and ZFS?

Thanks.

On Fri, Sep 07, 2007 at 08:55:52PM -0700, Stuart Anderson wrote:
> I am curious why zpool status reports a pool to be in the DEGRADED state
> after a drive in a raidz2 vdev has been successfully replaced. In this
> particular case drive c0t6d0 was failing, so I ran
>
> 	zpool offline home c0t6d0
> 	zpool replace home c0t6d0 c8t1d0
>
> and after the resilvering finished the pool still reports a degraded state.
> Hopefully this is incorrect. At this point, does the vdev in question now
> have full raidz2 protection even though it is listed as "DEGRADED"?

--
Stuart Anderson                                 anderson at ligo.caltech.edu
http://www.ligo.caltech.edu/~anderson
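A hedged sketch for narrowing the fmd spin down, using standard Solaris FMA
and SMF commands rather than anything taken from this thread: fmstat prints
per-module statistics inside fmd and can show whether one diagnosis engine
(for example zfs-diagnosis) is the busy one, pstack captures the spinning
threads' stacks for a bug report, and restarting the SMF service is the
usual low-risk workaround:

	# fmstat					(look for a module with runaway counters)
	# pstack 422					(record the spinning threads' stacks)
	# svcadm restart svc:/system/fmd:default	(workaround: restart fmd)

If fmd starts spinning again after the restart, the fmstat and pstack output
would be the useful data to attach to a bug report; none of this is a
confirmed diagnosis of the behavior shown above.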