Kevin Denton
2010-Apr-15 09:01 UTC
[zfs-discuss] raidz2 drive failure zpool will not import
After attempting unsuccessfully to replace a failed drive in a 10-drive raidz2
array, and after reading as many forum entries as I could find, I followed a
suggestion to export and import the pool.
In another attempt to import the pool I reinstalled the OS, but I have so far
been unable to import it.
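(The exact replacement command is not quoted anywhere in this thread; judging
from the "replacing-3" vdev in the zpool import output below, it was presumably
an in-place replacement along the lines of:

        zpool replace storage c5d1

with the new disk at the same device path.)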
Here is the output from format and zpool commands:
kevin@opensolaris:~# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c8d0s0    ONLINE       0     0     0

errors: No known data errors
kevin@opensolaris:~# format
Searching for disks...done


AVAILABLE DISK SELECTIONS:
       0. c4d0 <ST350083- 9QG0LW8-0001-465.76GB>
          /pci@0,0/pci8086,244e@1e/pci-ide@1/ide@0/cmdk@0,0
       1. c4d1 <ST350063- 9QG1E50-0001-465.76GB>
          /pci@0,0/pci8086,244e@1e/pci-ide@1/ide@0/cmdk@1,0
       2. c5d0 <ST350063- 9QG3AM7-0001-465.76GB>
          /pci@0,0/pci8086,244e@1e/pci-ide@1/ide@1/cmdk@0,0
       3. c5d1 <ST350063- 9QG19MY-0001-465.76GB>
          /pci@0,0/pci8086,244e@1e/pci-ide@1/ide@1/cmdk@1,0
       4. c6d0 <ST350063- 9QG19VY-0001-465.76GB>
          /pci@0,0/pci8086,244e@1e/pci-ide@2/ide@0/cmdk@0,0
       5. c6d1 <ST350063- 5QG019W-0001-465.76GB>
          /pci@0,0/pci8086,244e@1e/pci-ide@2/ide@0/cmdk@1,0
       6. c7d0 <ST350063- 9QG1DKF-0001-465.76GB>
          /pci@0,0/pci8086,244e@1e/pci-ide@2/ide@1/cmdk@0,0
       7. c7d1 <ST350063- 5QG0B2Y-0001-465.76GB>
          /pci@0,0/pci8086,244e@1e/pci-ide@2/ide@1/cmdk@1,0
       8. c8d0 <DEFAULT cyl 9961 alt 2 hd 255 sec 63>
          /pci@0,0/pci-ide@1f,1/ide@0/cmdk@0,0
       9. c10d0 <ST350083- 9QG0LR5-0001-465.76GB>
          /pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0
      10. c11d0 <ST350083- 9QG0LW6-0001-465.76GB>
          /pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0
Specify disk (enter its number): ^C
kevin@opensolaris:~# zpool import
  pool: storage
    id: 18058787158441119951
 state: UNAVAIL
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

        storage          UNAVAIL  insufficient replicas
          raidz2-0       DEGRADED
            c4d0         ONLINE
            c4d1         ONLINE
            c5d0         ONLINE
            replacing-3  DEGRADED
              c5d1       ONLINE
              c5d1       FAULTED  corrupted data
            c6d0         ONLINE
            c6d1         ONLINE
            c7d0         ONLINE
            c7d1         ONLINE
            c10d0        ONLINE
            c11d0        ONLINE
kevin@opensolaris:~# zpool import -f
  pool: storage
    id: 18058787158441119951
 state: UNAVAIL
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

        storage          UNAVAIL  insufficient replicas
          raidz2-0       DEGRADED
            c4d0         ONLINE
            c4d1         ONLINE
            c5d0         ONLINE
            replacing-3  DEGRADED
              c5d1       ONLINE
              c5d1       FAULTED  corrupted data
            c6d0         ONLINE
            c6d1         ONLINE
            c7d0         ONLINE
            c7d1         ONLINE
            c10d0        ONLINE
            c11d0        ONLINE
kevin@opensolaris:~# zpool import -f storage
cannot import 'storage': one or more devices is currently unavailable
        Destroy and re-create the pool from
        a backup source.
Prior to exporting the pool I was able to offline the failed drive.
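(The offline command itself is not quoted in the thread; it would presumably
have been something like:

        zpool offline storage c5d1

with c5d1 being the failed member shown in the import output above.)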
Finally, about a month ago I upgraded the zpool version to enable dedupe.
The suggestions I have read include "playing with" the metadata, which is
something I would need help with, as I am just an "informed" user.
I am hoping that, since only one drive failed and this is a dual-parity RAID,
there is some way to recover the pool.
Thanks in advance,
Kevin
--
This message posted from opensolaris.org
Richard Elling
2010-Apr-15 22:39 UTC
[zfs-discuss] raidz2 drive failure zpool will not import
zpool import can be a little pessimistic about corrupted labels. First, try
physically removing the problem disk and try to import again. If that doesn't
work, then verify the labels on each disk using:

        zdb -l /dev/rdsk/c5d1s0

Each disk should have 4 readable labels.
 -- richard
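(An easy way to run this check across every member disk is a small shell loop.
The following is just a sketch, assuming slice 0 holds the ZFS label as in
Richard's example, and that zdb -l prints a "LABEL n" header for each label it
can read:

        for d in c4d0 c4d1 c5d0 c5d1 c6d0 c6d1 c7d0 c7d1 c10d0 c11d0; do
                printf '%s: %s readable labels\n' "$d" \
                    "$(zdb -l /dev/rdsk/${d}s0 2>/dev/null | grep -c '^LABEL')"
        done

Each member should report 4.)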
Kevin Denton
2010-Apr-17 12:29 UTC
[zfs-discuss] raidz2 drive failure zpool will not import
Thanks Richard,

I tried removing the replacement drive and received the same error.

Output of zdb -l /dev/rdsk/c5d1s0 results in:

kevin@opensolaris:~# zdb -l /dev/rdsk/c5d1s0
cannot open '/dev/rdsk/c5d1s0': No such device or address

All other drives have 4 readable labels (0-3).

I even attempted the old trick of putting the failed drive in the freezer for
an hour and it did spin up, but only for a minute and not long enough to be
recognized by the system.

Not sure what to try next.

~kevin
--
This message posted from opensolaris.org