Alex
2007-May-09 11:24 UTC
[zfs-discuss] zpool status faulted, but raid1z status is online?
The drive in my Solaris box that had the OS on it decided to kick the bucket this
evening, a joyous occasion for all, but luckily all my data is stored on a zpool
and the OS is nothing but a shell to serve it up. One quick install later and
I'm back trying to import my pool, and things are not going well.
Once I have things where I want them, I issue an import:
# zpool import
  pool: ftp
    id: 1752478903061397634
 state: FAULTED
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
        devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-3C
config:

        ftp         FAULTED   corrupted data
          raidz1    DEGRADED
            c1d0    ONLINE
            c1d1    ONLINE
            c4d0    UNAVAIL   cannot open
Looks like c4d0 died as well; they were purchased at the same time, but oh well.
ZFS should still be able to recover because I have two working drives, and the
raidz1 reads as DEGRADED, not destroyed. So why does the pool itself read as
FAULTED?
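For what it's worth, my reasoning about the redundancy can be sketched in a few lines of Python (`raidz_readable` is just a name I made up for illustration, not any ZFS API):

```python
# My understanding of raidz redundancy, as a sketch (not a ZFS API):
# a raidz vdev with p parity disks should stay readable with up to
# p missing children.
def raidz_readable(nparity, failed):
    return failed <= nparity

print(raidz_readable(1, 1))  # raidz1 with one dead disk (c4d0): True
print(raidz_readable(1, 2))  # raidz1 with two dead disks: False
```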
I issue an import with force, thinking the system is just being silly.
# zpool import -f ftp
cannot import 'ftp': I/O error
Odd. After looking through the threads here, I see that a drive's label is
rather important when importing, so I go look at what zdb thinks the labels
for my drives are.
First, the pool itself:
# zdb -l ftp
--------------------------------------------
LABEL 0
--------------------------------------------
failed to read label 0
--------------------------------------------
LABEL 1
--------------------------------------------
failed to read label 1
--------------------------------------------
LABEL 2
--------------------------------------------
failed to read label 2
--------------------------------------------
LABEL 3
--------------------------------------------
failed to read label 3
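From what I've read, ZFS keeps four copies of the vdev label on every device: each label is 256 KiB, with two at the front and two at the end of the device, so a single bad region can't take out all four. A quick Python sketch of where they should sit (`label_offsets` is just my own illustrative helper, not a zdb function):

```python
# Where ZFS keeps its four vdev labels, per the on-disk format:
# 256 KiB each, two at the start of the device and two at the end.
LABEL_SIZE = 256 * 1024

def label_offsets(dev_size):
    # Illustrative helper only; zdb does this internally.
    return [0,
            LABEL_SIZE,
            dev_size - 2 * LABEL_SIZE,
            dev_size - LABEL_SIZE]

# e.g. using the asize zdb later reports for c1d1
print(label_offsets(250053918720))
```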
That's not good. How about the drives?
# zdb -l /dev/dsk/c1d0
--------------------------------------------
LABEL 0
--------------------------------------------
failed to unpack label 0
--------------------------------------------
LABEL 1
--------------------------------------------
failed to unpack label 1
--------------------------------------------
LABEL 2
--------------------------------------------
    version=3
    name='ftp'
    state=2
    txg=21807
    pool_guid=7724307712458785867
    top_guid=14476414087876222880
    guid=3298133982375235519
    vdev_tree
        type='raidz'
        id=0
        guid=14476414087876222880
        nparity=1
        metaslab_array=13
        metaslab_shift=32
        ashift=9
        asize=482945794048
        children[0]
                type='disk'
                id=0
                guid=4586792833877823382
                path='/dev/dsk/c0d0s3'
                devid='id1,cmdk@AMaxtor_6Y250P0=Y63AL5YE/d'
                whole_disk=0
        children[1]
                type='disk'
                id=1
                guid=3298133982375235519
                path='/dev/dsk/c4d0p0'
                devid='id1,cmdk@AST3250623A=____________5ND2VX5R/q'
                whole_disk=0
--------------------------------------------
LABEL 3
--------------------------------------------
    version=3
    name='ftp'
    state=2
    txg=21807
    pool_guid=7724307712458785867
    top_guid=14476414087876222880
    guid=3298133982375235519
    vdev_tree
        type='raidz'
        id=0
        guid=14476414087876222880
        nparity=1
        metaslab_array=13
        metaslab_shift=32
        ashift=9
        asize=482945794048
        children[0]
                type='disk'
                id=0
                guid=4586792833877823382
                path='/dev/dsk/c0d0s3'
                devid='id1,cmdk@AMaxtor_6Y250P0=Y63AL5YE/d'
                whole_disk=0
        children[1]
                type='disk'
                id=1
                guid=3298133982375235519
                path='/dev/dsk/c4d0p0'
                devid='id1,cmdk@AST3250623A=____________5ND2VX5R/q'
                whole_disk=0
#
So the label on the disk itself is there... mostly.
Now for disk 2:
# zdb -l /dev/dsk/c1d1
--------------------------------------------
LABEL 0
--------------------------------------------
failed to unpack label 0
--------------------------------------------
LABEL 1
--------------------------------------------
failed to unpack label 1
--------------------------------------------
LABEL 2
--------------------------------------------
    version=3
    name='ftp'
    state=2
    txg=21807
    pool_guid=7724307712458785867
    top_guid=11006938707951749786
    guid=11006938707951749786
    vdev_tree
        type='disk'
        id=1
        guid=11006938707951749786
        path='/dev/dsk/c1d0p0'
        devid='id1,cmdk@AWDC_WD2500JB-00GVA0=WD-WCAL71595642/q'
        whole_disk=0
        metaslab_array=112
        metaslab_shift=31
        ashift=9
        asize=250053918720
--------------------------------------------
LABEL 3
--------------------------------------------
    version=3
    name='ftp'
    state=2
    txg=21807
    pool_guid=7724307712458785867
    top_guid=11006938707951749786
    guid=11006938707951749786
    vdev_tree
        type='disk'
        id=1
        guid=11006938707951749786
        path='/dev/dsk/c1d0p0'
        devid='id1,cmdk@AWDC_WD2500JB-00GVA0=WD-WCAL71595642/q'
        whole_disk=0
        metaslab_array=112
        metaslab_shift=31
        ashift=9
        asize=250053918720
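To compare the guids across the two readable disks, I hacked up a throwaway parser for the flat key=value lines in the zdb output (`parse_label` is my own helper, nothing official; nested children[] keys overwrite earlier ones, which is fine since I only care about the top-level guids):

```python
# Throwaway helper (not part of zdb) to grab key=value pairs out of
# `zdb -l` output so the guids on each disk can be compared.
def parse_label(text):
    fields = {}
    for raw in text.splitlines():
        line = raw.strip()
        if '=' in line and not line.startswith('-'):
            key, _, value = line.partition('=')
            fields[key.strip()] = value.strip().strip("'")
    return fields

c1d1 = parse_label("""
    version=3
    name='ftp'
    pool_guid=7724307712458785867
    top_guid=11006938707951749786
""")
print(c1d1['pool_guid'])  # 7724307712458785867
print(c1d1['top_guid'])   # 11006938707951749786
```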
Finally, just because I like to laugh, c4d0
# zdb -l /dev/dsk/c4d0
--------------------------------------------
LABEL 0
--------------------------------------------
failed to read label 0
--------------------------------------------
LABEL 1
--------------------------------------------
failed to read label 1
--------------------------------------------
LABEL 2
--------------------------------------------
failed to read label 2
--------------------------------------------
LABEL 3
--------------------------------------------
failed to read label 3
mmm, dead drive.
And so I have come to the end of my knowledge. I have no clue how to interpret
this information into exactly what went wrong, so I was hoping someone else
could. Any help anyone can provide would be great; if output from any other
commands would be useful, just ask.
This message posted from opensolaris.org