I seem to have run into an issue with a pool I have, and haven't found
a resolution yet. The box is currently running FreeBSD 7-STABLE with ZFS v13;
(Open)Solaris doesn't support my RAID controller.
In short: I moved all the data off a pool and destroyed it. Then I added a single
slice to each drive, labeled the slices using glabel, and created three
four-device raidz vdevs.
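For reference, the setup was roughly along these lines (reconstructed after the
fact, so the partitioning step and the device names are illustrative; only the
glabel names match what the pool actually used):

# one FreeBSD slice per disk, a glabel on each slice, then the pool on the labels
fdisk -I da0                        # single slice covering the disk -> da0s1
glabel label storage_01_1 da0s1     # shows up as /dev/label/storage_01_1
# ...repeated for the other eleven disks...

zpool create storage \
    raidz label/storage_01_1 label/storage_01_2 label/storage_01_3 label/storage_01_4 \
    raidz label/storage_02_1 label/storage_02_2 label/storage_02_3 label/storage_02_4 \
    raidz label/storage_03_1 label/storage_03_2 label/storage_03_3 label/storage_03_4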
All was well so far, so I copied all the data back onto the new, nicely set up
pool. After a reboot, I don't know what the hell happened, but the pool is now
showing as unavailable, the first four disks don't want to cooperate, and
zpool import storage gives the following error:
# zpool import storage
cannot import 'storage': more than one matching pool
import by numeric ID instead
So doing a zpool import gives me this:
  pool: storage
    id: 2169223940234886392
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        storage                 ONLINE
          raidz1                ONLINE
            da0                 ONLINE
            da4                 ONLINE
            da5                 ONLINE
            da2                 ONLINE

  pool: storage
    id: 4935707693171446193
 state: UNAVAIL
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-5E
config:

        storage                 UNAVAIL  insufficient replicas
          raidz1                ONLINE
            label/storage_02_1  ONLINE
            label/storage_02_2  ONLINE
            label/storage_02_3  ONLINE
            label/storage_02_4  ONLINE
          raidz1                ONLINE
            label/storage_03_1  ONLINE
            label/storage_03_2  ONLINE
            label/storage_03_3  ONLINE
            label/storage_03_4  ONLINE
For some reason I now have two pools named storage. The first one, which claims
to be made up of da{0,4,5,2}, corresponds to the drives missing from the 'real'
pool, except that it should be referencing the slices (da0s1 and so on), not
whole disks, and the pool was originally created on the respective glabels, not
raw device names. I have been searching and reading the ZFS mailing lists for a
few hours now and I'm at a loss. It seems that the ZFS vdev labels are corrupted
on the first raidz vdev.
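If I understand zpool import -d correctly, one way to keep the stale whole-disk
labels out of the scan would be to point it only at the glabel nodes; I have not
tried it yet, but it would look something like:

# scan only the glabel device nodes instead of all of /dev (untried)
zpool import -d /dev/label
# and, if only the real pool shows up there:
# zpool import -d /dev/label 4935707693171446193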
Running zdb -l /dev/da0s1 (one of the non-cooperating disks) gives the following:
--------------------------------------------
LABEL 0
--------------------------------------------
failed to unpack label 0
--------------------------------------------
LABEL 1
--------------------------------------------
failed to unpack label 1
--------------------------------------------
LABEL 2
--------------------------------------------
version=13
name='storage'
state=1
txg=154704
pool_guid=4935707693171446193
hostid=3798766754
hostname='unset'
top_guid=17696126969775704657
guid=2203261993905846015
vdev_tree
    type='raidz'
    id=0
    guid=17696126969775704657
    nparity=1
    metaslab_array=23
    metaslab_shift=34
    ashift=9
    asize=2000401989632
    is_log=0
    children[0]
        type='disk'
        id=0
        guid=2203261993905846015
        path='/dev/label/storage_01_1'
        whole_disk=0
        DTL=31
    children[1]
        type='disk'
        id=1
        guid=8995448228292161600
        path='/dev/label/storage_01_2'
        whole_disk=0
        DTL=30
    children[2]
        type='disk'
        id=2
        guid=5590467752431399831
        path='/dev/label/storage_01_3'
        whole_disk=0
        DTL=29
    children[3]
        type='disk'
        id=3
        guid=4709121270437373818
        path='/dev/label/storage_01_4'
        whole_disk=0
        DTL=28
--------------------------------------------
LABEL 3
--------------------------------------------
version=13
name='storage'
state=1
txg=154704
pool_guid=4935707693171446193
hostid=3798766754
hostname='unset'
top_guid=17696126969775704657
guid=2203261993905846015
vdev_tree
    type='raidz'
    id=0
    guid=17696126969775704657
    nparity=1
    metaslab_array=23
    metaslab_shift=34
    ashift=9
    asize=2000401989632
    is_log=0
    children[0]
        type='disk'
        id=0
        guid=2203261993905846015
        path='/dev/label/storage_01_1'
        whole_disk=0
        DTL=31
    children[1]
        type='disk'
        id=1
        guid=8995448228292161600
        path='/dev/label/storage_01_2'
        whole_disk=0
        DTL=30
    children[2]
        type='disk'
        id=2
        guid=5590467752431399831
        path='/dev/label/storage_01_3'
        whole_disk=0
        DTL=29
    children[3]
        type='disk'
        id=3
        guid=4709121270437373818
        path='/dev/label/storage_01_4'
        whole_disk=0
        DTL=28
And this is the output from a 'working' disk:
--------------------------------------------
LABEL 0
--------------------------------------------
version=13
name='storage'
state=1
txg=154704
pool_guid=4935707693171446193
hostid=3798766754
hostname='unset'
top_guid=7858109641389082720
guid=12991459201766304634
vdev_tree
    type='raidz'
    id=1
    guid=7858109641389082720
    nparity=1
    metaslab_array=182
    metaslab_shift=34
    ashift=9
    asize=2000411426816
    is_log=0
    children[0]
        type='disk'
        id=0
        guid=1240561937346707488
        path='/dev/label/storage_02_1'
        whole_disk=0
        DTL=194
    children[1]
        type='disk'
        id=1
        guid=12991459201766304634
        path='/dev/label/storage_02_2'
        whole_disk=0
        DTL=193
    children[2]
        type='disk'
        id=2
        guid=5168805825707118436
        path='/dev/label/storage_02_3'
        whole_disk=0
        DTL=192
    children[3]
        type='disk'
        id=3
        guid=18159031621477119715
        path='/dev/label/storage_02_4'
        whole_disk=0
        DTL=191
--------------------------------------------
LABEL 1
--------------------------------------------
version=13
name='storage'
state=1
txg=154704
pool_guid=4935707693171446193
hostid=3798766754
hostname='unset'
top_guid=7858109641389082720
guid=12991459201766304634
vdev_tree
    type='raidz'
    id=1
    guid=7858109641389082720
    nparity=1
    metaslab_array=182
    metaslab_shift=34
    ashift=9
    asize=2000411426816
    is_log=0
    children[0]
        type='disk'
        id=0
        guid=1240561937346707488
        path='/dev/label/storage_02_1'
        whole_disk=0
        DTL=194
    children[1]
        type='disk'
        id=1
        guid=12991459201766304634
        path='/dev/label/storage_02_2'
        whole_disk=0
        DTL=193
    children[2]
        type='disk'
        id=2
        guid=5168805825707118436
        path='/dev/label/storage_02_3'
        whole_disk=0
        DTL=192
    children[3]
        type='disk'
        id=3
        guid=18159031621477119715
        path='/dev/label/storage_02_4'
        whole_disk=0
        DTL=191
--------------------------------------------
LABEL 2
--------------------------------------------
version=13
name='storage'
state=1
txg=154704
pool_guid=4935707693171446193
hostid=3798766754
hostname='unset'
top_guid=7858109641389082720
guid=12991459201766304634
vdev_tree
    type='raidz'
    id=1
    guid=7858109641389082720
    nparity=1
    metaslab_array=182
    metaslab_shift=34
    ashift=9
    asize=2000411426816
    is_log=0
    children[0]
        type='disk'
        id=0
        guid=1240561937346707488
        path='/dev/label/storage_02_1'
        whole_disk=0
        DTL=194
    children[1]
        type='disk'
        id=1
        guid=12991459201766304634
        path='/dev/label/storage_02_2'
        whole_disk=0
        DTL=193
    children[2]
        type='disk'
        id=2
        guid=5168805825707118436
        path='/dev/label/storage_02_3'
        whole_disk=0
        DTL=192
    children[3]
        type='disk'
        id=3
        guid=18159031621477119715
        path='/dev/label/storage_02_4'
        whole_disk=0
        DTL=191
--------------------------------------------
LABEL 3
--------------------------------------------
version=13
name='storage'
state=1
txg=154704
pool_guid=4935707693171446193
hostid=3798766754
hostname='unset'
top_guid=7858109641389082720
guid=12991459201766304634
vdev_tree
    type='raidz'
    id=1
    guid=7858109641389082720
    nparity=1
    metaslab_array=182
    metaslab_shift=34
    ashift=9
    asize=2000411426816
    is_log=0
    children[0]
        type='disk'
        id=0
        guid=1240561937346707488
        path='/dev/label/storage_02_1'
        whole_disk=0
        DTL=194
    children[1]
        type='disk'
        id=1
        guid=12991459201766304634
        path='/dev/label/storage_02_2'
        whole_disk=0
        DTL=193
    children[2]
        type='disk'
        id=2
        guid=5168805825707118436
        path='/dev/label/storage_02_3'
        whole_disk=0
        DTL=192
    children[3]
        type='disk'
        id=3
        guid=18159031621477119715
        path='/dev/label/storage_02_4'
        whole_disk=0
        DTL=191
I also ran zdb -e 4935707693171446193 on the pool to get this:
version=13
name='4935707693171446193'
state=0
txg=0
pool_guid=4935707693171446193
hostid=3798766754
hostname='libzpool'
vdev_tree
    type='root'
    id=0
    guid=4935707693171446193
    bad config type 16 for stats
    children[0]
        type='missing'
        id=0
        guid=7522906381581172908
        metaslab_array=0
        metaslab_shift=0
        ashift=9
        asize=62390272
        is_log=0
        bad config type 16 for stats
    children[1]
        type='raidz'
        id=1
        guid=7858109641389082720
        nparity=1
        metaslab_array=182
        metaslab_shift=34
        ashift=9
        asize=2000411426816
        is_log=0
        bad config type 16 for stats
        children[0]
            type='disk'
            id=0
            guid=1240561937346707488
            path='/dev/label/storage_02_1'
            whole_disk=0
            DTL=194
            bad config type 16 for stats
        children[1]
            type='disk'
            id=1
            guid=12991459201766304634
            path='/dev/label/storage_02_2'
            whole_disk=0
            DTL=193
            bad config type 16 for stats
        children[2]
            type='disk'
            id=2
            guid=5168805825707118436
            path='/dev/label/storage_02_3'
            whole_disk=0
            DTL=192
            bad config type 16 for stats
        children[3]
            type='disk'
            id=3
            guid=18159031621477119715
            path='/dev/label/storage_02_4'
            whole_disk=0
            DTL=191
            bad config type 16 for stats
    children[2]
        type='raidz'
        id=2
        guid=5757731786036758091
        nparity=1
        metaslab_array=195
        metaslab_shift=34
        ashift=9
        asize=2000411426816
        is_log=0
        bad config type 16 for stats
        children[0]
            type='disk'
            id=0
            guid=9395247026089255413
            path='/dev/label/storage_03_1'
            whole_disk=0
            DTL=190
            bad config type 16 for stats
        children[1]
            type='disk'
            id=1
            guid=17248074673319151620
            path='/dev/label/storage_03_2'
            whole_disk=0
            DTL=189
            bad config type 16 for stats
        children[2]
            type='disk'
            id=2
            guid=5207362801642277457
            path='/dev/label/storage_03_3'
            whole_disk=0
            DTL=188
            bad config type 16 for stats
        children[3]
            type='disk'
            id=3
            guid=2325967529400575592
            path='/dev/label/storage_03_4'
            whole_disk=0
            DTL=187
            bad config type 16 for stats
I have read the On-Disk Format PDF and came to the conclusion that I could
possibly use dd to copy one of the working labels (2/3) and write it over the
failed ones on each disk, if I can find where they sit.
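Something along these lines is what I have in mind; it is completely untested
and the offsets are only my reading of the on-disk format (four 256 KiB labels
per vdev: L0 at offset 0, L1 at 256 KiB, L2 and L3 in the last 512 KiB, with
the device size rounded down to a 256 KiB multiple):

# UNTESTED sketch of the dd idea above; read-only steps first, writes left commented out
DEV=/dev/da0s1                                  # one of the uncooperative slices
BYTES=$(diskinfo $DEV | awk '{print $3}')       # media size in bytes
LABEL=262144                                    # 256 KiB per ZFS label
SECT=512                                        # ashift=9, so 512-byte sectors
PSIZE=$(( BYTES / LABEL * LABEL ))              # size rounded down to a 256 KiB multiple
L2_SKIP=$(( (PSIZE - 2 * LABEL) / SECT ))       # sector offset of label 2
# note: the pool was created on the glabel providers, which are one sector shorter
# than the raw slice, so the trailing-label offsets may need adjusting for that

# pull the intact label 2 off the slice and eyeball it before touching anything
dd if=$DEV of=/tmp/da0s1-label2 bs=$SECT skip=$L2_SKIP count=$(( LABEL / SECT ))
strings /tmp/da0s1-label2 | head                # should show 'storage' and the label paths

# only if that looks right, write it over the damaged labels 0 and 1:
# dd if=/tmp/da0s1-label2 of=$DEV bs=$SECT seek=0 count=$(( LABEL / SECT ))
# dd if=/tmp/da0s1-label2 of=$DEV bs=$SECT seek=$(( LABEL / SECT )) count=$(( LABEL / SECT ))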
Any help/suggestions would be appreciated.
Thanks.
--
This message posted from opensolaris.org
Try

    zpool import 2169223940234886392 [storage1]

-r

On 4 Aug 2009, at 15:11, David wrote:
> [...]
Thanks for the suggestion, but that only gives me a 4-drive vdev with no data/filesystems.

amnesiac# zpool import 2169223940234886392
amnesiac# zpool list
NAME      SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
storage  1.81T   135K  1.81T     0%  ONLINE  -

That only confuses me more, because I must have a leftover label/uberblock(?) somewhere.
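To track down where that leftover label lives, I suppose the next step is to
loop zdb -l over every candidate device node and compare the names, pool GUIDs
and txgs. Roughly (just a sketch; the device list is only the obvious candidates):

# dump whatever label each candidate device node carries and keep the key fields
for d in /dev/da0 /dev/da0s1 /dev/da2 /dev/da2s1 /dev/da4 /dev/da4s1 \
         /dev/da5 /dev/da5s1 /dev/label/*; do
    echo "=== $d ==="
    zdb -l $d 2>/dev/null | grep -E 'name=|pool_guid=|txg=' | sort -u
done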