Osvald Ivarsson
2009-Oct-01 13:54 UTC
[zfs-discuss] Unable to import pool: invalid vdev configuration
I'm running OpenSolaris build snv_101b. I have 3 SATA disks connected to my motherboard. The raid, a raidz called "rescamp", worked well until a power failure yesterday. I'm now unable to import the pool. I can't export the raid, since it isn't imported.

# zpool import rescamp
cannot import 'rescamp': invalid vdev configuration

# zpool import
  pool: rescamp
    id: 12297694211509104163
 state: UNAVAIL
action: The pool cannot be imported due to damaged devices or data.
config:

        rescamp     UNAVAIL  insufficient replicas
          raidz1    UNAVAIL  corrupted data
            c15d0   ONLINE
            c14d0   ONLINE
            c14d1   ONLINE

I've tried using zdb -l on all three disks, but in all cases it fails to unpack the labels.

# zdb -l /dev/dsk/c14d0
--------------------------------------------
LABEL 0
--------------------------------------------
failed to unpack label 0
--------------------------------------------
LABEL 1
--------------------------------------------
failed to unpack label 1
--------------------------------------------
LABEL 2
--------------------------------------------
failed to unpack label 2
--------------------------------------------
LABEL 3
--------------------------------------------
failed to unpack label 3

If I run # zdb -l /dev/dsk/c14d0s0 I do find 4 labels, but c14d0, c14d1 and c15d0 are what I created the raid with. I find labels this way on all three disks. Is this of any help?

# zdb -l /dev/dsk/c14d1s0
--------------------------------------------
LABEL 0
--------------------------------------------
    version=13
    name='rescamp'
    state=0
    txg=218097573
    pool_guid=12297694211509104163
    hostid=4925114
    hostname='slaskvald'
    top_guid=9479723326726871122
    guid=17774184411399278071
    vdev_tree
        type='raidz'
        id=0
        guid=9479723326726871122
        nparity=1
        metaslab_array=23
        metaslab_shift=34
        ashift=9
        asize=3000574672896
        is_log=0
        children[0]
                type='disk'
                id=0
                guid=9020535344824299914
                path='/dev/dsk/c15d0s0'
                devid='id1,cmdk@AST31000333AS=____________9TE0DGLF/a'
                phys_path='/pci@0,0/pci-ide@11/ide@1/cmdk@0,0:a'
                whole_disk=1
                DTL=102
        children[1]
                type='disk'
                id=1
                guid=14384361563876398475
                path='/dev/dsk/c14d0s0'
                devid='id1,cmdk@ASAMSUNG_HD103UJ=S13PJDWS690618/a'
                phys_path='/pci@0,0/pci-ide@11/ide@0/cmdk@0,0:a'
                whole_disk=1
                DTL=216
        children[2]
                type='disk'
                id=2
                guid=17774184411399278071
                path='/dev/dsk/c14d1s0'
                devid='id1,cmdk@AST31000333AS=____________9TE0DE8W/a'
                phys_path='/pci@0,0/pci-ide@11/ide@0/cmdk@1,0:a'
                whole_disk=1
                DTL=100
--------------------------------------------
LABEL 1 / LABEL 2 / LABEL 3
--------------------------------------------
[identical to LABEL 0]

Any idea what to do?
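For reference, a quick way to compare the labels across all three slices is a loop like the following (a sketch, reusing the same zdb -l invocation and device names shown above):

# for d in c14d0s0 c14d1s0 c15d0s0; do
>   echo "== $d =="
>   # pool-wide fields only; four identical labels collapse to one line each
>   zdb -l /dev/dsk/$d | egrep 'txg=|pool_guid=' | sort -u
> done

Each slice should print the same txg=218097573 and pool_guid=12297694211509104163 if all four of its labels are intact and consistent.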
Victor Latushkin
2009-Oct-01 17:40 UTC
[zfs-discuss] Unable to import pool: invalid vdev configuration
On 01.10.09 17:54, Osvald Ivarsson wrote:
> I'm running OpenSolaris build snv_101b. I have 3 SATA disks connected to
> my motherboard. The raid, a raidz called "rescamp", worked well until a
> power failure yesterday. I'm now unable to import the pool. I can't
> export the raid, since it isn't imported.
>
> [full zpool import and zdb -l output trimmed]
>
> Any idea what to do?

Please have a look at this message

http://www.opensolaris.org/jive/message.jspa?messageID=420146#420146

victor
Osvald Ivarsson
2009-Oct-02 12:27 UTC
[zfs-discuss] Unable to import pool: invalid vdev configuration
On Thu, Oct 1, 2009 at 7:40 PM, Victor Latushkin
<Victor.Latushkin@sun.com> wrote:
> On 01.10.09 17:54, Osvald Ivarsson wrote:
>> [...]
>
> Please have a look at this message
>
> http://www.opensolaris.org/jive/message.jspa?messageID=420146#420146
>
> victor

prtvtoc gives the following:

# prtvtoc /dev/rdsk/c14d0s0
* /dev/rdsk/c14d0s0 partition map
*
* Dimensions:
*     512 bytes/sector
* 1953520128 sectors
* 1953520061 accessible sectors
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*       First     Sector    Last
*       Sector     Count    Sector
*          34       222       255
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       0      4    00        256 1953503455 1953503710
       8     11    00  1953503711     16384 1953520094

I've read the thread, and in it there's advice to either:

1. Restore original labeling on the c7d0 disk

2. Remove (physically or logically) disk c7d0

And well, I'd have to remove all three disks if I followed the second advice, since all three are missing labels in the same way c14d0 is. Which leaves me with restoring the original labeling. How do I do this? What actually prevents zfs from importing my pool?

Thank you!
/Osvald Ivarsson
Victor Latushkin
2009-Oct-02 12:36 UTC
[zfs-discuss] Unable to import pool: invalid vdev configuration
Osvald Ivarsson wrote:
> On Thu, Oct 1, 2009 at 7:40 PM, Victor Latushkin
> <Victor.Latushkin@sun.com> wrote:
>> [...]
>>
>> Please have a look at this message
>>
>> http://www.opensolaris.org/jive/message.jspa?messageID=420146#420146
>
> prtvtoc gives the following:
>
> # prtvtoc /dev/rdsk/c14d0s0
> * /dev/rdsk/c14d0s0 partition map
> *
> * Dimensions:
> *     512 bytes/sector
> * 1953520128 sectors
> * 1953520061 accessible sectors
> *
> *                          First     Sector    Last
> * Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
>        0      4    00        256 1953503455 1953503710
>        8     11    00  1953503711     16384 1953520094

Can you post prtvtoc output for the other two disks?

victor
Osvald Ivarsson
2009-Oct-02 12:44 UTC
[zfs-discuss] Unable to import pool: invalid vdev configuration
On Fri, Oct 2, 2009 at 2:36 PM, Victor Latushkin
<Victor.Latushkin@sun.com> wrote:
> Osvald Ivarsson wrote:
>> [...]
>
> Can you post prtvtoc output for the other two disks?
>
> victor

Here is the output:

# prtvtoc /dev/dsk/c14d1
* /dev/dsk/c14d1 partition map
*
* Dimensions:
*     512 bytes/sector
* 1953520128 sectors
* 1953525101 accessible sectors
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*       First     Sector    Last
*       Sector     Count    Sector
*          34       222       255
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       0      4    00        256 1953508495 1953508750
       8     11    00  1953508751     16384 1953525134

# prtvtoc /dev/dsk/c15d0
* /dev/dsk/c15d0 partition map
*
* Dimensions:
*     512 bytes/sector
* 1953520128 sectors
* 1953525101 accessible sectors
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*       First     Sector    Last
*       Sector     Count    Sector
*          34       222       255
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       0      4    00        256 1953508495 1953508750
       8     11    00  1953508751     16384 1953525134

/Osvald Ivarsson
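Side by side, the slice 0 geometry from the three prtvtoc listings can be compared with a short loop (a sketch over the same prtvtoc calls as above):

# for d in c14d0 c14d1 c15d0; do
>   echo "== $d =="
>   # partition rows have the slice number in column 1; slice 0 holds the pool data
>   prtvtoc /dev/rdsk/${d}s0 | awk '$1 == "0" { print "slice 0:", $5, "sectors starting at sector", $4 }'
> done

Note that c14d0 reports a 1953503455-sector slice 0 and 1953520061 accessible sectors, while c14d1 and c15d0 both report 1953508495 and 1953525101, so the three disks do not currently carry identical labels.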
Victor Latushkin
2009-Oct-02 12:51 UTC
[zfs-discuss] Unable to import pool: invalid vdev configuration
Osvald Ivarsson wrote:
> On Fri, Oct 2, 2009 at 2:36 PM, Victor Latushkin
> <Victor.Latushkin@sun.com> wrote:
>> [...]
>>
>> Can you post prtvtoc output for the other two disks?
>
> Here is the output:
>
> # prtvtoc /dev/dsk/c14d1
> *                          First     Sector    Last
> * Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
>        0      4    00        256 1953508495 1953508750
>        8     11    00  1953508751     16384 1953525134
>
> # prtvtoc /dev/dsk/c15d0
> *                          First     Sector    Last
> * Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
>        0      4    00        256 1953508495 1953508750
>        8     11    00  1953508751     16384 1953525134
>
> /Osvald Ivarsson

Looks like all your disks got relabeled at once, so yes, you need to get the old labeling back. Can you try the following:

dd if=/dev/rdsk/cXtYd0 bs=1k iseek=17 count=512 of=front.labels.cXtYd0
zdb -l front.labels.cXtYd0

I expect it'll show label information.

victor
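Concretely, for the three disks in this thread, that sequence would look something like the sketch below. The iseek=17 (in 1 KB blocks) offset corresponds to sector 34, presumably where slice 0 started under the old EFI label (note the now-unallocated sectors 34-255 in the prtvtoc output), and count=512 covers the two front ZFS labels of 256 KB each. The whole-disk raw device names are assumed to mirror the /dev/dsk names used earlier:

# for d in c14d0 c14d1 c15d0; do
>   # copy the 512 KB window where the old front labels should sit into a file
>   dd if=/dev/rdsk/$d bs=1k iseek=17 count=512 of=front.labels.$d
>   # then ask zdb to unpack any labels it finds in that file
>   zdb -l front.labels.$d
> done

If the old labels survived the relabeling, zdb -l should unpack them from these files just as it did from the s0 slices.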
Osvald Ivarsson
2009-Oct-02 13:13 UTC
[zfs-discuss] Unable to import pool: invalid vdev configuration
On Fri, Oct 2, 2009 at 2:51 PM, Victor Latushkin
<Victor.Latushkin@sun.com> wrote:
> Osvald Ivarsson wrote:
>> [...]
>
> Looks like all your disks got relabeled at once, so yes, you need to get
> the old labeling back. Can you try the following:
>
> dd if=/dev/rdsk/cXtYd0 bs=1k iseek=17 count=512 of=front.labels.cXtYd0
> zdb -l front.labels.cXtYd0
>
> I expect it'll show label information.
>
> victor
? 11 ? ?00 ?1953508751 ? ? 16384 1953525134 >> >> /Osvald Ivarsson > > Looks like all your disks got relabeled at once, so yes, you need to get old > labeling back. > > Can you try the following: > > dd if=/dev/rdsk/cXtYd0 bs=1k iseek=17 count=512 of=front.labels.cXtYd0 > > zdb -l front.labels.cXtYd0 > > I expect it''ll show label information > > victor >Ok, I ran the command for all three disks, and in all cases it''s unable to unpack the labels... # /usr/bin/dd if=/dev/rdsk/c14d0 bs=1k iseek=17 count=512 of=front.labels.c14d0 512+0 records in 512+0 records out # zdb -l front.labels.c14d0 -------------------------------------------- LABEL 0 -------------------------------------------- failed to unpack label 0 -------------------------------------------- LABEL 1 -------------------------------------------- failed to unpack label 1 -------------------------------------------- LABEL 2 -------------------------------------------- failed to unpack label 2 -------------------------------------------- LABEL 3 -------------------------------------------- failed to unpack label 3 The same goes for # /usr/bin/dd if=/dev/rdsk/c14d0s0 bs=1k iseek=17 count=512 of=front.labels.c14d0s0 If I however remove "iseek=17" it works for c14d0s0, but still not for c14d0... # /usr/bin/dd if=/dev/rdsk/c14d0s0 bs=1k count=512 of=front.labels.c14d0s0 512+0 records in 512+0 records out # zdb -l front.labels.c14d0s0 -------------------------------------------- LABEL 0 -------------------------------------------- version=13 name=''rescamp'' state=0 txg=218097573 pool_guid=12297694211509104163 hostid=4925114 hostname=''slaskvald'' top_guid=9479723326726871122 guid=14384361563876398475 vdev_tree type=''raidz'' id=0 guid=9479723326726871122 nparity=1 metaslab_array=23 metaslab_shift=34 ashift=9 asize=3000574672896 is_log=0 children[0] type=''disk'' id=0 guid=9020535344824299914 path=''/dev/dsk/c15d0s0'' devid=''id1,cmdk at AST31000333AS=____________9TE0DGLF/a'' phys_path=''/pci at 0,0/pci-ide at 11/ide at 1/cmdk at 0,0:a'' whole_disk=1 DTL=102 children[1] type=''disk'' id=1 guid=14384361563876398475 path=''/dev/dsk/c14d0s0'' devid=''id1,cmdk at ASAMSUNG_HD103UJ=S13PJDWS690618/a'' phys_path=''/pci at 0,0/pci-ide at 11/ide at 0/cmdk at 0,0:a'' whole_disk=1 DTL=216 children[2] type=''disk'' id=2 guid=17774184411399278071 path=''/dev/dsk/c14d1s0'' devid=''id1,cmdk at AST31000333AS=____________9TE0DE8W/a'' phys_path=''/pci at 0,0/pci-ide at 11/ide at 0/cmdk at 1,0:a'' whole_disk=1 DTL=100 and label 1,2 and 3 is displayed too.
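The same dump-and-decode can be looped over all three slices in one go -
a minimal sketch, assuming the device names above (the front.labels.*
file names are just examples):

#!/bin/ksh
# Copy the front 512KB of each s0 slice (labels 0 and 1 live there)
# to a file, then let zdb decode the copy instead of the raw device.
for d in c14d0s0 c14d1s0 c15d0s0; do
    dd if=/dev/rdsk/$d bs=1k count=512 of=front.labels.$d
    zdb -l front.labels.$d
done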
Osvald Ivarsson
2009-Oct-03 15:46 UTC
[zfs-discuss] Unable to import pool: invalid vdev configuration
I managed to solve this problem thanks to much help from Victor Latushkin.

Anyway, the problem is related to the following bug:

Bug ID    6753869
Synopsis  labeling/shrinking a disk in raid-z vdev makes pool un-importable

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6753869

Somehow my c14d0 disk was now a bit too small... The following is from
prtvtoc:

c14d0
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
      0      4    00        256 1953503455 1953503710
      8     11    00  1953503711     16384 1953520094

c14d1
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
      0      4    00        256 1953508495 1953508750
      8     11    00  1953508751     16384 1953525134

c15d0
* /dev/dsk/c15d0 partition map
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
      0      4    00        256 1953508495 1953508750
      8     11    00  1953508751     16384 1953525134

Victor wrote:
--------------------
For c14d0 we have 1953503455 blocks in slice 0, so if we divide it by 512
we'll get the number of 256KB blocks that fit there: 3815436.435546875, or
3815436 whole 256KB blocks.

For c14d1 and c15d0 we have 1953508495, so it is 3815446.279296875, or
3815446 whole 256KB blocks.

We see that s0 on c14d0 is ten 256KB blocks smaller than the other two.

Let's now determine how big ZFS expects s0 to be. We have the asize for the
raidz in the label - it is 3000574672896. It is equal to the asize of the
smallest device multiplied by 3, so it should be divisible by 3 without
remainder, and it is:

3000574672896 / 3 = 1000191557632

So the asize of the smallest device is 1000191557632 bytes, or 3815428
blocks of 256KB. But we need to factor in the ZFS front and back labels and
the reserved area in the front between the front labels and the allocatable
area. The front labels are 0.5MB and the reserved area in the front is
3.5MB, which gives us 16 blocks of 256KB; the two labels in the back are
another 2 blocks of 256KB. So in total we have 18 additional blocks, and
thus we arrive at a required size for s0 of 3815446 blocks. This is exactly
the size of c14d1 and c15d0, and c14d0 is different (too small).

You can use the attached dtrace script to verify these calculations.

Use format -e to change the size of slice 8 (or remove it altogether), and
then increase the size of s0 so it is big enough to accommodate 3815446
blocks of 256KB.

You can always try another option - remove c14d0 (physically, or logically
with cfgadm -c unconfigure, or just by removing the symlinks for c14d0s0
from /dev/dsk and /dev/rdsk) and try 'zpool import' to see if it would be
happy to import the pool in the degraded state.
--------------------

So I tried to remove c14d0 by removing the symlinks, but that didn't change
anything. So I decided to use "format -e" to remove slice 8 and increase
s0, and that actually worked! Now my pool imports without problems!
Removing the disk would've worked too.

Many thanks to Victor!

/Osvald Ivarsson
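For anyone who wants to re-check Victor's arithmetic, here it is as a
small script - a sketch assuming 512-byte sectors and ksh93-style 64-bit
shell arithmetic; the numbers come from the labels quoted above:

#!/bin/ksh
# Victor's size calculation for one raidz1 member, start to finish.
ASIZE=3000574672896                # raidz asize from the zdb -l output
PER_DISK=$(( ASIZE / 3 ))          # smallest member: 1000191557632 bytes
ALLOC=$(( PER_DISK / 262144 ))     # allocatable 256KB blocks: 3815428
# front: two 256KB labels (0.5MB) + 3.5MB reserved area = 16 blocks;
# back: two 256KB labels = 2 blocks; 18 extra blocks in total
NEED=$(( ALLOC + 18 ))             # required s0 size: 3815446 x 256KB
echo "s0 needs at least $(( NEED * 512 )) sectors"
echo "c14d0 s0 has 1953503455 sectors - too small"

Running it shows s0 must hold at least 1953508352 sectors, which c14d1
and c15d0 (1953508495) satisfy and the relabeled c14d0 does not.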
Cindy Swearingen
2009-Oct-05 22:00 UTC
[zfs-discuss] Unable to import pool: invalid vdev configuration
Hi Osvald,

Can you comment on how the disks shrank or how the labeling on these
disks changed?

We would like to track the issues that cause the hardware underneath a
live pool to change, so that we can figure out how to prevent pool
failures in the future.

Thanks,

Cindy

On 10/03/09 09:46, Osvald Ivarsson wrote:
> I managed to solve this problem thanks to much help from Victor
> Latushkin.
>
> [solution write-up snipped - see the previous message in this thread]
Osvald Ivarsson
2009-Oct-06 09:06 UTC
[zfs-discuss] Unable to import pool: invalid vdev configuration
> Hi Osvald,
>
> Can you comment on how the disks shrank or how the labeling on these
> disks changed?
>
> We would like to track the issues that cause the hardware underneath a
> live pool to change, so that we can figure out how to prevent pool
> failures in the future.
>
> Thanks,
>
> Cindy

Hi Cindy!

I had recently replaced one of the three identical disks in my raid with
a new one - slightly smaller, but still a 1TB disk. This was named c14d0
in my case. The problems started after a power outage: after it I
couldn't import the pool. I'm pretty sure I had exported and imported the
pool, but I'm not sure about this.

Other than that, I have no idea what could have caused the
relabeling/resizing...

/Osvald Ivarsson
Nigel Smith
2009-Oct-06 09:49 UTC
[zfs-discuss] Unable to import pool: invalid vdev configuration
Hi Cindy

Please note the thread from last year where Eugene Gladchenko discovered
that a motherboard BIOS upgrade had suddenly enabled the 'Host Protected
Area' (HPA) on his hard drives, causing them to shrink in size by 2MB!

You can find the thread at these URLs:

http://markmail.org/message/j7av5b22dke2anui
http://markmail.org/thread/rdswnnqlk2f6q47k
http://opensolaris.org/jive/thread.jspa?threadID=79749
http://mail.opensolaris.org/pipermail/zfs-discuss/2008-October/022815.html

Regards
Nigel Smith
Cindy Swearingen
2009-Oct-06 15:14 UTC
[zfs-discuss] Unable to import pool: invalid vdev configuration
Hi Osvald,

If you physically replaced the failed disk with even a slightly smaller
disk in a RAIDZ pool and ran the zpool replace command, you would have
seen a message similar to the following:

# zpool replace rescamp c0t6d0 c2t2d0
cannot replace c0t6d0 with c2t2d0: device is too small

Did you run zpool replace or just physically replace the disk? The pool
would have been unhappy at that point, and it's possible that the power
failure wiped the disk labels.

Thanks for the information.

Cindy

On 10/06/09 03:06, Osvald Ivarsson wrote:
> I had recently replaced one of the three identical disks in my raid
> with a new one - slightly smaller, but still a 1TB disk. This was named
> c14d0 in my case. The problems started after a power outage.
>
> [rest of message snipped]
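A minimal sketch of a pre-replacement sanity check along these lines,
assuming the slice layout from this thread - compare the slice 0 sector
counts before running zpool replace (device names are examples):

# Print the sector count of slice 0 on an existing raidz member and on
# the candidate replacement; the candidate must be at least as large.
prtvtoc /dev/rdsk/c14d1s0 | awk '!/^\*/ && $1 == "0" { print "member s0:   ", $5 }'
prtvtoc /dev/rdsk/c14d0s0 | awk '!/^\*/ && $1 == "0" { print "candidate s0:", $5 }'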