Hi,

I have a zpool in a degraded state:

[19:15]{1}arne@charon:~% pfexec zpool import

  pool: npool
    id: 5258305162216370088
 state: DEGRADED
status: The pool is formatted using an older on-disk version.
action: The pool can be imported despite missing or damaged devices.  The
        fault tolerance of the pool may be compromised if imported.
config:

        npool          DEGRADED
          raidz1       DEGRADED
            c3d0       ONLINE
            c7d0       ONLINE
            replacing  UNAVAIL  insufficient replicas
              c7d0s0/o FAULTED  corrupted data
              c7d0     FAULTED  corrupted data

When I try to import the pool the OpenSolaris box simply hangs. I can still
ping it, but nothing else works; ssh only works to the point where the
3-way handshake is established. The strange thing is that there is no
kernel panic and nothing in the logs.

The last messages from "truss zpool import npool" are:

open("/dev/dsk/c7d0s0", O_RDONLY)               = 6
fxstat(2, 6, 0x08043250)                        = 0
modctl(MODSIZEOF_DEVID, 0x01980080, 0x0804324C, 0xFEA41239, 0xFE8E92C0) = 0
modctl(MODGETDEVID, 0x01980080, 0x0000002A, 0x080D18D0, 0xFE8E92C0) = 0
fxstat(2, 6, 0x08043250)                        = 0
modctl(MODSIZEOF_MINORNAME, 0x01980080, 0x00006000, 0x0804324C, 0xFE8E92C0) = 0
modctl(MODGETMINORNAME, 0x01980080, 0x00006000, 0x00000002, 0x0808DFC8) = 0
close(6)                                        = 0
ioctl(3, ZFS_IOC_POOL_STATS, 0x080423B0)        Err#2 ENOENT
ioctl(3, ZFS_IOC_POOL_TRYIMPORT, 0x08042420)    = 0
open("/usr/lib/locale/de_DE.UTF-8/LC_MESSAGES/SUNW_OST_OSLIB.mo", O_RDONLY) Err#2 ENOENT

I think the system hangs on ZFS_IOC_POOL_TRYIMPORT.

Any pointer where I could try to debug/diagnose the problem further?
The system is already at build 110, but 109 behaved the same.

Arne
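One general way to find out where the kernel is actually stuck during a hang
like this (a sketch only, not something reported in this thread, and assuming
an x86 console with kmdb available) is to boot with the kernel debugger
loaded, break in while the import is wedged, and look at the kernel thread
stacks, or force a crash dump for later post-mortem analysis with mdb:

  (add -k to the GRUB kernel line so kmdb is loaded at boot; reproduce the
   hang, then break into the debugger from the console with F1-A)

  [0]> ::threadlist -v
  [0]> $<systemdump

::threadlist -v prints the kernel stacks of all threads (look for the one
doing the import); $<systemdump panics the machine and saves a crash dump
that can be examined afterwards with mdb.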
Arne Schwabe wrote:
> Hi,
>
> I have a zpool in a degraded state:
>
> [19:15]{1}arne@charon:~% pfexec zpool import
>
>   pool: npool
>     id: 5258305162216370088
>  state: DEGRADED
> status: The pool is formatted using an older on-disk version.
> action: The pool can be imported despite missing or damaged devices.  The
>         fault tolerance of the pool may be compromised if imported.
> config:
>
>         npool          DEGRADED
>           raidz1       DEGRADED
>             c3d0       ONLINE
>             c7d0       ONLINE
>             replacing  UNAVAIL  insufficient replicas
>               c7d0s0/o FAULTED  corrupted data
>               c7d0     FAULTED  corrupted data
>

This output looks really busted. What were the original disks that you
used? Looks like you may have lost one of them.

The following output will be useful:

# zdb -l /dev/dsk/c3d0s0
# zdb -l /dev/dsk/c7d0s0

Thanks,
George

> When I try to import the pool the OpenSolaris box simply hangs. I can still
> ping it, but nothing else works; ssh only works to the point where the
> 3-way handshake is established. The strange thing is that there is no
> kernel panic and nothing in the logs.
>
> The last messages from "truss zpool import npool" are:
>
> open("/dev/dsk/c7d0s0", O_RDONLY)               = 6
> fxstat(2, 6, 0x08043250)                        = 0
> modctl(MODSIZEOF_DEVID, 0x01980080, 0x0804324C, 0xFEA41239, 0xFE8E92C0) = 0
> modctl(MODGETDEVID, 0x01980080, 0x0000002A, 0x080D18D0, 0xFE8E92C0) = 0
> fxstat(2, 6, 0x08043250)                        = 0
> modctl(MODSIZEOF_MINORNAME, 0x01980080, 0x00006000, 0x0804324C, 0xFE8E92C0) = 0
> modctl(MODGETMINORNAME, 0x01980080, 0x00006000, 0x00000002, 0x0808DFC8) = 0
> close(6)                                        = 0
> ioctl(3, ZFS_IOC_POOL_STATS, 0x080423B0)        Err#2 ENOENT
> ioctl(3, ZFS_IOC_POOL_TRYIMPORT, 0x08042420)    = 0
> open("/usr/lib/locale/de_DE.UTF-8/LC_MESSAGES/SUNW_OST_OSLIB.mo", O_RDONLY) Err#2 ENOENT
>
> I think the system hangs on ZFS_IOC_POOL_TRYIMPORT.
>
> Any pointer where I could try to debug/diagnose the problem further?
> The system is already at build 110, but 109 behaved the same.
>
> Arne
On 03.04.2009 at 2:42, George Wilson wrote:
> Arne Schwabe wrote:
>> Hi,
>>
>> I have a zpool in a degraded state:
>>
>> [19:15]{1}arne@charon:~% pfexec zpool import
>>
>>   pool: npool
>>     id: 5258305162216370088
>>  state: DEGRADED
>> status: The pool is formatted using an older on-disk version.
>> action: The pool can be imported despite missing or damaged devices.  The
>>         fault tolerance of the pool may be compromised if imported.
>> config:
>>
>>         npool          DEGRADED
>>           raidz1       DEGRADED
>>             c3d0       ONLINE
>>             c7d0       ONLINE
>>             replacing  UNAVAIL  insufficient replicas
>>               c7d0s0/o FAULTED  corrupted data
>>               c7d0     FAULTED  corrupted data
>>
>
> This output looks really busted. What were the original disks that you
> used? Looks like you may have lost one of them.
>
> The following output will be useful:
>
> # zdb -l /dev/dsk/c3d0s0
> # zdb -l /dev/dsk/c7d0s0

Yes, the "failed" c7d0 is beyond hope and is at the moment not physically
connected to the system. If I reconnect the drive it will show up again.
I removed the drive because I wanted to try to import the pool with just
the two good drives, but it makes no difference whether I import with or
without it. Is there any guide/howto on how to start debugging the
ZFS/Solaris kernel? I have done kernel debugging before, but not on
Solaris.

  pool: npool
    id: 5258305162216370088
 state: DEGRADED
status: The pool is formatted using an older on-disk version.
action: The pool can be imported despite missing or damaged devices.  The
        fault tolerance of the pool may be compromised if imported.
config:

        npool          DEGRADED
          raidz1       DEGRADED
            c3d0       ONLINE
            c7d0       ONLINE
            replacing  DEGRADED
              c7d0s0/o FAULTED  corrupted data
              c6d0     ONLINE

I think at one moment I must have switched the drives physically when I
replaced the failed drive (ironically with another bad drive, which is why
I disconnected it again).

zdb -l /dev/dsk/c3d0s0 gives:

--------------------------------------------
LABEL 0
--------------------------------------------
    version=13
    name='npool'
    state=1
    txg=10901730
    pool_guid=5258305162216370088
    hostid=5148207
    hostname='charon'
    top_guid=17800957881283225684
    guid=5737717478922700505
    vdev_tree
        type='raidz'
        id=0
        guid=17800957881283225684
        nparity=1
        metaslab_array=14
        metaslab_shift=33
        ashift=9
        asize=1500262957056
        is_log=0
        children[0]
            type='disk'
            id=0
            guid=5737717478922700505
            path='/dev/dsk/c3d0s0'
            devid='id1,cmdk@AWDC_WD5000AAJB-00YRA0=_____WD-WCAS81952111/a'
            phys_path='/pci@0,0/pci-ide@2,5/ide@0/cmdk@0,0:a'
            whole_disk=1
            DTL=85
        children[1]
            type='disk'
            id=1
            guid=17036915785869798182
            path='/dev/dsk/c6d0s0'
            devid='id1,cmdk@ASAMSUNG_HD501LJ=S0MUJ1FPC08381/a'
            phys_path='/pci@0,0/pci-ide@5/ide@0/cmdk@0,0:a'
            whole_disk=1
            DTL=35
        children[2]
            type='replacing'
            id=2
            guid=10545980583204781570
            whole_disk=0
            children[0]
                type='disk'
                id=0
                guid=232847032327795094
                path='/dev/dsk/c7d0s0/old'
                phys_path='/pci@0,0/pci-ide@5/ide@1/cmdk@0,0:a'
                whole_disk=1
                DTL=339
            children[1]
                type='disk'
                id=1
                guid=13182214352713316760
                path='/dev/dsk/c7d0s0'
                devid='id1,cmdk@ASAMSUNG_HD501LJ=S0MUJ1FPC08380/a'
                phys_path='/pci@0,0/pci-ide@5/ide@1/cmdk@0,0:a'
                whole_disk=1
                DTL=123
                faulted=1
--------------------------------------------
LABEL 1
--------------------------------------------
    version=13
    name='npool'
    state=1
    txg=10901730
    pool_guid=5258305162216370088
    hostid=5148207
    hostname='charon'
    top_guid=17800957881283225684
    guid=5737717478922700505
    vdev_tree
        type='raidz'
        id=0
        guid=17800957881283225684
        nparity=1
        metaslab_array=14
        metaslab_shift=33
        ashift=9
        asize=1500262957056
        is_log=0
        children[0]
            type='disk'
            id=0
            guid=5737717478922700505
            path='/dev/dsk/c3d0s0'
            devid='id1,cmdk@AWDC_WD5000AAJB-00YRA0=_____WD-WCAS81952111/a'
            phys_path='/pci@0,0/pci-ide@2,5/ide@0/cmdk@0,0:a'
            whole_disk=1
            DTL=85
        children[1]
            type='disk'
            id=1
            guid=17036915785869798182
            path='/dev/dsk/c6d0s0'
            devid='id1,cmdk@ASAMSUNG_HD501LJ=S0MUJ1FPC08381/a'
            phys_path='/pci@0,0/pci-ide@5/ide@0/cmdk@0,0:a'
            whole_disk=1
            DTL=35
        children[2]
            type='replacing'
            id=2
            guid=10545980583204781570
            whole_disk=0
            children[0]
                type='disk'
                id=0
                guid=232847032327795094
                path='/dev/dsk/c7d0s0/old'
                phys_path='/pci@0,0/pci-ide@5/ide@1/cmdk@0,0:a'
                whole_disk=1
                DTL=339
            children[1]
                type='disk'
                id=1
                guid=13182214352713316760
                path='/dev/dsk/c7d0s0'
                devid='id1,cmdk@ASAMSUNG_HD501LJ=S0MUJ1FPC08380/a'
                phys_path='/pci@0,0/pci-ide@5/ide@1/cmdk@0,0:a'
                whole_disk=1
                DTL=123
                faulted=1
--------------------------------------------
LABEL 2
--------------------------------------------
    version=13
    name='npool'
    state=1
    txg=10901730
    pool_guid=5258305162216370088
    hostid=5148207
    hostname='charon'
    top_guid=17800957881283225684
    guid=5737717478922700505
    vdev_tree
        type='raidz'
        id=0
        guid=17800957881283225684
        nparity=1
        metaslab_array=14
        metaslab_shift=33
        ashift=9
        asize=1500262957056
        is_log=0
        children[0]
            type='disk'
            id=0
            guid=5737717478922700505
            path='/dev/dsk/c3d0s0'
            devid='id1,cmdk@AWDC_WD5000AAJB-00YRA0=_____WD-WCAS81952111/a'
            phys_path='/pci@0,0/pci-ide@2,5/ide@0/cmdk@0,0:a'
            whole_disk=1
            DTL=85
        children[1]
            type='disk'
            id=1
            guid=17036915785869798182
            path='/dev/dsk/c6d0s0'
            devid='id1,cmdk@ASAMSUNG_HD501LJ=S0MUJ1FPC08381/a'
            phys_path='/pci@0,0/pci-ide@5/ide@0/cmdk@0,0:a'
            whole_disk=1
            DTL=35
        children[2]
            type='replacing'
            id=2
            guid=10545980583204781570
            whole_disk=0
            children[0]
                type='disk'
                id=0
                guid=232847032327795094
                path='/dev/dsk/c7d0s0/old'
                phys_path='/pci@0,0/pci-ide@5/ide@1/cmdk@0,0:a'
                whole_disk=1
                DTL=339
            children[1]
                type='disk'
                id=1
                guid=13182214352713316760
                path='/dev/dsk/c7d0s0'
                devid='id1,cmdk@ASAMSUNG_HD501LJ=S0MUJ1FPC08380/a'
                phys_path='/pci@0,0/pci-ide@5/ide@1/cmdk@0,0:a'
                whole_disk=1
                DTL=123
                faulted=1
--------------------------------------------
LABEL 3
--------------------------------------------
    version=13
    name='npool'
    state=1
    txg=10901730
    pool_guid=5258305162216370088
    hostid=5148207
    hostname='charon'
    top_guid=17800957881283225684
    guid=5737717478922700505
    vdev_tree
        type='raidz'
        id=0
        guid=17800957881283225684
        nparity=1
        metaslab_array=14
        metaslab_shift=33
        ashift=9
        asize=1500262957056
        is_log=0
        children[0]
            type='disk'
            id=0
            guid=5737717478922700505
            path='/dev/dsk/c3d0s0'
            devid='id1,cmdk@AWDC_WD5000AAJB-00YRA0=_____WD-WCAS81952111/a'
            phys_path='/pci@0,0/pci-ide@2,5/ide@0/cmdk@0,0:a'
            whole_disk=1
            DTL=85
        children[1]
            type='disk'
            id=1
            guid=17036915785869798182
            path='/dev/dsk/c6d0s0'
            devid='id1,cmdk@ASAMSUNG_HD501LJ=S0MUJ1FPC08381/a'
            phys_path='/pci@0,0/pci-ide@5/ide@0/cmdk@0,0:a'
            whole_disk=1
            DTL=35
        children[2]
            type='replacing'
            id=2
            guid=10545980583204781570
            whole_disk=0
            children[0]
                type='disk'
                id=0
                guid=232847032327795094
                path='/dev/dsk/c7d0s0/old'
                phys_path='/pci@0,0/pci-ide@5/ide@1/cmdk@0,0:a'
                whole_disk=1
                DTL=339
            children[1]
                type='disk'
                id=1
                guid=13182214352713316760
                path='/dev/dsk/c7d0s0'
                devid='id1,cmdk@ASAMSUNG_HD501LJ=S0MUJ1FPC08380/a'
                phys_path='/pci@0,0/pci-ide@5/ide@1/cmdk@0,0:a'
                whole_disk=1
                DTL=123
                faulted=1

[13:21]arne@charon:~% pfexec zdb -l /dev/dsk/c3d0s0 > /tmp/t1
[13:22]arne@charon:~% pfexec zdb -l /dev/dsk/c7d0s0 > /tmp/t3

[13:21]arne@charon:~% diff -u /tmp/t1 /tmp/t3
--- /tmp/t1     Fr Apr  3 13:19:55 2009
+++ /tmp/t3     Fr Apr  3 13:20:05 2009
@@ -9,7 +9,7 @@
     hostid=5148207
     hostname='charon'
     top_guid=17800957881283225684
-    guid=5737717478922700505
+    guid=17036915785869798182
     vdev_tree
         type='raidz'
         id=0
@@ -72,7 +72,7 @@
     hostid=5148207
     hostname='charon'
     top_guid=17800957881283225684
-    guid=5737717478922700505
+    guid=17036915785869798182
     vdev_tree
         type='raidz'
         id=0
@@ -135,7 +135,7 @@
     hostid=5148207
     hostname='charon'
     top_guid=17800957881283225684
-    guid=5737717478922700505
+    guid=17036915785869798182
     vdev_tree
         type='raidz'
         id=0
@@ -198,7 +198,7 @@
     hostid=5148207
     hostname='charon'
     top_guid=17800957881283225684
-    guid=5737717478922700505
+    guid=17036915785869798182
     vdev_tree
         type='raidz'
         id=0
Arne Schwabe wrote:
> On 03.04.2009 at 2:42, George Wilson wrote:
>> This output looks really busted. What were the original disks that you
>> used? Looks like you may have lost one of them.
>>
>> The following output will be useful:
>>
>> # zdb -l /dev/dsk/c3d0s0
>> # zdb -l /dev/dsk/c7d0s0
>
> Yes, the "failed" c7d0 is beyond hope and is at the moment not physically
> connected to the system. If I reconnect the drive it will show up again.
> I removed the drive because I wanted to try to import the pool with just
> the two good drives, but it makes no difference whether I import with or
> without it. Is there any guide/howto on how to start debugging the
> ZFS/Solaris kernel? I have done kernel debugging before, but not on
> Solaris.
>
> [zpool import status, zdb -l labels and diff snipped; see previous message]

According to the label on c3d0 it thinks the pool should look like this:

        npool      DEGRADED
          raidz1   DEGRADED
            c3d0
            c6d0
            c7d0

But instead it's seeing c7d0 in the place of c6d0. Can you also provide
the 'zdb -l' output for c6d0 and c7d0?

As for doing some further debugging, DTrace is your friend. You can trace
through the calls to spa_import(). I would start by finding the return
value of spa_load() and then look for failures in such places as
vdev_validate().

Thanks,
George
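A minimal sketch of the kind of tracing George suggests, assuming the fbt
probes for these private zfs module functions (spa_load, vdev_validate) are
available on the build in question, i.e. the functions are not inlined; run
it in one terminal before starting the import in another:

  # pfexec dtrace -n '
      fbt:zfs:spa_load:entry,
      fbt:zfs:vdev_validate:entry
      {
              printf("%s() entered", probefunc);
      }

      fbt:zfs:spa_load:return,
      fbt:zfs:vdev_validate:return
      {
              /* for fbt return probes, arg1 is the return value */
              printf("%s() returned %d", probefunc, arg1);
      }'

Then run "pfexec zpool import npool" in the second shell. If the import
wedges, the last entry probe that fired without a matching return probe
narrows down where the import is getting stuck.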