System description:
1 root UFS with Solaris 10U5 x86
1 raidz pool with 3 disks: (c6t1d0s0, c7t0d0s0, c7t1d0s0)

Description:
Just before the death of my motherboard, I'd installed OpenSolaris 2008.05 x86. Why, you ask? Because I needed to confirm that it was the motherboard dying and not some other hardware or software. With that in mind, I replaced the motherboard, and since OpenSolaris was already installed, I decided to give it a try.

# zpool import -f zfs
i/o error

The only thing I noticed is that my device ids changed from c6t1d0s0 to c4t1d0s0.

So I switched back to Solaris 10U5, but the same thing happens: i/o error.

Since I know that my disks are operational and hadn't been accessed since the board replacement, I assume my data is still there but not seen by ZFS because the ids have changed.

Can someone please tell me how I can get my data back?
Victor Pajor wrote:
> Since I know that my disks are operational and hadn't been accessed since the board replacement, I assume my data is still there but not seen by ZFS because the ids have changed.

No, ZFS will find the disks. Something else is wrong.

> Can someone please tell me how I can get my data back?

The "I/O error" generally means that a device cannot be read. Try a simple "zpool import" and see what pools it thinks are available.
 -- richard
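[A minimal sketch of that diagnostic sequence. The pool name "zfs" and the -f flag come from the original post; treat the exact sequence as a suggestion, not a recipe.]

# List the pools ZFS can discover on the attached devices,
# without importing anything yet.
zpool import

# If the pool shows up intact, import it by name (or by the numeric
# pool id printed above). -f forces the import if the pool still
# looks "in use" by the previous system.
zpool import -f zfs

# Then confirm the vdev layout and any reported errors.
zpool status zfs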
Thank you for your fast reply. You were right: there is something else wrong.

# zpool import
  pool: zfs
    id: 3801622416844369872
 state: FAULTED
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-5E
config:

        zfs         FAULTED   corrupted data
          raidz1    ONLINE
            c1t1d0  ONLINE
            c7t0d0  UNAVAIL   corrupted data
            c7t1d0  UNAVAIL   corrupted data

NOW !!! How can that be? It was running fine before I changed the motherboard. Right before changing it, the system crashed; isn't ZFS supposed to handle this?

How can I get the data back, or diagnose the problem further?
Another thing:

config:

        zfs         FAULTED   corrupted data
          raidz1    ONLINE
            c1t1d0  ONLINE
            c7t0d0  UNAVAIL   corrupted data
            c7t1d0  UNAVAIL   corrupted data

c7t0d0 & c7t1d0 don't exist any more; that's normal, they are now c2t0d0 & c2t1d0.

AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <DEFAULT cyl 4424 alt 2 hd 255 sec 63>
          /pci@e,0/pci1022,7450@b/pci10f1,2895@6/sd@0,0
       1. c1t1d0 <SEAGATE-ST336754LW-0005-34.18GB>
          /pci@e,0/pci1022,7450@b/pci10f1,2895@6/sd@1,0
       2. c2t0d0 <SEAGATE-ST336753LW-0005-34.18GB>
          /pci@e,0/pci1022,7450@b/pci10f1,2895@6,1/sd@0,0
       3. c2t1d0 <SEAGATE-ST336753LW-HPS2-33.92GB>
          /pci@e,0/pci1022,7450@b/pci10f1,2895@6,1/sd@1,0
On 21 June, 2008 - Victor Pajor sent me these 0,9K bytes:

> c7t0d0 & c7t1d0 don't exist any more; that's normal, they are now c2t0d0 & c2t1d0.

zpool export zfs; zpool import zfs

/Tomas
--
Tomas Ögren, stric at acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
# zpool export zfs
cannot open 'zfs': no such pool

Any command other than zpool import gives "cannot open 'zfs': no such pool".

I can't seem to find any useful information on this type of error. Has anyone had this kind of problem?
What I mean about the error is this: when a system crashes, ZFS just loses its references and thinks the disks are not available, when in fact the same disks worked perfectly right up to the motherboard failure.

Not just asking: isn't ZFS supposed to cope with this kind of crash? There must be a way of diagnosing what is going on.

Here is the label dump after I changed the disk configuration. All I did was add another SCSI controller with 2 more disks.

bash-3.00# zdb -l /dev/rdsk/c4t1d0s0
--------------------------------------------
LABEL 0
--------------------------------------------
    version=4
    name='zfs'
    state=0
    txg=7855332
    pool_guid=3801622416844369872
    hostid=345240675
    hostname='sun'
    top_guid=4004063599069763239
    guid=4086156223654637831
    vdev_tree
        type='raidz'
        id=0
        guid=4004063599069763239
        nparity=1
        metaslab_array=13
        metaslab_shift=30
        ashift=9
        asize=109220462592
        is_log=0
        children[0]
                type='disk'
                id=0
                guid=4086156223654637831
                path='/dev/dsk/c6t1d0s0'
                devid='id1,sd@SSEAGATE_ST336754LW______3KQ22RXB00009711HGC8/a'
                phys_path='/pci@e,0/pci1022,7450@b/pci10f1,2895@6/sd@1,0:a'
                whole_disk=1
                DTL=69
        children[1]
                type='disk'
                id=1
                guid=13320021127057678234
                path='/dev/dsk/c7t0d0s0'
                devid='id1,sd@SSEAGATE_ST336753LW______3HX1QGTC0000741492Y9/a'
                phys_path='/pci@e,0/pci1022,7450@b/pci10f1,2895@6,1/sd@0,0:a'
                whole_disk=1
                DTL=68
        children[2]
                type='disk'
                id=2
                guid=52666612524563381
                path='/dev/dsk/c7t1d0s0'
                devid='id1,sd@SSEAGATE_ST336753LW______3HX2P47Y000075034EBS/a'
                phys_path='/pci@e,0/pci1022,7450@b/pci10f1,2895@6,1/sd@1,0:a'
                whole_disk=1
                DTL=22

LABEL 1, LABEL 2 and LABEL 3 are identical to LABEL 0 and are omitted here.
Victor Pajor wrote:
> Not just asking: isn't ZFS supposed to cope with this kind of crash?
>
> There must be a way of diagnosing what is going on.

I believe this is working as designed. A cache is kept in /etc/zfs/zpool.cache which contains a list of the devices and pools which should be imported automatically at boot time. The alternative is to scan every device, which does not scale well to large systems and can cause consternation for shared storage clusters.

When you changed the motherboard, you also changed the device list, which is why the pools were not imported automatically at boot. This is an unusual case, but the solution is to export (thus removing the entries from zpool.cache) and import (adding new entries to zpool.cache).
 -- richard

> [zdb -l label dump quoted in the original reply, snipped]
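[A minimal sketch of the export/import cycle described above, assuming the pool is named zfs. The distinction between the two cases is my reading of the situation, not something Richard spelled out.]

# If the pool is currently imported, exporting it removes its entry
# from /etc/zfs/zpool.cache and releases the devices:
zpool export zfs

# Re-importing rescans the attached devices, should refresh the device
# paths recorded in the on-disk labels, and writes a fresh entry to
# zpool.cache:
zpool import zfs

# If the pool never came up after the hardware change (as in this
# thread), there is nothing to export; a plain import by name or by
# the numeric pool id is enough to rebuild the cache entry:
zpool import 3801622416844369872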
Here is what I found out.

AVAILABLE DISK SELECTIONS:
       0. c5t0d0 <DEFAULT cyl 4424 alt 2 hd 255 sec 63>
          /pci@e,0/pci1022,7450@b/pci10f1,2895@6/sd@0,0
       1. c5t1d0 <SEAGATE-ST336754LW-0005-34.18GB>
          /pci@e,0/pci1022,7450@b/pci10f1,2895@6/sd@1,0
       2. c6t0d0 <SEAGATE-ST336753LW-0005-34.18GB>
          /pci@e,0/pci1022,7450@b/pci10f1,2895@6,1/sd@0,0
       3. c6t1d0 <SEAGATE-ST336753LW-HPS2-33.92GB>
          /pci@e,0/pci1022,7450@b/pci10f1,2895@6,1/sd@1,0
       4. c7t0d0 <DEFAULT cyl 8921 alt 2 hd 255 sec 63>
          /pci@e,0/pci1022,7450@b/pci9005,f620@4/sd@0,0
       5. c7t1d0 <DEFAULT cyl 8921 alt 2 hd 255 sec 63>
          /pci@e,0/pci1022,7450@b/pci9005,f620@4/sd@1,0

I created a little script:

#!/bin/bash
# Dump the ZFS labels of every device node under /dev/rdsk
for i in $( ls /dev/rdsk ); do
    echo $i
    zdb -l /dev/rdsk/$i
done

Here is some output of this command:

(1) # zdb -l /dev/rdsk/c5t1d0s0
--------------------------------------------
LABEL 0
--------------------------------------------
    version=4
    name='zfs'
    state=0
    txg=7855332
    pool_guid=3801622416844369872
    hostid=345240675
    hostname='sun'
    top_guid=4004063599069763239
    guid=4086156223654637831
    vdev_tree
        type='raidz'
        id=0
        guid=4004063599069763239
        nparity=1
        metaslab_array=13
        metaslab_shift=30
        ashift=9
        asize=109220462592
        is_log=0
        children[0]
                type='disk'
                id=0
                guid=4086156223654637831
                path='/dev/dsk/c6t1d0s0'
                devid='id1,sd@SSEAGATE_ST336754LW______3KQ22RXB00009711HGC8/a'
                phys_path='/pci@e,0/pci1022,7450@b/pci10f1,2895@6/sd@1,0:a'
                whole_disk=1
                DTL=69
        children[1]
                type='disk'
                id=1
                guid=13320021127057678234
                path='/dev/dsk/c7t0d0s0'
                devid='id1,sd@SSEAGATE_ST336753LW______3HX1QGTC0000741492Y9/a'
                phys_path='/pci@e,0/pci1022,7450@b/pci10f1,2895@6,1/sd@0,0:a'
                whole_disk=1
                DTL=68
        children[2]
                type='disk'
                id=2
                guid=52666612524563381
                path='/dev/dsk/c7t1d0s0'
                devid='id1,sd@SSEAGATE_ST336753LW______3HX2P47Y000075034EBS/a'
                phys_path='/pci@e,0/pci1022,7450@b/pci10f1,2895@6,1/sd@1,0:a'
                whole_disk=1
                DTL=22
LABEL 1, LABEL 2 and LABEL 3 are omitted for clarity.

(2) # zdb -l /dev/rdsk/c6t1d0s0
(3) # zdb -l /dev/rdsk/c7t0d0s0
(4) # zdb -l /dev/rdsk/c7t1d0s0

All three commands give this:

--------------------------------------------
LABEL 0
--------------------------------------------
failed to unpack label 0
--------------------------------------------
LABEL 1
--------------------------------------------
failed to unpack label 1
--------------------------------------------
LABEL 2
--------------------------------------------
failed to unpack label 2
--------------------------------------------
LABEL 3
--------------------------------------------
failed to unpack label 3

(5) # zdb -l /dev/rdsk/c7t0d0p0
--------------------------------------------
LABEL 0
--------------------------------------------
    version=4
    name='data'
    state=0
    txg=2333244
    pool_guid=18349152765965118757
    hostid=409943152
    hostname='opensolaris'
    top_guid=4131806235391152254
    guid=13715042150527401204
    vdev_tree
        type='mirror'
        id=0
        guid=4131806235391152254
        metaslab_array=14
        metaslab_shift=29
        ashift=9
        asize=73402941440
        is_log=0
        children[0]
                type='disk'
                id=0
                guid=4088711380714589637
                path='/dev/dsk/c7t1d0p0'
                devid='id1,sd@SSEAGATE_ST373307LC______3HZ0Q50500007329MKUV/q'
                phys_path='/pci@e,0/pci1022,7450@b/pci9005,f620@4/sd@1,0:q'
                whole_disk=0
        children[1]
                type='disk'
                id=1
                guid=13715042150527401204
                path='/dev/dsk/c7t0d0p0'
                devid='id1,sd@SSEAGATE_ST373307LC______3HZ0Q6K000007328JAGW/q'
                phys_path='/pci@e,0/pci1022,7450@b/pci9005,f620@4/sd@0,0:q'
                whole_disk=0

LABEL 1, LABEL 2 and LABEL 3 are omitted for clarity.

Now here's my question: when executing command (1), isn't children[0]'s path supposed to be /dev/dsk/c5t1d0s0 and not /dev/dsk/c6t1d0s0? children[1] & children[2] are also out of sync: they point at device names that now belong to another, mirrored zpool.

Looking at this, it seems the label is screwed up, and at the same time I'm screwed too. Right?
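[A slightly more focused variant of the scan above; a sketch only, assuming zdb -l prints the name= and pool_guid= fields in the indented form shown in these dumps. The point is that the path= strings stored inside a label can go stale after a controller change, while the guids still identify the pool and its member disks.]

#!/bin/bash
# For every s0 slice and p0 node, print the pool name and pool_guid
# found in its ZFS label, if any.
for dev in /dev/rdsk/*s0 /dev/rdsk/*p0; do
    info=$(zdb -l "$dev" 2>/dev/null | egrep " name=| pool_guid=" | sort -u)
    if [ -n "$info" ]; then
        echo "$dev"
        echo "$info"
        echo
    fi
done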
By the looks of things, I don't think I will get any answers. So the moral of the story is (if your data is valuable):

1 - Never trust your hardware or software, unless it's fully redundant.
2 - ALWAYS have an external backup, because even in the best of times, SHIT HAPPENS.
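[For what it's worth, a minimal sketch of one way to keep that external copy with ZFS itself. The pool and dataset names here (zfs/data, a second pool called "backup" on an external disk) are made up for illustration.]

# One-off full copy of a dataset to a pool on an external disk.
zfs snapshot zfs/data@backup-20080621
zfs send zfs/data@backup-20080621 | zfs receive backup/data

# Later copies can be incremental, sending only the changes between
# two snapshots (-F rolls the target back to its last snapshot first).
zfs snapshot zfs/data@backup-20080628
zfs send -i zfs/data@backup-20080621 zfs/data@backup-20080628 | zfs receive -F backup/data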
Can you try just deleting the zpool.cache file and letting it rebuild on import? I would guess a listing of your old devices was in there when the system came back up with the new hardware; the OS stayed the same.
# rm /etc/zfs/zpool.cache
# zpool import
  pool: zfs
    id: 3801622416844369872
 state: FAULTED
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-5E
config:

        zfs         FAULTED   corrupted data
          raidz1    ONLINE
            c5t1d0  ONLINE
            c7t0d0  UNAVAIL   corrupted data
            c7t1d0  UNAVAIL   corrupted data
I'll have to do some thunkin' on this. We just need to get back one of the disks; both would be great, but one more would do the trick.

After all other avenues have been tried, one thing you can try is to boot the 2008.05 live CD without installing the OS. Import the pool from there and see if you have any better luck. If not, try zdb -l again under the live CD, as there have been bugs with it in the past on older versions of the ZFS code.

Will edit this message if I can think of something else to try.
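[A sketch of what that live CD attempt might look like. The -R alternate root, the -f flag, and importing by numeric id are my assumptions, not something the poster specified; device names may also differ under the live CD.]

# From a 2008.05 live CD session, without installing anything:

# See which pools the live environment can discover.
zpool import

# Try importing under an alternate root so nothing collides with the
# live CD's own filesystems; -f overrides the "pool in use by another
# system" check, and the numeric pool id works in place of the name.
zpool import -f -R /mnt 3801622416844369872

# If the import still fails, re-check the labels from this environment.
zdb -l /dev/rdsk/c5t1d0s0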
Booted from 2008.05 and the error was the same as before: corrupted data for the last two disks. zdb -l was also the same as before: it read the label from disk 1 but not from disks 2 & 3.
I found out what my problem was. It's hardware related: my two disks were on a SCSI channel that didn't work properly. It wasn't a ZFS problem.

Thank you to everybody who replied. My bad.