Hello, I have a dual boot of Windows 7 64-bit Enterprise edition and OpenSolaris build 134 on a Sun Ultra 40 M1 workstation, with three hard drives: two in a ZFS mirror and one shared with Windows. For the last two days I was working in Windows. I didn't touch the hard drives in any way, except that I once opened Disk Management to figure out why an external USB hard drive was not being listed. That is the only disk-related thing I can recall doing in the last several days. Today I booted into OpenSolaris and my mirrored pool is gone. zpool status gave me a ZFS-8000-3C error, saying my pool is unavailable. Since I am still able to boot and get to a browser, I tried a zpool import without arguments, then tried exporting the pool, and did some more fiddling. Now I can't get zpool status to show my pool at all. Please help: how do I get my old pool back? I know it's there somewhere. Thanks in advance.

------------------------------------------------------------------------

This is the result of fmdump -eV:

Jul 16 2010 15:17:43.657125275 ereport.fs.zfs.vdev.open_failed
nvlist version: 0
    class = ereport.fs.zfs.vdev.open_failed
    ena = 0x14c954e68900801
    detector = (embedded nvlist)
    nvlist version: 0
        version = 0x0
        scheme = zfs
        pool = 0x4406b127a905c5be
        vdev = 0xe7dce33be87eeca7
    (end detector)
    pool = rsgis
    pool_guid = 0x4406b127a905c5be
    pool_context = 1
    pool_failmode = wait
    vdev_guid = 0xe7dce33be87eeca7
    vdev_type = disk
    vdev_path = /dev/dsk/c9t0d0s0
    vdev_devid = id1,sd@AHITACHI_HDS7225SCSUN250G_0719BN9E3K=VFA100R1DN9E3K/a
    parent_guid = 0xb89f3c5a72a22939
    parent_type = mirror
    prev_state = 0x1
    __ttl = 0x1
    __tod = 0x4c40be67 0x272aef9b

Jul 16 2010 15:17:43.657125080 ereport.fs.zfs.vdev.open_failed
nvlist version: 0
    class = ereport.fs.zfs.vdev.open_failed
    ena = 0x14c954e68900801
    detector = (embedded nvlist)
    nvlist version: 0
        version = 0x0
        scheme = zfs
        pool = 0x4406b127a905c5be
        vdev = 0x6f08aad645681b14
    (end detector)
    pool = rsgis
    pool_guid = 0x4406b127a905c5be
    pool_context = 1
    pool_failmode = wait
    vdev_guid = 0x6f08aad645681b14
    vdev_type = disk
    vdev_path = /dev/dsk/c8t0d0s0
    vdev_devid = id1,sd@AHITACHI_HDS7225SBSUN250G_0615NE18BJ=VDS41DT4EE18BJ/a
    parent_guid = 0xb89f3c5a72a22939
    parent_type = mirror
    prev_state = 0x1
    __ttl = 0x1
    __tod = 0x4c40be67 0x272aeed8

Jul 16 2010 15:17:43.657125769 ereport.fs.zfs.vdev.no_replicas
nvlist version: 0
    class = ereport.fs.zfs.vdev.no_replicas
    ena = 0x14c954e68900801
    detector = (embedded nvlist)
    nvlist version: 0
        version = 0x0
        scheme = zfs
        pool = 0x4406b127a905c5be
        vdev = 0xb89f3c5a72a22939
    (end detector)
    pool = rsgis
    pool_guid = 0x4406b127a905c5be
    pool_context = 1
    pool_failmode = wait
    vdev_guid = 0xb89f3c5a72a22939
    vdev_type = mirror
    parent_guid = 0x4406b127a905c5be
    parent_type = root
    prev_state = 0x1
    __ttl = 0x1
    __tod = 0x4c40be67 0x272af189

Jul 16 2010 15:17:43.657125226 ereport.fs.zfs.zpool
nvlist version: 0
    class = ereport.fs.zfs.zpool
    ena = 0x14c954e68900801
    detector = (embedded nvlist)
    nvlist version: 0
        version = 0x0
        scheme = zfs
        pool = 0x4406b127a905c5be
    (end detector)
    pool = rsgis
    pool_guid = 0x4406b127a905c5be
    pool_context = 1
    pool_failmode = wait
    __ttl = 0x1
    __tod = 0x4c40be67 0x272aef6a

Jul 16 2010 15:25:55.572108990 ereport.fs.zfs.vdev.open_failed
nvlist version: 0
    class = ereport.fs.zfs.vdev.open_failed
    ena = 0x1588f5aa2b00801
    detector = (embedded nvlist)
    nvlist version: 0
        version = 0x0
        scheme = zfs
        pool = 0x4406b127a905c5be
        vdev = 0x6f08aad645681b14
    (end detector)
    pool = rsgis
    pool_guid = 0x4406b127a905c5be
    pool_context = 1
    pool_failmode = wait
    vdev_guid = 0x6f08aad645681b14
    vdev_type = disk
    vdev_path = /dev/dsk/c8t0d0s0
    vdev_devid = id1,sd@AHITACHI_HDS7225SBSUN250G_0615NE18BJ=VDS41DT4EE18BJ/a
    parent_guid = 0xb89f3c5a72a22939
    parent_type = mirror
    prev_state = 0x1
    __ttl = 0x1
    __tod = 0x4c40c053 0x2219b0be

Jul 16 2010 15:25:55.572108617 ereport.fs.zfs.vdev.open_failed
nvlist version: 0
    class = ereport.fs.zfs.vdev.open_failed
    ena = 0x1588f5aa2b00801
    detector = (embedded nvlist)
    nvlist version: 0
        version = 0x0
        scheme = zfs
        pool = 0x4406b127a905c5be
        vdev = 0xe7dce33be87eeca7
    (end detector)
    pool = rsgis
    pool_guid = 0x4406b127a905c5be
    pool_context = 1
    pool_failmode = wait
    vdev_guid = 0xe7dce33be87eeca7
    vdev_type = disk
    vdev_path = /dev/dsk/c9t0d0s0
    vdev_devid = id1,sd@AHITACHI_HDS7225SCSUN250G_0719BN9E3K=VFA100R1DN9E3K/a
    parent_guid = 0xb89f3c5a72a22939
    parent_type = mirror
    prev_state = 0x1
    __ttl = 0x1
    __tod = 0x4c40c053 0x2219af49

Jul 16 2010 15:25:55.572108598 ereport.fs.zfs.vdev.no_replicas
nvlist version: 0
    class = ereport.fs.zfs.vdev.no_replicas
    ena = 0x1588f5aa2b00801
    detector = (embedded nvlist)
    nvlist version: 0
        version = 0x0
        scheme = zfs
        pool = 0x4406b127a905c5be
        vdev = 0xb89f3c5a72a22939
    (end detector)
    pool = rsgis
    pool_guid = 0x4406b127a905c5be
    pool_context = 1
    pool_failmode = wait
    vdev_guid = 0xb89f3c5a72a22939
    vdev_type = mirror
    parent_guid = 0x4406b127a905c5be
    parent_type = root
    prev_state = 0x1
    __ttl = 0x1
    __tod = 0x4c40c053 0x2219af36

Jul 16 2010 15:25:55.572108718 ereport.fs.zfs.zpool
nvlist version: 0
    class = ereport.fs.zfs.zpool
    ena = 0x1588f5aa2b00801
    detector = (embedded nvlist)
    nvlist version: 0
        version = 0x0
        scheme = zfs
        pool = 0x4406b127a905c5be
    (end detector)
    pool = rsgis
    pool_guid = 0x4406b127a905c5be
    pool_context = 1
    pool_failmode = wait
    __ttl = 0x1
    __tod = 0x4c40c053 0x2219afae

Jul 16 2010 15:51:47.599791679 ereport.fs.zfs.vdev.open_failed
nvlist version: 0
    class = ereport.fs.zfs.vdev.open_failed
    ena = 0x1495a271ee00c01
    detector = (embedded nvlist)
    nvlist version: 0
        version = 0x0
        scheme = zfs
        pool = 0x4406b127a905c5be
        vdev = 0x6f08aad645681b14
    (end detector)
    pool = rsgis
    pool_guid = 0x4406b127a905c5be
    pool_context = 1
    pool_failmode = wait
    vdev_guid = 0x6f08aad645681b14
    vdev_type = disk
    vdev_path = /dev/dsk/c8t0d0s0
    vdev_devid = id1,sd@AHITACHI_HDS7225SBSUN250G_0615NE18BJ=VDS41DT4EE18BJ/a
    parent_guid = 0xb89f3c5a72a22939
    parent_type = mirror
    prev_state = 0x1
    __ttl = 0x1
    __tod = 0x4c40c663 0x23c0183f

Jul 16 2010 15:51:47.599791560 ereport.fs.zfs.vdev.open_failed
nvlist version: 0
    class = ereport.fs.zfs.vdev.open_failed
    ena = 0x1495a271ee00c01
    detector = (embedded nvlist)
    nvlist version: 0
        version = 0x0
        scheme = zfs
        pool = 0x4406b127a905c5be
        vdev = 0xe7dce33be87eeca7
    (end detector)
    pool = rsgis
    pool_guid = 0x4406b127a905c5be
    pool_context = 1
    pool_failmode = wait
    vdev_guid = 0xe7dce33be87eeca7
    vdev_type = disk
    vdev_path = /dev/dsk/c9t0d0s0
    vdev_devid = id1,sd@AHITACHI_HDS7225SCSUN250G_0719BN9E3K=VFA100R1DN9E3K/a
    parent_guid = 0xb89f3c5a72a22939
    parent_type = mirror
    prev_state = 0x1
    __ttl = 0x1
    __tod = 0x4c40c663 0x23c017c8

Jul 16 2010 15:51:47.599792025 ereport.fs.zfs.vdev.no_replicas
nvlist version: 0
    class = ereport.fs.zfs.vdev.no_replicas
    ena = 0x1495a271ee00c01
    detector = (embedded nvlist)
    nvlist version: 0
        version = 0x0
        scheme = zfs
        pool = 0x4406b127a905c5be
        vdev = 0xb89f3c5a72a22939
    (end detector)
    pool = rsgis
    pool_guid = 0x4406b127a905c5be
    pool_context = 1
    pool_failmode = wait
    vdev_guid = 0xb89f3c5a72a22939
    vdev_type = mirror
    parent_guid = 0x4406b127a905c5be
    parent_type = root
    prev_state = 0x1
    __ttl = 0x1
    __tod = 0x4c40c663 0x23c01999

Jul 16 2010 15:51:47.599791994 ereport.fs.zfs.zpool
nvlist version: 0
    class = ereport.fs.zfs.zpool
    ena = 0x1495a271ee00c01
    detector = (embedded nvlist)
    nvlist version: 0
        version = 0x0
        scheme = zfs
        pool = 0x4406b127a905c5be
    (end detector)
    pool = rsgis
    pool_guid = 0x4406b127a905c5be
    pool_context = 1
    pool_failmode = wait
    __ttl = 0x1
    __tod = 0x4c40c663 0x23c0197a
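P.S. In case the hex timestamps are useful to anyone: I believe the first __tod word is the event time in seconds since the epoch (the second word is nanoseconds), so a one-liner like

    perl -e 'print scalar localtime(0x4c40be67), "\n"'

should print the first event's time in this machine's timezone, matching the Jul 16 15:17:43 header line above.

--------------------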
On Sat, Jul 17, 2010 at 10:55 AM, Amit Kulkarni <amitkulz at yahoo.com> wrote:
> I did a zpool status and it gave me a ZFS-8000-3C error, saying my pool
> is unavailable. Since I am still able to boot and get to a browser, I
> tried a zpool import without arguments, then tried exporting the pool,
> and did some more fiddling. Now I can't get zpool status to show my
> pool at all.
>
>    vdev_path = /dev/dsk/c9t0d0s0
>    vdev_devid = id1,sd@AHITACHI_HDS7225SCSUN250G_0719BN9E3K=VFA100R1DN9E3K/a
>    parent_guid = 0xb89f3c5a72a22939

Does format(1M) show the devices where they once were?

--
Giovanni Tirloni
gtirloni at sysdroid.com
> > I did a zpool status and it gave me a ZFS-8000-3C error, saying my
> > pool is unavailable. Since I am still able to boot and get to a
> > browser, I tried a zpool import without arguments, then tried
> > exporting the pool, and did some more fiddling. Now I can't get
> > zpool status to show my pool at all.
> >
> >    vdev_path = /dev/dsk/c9t0d0s0
> >    vdev_devid = id1,sd@AHITACHI_HDS7225SCSUN250G_0719BN9E3K=VFA100R1DN9E3K/a
> >    parent_guid = 0xb89f3c5a72a22939
>
> Does format(1M) show the devices where they once were?

I don't know if the devices have been renumbered. How do you tell whether the devices have changed?

Here is the output of format; the middle one is the boot drive, and selections 0 & 2 are the ZFS mirror halves:

AVAILABLE DISK SELECTIONS:
       0. c8t0d0 <ATA-HITACHIHDS7225S-A94A cyl 30398 alt 2 hd 255 sec 63>
          /pci@0,0/pci108e,534a@7/disk@0,0
       1. c8t1d0 <DEFAULT cyl 15010 alt 2 hd 255 sec 63>
          /pci@0,0/pci108e,534a@7/disk@1,0
       2. c9t0d0 <ATA-HITACHIHDS7225S-A7BA cyl 30398 alt 2 hd 255 sec 63>
          /pci@0,0/pci108e,534a@8/disk@0,0

Thanks
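P.S. In case it saves someone a step: I believe you can capture that disk listing non-interactively with

    format </dev/null

since format prints the AVAILABLE DISK SELECTIONS menu and then exits when standard input is closed.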
On Sat, Jul 17, 2010 at 3:07 PM, Amit Kulkarni <amitkulz at yahoo.com> wrote:
> I don't know if the devices have been renumbered. How do you tell whether the devices have changed?
>
> Here is the output of format; the middle one is the boot drive, and selections 0 & 2 are the ZFS mirror halves:
>
> AVAILABLE DISK SELECTIONS:
>        0. c8t0d0 <ATA-HITACHIHDS7225S-A94A cyl 30398 alt 2 hd 255 sec 63>
>           /pci@0,0/pci108e,534a@7/disk@0,0
>        1. c8t1d0 <DEFAULT cyl 15010 alt 2 hd 255 sec 63>
>           /pci@0,0/pci108e,534a@7/disk@1,0
>        2. c9t0d0 <ATA-HITACHIHDS7225S-A7BA cyl 30398 alt 2 hd 255 sec 63>
>           /pci@0,0/pci108e,534a@8/disk@0,0

It seems that the devices ZFS is trying to open exist. I wonder why it's failing.

Please send the output of:

zpool status
zpool import
zdb -C                      (dump config)
zdb -l /dev/dsk/c8t0d0s0    (dump label contents)
zdb -l /dev/dsk/c9t0d0s0    (dump label contents)
check /var/adm/messages

Perhaps with the additional information someone here can help you better. I don't have any experience with Windows 7, so I can't guarantee that it hasn't messed with the disk contents.

--
Giovanni Tirloni
gtirloni at sysdroid.com
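P.S. A small sketch that gathers all of that in one go, assuming the device names from your format output (zdb -l should dump the four ZFS label copies on each slice, or print the open error if it can't read them):

    zpool status
    zpool import
    zdb -C
    for d in c8t0d0s0 c9t0d0s0; do
        echo "== $d =="
        zdb -l /dev/dsk/$d 2>&1
    done
    tail -100 /var/adm/messages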
Hi,

The other pool, rsgis, disappeared after some zfs import/export attempts I made. The lost data can be recreated, except for some downloaded PDFs. I should have done a zpool history before fiddling with import/export. Can somebody tell me if there is a way to recall the history of the pool rsgis? Does ZFS keep it around somewhere? Time Slider is not enabled on this machine.

Giovanni, thanks for your kind help.

amit

> Please send the output of:
>
> zpool status

amit@pilloo:~# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c8t1d0s0  ONLINE       0     0     0

errors: No known data errors

> zpool import

No output; it just succeeded.

> zdb -C (dump config)

amit@pilloo:~# zdb -C
rpool:
    version: 22
    name: 'rpool'
    state: 0
    txg: 49081
    pool_guid: 5580583254536803350
    hostid: 9169892
    hostname: ''
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 5580583254536803350
        children[0]:
            type: 'disk'
            id: 0
            guid: 3070933690672745079
            path: '/dev/dsk/c8t1d0s0'
            devid: 'id1,sd@SATA_____HITACHI_HDS7225S______VFA100R1DN9GWK/a'
            phys_path: '/pci@0,0/pci108e,534a@7/disk@1,0:a'
            whole_disk: 0
            metaslab_array: 23
            metaslab_shift: 30
            ashift: 9
            asize: 123448328192
            is_log: 0
            DTL: 52

> zdb -l /dev/dsk/c8t0d0s0 (dump label contents)

amit@pilloo:~# zdb -l /dev/dsk/c8t0d0s0
cannot open '/dev/dsk/c8t0d0s0': I/O error

> zdb -l /dev/dsk/c9t0d0s0 (dump label contents)

amit@pilloo:~# zdb -l /dev/dsk/c9t0d0s0
cannot open '/dev/dsk/c9t0d0s0': I/O error

> check /var/adm/messages

Jul 16 15:17:56 pilloo fmd: [ID 377184 daemon.error] SUNW-MSG-ID: ZFS-8000-D3, TYPE: Fault, VER: 1, SEVERITY: Major
Jul 16 15:17:56 pilloo EVENT-TIME: Fri Jul 16 15:17:56 CDT 2010
Jul 16 15:17:56 pilloo PLATFORM: Sun-Ultra-40-Workstation, CSN: 0630FH500D, HOSTNAME: pilloo
Jul 16 15:17:56 pilloo SOURCE: zfs-diagnosis, REV: 1.0
Jul 16 15:17:56 pilloo EVENT-ID: 5c31f877-b24f-6754-ffb3-fc6ff7c73ab7
Jul 16 15:17:56 pilloo DESC: A ZFS device failed. Refer to http://sun.com/msg/ZFS-8000-D3 for more information.
Jul 16 15:17:56 pilloo AUTO-RESPONSE: No automated response will occur.
Jul 16 15:17:56 pilloo IMPACT: Fault tolerance of the pool may be compromised.
Jul 16 15:17:56 pilloo REC-ACTION: Run 'zpool status -x' and replace the bad device.
Jul 16 15:17:57 pilloo fmd: [ID 377184 daemon.error] SUNW-MSG-ID: ZFS-8000-D3, TYPE: Fault, VER: 1, SEVERITY: Major
Jul 16 15:17:57 pilloo EVENT-TIME: Fri Jul 16 15:17:56 CDT 2010
Jul 16 15:17:57 pilloo PLATFORM: Sun-Ultra-40-Workstation, CSN: 0630FH500D, HOSTNAME: pilloo
Jul 16 15:17:57 pilloo SOURCE: zfs-diagnosis, REV: 1.0
Jul 16 15:17:57 pilloo EVENT-ID: f1a868c2-ae1e-cd41-ac94-ce6ff96671be
Jul 16 15:17:57 pilloo DESC: A ZFS device failed. Refer to http://sun.com/msg/ZFS-8000-D3 for more information.
Jul 16 15:17:57 pilloo AUTO-RESPONSE: No automated response will occur.
Jul 16 15:17:57 pilloo IMPACT: Fault tolerance of the pool may be compromised.
Jul 16 15:17:57 pilloo REC-ACTION: Run 'zpool status -x' and replace the bad device.
Jul 16 15:17:57 pilloo fmd: [ID 377184 daemon.error] SUNW-MSG-ID: ZFS-8000-CS, TYPE: Fault, VER: 1, SEVERITY: Major
Jul 16 15:17:57 pilloo EVENT-TIME: Fri Jul 16 15:17:57 CDT 2010
Jul 16 15:17:57 pilloo PLATFORM: Sun-Ultra-40-Workstation, CSN: 0630FH500D, HOSTNAME: pilloo
Jul 16 15:17:57 pilloo SOURCE: zfs-diagnosis, REV: 1.0
Jul 16 15:17:57 pilloo EVENT-ID: 608e6cb0-2226-c11a-8576-9e67f52a2240
Jul 16 15:17:57 pilloo DESC: A ZFS pool failed to open. Refer to http://sun.com/msg/ZFS-8000-CS for more information.
Jul 16 15:17:57 pilloo AUTO-RESPONSE: No automated response will occur.
Jul 16 15:17:57 pilloo IMPACT: The pool data is unavailable
Jul 16 15:17:57 pilloo REC-ACTION: Run 'zpool status -x' and attach any missing devices, follow
Jul 16 15:17:57 pilloo any provided recovery instructions or restore from backup.

>
> Perhaps with the additional information someone here can help you
> better. I don't have any experience with Windows 7, so I can't
> guarantee that it hasn't messed with the disk contents.
>
> --
> Giovanni Tirloni
> gtirloni at sysdroid.com
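P.S. One more thought: since zdb cannot even open those slices, could the Solaris disk label (VTOC) itself have been overwritten? If I understand right, something like

    prtvtoc /dev/rdsk/c8t0d0s2
    prtvtoc /dev/rdsk/c9t0d0s2

should print each disk's slice table if the label is still there, and fail with an error if it is gone (s2 is the conventional whole-disk slice; device names taken from the format output above).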
I think that Device Manager in Windows 7 doesn't do any harm. Instead, I used this utility to try to format an external USB hard drive:

http://www.ridgecrop.demon.co.uk/fat32format.htm

I used the GUI version:

http://www.ridgecrop.demon.co.uk/guiformat.htm

I clicked and started this GUI format without inserting the USB hard drive, and it messed up the ZFS mirror. Entirely my fault, but I am posting it here so there is some record of it. I don't know what the utility may have modified.

--- On Sat, 7/17/10, Giovanni Tirloni <gtirloni at sysdroid.com> wrote:

> From: Giovanni Tirloni <gtirloni at sysdroid.com>
> Subject: Re: [zfs-discuss] Lost zpool after reboot
> To: "Amit Kulkarni" <amitkulz at yahoo.com>
> Cc: zfs-discuss at opensolaris.org
> Date: Saturday, July 17, 2010, 6:23 PM
> On Sat, Jul 17, 2010 at 3:07 PM, Amit Kulkarni <amitkulz at yahoo.com> wrote:
> > I don't know if the devices have been renumbered. How do you tell whether the devices have changed?
> >
> > Here is the output of format; the middle one is the boot drive, and selections 0 & 2 are the ZFS mirror halves:
> >
> > AVAILABLE DISK SELECTIONS:
> >        0. c8t0d0 <ATA-HITACHIHDS7225S-A94A cyl 30398 alt 2 hd 255 sec 63>
> >           /pci@0,0/pci108e,534a@7/disk@0,0
> >        1. c8t1d0 <DEFAULT cyl 15010 alt 2 hd 255 sec 63>
> >           /pci@0,0/pci108e,534a@7/disk@1,0
> >        2. c9t0d0 <ATA-HITACHIHDS7225S-A7BA cyl 30398 alt 2 hd 255 sec 63>
> >           /pci@0,0/pci108e,534a@8/disk@0,0
>
> It seems that the devices ZFS is trying to open exist. I wonder why it's failing.
>
> Please send the output of:
>
> zpool status
> zpool import
> zdb -C (dump config)
> zdb -l /dev/dsk/c8t0d0s0 (dump label contents)
> zdb -l /dev/dsk/c9t0d0s0 (dump label contents)
> check /var/adm/messages
>
> Perhaps with the additional information someone here can help you better. I don't have any experience with Windows 7, so I can't guarantee that it hasn't messed with the disk contents.
>
> --
> Giovanni Tirloni
> gtirloni at sysdroid.com
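P.S. If the format tool only wrote to the front of each disk, then as I understand it ZFS keeps two more label copies in the last 512 KB of each vdev (labels 2 and 3), so after the partition table is recreated to match the old layout there might still be something for

    zdb -l /dev/dsk/c8t0d0s0

to find. I am only guessing here; I would appreciate confirmation from someone who has tried this.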