I had an rpool with two SATA disks in a mirror, on Solaris 10 5.10 Generic_141415-08 i86pc i386 i86pc.

Unfortunately the first disk, the one with the GRUB loader, has failed with unrecoverable block read/write errors. Now I have the problem of importing the rpool after the first disk has failed.

So I tried "zpool import -f rpool" with only the second disk present, but it hangs and the system reboots. I have tried the import on a fresh Solaris 10 05/09 install, from the Solaris CD in single-user mode, and from the OpenSolaris 2009.11 live CD, but all of them panic and restart.

I can see that the pool exists, and the labels are readable from both disks. What can I do to check and recover the data from the second disk?

I have a few identical disks that I could use to make a clone of the second disk. Would it be possible to do that with the dd command and then use the clone to work out how to recover from this situation?

Regards,
Vladimir
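For reference, a raw block-level clone of the kind asked about here could look roughly like the following. This is only a sketch: the device names are placeholders for the surviving mirror disk and one of the equal-sized spares, p0 addresses the whole disk on x86, and conv=noerror,sync makes dd continue past any unreadable blocks (padding them with zeros) instead of aborting:

dd if=/dev/rdsk/c1d0p0 of=/dev/rdsk/c2d0p0 bs=1024k conv=noerror,sync

Working only on the clone afterwards keeps the last good copy of the data untouched while experimenting with recovery.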
I wonder if one problem is that you already have an rpool when you are booted off the CD. Could you do

zpool import rpool rpool2

to rename it?

Also, if the system keeps rebooting on the crash, you could add these to your /etc/system (but not if you are booting from disk):

set zfs:zfs_recover=1
set aok=1

That solved an import/reboot loop I had a few months ago.
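For clarity, the /etc/system additions suggested above would look like the fragment below (a sketch; comments in /etc/system start with an asterisk). The settings only take effect after a reboot, and they make certain ZFS errors and assertion failures non-fatal, so they are best treated as temporary recovery settings and removed once the pool is recovered:

* temporary ZFS recovery settings -- remove after the pool is recovered
set zfs:zfs_recover=1
set aok=1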
Hi David,

Thanks for the tip, but I couldn't use just "zpool import rpool", because it always said that the pool was in use by another system, so I could only try it with the force switch "-f".

On the Solaris 10 system that I'm using now to recover this rpool, the root pool is named mypool01, so it should not collide with the affected rpool that I want to recover and import, should it?

So "zpool import rpool rpool2" should just rename the pool and import it under the new name? I'm wondering whether I need "-f" to force it, and whether I could lose or damage something if "zpool import -f rpool rpool2" hangs again.

Anyway, I will try what you've suggested with the settings in /etc/system, and mount it using:

zpool import -f -R /mnt rpool

Do you know of any way to make a safe clone of the affected disk?

Regards,
Vladimir
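One way to sidestep any name clash with the rescue system's own root pool is to import by the numeric pool id that "zpool import" prints, renaming the pool and giving it an alternate root in a single step. A sketch, with <pool-id> standing in for the number reported by "zpool import":

zpool import -f -R /mnt <pool-id> rpool2

The -R /mnt keeps the imported filesystems from mounting over the running system, and the final argument is the new name the pool is imported under.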
Make sure you reboot after adding that to /etc/system.

For making a safe clone, I know there must be other ways to do it, but you could make a new zpool and do a zfs send/receive from the old datasets to the new ones:

zfs snapshot -r yourpool@snapshot
zfs send -R yourpool@snapshot | zfs recv -vFd yournewpool

--
HUGE | David Stahl
Sr. Systems Administrator
718 233 9164 / F 718 625 5157
www.hugeinc.com
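A rough end-to-end version of the above, assuming the damaged pool can actually be imported (here under the new name rpool2) long enough to copy from, and with c2d0 as a placeholder for one of the spare disks:

zpool create newpool c2d0
zfs snapshot -r rpool2@rescue
zfs send -R rpool2@rescue | zfs recv -vFd newpool

-R on the send side replicates all descendant datasets, snapshots and properties, and -d on the receive side recreates the same dataset layout under the new pool.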
I tried to rename and import the rpool and to use those /etc/system settings, but without success. :-(

I've also tried this from an installed OpenSolaris 5.11 snv_111b, and I get the same result as with Solaris 10.
vladimir@opensolaris:~# zpool import
  pool: rpool
    id: 8451126758019843293
 state: DEGRADED
status: The pool was last accessed by another system.
action: The pool can be imported despite missing or damaged devices.  The
        fault tolerance of the pool may be compromised if imported.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

        rpool        DEGRADED
          mirror     DEGRADED
            c8d0s0   UNAVAIL  cannot open
            c10d0s0  ONLINE
Command "zpool import -f 8451126758019843293 rpool2" didn''t
change the name
of the pool in rpool2, but it updated labels with hostid before server
kernel panic came:
old labels have:
hostid=459264558
hostname=''''
new label have:
hostid=12870168
hostname=''opensolaris''
I didn''t notice other changes in labels.
I'm lost now, but I still have hope that I will be able to recover and import this rpool.

Is there any other way to test and check the ZFS consistency of this pool using a tool such as zdb? The problem is that the system panics and reboots, so I'm not sure how to run the checks and capture their output when I only have a few seconds to watch the "zpool import" process.

Please find below the messages output from the moment of the hang, and the label listing at the end.
Regards,
Vladimir
<cut>
vladimir@opensolaris:/# less /var/adm/messages
Aug 13 06:34:32 opensolaris zfs: [ID 517898 kern.warning] WARNING: can't open objset for rpool2/ROOT/s10x_u6wos_07b
Aug 13 06:34:32 opensolaris unix: [ID 836849 kern.notice]
Aug 13 06:34:32 opensolaris ^Mpanic[cpu1]/thread=d507edc0:
Aug 13 06:34:32 opensolaris genunix: [ID 697804 kern.notice] vmem_hash_delete(d2404690, fe9471bf, 1411362336): bad free
Aug 13 06:34:32 opensolaris unix: [ID 100000 kern.notice]
Aug 13 06:34:32 opensolaris genunix: [ID 353471 kern.notice] d507ea28 genunix:vmem_hash_delete+d2 (d2404690, fe9471bf,)
Aug 13 06:34:32 opensolaris genunix: [ID 353471 kern.notice] d507ea68 genunix:vmem_xfree+29 (d2404690, fe9471bf,)
Aug 13 06:34:32 opensolaris genunix: [ID 353471 kern.notice] d507ea88 genunix:vmem_free+21 (d2404690, fe9471bf,)
Aug 13 06:34:32 opensolaris genunix: [ID 353471 kern.notice] d507eac8 genunix:kmem_free+36 (fe9471bf, 541fae20,)
Aug 13 06:34:32 opensolaris genunix: [ID 353471 kern.notice] d507eb08 zfs:dmu_buf_rele_array+a6 (fe9471bf, d507eb88,)
Aug 13 06:34:32 opensolaris genunix: [ID 353471 kern.notice] d507eb68 zfs:dmu_write+160 (d6d19a98, be, 0, 98)
Aug 13 06:34:32 opensolaris genunix: [ID 353471 kern.notice] d507ec08 zfs:space_map_sync+304 (dee3a838, 1, dee3a6)
Aug 13 06:34:32 opensolaris genunix: [ID 353471 kern.notice] d507ec78 zfs:metaslab_sync+284 (dee3a680, 970be, 0,)
Aug 13 06:34:32 opensolaris genunix: [ID 353471 kern.notice] d507ecb8 zfs:vdev_sync+c6 (dd858000, 970be, 0)
Aug 13 06:34:32 opensolaris genunix: [ID 353471 kern.notice] d507ed28 zfs:spa_sync+3d0 (d9fcf700, 970be, 0,)
Aug 13 06:34:32 opensolaris genunix: [ID 353471 kern.notice] d507eda8 zfs:txg_sync_thread+308 (e5914380, 0)
Aug 13 06:34:32 opensolaris genunix: [ID 353471 kern.notice] d507edb8 unix:thread_start+8 ()
Aug 13 06:34:32 opensolaris unix: [ID 100000 kern.notice]
Aug 13 06:34:32 opensolaris genunix: [ID 672855 kern.notice] syncing file systems...
Aug 13 06:34:32 opensolaris genunix: [ID 904073 kern.notice] done
Aug 13 06:34:33 opensolaris genunix: [ID 111219 kern.notice] dumping to /dev/zvol/dsk/rpool/dump, offset 65536, content: kernel
Aug 13 06:34:50 opensolaris genunix: [ID 409368 kern.notice] ^M100% done: 122747 pages dumped, compression ratio 1.87,
Aug 13 06:34:50 opensolaris genunix: [ID 851671 kern.notice] dump succeeded
Aug 13 06:35:21 opensolaris genunix: [ID 540533 kern.notice] ^MSunOS Release 5.11 Version snv_111b 32-bit
Aug 13 06:35:21 opensolaris genunix: [ID 943908 kern.notice] Copyright 1983-2009 Sun Microsystems, Inc. All rights reserved.
vladimir@opensolaris:/# zdb -l /dev/dsk/c10d0s0
--------------------------------------------
LABEL 0
--------------------------------------------
    version=10
    name='rpool'
    state=0
    txg=618685
    pool_guid=8451126758019843293
    hostid=12870168
    hostname='opensolaris'
    top_guid=12565539731591116699
    guid=7091554162966221179
    vdev_tree
        type='mirror'
        id=0
        guid=12565539731591116699
        whole_disk=0
        metaslab_array=15
        metaslab_shift=31
        ashift=9
        asize=400031744000
        is_log=0
        children[0]
                type='disk'
                id=0
                guid=12840919567323880481
                path='/dev/dsk/c8d0s0'
                devid='id1,cmdk@AWDC_WD4000YR-01PLB0=_____WD-WMAMY1259386/a'
                phys_path='/pci@0,0/pci-ide@1f,2/ide@0/cmdk@0,0:a'
                whole_disk=0
                DTL=71
        children[1]
                type='disk'
                id=1
                guid=7091554162966221179
                path='/dev/dsk/c9d0s0'
                devid='id1,cmdk@AWDC_WD4000YR-01PLB0=_____WD-WMAMY1346334/a'
                phys_path='/pci@0,0/pci-ide@1f,2/ide@1/cmdk@0,0:a'
                whole_disk=0
                DTL=70
--------------------------------------------
LABEL 1 through LABEL 3
--------------------------------------------
    (identical to LABEL 0 above)
</cut>
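Given that every import attempt ends in a panic, a read-only look at the pool with zdb may be safer than another import. zdb can usually examine a pool that is not currently imported via its -e option; the following is only a sketch (the flags shown are common choices for this kind of inspection, and a full block traversal of a 400 GB disk can take a long time):

zdb -e -u rpool      (print the active uberblock)
zdb -e -d rpool      (list the datasets in the pool)
zdb -e -bb rpool     (traverse and account for every block; slow but read-only)

Note that if the rescue environment has its own pool called rpool, zdb may pick up the wrong one, so this is easiest from a system whose root pool has a different name (such as the Solaris 10 install with mypool01 mentioned earlier).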
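Since the log above shows the panic dump being written successfully, the crash can also be examined offline after the reboot, which avoids having only a few seconds to catch output on the console. A sketch, assuming the default crash directory and dump numbering; the savecore step is only needed if the dump is still sitting there in compressed vmdump form:

cd /var/crash/opensolaris
savecore -vf vmdump.0
mdb unix.0 vmcore.0
::status
::stack
$q

::status repeats the panic string and ::stack prints the panicking thread's stack, i.e. the same information as in the messages excerpt, but without racing the reboot.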