(From the README)
# Jeb Campbell <jebc at c4solutions.net>
NOTE: This is a last resort if you need your data now. It worked for me, and
I hope it works for you. If you have any reservations, please wait for Sun
to release something official, and don't blame me if your data is gone.

PS -- This worked for me because I didn't try to replace the log on a running
system. My log got borked in a system crash, but others have had data loss
after trying to zpool replace the log device.
Please compile (and read the source) if you can, but just in case:
md5sum logfix
fc00c9494769abbc4e309d2efb13d11b logfix
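If your md5sum supports -c (the GNU version does), you can also check it
without eyeballing the hash:

# echo "fc00c9494769abbc4e309d2efb13d11b  logfix" | md5sum -c -
logfix: OK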
Currently (as of 6/5/2008), if a log device gets wiped or lost from a pool,
you are no longer able to import the pool.

The technical reason is that the log device's info is not stored anywhere else,
so when you try to import the pool, the sum of vdev guids no longer matches
what the pool recorded (the missing log's guid drops out of the sum).
Perhaps ZFS shouldn't use log and cache devices in computing the pool's
checksum...
Here is what you will need to recover:
** /etc/zfs/zpool.cache from your install **
To get this, I had to boot from a livecd, then:
# zpool import -f rpool
# mount -F zfs rpool/ROOT/opensolaris /mnt
# cp /mnt/etc/zfs/zpool.cache /mnt/etc/zfs/zpool.cache.log
Then you can copy that somewhere else, but save that copy...
** guid of log device **
The *only* place that guid exists is zpool.cache.log (assuming the log
device is wiped). So stash that file in 5 places if you need to...
Back to extracting the guid, this worked for me:
# cp /mnt/etc/zfs/zpool.cache.log /etc/zfs
# cd /etc/zfs
# mv zpool.cache zpool.cache.running; cp zpool.cache.log zpool.cache; \
zdb -C > cache_dump; cp zpool.cache.running zpool.cache
We just slip the saved file in, dump it, then restore the old one.
(I think zdb can load an alt cache file, but I couldn't do it.)
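Depending on the build, zdb can read an alternate cache file directly via -U
(on some versions -U behaves differently or takes no argument, so check zdb's
usage output first). If yours supports it, this avoids the swap dance:

# zdb -U /etc/zfs/zpool.cache.log -C > cache_dump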
Now examine the cache_dump file and find your pool, then log device.
We are looking for the "guid" of the log device. Once you have it,
again save it 10 places (and not on the livecd).
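For reference, the log device appears in the dump as one of the pool's
children with is_log set to 1, roughly like this (the values below are made
up; only the shape matters):

        children[1]:
            type: 'disk'
            id: 1
            guid: 4242424242424242424    <- the number you need
            path: '/dev/dsk/c5t0d0s0'
            is_log: 1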
** format your new log device **
You still need a log device to bring this back up. Anything will do
if you are going to migrate your data off this setup. I chose to keep
my log device, now that I have a way to restore it.
# cd /tmp
# dd if=/dev/zero of=junk bs=1024k count=64
# zpool create junkpool /tmp/junk log your_new_log_device
# zpool export junkpool
** Find your old disks **
You need one of your pool's devices to read the vdev_label. It will
generally look something like this: /dev/rdsk/cXtXdXs0 (or cXdXs0).
Check out the label with:
# zdb -l /dev/rdsk/cXtXdXs0
(You can also find your old disks with zpool import on a livecd)
** Fix it up! **
For disk based log devices:
# ./logfix /dev/rdsk/cXtXdXs0 /dev/rdsk/${your_new_log_device}s0 guid
For file based log devices (this will be slow -- get the data off...):
# ./logfix /dev/rdsk/cXtXdXs0 /path/your_new_log_file guid
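Either way, before importing it is worth re-reading the label on the new log
device (or file) to make sure the old guid actually took -- just a sanity
check, not part of logfix itself:

# zdb -l /dev/rdsk/${your_new_log_device}s0 | grep guid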
** Import your pool **
Please, before we go any further, COPY YOUR GUID SOMEWHERE SAFE!!!
# zpool import -f pool
This might take a while as ZFS does its thing and checks everything out.
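If you want to double-check the result once the import completes, a scrub will
read everything back and report any damage:

# zpool status -v pool
# zpool scrub pool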
I hope everything went ok! -- Jeb
Source and binary attached.
This message posted from opensolaris.org
[Attachment: logfix.tgz (application/x-gzip, 7406 bytes)
<http://mail.opensolaris.org/pipermail/zfs-discuss/attachments/20080605/194305b0/attachment.bin>]
Sorry for reviving this old thread.

I have this problem too, on my (production) backup server. I lost my system HDD
and my separate ZIL device when the system crashed, and now I'm in trouble.
The old system was running the latest version of osol/dev (snv_134) with zfs
v22. After the server crashed I was optimistic about solving the problem the
same day. That was a long time ago now.

I set up a new system (osol 2009.06, then updated to the latest osol/dev build,
snv_134, with deduplication) and tried to import my backup zpool, but it does
not work.
# zpool import
  pool: tank1
    id: 5048704328421749681
 state: UNAVAIL
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

        tank1       UNAVAIL  missing device
          raidz2-0  ONLINE
            c7t5d0  ONLINE
            c7t0d0  ONLINE
            c7t6d0  ONLINE
            c7t3d0  ONLINE
            c7t1d0  ONLINE
            c7t4d0  ONLINE
            c7t2d0  ONLINE
# zpool import -f tank1
cannot import 'tank1': one or more devices is currently unavailable
        Destroy and re-create the pool from
        a backup source
Any other option (-F, -X, -V, -D), and any combination of them, does not help
either.
I can not add / attach / detach / remove a vdev or the ZIL device either,
because the system tells me there is no zpool 'tank1'.
In the last ten days I have read a lot of threads, guides, and ZFS best-practice
documentation, but I have not found a solution to my problem. I created a fake
zpool with a separate ZIL device to combine the new ZIL file with my old zpool
for importing, but it doesn't work because of the different GUID and checksum
(I modified the name with a binary editor).
The output of:
eee@opensolaris:~# zdb -e tank1
Configuration for import:
        vdev_children: 2
        version: 22
        pool_guid: 5048704328421749681
        name: 'tank1'
        state: 0
        hostid: 946038
        hostname: 'opensolaris'
        vdev_tree:
            type: 'root'
            id: 0
            guid: 5048704328421749681
            children[0]:
                type: 'raidz'
                id: 0
                guid: 16723866123388081610
                nparity: 2
                metaslab_array: 23
                metaslab_shift: 30
                ashift: 9
                asize: 7001340903424
                is_log: 0
                create_txg: 4
                children[0]:
                    type: 'disk'
                    id: 0
                    guid: 6858138566678362598
                    phys_path: '/pci@0,0/pci8086,244e@1e/pci11ab,11ab@9/disk@0,0:a'
                    whole_disk: 1
                    DTL: 4345
                    create_txg: 4
                    path: '/dev/dsk/c7t5d0s0'
                    devid: 'id1,sd@SATA_____SAMSUNG_HD103UJ_______S13PJ1BQ709050/a'
                children[1]:
                    type: 'disk'
                    id: 1
                    guid: 16136237447458434520
                    phys_path: '/pci@0,0/pci8086,244e@1e/pci11ab,11ab@9/disk@1,0:a'
                    whole_disk: 1
                    DTL: 4344
                    create_txg: 4
                    path: '/dev/dsk/c7t0d0s0'
                    devid: 'id1,sd@SATA_____SAMSUNG_HD103UJ_______S13PJDWQ317311/a'
                children[2]:
                    type: 'disk'
                    id: 2
                    guid: 10876853602231471126
                    phys_path: '/pci@0,0/pci8086,244e@1e/pci11ab,11ab@9/disk@2,0:a'
                    whole_disk: 1
                    DTL: 4343
                    create_txg: 4
                    path: '/dev/dsk/c7t6d0s0'
                    devid: 'id1,sd@SATA_____Hitachi_HDT72101______STF604MH14S56W/a'
                children[3]:
                    type: 'disk'
                    id: 3
                    guid: 2384677379114262201
                    phys_path: '/pci@0,0/pci8086,244e@1e/pci11ab,11ab@9/disk@3,0:a'
                    whole_disk: 1
                    DTL: 4342
                    create_txg: 4
                    path: '/dev/dsk/c7t3d0s0'
                    devid: 'id1,sd@SATA_____SAMSUNG_HD103UJ_______S13PJ1NQ811135/a'
                children[4]:
                    type: 'disk'
                    id: 4
                    guid: 15143849195434333247
                    phys_path: '/pci@0,0/pci8086,244e@1e/pci11ab,11ab@9/disk@4,0:a'
                    whole_disk: 1
                    DTL: 4341
                    create_txg: 4
                    path: '/dev/dsk/c7t1d0s0'
                    devid: 'id1,sd@SATA_____Hitachi_HDT72101______STF604MH16V73W/a'
                children[5]:
                    type: 'disk'
                    id: 5
                    guid: 11627603446133164653
                    phys_path: '/pci@0,0/pci8086,244e@1e/pci11ab,11ab@9/disk@5,0:a'
                    whole_disk: 1
                    DTL: 4340
                    create_txg: 4
                    path: '/dev/dsk/c7t4d0s0'
                    devid: 'id1,sd@SATA_____SAMSUNG_HD103UJ_______S13PJDWQ317308/a'
                children[6]:
                    type: 'disk'
                    id: 6
                    guid: 15036924286456611863
                    phys_path: '/pci@0,0/pci8086,244e@1e/pci11ab,11ab@9/disk@6,0:a'
                    whole_disk: 1
                    DTL: 4338
                    create_txg: 4
                    path: '/dev/dsk/c7t2d0s0'
                    devid: 'id1,sd@SATA_____Hitachi_HDS72101______JP2921HQ0KMEZA/a'
            children[1]:
                type: 'missing'
                id: 1
                guid: 0
does not give me the GUID of the old ZIL device and ends without returning to
a prompt (the process hangs).
Now I have added a ZIL device, same as the old one, to the fake zpool, exported
it, and tried to compile logfix, but that fails too.
eee@opensolaris:~/Downloads/logfix# make
make: Fatal error in reader: Makefile, line 9: Unexpected end of line seen
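The "Unexpected end of line seen" error is what Sun's /usr/ccs/bin/make
typically reports when it is given a GNU-style Makefile; gmake, if installed,
may get further (untested, and the package name below is only a guess):

eee@opensolaris:~/Downloads/logfix# which gmake || pkg install SUNWgmake
eee@opensolaris:~/Downloads/logfix# gmake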
I'm just an (advanced) user, not a developer, so I can't work with C source
code, and my knowledge of the osol system is just lousy.
I need some help, please!
Thanks for any replies.
Best regards
Ron
--
This message posted from opensolaris.org
R. Eulenberg wrote:
> Sorry for reviving this old thread.
> [...]
> I need some help, please!
> Thanks for any replies.
>
> Best regards
> Ron

Hi,

I just recovered from a very similar zfs crash. What I did was:

Added the following to /etc/system. This apparently sets zdb into write mode.

set zfs:zfs_recover=1
set aok=1

Then ran the following command:

zdb -e -bcsvL <zpool-name>

Regards,
Sigbjorn
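PS: The same two variables can usually also be set on a running kernel with
mdb, instead of editing /etc/system and rebooting -- untested here, so use it
with care:

# echo "aok/W 1" | mdb -kw
# echo "zfs_recover/W 1" | mdb -kw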
On Jun 4, 2010, at 5:01 PM, Sigbjørn Lie wrote:
> R. Eulenberg wrote:
>> Sorry for reviving this old thread.
>> [...]
>> I need some help, please!
>> Thanks for any replies.
>
> Hi,
>
> I just recovered from a very similar zfs crash. What I did was:
>
> Added the following to /etc/system. This apparently sets zdb into write mode.
>
> set zfs:zfs_recover=1
> set aok=1

This is not going to help in this case.

Btw, before applying these parameters, it is good to make sure that you fully
understand why they are needed and what the consequences may be.

> Then ran the following command:
> zdb -e -bcsvL <zpool-name>
>
> Regards,
> Sigbjorn
Victor Latushkin wrote:
> On Jun 4, 2010, at 5:01 PM, Sigbjørn Lie wrote:
>> I just recovered from a very similar zfs crash. What I did was:
>> Added the following to /etc/system. This apparently sets zdb into write mode.
>>
>> set zfs:zfs_recover=1
>> set aok=1
>
> This is not going to help in this case. Btw, before applying these
> parameters, it is good to make sure that you fully understand why they are
> needed and what the consequences may be.

That's a valid point. Hence why I started the thread about ZFS recovery
documentation. Or maybe I have missed out on some info, like what David
pointed out for me?
Hi,

thanks for helping me. I wanted to solve the problem with logfix, but I'm not
able to compile it. Where could I find a newer version of logfix to try again?

Regards,
Ron
--
This message posted from opensolaris.org
Hi,

yesterday I changed the /etc/system file and ran:

zdb -e -bcsvL tank1

without any output and without getting a prompt back (the process hangs), and
got the same result from running:

zdb -eC tank1

Regards
Ron
--
This message posted from opensolaris.org