I have a problem with zpool import after problems with 2 disks in a RAID 5 (hardware RAID) array. There are some bad blocks on those disks.

# zpool import
  ...
   state: FAULTED
  status: The pool metadata is corrupted.
  ...

# zdb -l /dev/rdsk/c4t600C0FF00000000009258F4855B59001d0s0 looks OK.

I managed to find that the uberblock is OK, but the import fails while reading the first dataset. All 3 copies of the block return a checksum error (output from mdb):

> 0x495ef80::print -a -t mirror_map_t mm_child[0]
{
    495ef90 vdev_t *mm_child[0].mc_vd = 0x49b20c0
    495ef98 uint64_t mm_child[0].mc_offset = 0x2023216000
    495efa0 int mm_child[0].mc_error = 0x32
    495efa4 short mm_child[0].mc_tried = 0x1
    495efa6 short mm_child[0].mc_skipped = 0
}
> 0x495ef80::print -a -t mirror_map_t mm_child[1]
{
    495efa8 vdev_t *mm_child[1].mc_vd = 0x49b20c0
    495efb0 uint64_t mm_child[1].mc_offset = 0x166234f3a00
    495efb8 int mm_child[1].mc_error = 0x32
    495efbc short mm_child[1].mc_tried = 0x1
    495efbe short mm_child[1].mc_skipped = 0
}
> 0x495ef80::print -a -t mirror_map_t mm_child[2]
{
    495efc0 vdev_t *mm_child[2].mc_vd = 0x49b20c0
    495efc8 uint64_t mm_child[2].mc_offset = 0x2ba4ac88a00
    495efd0 int mm_child[2].mc_error = 0x32
    495efd4 short mm_child[2].mc_tried = 0x1
    495efd6 short mm_child[2].mc_skipped = 0
}

What can I do to get my data back? Is there a way to import the zpool using another uberblock (from a previous txg)?

-- Lukas Karwacki
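For readers not familiar with the on-disk format, a couple of notes (my interpretation, not something stated above): mc_error 0x32 is decimal 50, the errno ZFS uses for a checksum failure, so all three ditto copies of that metadata block failed verification. As for importing from a previous txg: each vdev label keeps a ring of uberblocks rather than just the active one, and import normally activates the entry with the highest transaction group that verifies, so an older entry can in principle still point at intact metadata. A trimmed sketch of the structure, after the OpenSolaris headers (the layout is an approximation and may differ by build):

#include <stdint.h>

/* Placeholder for the real 128-byte block pointer; definition omitted here. */
typedef struct blkptr { uint8_t bp_pad[128]; } blkptr_t;

/*
 * Trimmed sketch of the on-disk uberblock, after the OpenSolaris header
 * uts/common/fs/zfs/sys/uberblock_impl.h.  Each vdev label carries a ring
 * of these; import normally activates the one with the highest ub_txg that
 * still passes its checksum.
 */
typedef struct uberblock {
	uint64_t	ub_magic;	/* UBERBLOCK_MAGIC, 0x00bab10c */
	uint64_t	ub_version;	/* on-disk format version */
	uint64_t	ub_txg;		/* transaction group of last sync */
	uint64_t	ub_guid_sum;	/* sum of the vdev guids */
	uint64_t	ub_timestamp;	/* UTC seconds of last sync */
	blkptr_t	ub_rootbp;	/* block pointer to the pool's MOS */
} uberblock_t;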
I managed to recover my data after 3 days of fighting. A few system changes were needed:

- disable the ZIL
- enable read-only mode
- disable zil_replay during mount
- change the function that chooses the uberblock

On snv_78:

# mdb -kw
> zil_disable/W 1
zil_disable:                    0               =       0x1
> spa_mode/W 1
spa_mode:                       0x3             =       0x1
> vdev_uberblock_compare+0x49/W 1
vdev_uberblock_compare+0x49:    0xffffffff      =       0x1
> vdev_uberblock_compare+0x3b/W 1
vdev_uberblock_compare+0x3b:    0xffffffff      =       0x1
> zfsvfs_setup+0x60/v 0xeb
zfsvfs_setup+0x60:              0x74            =       0xeb

These changes allowed me to import the zpool, and I am now copying the data off.

After the disk failure the RAID controller had cleared its caches and some data was lost. ZFS is a copy-on-write system, so all I had to do to recover the data was make the import use a different uberblock (the first ZFS block). I think ZFS should be able to fall back to another uberblock itself when zpool import fails.
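My reading of those pokes, for anyone trying to follow along (this is an interpretation, not spelled out in the post): zil_disable=1 and spa_mode=1 put the pool into no-ZIL, read-only operation; the two /W 1 writes overwrite the -1 (0xffffffff) return immediates inside vdev_uberblock_compare, so the ordering is effectively inverted and an older uberblock wins the label scan; and the /v 0xeb write turns a conditional jump (opcode 0x74, je) in zfsvfs_setup into an unconditional short jump (0xeb, jmp), which skips the ZIL replay step during mount. For reference, the patched comparison function looks roughly like this in the OpenSolaris sources (a sketch, not build-exact):

/*
 * vdev_uberblock_compare(), roughly as in
 * usr/src/uts/common/fs/zfs/vdev_label.c around snv_78.  The label scan
 * keeps whichever uberblock this function says is "greater", so normally
 * the newest txg wins; flipping the two -1 immediates to 1 makes an older
 * uberblock win instead.
 */
static int
vdev_uberblock_compare(uberblock_t *ub1, uberblock_t *ub2)
{
	if (ub1->ub_txg < ub2->ub_txg)
		return (-1);	/* immediate presumably hit by the +0x3b write */
	if (ub1->ub_txg > ub2->ub_txg)
		return (1);

	if (ub1->ub_timestamp < ub2->ub_timestamp)
		return (-1);	/* immediate presumably hit by the +0x49 write */
	if (ub1->ub_timestamp > ub2->ub_timestamp)
		return (1);

	return (0);
}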
Hi Lukas,

I've encountered a problem similar to yours, where a ZFS pool became inaccessible after a reboot with the error "The pool metadata is corrupted." In my case I'm running Solaris 10 8/07 with patch 127112-11.

Can you explain how you determined the offsets for modifying vdev_uberblock_compare, and whether it is possible to do so without source code, since this is Solaris 10? I'd also be willing to put the drive on a Nevada box, but the current download is build 85, so I suspect there will be some differences from your build 78 examples anyway.

You're the only person I've seen who has been able to recover from this error, so I would greatly appreciate more detail on how you got access to your data again. Thanks!

James Lick

> I managed to recover my data after 3 days of fighting.
> ...
> > vdev_uberblock_compare+0x49/W 1
> vdev_uberblock_compare+0x49:    0xffffffff      =       0x1
> > vdev_uberblock_compare+0x3b/W 1
> vdev_uberblock_compare+0x3b:    0xffffffff      =       0x1
> > zfsvfs_setup+0x60/v 0xeb
> zfsvfs_setup+0x60:              0x74            =       0xeb
>
> These changes allowed me to import the zpool, and I am now copying the data off.
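A guess at how the offsets can be found without source (not confirmed by Lukas): disassemble the routine in the kernel debugger with vdev_uberblock_compare::dis under mdb -k and look for the two instructions that load -1 (0xffffffff) into the return register; the /W 1 writes land on those 32-bit immediates, and they will sit at different offsets on Solaris 10 or build 85. Why inverting the comparison changes which uberblock gets activated is easiest to see in the label-scan callback, sketched below (trimmed and approximate, locking and assertions omitted):

/*
 * Sketch of the uberblock selection callback, vdev_uberblock_load_done()
 * in vdev_label.c (approximate; locking and assertions omitted).  Every
 * uberblock read from a label is handed to this callback, and the running
 * "best" copy is replaced whenever the candidate compares greater -- so
 * with the comparison inverted, an older uberblock ends up as the one the
 * import activates.
 */
static void
vdev_uberblock_load_done(zio_t *zio)
{
	uberblock_t *ub = zio->io_data;		/* candidate just read from the label */
	uberblock_t *ubbest = zio->io_private;	/* best candidate seen so far */

	if (zio->io_error == 0 && uberblock_verify(ub) == 0) {
		if (vdev_uberblock_compare(ub, ubbest) > 0)
			*ubbest = *ub;		/* candidate wins the scan */
	}

	zio_buf_free(zio->io_data, zio->io_size);
}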
Hi there,

Is there any chance you could go into a little more detail, perhaps even document the procedure, for the benefit of others experiencing a similar problem? We had a mirrored array which, after a power cut, shows the zpool as faulted, and we are keen to find a way to recover it.
> Hi there,
>
> Is there any chance you could go into a little more
> detail, perhaps even document the procedure, for the
> benefit of others experiencing a similar problem?

I have some spare time this weekend and will try to give more details.