Benjamin Brumaire
2008-Apr-29 05:33 UTC
[zfs-discuss] recovering data from a detached mirrored vdev
Hi,

my system (Solaris b77) was physically destroyed and I lost the data saved in a zpool mirror. The only thing left is a detached vdev from the pool. I'm aware that the uberblock is gone and that I can't import the pool, but I still hope there is a way or a tool (like TCT, http://www.porcupine.org/forensics/) I can use to recover at least some of the data.

Thanks in advance for any hints.

bbr
Jeff Bonwick
2008-Apr-29 06:17 UTC
[zfs-discuss] recovering data from a detached mirrored vdev
If your entire pool consisted of a single mirror of two disks, A and B, and you detached B at some point in the past, you *should* be able to recover the pool as it existed when you detached B. However, I just tried that experiment on a test pool and it didn't work. I will investigate further and get back to you. I suspect it's perfectly doable, just currently disallowed due to some sort of error check that's a little more conservative than necessary.

Keep that disk!

Jeff
Benjamin Brumaire
2008-Apr-29 07:15 UTC
[zfs-discuss] recovering data from a detached mirrored vdev
Jeff, thank you very much for taking the time to look at this.

My entire pool consisted of a single mirror of two slices on different disks, A and B. I attached a third slice on disk C, waited for the resilver, and then detached it. Now disks A and B are burned and I have only disk C at hand.

bbr
Jeff Bonwick
2008-Apr-29 09:41 UTC
[zfs-discuss] recovering data from a detached mirrored vdev
Urgh. This is going to be harder than I thought -- not impossible, just hard.

When we detach a disk from a mirror, we write a new label to indicate that the disk is no longer in use. As a side effect, this zeroes out all the old uberblocks. That's the bad news -- you have no uberblocks.

The good news is that the uberblock only contains one field that's hard to reconstruct: ub_rootbp, which points to the root of the block tree. The root block *itself* is still there -- we just have to find it.

The root block has a known format: it's a compressed objset_phys_t, almost certainly one sector in size (it could be two, but that's very unlikely because the root objset_phys_t is highly compressible). It should be possible to write a program that scans the disk, reading each sector and attempting to decompress it. If it decompresses into exactly 1K (the size of an uncompressed objset_phys_t), then we can look at all the fields to see if they look plausible. Among all the candidates we find, the one whose embedded meta-dnode has the highest birth time in its dn_blkptr is the one we want.

I need to get some sleep now, but I'll code this up in a couple of days and we can take it from there. If this is time-sensitive, let me know and I'll see if I can find someone else to drive it. [I've got a bunch of commitments tomorrow, plus I'm supposed to be on vacation... typical... ;-)]

Jeff
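To make the scan concrete, here is a minimal sketch of the brute-force search described above. It assumes you can build against the lzjb_decompress() routine and the objset_phys_t/dnode_phys_t definitions from the OpenSolaris ZFS source tree, and the plausibility test (a sane dn_nlevels plus a nonzero birth txg in the meta-dnode's dn_blkptr) is only a guess at what "looks plausible" should mean; treat it as a starting point, not a tested recovery tool.

/* rootbp_scan.c -- look for candidate compressed root objset blocks */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/dmu_objset.h>	/* objset_phys_t (from the ZFS source tree) */
#include <sys/dnode.h>		/* dnode_phys_t */

extern int lzjb_decompress(void *src, void *dst, size_t s_len,
    size_t d_len, int n);

int
main(int argc, char **argv)
{
	char sector[512], out[1024];	/* 1024 == sizeof (objset_phys_t) here */
	objset_phys_t *osp = (objset_phys_t *)out;
	dnode_phys_t *mdn = &osp->os_meta_dnode;
	uint64_t off, birth, best_birth = 0, best_off = 0;
	int fd;

	if (argc != 2 || (fd = open(argv[1], O_RDONLY)) == -1)
		return (1);

	for (off = 0; pread64(fd, sector, sizeof (sector), off) ==
	    (ssize_t)sizeof (sector); off += sizeof (sector)) {
		/* does this sector decompress into exactly 1K? */
		if (lzjb_decompress(sector, out, sizeof (sector),
		    sizeof (out), 0) != 0)
			continue;
		/* crude plausibility checks on the embedded meta-dnode */
		birth = mdn->dn_blkptr[0].blk_birth;
		if (birth == 0 || mdn->dn_nlevels == 0 || mdn->dn_nlevels > 10)
			continue;
		(void) printf("candidate at offset %llu, birth txg %llu\n",
		    (u_longlong_t)off, (u_longlong_t)birth);
		if (birth > best_birth) {
			best_birth = birth;
			best_off = off;
		}
	}
	(void) printf("best: offset %llu, birth txg %llu\n",
	    (u_longlong_t)best_off, (u_longlong_t)best_birth);
	return (0);
}

Scanning on 512-byte boundaries is sufficient because ZFS writes blocks at sector granularity; the price is one decompression attempt per sector across the whole slice, so expect the pass to take a while.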
Benjamin Brumaire
2008-Apr-29 12:34 UTC
[zfs-discuss] recovering data from a detached mirrored vdev
If I understand you correctly, the steps to follow are:

  - read each sector (is dd bs=512 count=1 skip=n enough?)
  - decompress it (are there any tools implementing the lzjb algorithm? -- see the sketch below)
  - check that the result is exactly 1024 bytes
  - check that the structure looks like an objset_phys_t
  - take the candidate with the highest birth time as the root block
  - reconstruct the uberblock from it

Unfortunately I can't help with a C program, but I will be happy to support you in any other way. Don't consider it time-sensitive; the data is very important, but I can continue my business without it.

Again, thank you very much for your help. I really appreciate it.

bbr
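On the lzjb question above: there is no standalone tool, but the decompressor is tiny and lives in the OpenSolaris sources (usr/src/uts/common/fs/zfs/lzjb.c). The following self-contained sketch is reconstructed from memory of that file, so verify it against the real source before trusting it on recovered data.

/* lzjb_d.c -- self-contained sketch of the ZFS lzjb decompressor */
#include <stddef.h>

#define	NBBY		8
#define	MATCH_BITS	6
#define	MATCH_MIN	3
#define	OFFSET_MASK	((1 << (16 - MATCH_BITS)) - 1)

int
lzjb_decompress(void *s_start, void *d_start, size_t s_len, size_t d_len, int n)
{
	unsigned char *src = s_start;
	unsigned char *dst = d_start;
	unsigned char *d_end = (unsigned char *)d_start + d_len;
	unsigned char *cpy, copymap = 0;
	int copymask = 1 << (NBBY - 1);

	while (dst < d_end) {
		/* every 8 items are preceded by a copy-map byte */
		if ((copymask <<= 1) == (1 << NBBY)) {
			copymask = 1;
			copymap = *src++;
		}
		if (copymap & copymask) {
			/* back-reference: 6-bit length, 10-bit offset */
			int mlen = (src[0] >> (NBBY - MATCH_BITS)) + MATCH_MIN;
			int offset = ((src[0] << NBBY) | src[1]) & OFFSET_MASK;
			src += 2;
			if ((cpy = dst - offset) < (unsigned char *)d_start)
				return (-1);	/* corrupt input */
			while (--mlen >= 0 && dst < d_end)
				*dst++ = *cpy++;
		} else {
			*dst++ = *src++;	/* literal byte */
		}
	}
	return (0);
}

Feeding it one sector at a time with s_len = 512 and d_len = 1024 is what the scan idea above boils down to; s_len and n are unused here, as in the original.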
John R. Sconiers II
2008-Apr-30 05:12 UTC
[zfs-discuss] importing a pool when a device is missing
I did a fresh install of Nevada. I have two zpools that contain the devices c0t0d0s4 and c0t1d0s4. I couldn't find a way to attach the missing device without the pool being imported. Any help would be appreciated.

bash-3.2# zpool import
  pool: nfs-share
    id: 6871731259521181476
 state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-6X
config:

        nfs-share    UNAVAIL  missing device
          c0t0d0s4   ONLINE

        Additional devices are known to be part of this pool, though their
        exact configuration cannot be determined.

bash-3.2# zpool import -Df nfs-share
cannot import 'nfs-share': no such pool available
bash-3.2#

--
John R. Sconiers II, MISM, SCSA, SCNA, SCSECA, SCSASC
Sun Microsystems TSC National Storage Support Engineer
708-203-9228 cell / 708-838-7097 access line / fax
Chicago, IL USA
"History is a nightmare from which I am trying to awake." -- James Joyce
Benjamin Brumaire
2008-May-02 10:31 UTC
[zfs-discuss] recovering data from a detached mirrored vdev
Hi,

while diving deeply into ZFS in order to recover the data, I found that every uberblock in label 0 has the same ub_rootbp and a zeroed ub_txg. Does that mean only ub_txg was touched while detaching?

Hoping that is the case, I modified ub_txg in one uberblock to match the txg from the label, and now I am trying to calculate the new SHA256 checksum, but I failed. Can someone explain what I did wrong? And of course how to do it correctly?

bbr

The example is from a valid uberblock which belongs to another pool.

Dumping the active uberblock in label 0:

# dd if=/dev/dsk/c0d1s4 bs=1 iseek=247808 count=1024 | od -x
1024+0 records in
1024+0 records out
0000000 b10c 00ba 0000 0000 0009 0000 0000 0000
0000020 8bf2 0000 0000 0000 8eef f6db c46f 4dcc
0000040 bba8 481a 0000 0000 0001 0000 0000 0000
0000060 05e6 0003 0000 0000 0001 0000 0000 0000
0000100 05e6 005b 0000 0000 0001 0000 0000 0000
0000120 44e9 00b2 0000 0000 0001 0000 0703 800b
0000140 0000 0000 0000 0000 0000 0000 0000 0000
0000160 0000 0000 0000 0000 8bf2 0000 0000 0000
0000200 0018 0000 0000 0000 a981 2f65 0008 0000
0000220 e734 adf2 037a 0000 cedc d398 c063 0000
0000240 da03 8a6e 26fc 001c 0000 0000 0000 0000
0000260 0000 0000 0000 0000 0000 0000 0000 0000
*
0001720 0000 0000 0000 0000 7a11 b10c da7a 0210
0001740 3836 20fb e2a7 a737 a947 feed 43c5 c045
0001760 82a8 133d 0ba7 9ce7 e5d5 64e2 2474 3b03
0002000

The checksum is at offsets 01740-01760.

I tried to calculate it assuming only the uberblock itself is relevant:

# dd if=/dev/dsk/c0d1s4 bs=1 iseek=247808 count=168 | digest -a sha256
168+0 records in
168+0 records out
710306650facf818e824db5621be394f3b3fe934107bdfc861bbc82cb9e1bbf3

Alas, not matching :-(
Darren J Moffat
2008-May-02 11:24 UTC
[zfs-discuss] recovering data from a detached mirrored vdev
Benjamin Brumaire wrote:
> I tried to calculate it assuming only the uberblock itself is relevant:
> # dd if=/dev/dsk/c0d1s4 bs=1 iseek=247808 count=168 | digest -a sha256
> 168+0 records in
> 168+0 records out
> 710306650facf818e824db5621be394f3b3fe934107bdfc861bbc82cb9e1bbf3

Is this on SPARC or x86?

ZFS stores the SHA256 checksum in 4 words in big-endian format, see
http://src.opensolaris.org/source/xref/zfs-crypto/gate/usr/src/uts/common/fs/zfs/sha256.c

--
Darren J Moffat
Benjamin Brumaire
2008-May-02 11:38 UTC
[zfs-discuss] recovering data from a detached mirrored vdev
It is on x86. Does that mean I have to split the output from digest into 4 words (8 bytes each) and reverse the bytes of each word before comparing with the stored value?

bbr
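To make the word layout concrete, here is a small sketch that regroups the 32 digest bytes printed by digest(1) above into the four 64-bit big-endian words ZFS keeps in zc_word[]; on a label written by an x86 host, od -x shows those words byte-swapped relative to the plain hex string. Note this only illustrates the byte order: as the labelfix source later in the thread shows, the label checksum is computed over the whole 1K uberblock slot with an embedded zio_block_tail_t seeded by the slot's offset, so a digest of just the 168-byte uberblock will still not match.

/* cksum_words.c -- print a SHA-256 digest as the four words ZFS stores */
#include <stdio.h>
#include <stdint.h>

int
main(void)
{
	/* the digest from "digest -a sha256" above, as raw bytes */
	static const unsigned char d[32] = {
	    0x71, 0x03, 0x06, 0x65, 0x0f, 0xac, 0xf8, 0x18,
	    0xe8, 0x24, 0xdb, 0x56, 0x21, 0xbe, 0x39, 0x4f,
	    0x3b, 0x3f, 0xe9, 0x34, 0x10, 0x7b, 0xdf, 0xc8,
	    0x61, 0xbb, 0xc8, 0x2c, 0xb9, 0xe1, 0xbb, 0xf3 };
	uint64_t w;
	int i, j;

	for (i = 0; i < 4; i++) {
		/* regroup 8 digest bytes into one big-endian 64-bit word */
		for (w = 0, j = 0; j < 8; j++)
			w = (w << 8) | d[i * 8 + j];
		(void) printf("zc_word[%d] = 0x%016llx\n", i,
		    (unsigned long long)w);
	}
	return (0);
}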
Jeff Bonwick
2008-May-04 05:48 UTC
[zfs-discuss] recovering data from a detached mirrored vdev
Oh, you're right! Well, that will simplify things! All we have to do is convince a few bits of code to ignore ub_txg == 0. I'll try a couple of things and get back to you in a few hours...

Jeff
Jeff Bonwick
2008-May-04 08:21 UTC
[zfs-discuss] recovering data from a detached mirrored vdev
OK, here you go. I've successfully recovered a pool from a detached device using the attached binary. You can verify its integrity against the following MD5 hash:

# md5sum labelfix
ab4f33d99fdb48d9d20ee62b49f11e20  labelfix

It takes just one argument -- the disk to repair:

# ./labelfix /dev/rdsk/c0d1s4

If all goes according to plan, your old pool should be importable. If you do a zpool status -v, it will complain that the old mirrors are no longer there. You can clean that up by detaching them:

# zpool detach mypool <guid>

where <guid> is the long integer that zpool status -v reports as the name of the missing device.

Good luck, and please let us know how it goes!

Jeff

-------------- next part --------------
A non-text attachment was scrubbed...
Name: labelfix
Type: application/octet-stream
Size: 15252 bytes
URL: <http://mail.opensolaris.org/pipermail/zfs-discuss/attachments/20080504/9e6a5662/attachment.obj>
Jeff Bonwick
2008-May-04 08:42 UTC
[zfs-discuss] recovering data from a detached mirrored vdev
Oh, and here's the source code, for the curious:

#include <devid.h>
#include <dirent.h>
#include <errno.h>
#include <libintl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>
#include <fcntl.h>
#include <stddef.h>
#include <sys/vdev_impl.h>

/*
 * Write a label block with a ZBT checksum.
 */
static void
label_write(int fd, uint64_t offset, uint64_t size, void *buf)
{
	zio_block_tail_t *zbt, zbt_orig;
	zio_cksum_t zc;

	zbt = (zio_block_tail_t *)((char *)buf + size) - 1;
	zbt_orig = *zbt;

	ZIO_SET_CHECKSUM(&zbt->zbt_cksum, offset, 0, 0, 0);

	zio_checksum(ZIO_CHECKSUM_LABEL, &zc, buf, size);

	VERIFY(pwrite64(fd, buf, size, offset) == size);

	*zbt = zbt_orig;
}

int
main(int argc, char **argv)
{
	int fd;
	vdev_label_t vl;
	nvlist_t *config;
	uberblock_t *ub = (uberblock_t *)vl.vl_uberblock;
	uint64_t txg;
	char *buf;
	size_t buflen;

	VERIFY(argc == 2);
	VERIFY((fd = open(argv[1], O_RDWR)) != -1);

	/* read label 0 and check that this is a detached vdev: txg 0 everywhere */
	VERIFY(pread64(fd, &vl, sizeof (vdev_label_t), 0) ==
	    sizeof (vdev_label_t));
	VERIFY(nvlist_unpack(vl.vl_vdev_phys.vp_nvlist,
	    sizeof (vl.vl_vdev_phys.vp_nvlist), &config, 0) == 0);
	VERIFY(nvlist_lookup_uint64(config, ZPOOL_CONFIG_POOL_TXG, &txg) == 0);
	VERIFY(txg == 0);
	VERIFY(ub->ub_txg == 0);
	VERIFY(ub->ub_rootbp.blk_birth != 0);

	/* resurrect a plausible txg from the root block pointer's birth time */
	txg = ub->ub_rootbp.blk_birth;
	ub->ub_txg = txg;

	VERIFY(nvlist_remove_all(config, ZPOOL_CONFIG_POOL_TXG) == 0);
	VERIFY(nvlist_add_uint64(config, ZPOOL_CONFIG_POOL_TXG, txg) == 0);
	buf = vl.vl_vdev_phys.vp_nvlist;
	buflen = sizeof (vl.vl_vdev_phys.vp_nvlist);
	VERIFY(nvlist_pack(config, &buf, &buflen, NV_ENCODE_XDR, 0) == 0);

	/* rewrite the uberblock slot and the vdev nvlist with fresh label checksums */
	label_write(fd, offsetof(vdev_label_t, vl_uberblock),
	    1ULL << UBERBLOCK_SHIFT, ub);

	label_write(fd, offsetof(vdev_label_t, vl_vdev_phys),
	    VDEV_PHYS_SIZE, &vl.vl_vdev_phys);

	fsync(fd);

	return (0);
}

Jeff
Cyril Plisko
2008-May-04 10:34 UTC
[zfs-discuss] recovering data from a detached mirrored vdev
On Sun, May 4, 2008 at 11:42 AM, Jeff Bonwick <Jeff.Bonwick at sun.com> wrote:
> Oh, and here's the source code, for the curious:

[snipped]

>	label_write(fd, offsetof(vdev_label_t, vl_uberblock),
>	    1ULL << UBERBLOCK_SHIFT, ub);
>
>	label_write(fd, offsetof(vdev_label_t, vl_vdev_phys),
>	    VDEV_PHYS_SIZE, &vl.vl_vdev_phys);

Jeff,

is it enough to overwrite only one label? Aren't there four of them?

--
Regards,
Cyril
Mario Goebbels
2008-May-04 11:23 UTC
[zfs-discuss] recovering data from a detached mirrored vdev
> Oh, and here's the source code, for the curious:

The forensics project will be all over this, I hope, and wrap it up in a nice command-line tool.

-mg
Benjamin Brumaire
2008-May-04 17:01 UTC
[zfs-discuss] recovering data from a detached mirrored vdev
Well, thanks to your program I could recover the data on the detached disk. Now I am copying the data to other disks and will resilver it inside the pool.

Warm words aren't enough to express how I feel. This community is great. Thank you very much.

bbr
Robert Milkowski
2008-May-06 07:15 UTC
[zfs-discuss] recovering data from a detached mirrored vdev
Hello Cyril,

Sunday, May 4, 2008, 11:34:28 AM, you wrote:

CP> is it enough to overwrite only one label ? Isn't there four of them ?

If the checksum is OK, IIRC the last one (most recent timestamp) is going to be used.

--
Best regards,
Robert Milkowski
mailto:milek at task.gda.pl
http://milek.blogspot.com
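For background on the "four of them": each vdev carries two label copies at the front of the device and two at the back. A sketch of roughly where they sit (it ignores the rounding of the device size to a label-sized boundary that the real code performs, so the back offsets are approximate):

/* label_offsets.c -- approximate byte offsets of the four vdev labels */
#include <stdio.h>
#include <stdint.h>

#define	VDEV_LABEL_SIZE	(256ULL * 1024)	/* sizeof (vdev_label_t): 256K */

int
main(void)
{
	/* example device size; substitute the size of your slice */
	uint64_t psize = 36ULL * 1024 * 1024 * 1024;
	uint64_t off[4];
	int l;

	off[0] = 0;				/* L0: front of device */
	off[1] = VDEV_LABEL_SIZE;		/* L1: right after L0 */
	off[2] = psize - 2 * VDEV_LABEL_SIZE;	/* L2: near the end */
	off[3] = psize - VDEV_LABEL_SIZE;	/* L3: end of device */

	for (l = 0; l < 4; l++)
		(void) printf("label %d at byte offset %llu\n", l,
		    (unsigned long long)off[l]);
	return (0);
}

zdb -l <device> dumps all four labels and is a quick way to see which copies are intact.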
Darren J Moffat
2008-May-06 10:16 UTC
[zfs-discuss] recovering data from a detached mirrored vdev
Great tool. Any chance we can have it integrated into zpool(1M), so that it can find and "fix up" detached vdevs as new pools on import?

I'd think it would be reasonable to extend the meaning of 'zpool import -D' to list detached vdevs as well as destroyed pools.

--
Darren J Moffat
Jeff Bonwick
2008-May-07 07:45 UTC
[zfs-discuss] recovering data from a detached mirrored vdev
Yes, I think that would be useful. Something like 'zpool revive' or 'zpool undead'. It would not be completely general-purpose -- in a pool with multiple mirror devices, it could only work if all replicas were detached in the same txg -- but for the simple case of a single top-level mirror vdev, or a clean 'zpool split', it's actually pretty straightforward.

Jeff
Darren J Moffat
2008-May-07 09:44 UTC
[zfs-discuss] recovering data from a detached mirrored vdev
Jeff Bonwick wrote:
> Yes, I think that would be useful. Something like 'zpool revive'
> or 'zpool undead'.

Why a new subcommand, when 'zpool import' got '-D' to revive destroyed pools?

> It would not be completely general-purpose --
> in a pool with multiple mirror devices, it could only work if
> all replicas were detached in the same txg -- but for the simple
> case of a single top-level mirror vdev, or a clean 'zpool split',
> it's actually pretty straightforward.

'zpool split' is the functionality needed: take a side of a mirror and make a new unmirrored pool from it. However, I think many people are likely to attempt 'zpool detach' because of experience with volume managers such as SVM (ODS, LVM, whatever you want to call it this week), where you type 'metadetach'. Though of course that won't work in the case where there is actually a stripe of mirrors, so 'zpool split' is needed to deal with the non-trivial case anyway.

--
Darren J Moffat
Robert Milkowski
2008-May-08 07:39 UTC
[zfs-discuss] recovering data from a detached mirrored vdev
Hello Darren,

Tuesday, May 6, 2008, 11:16:25 AM, you wrote:

DJM> Great tool, any chance we can have it integrated into zpool(1M) so that
DJM> it can find and "fixup" on import detached vdevs as new pools ?

I remember some posts from a long time ago about 'zpool split', so one could split a pool in two (assuming the pool is mirrored).

--
Best regards,
Robert Milkowski
mailto:milek at task.gda.pl
http://milek.blogspot.com
Jesus Cea
2008-May-09 04:03 UTC
[zfs-discuss] recovering data from a detached mirrored vdev
Darren J Moffat wrote:
| Great tool, any chance we can have it integrated into zpool(1M) so that
| it can find and "fixup" on import detached vdevs as new pools ?
|
| I'd think it would be reasonable to extend the meaning of
| 'zpool import -D' to list detached vdevs as well as destroyed pools.

+inf :-)

--
Jesus Cea Avion - jcea at jcea.es - http://www.jcea.es/
jabber / xmpp: jcea at jabber.org
Ron Halstead
2008-Oct-09 15:33 UTC
[zfs-discuss] recovering data from a detached mirrored vdev
Jeff,

Sorry this is so late. Thanks for the labelfix binary. I would like to have one compiled for SPARC. I tried compiling your source code, but it threw up many errors. I'm not a programmer, and reading the source code means absolutely nothing to me. One error was:

cc labelfix.c
"labelfix.c", line 1: #include directive missing file name

Many more of those, plus others. Which compiler did you use? I tried gcc and SUNWspro with the same results. This tool would really be handy at work, as almost all of our Solaris 10 machines have mirrored zpools for data. Hope you can help.

--ron
I'm wondering if this bug is fixed and, if not, what the bug number is:

> If your entire pool consisted of a single mirror of two disks, A and B,
> and you detached B at some point in the past, you *should* be able to
> recover the pool as it existed when you detached B. However, I just
> tried that experiment on a test pool and it didn't work.

PS: Thanks for helping that guy (just a fellow user) out :)
I was wondering if this ever made it into ZFS as a fix for bad labels?

On Wed, 7 May 2008, Jeff Bonwick wrote:

> Yes, I think that would be useful. Something like 'zpool revive'
> or 'zpool undead'. It would not be completely general-purpose --
> in a pool with multiple mirror devices, it could only work if
> all replicas were detached in the same txg -- but for the simple
> case of a single top-level mirror vdev, or a clean 'zpool split',
> it's actually pretty straightforward.
>
> Jeff