Is it possible to recover a file system that existed prior to

  zpool create pool2 device

I had a mirror on device, which I detached, and then issued the create command hoping it would give me my old file system.

Thank you all.
-- 
This message posted from opensolaris.org
On Tue, 09 Jun 2009 17:51:25 PDT, stephen bond <no-reply at opensolaris.org> wrote:

> is it possible to recover a file system that existed prior to
>
> zpool create pool2 device
>
> I had a mirror on device which I detached and then issued
> the create command hoping it would give me my old file system.

That's close to impossible using that device alone; all labels and uberblocks have been overwritten. Your best chance is to destroy pool2 and attach the device to the original pool again as a mirror device. It should resilver by itself. If the original pool is lost, your data is lost.

Then you can detach it and import it on some other system as an unmirrored pool. In other words: you don't have to create a pool to access one side of a mirror. After all, it's a mirror, so the pool is already in place.

> thank you all.

Good luck.
-- 
( Kees Nuyt )
c[_]
Kees,

Is it possible to get at least the contents of /export/home? That is supposedly a separate file system. Is there a way to look for files using some low-level disk reading tool? If you are old enough to remember the 80s, there was stuff like PCTools that could read anywhere on the disk. I need some text files, which should be easy to recover. Are there any rules on how ZFS structures itself? Maybe the old file allocation table still exists and just needs to be restored.

Thank you very much,
Stephen
On Fri, 19 Jun 2009 11:50:07 PDT, stephen bond <no-reply at opensolaris.org> wrote:

> Kees,
>
> is it possible to get at least the contents of /export/home ?
>
> that is supposedly a separate file system.

That doesn't mean that data is in one particular spot on the disk. The blocks of the ZFS filesystems can be interspersed.

> is there a
> way to look for files using some low level disk reading
> tool. If you are old enough to remember the 80s
> there was stuff like PCTools that could read anywhere
> on the disk.

I am old enough. I was the proud owner of a 20 MByte harddisk back then (~1983). Disks were so much smaller; you could practically scroll through most of the contents in a few hours. The on-disk data structures are much more complicated now.

> I need some text files, which should be
> easy to recover.

You could read the device using dd and pipe it block by block into some smart filter that skips blocks with gibberish and saves anything that looks like text. You can try to search blocks for typical phrases you know are in the text and filter blocks on that property. sed and awk are your friends.

> Are there any rules on how zfs structures itself?
> maybe the old file allocation table still exists
> and just needs to be restored.

You'll have to understand the internals; the on-disk format is documented, but not easy to grasp. zdb is the program you'd use to analyse the zpool.

> thank you very much
> Stephen

Good luck.
-- 
( Kees Nuyt )
c[_]
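[Editor's note: the dd-plus-filter approach Kees describes can be sketched concretely. The block below demos it on a small synthetic image so it is safe to run; for real recovery you would point dd at the raw device (e.g. /dev/rdsk/c0d0p3, as root) and write the output to a *different* disk. The file names, the 512-byte block size, and the 8-character threshold are all illustrative choices, not part of the original advice.]

```shell
# Build a stand-in for the damaged device: a line of text buried in noise.
printf 'important note: meet at noon\n' > note.txt
head -c 4096 /dev/urandom > junk.bin
cat junk.bin note.txt junk.bin > disk.img

# Stream the image block by block; conv=noerror keeps going past read
# errors instead of aborting (useful on a failing disk).
# 'strings -n 8' keeps printable runs of 8+ characters and drops gibberish.
dd if=disk.img bs=512 conv=noerror 2>/dev/null \
  | strings -n 8 > candidates.txt

# Narrow the candidates down with a phrase you know was in the lost files.
grep 'meet at noon' candidates.txt
```

Against the real device, substitute disk.img with the raw partition path and expect candidates.txt to be large: random binary data occasionally contains short printable runs, so a known phrase (or sed/awk post-filtering) is what makes the output manageable.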
Kees Nuyt wrote:
> On Fri, 19 Jun 2009 11:50:07 PDT, stephen bond
> <no-reply at opensolaris.org> wrote:
>
>> Kees,
>>
>> is it possible to get at least the contents of /export/home ?
>>
>> that is supposedly a separate file system.
>
> That doesn't mean that data is in one particular spot on the
> disk. The blocks of the zfilesystems can be interspersed.

You can try a recovery tool that supports file carving. This technique looks for files based on their signatures while ignoring damaged, nonexistent, or unsupported partition and/or filesystem info. It works best on small files, but gets worse as file sizes increase (or, more accurately, as file fragmentation increases). It should work well for files smaller than the stripe size, but possibly not at all for compressed files unless you are using a data recovery app that understands ZFS compression formats (I don't know of any myself).

Disable or otherwise do not run scrub or any other command that may write to the array until you have exhausted your recovery options or no longer care to keep trying.

EasyRecovery supports file carving, as do RecoverMyFiles and TestDisk. I'm sure there are others too; not all programs actually call it file carving. The effectiveness of the programs may vary, so it is worthwhile to try any demo versions. The programs will need direct block-level access to the drive; network shares won't work. You can run the recovery software on whatever OS it needs, and based on what you are asking for, you don't need to seek recovery software that is explicitly Solaris compatible.

>> is there a
>> way to look for files using some low level disk reading
>> tool. If you are old enough to remember the 80s
>> there was stuff like PCTools that could read anywhere
>> on the disk.
>
> I am old enough. I was the proud owner of a 20 MByte
> harddisk back then (~1983).
> Disks were so much smaller, you could practically scroll
> most of the contents in a few hours. 
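[Editor's note: the file-carving technique described above — locate files by their format signatures while ignoring the filesystem — can be shown in miniature. The toy image, the '%PDF-' magic string, and the fixed carve length are illustrative; real carvers such as TestDisk/PhotoRec parse each format to find where a file actually ends.]

```shell
# Toy image: a fake PDF header sandwiched between random padding.
head -c 1024 /dev/urandom > pad.bin
{ cat pad.bin; printf '%%PDF-1.4 carved content'; cat pad.bin; } > disk.img

# grep -a treats the binary image as text; -b -o print the byte offset
# of each match. This is signature scanning in one line.
offset=$(grep -abo '%PDF-' disk.img | head -1 | cut -d: -f1)
echo "PDF signature found at byte $offset"

# Carve a chunk starting at the signature. The 23-byte count matches the
# planted data here; a real carver derives the length from the format.
dd if=disk.img of=carved.pdf bs=1 skip="$offset" count=23 2>/dev/null
```

The same scan-then-extract loop, run over a dd image of the pool device with a table of known signatures, is essentially what the carving tools named above automate.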
> The on disk data structures are much more complicated now.

I recall using a 12.5 MHz 286 Amdek (Wyse) PC with a 20 MB, 3600 RPM Miniscribe MFM drive. A quick Google search for this item says its transfer rate specs were 0.625 MB/sec, which sounds about right IIRC (if you chose the optimal interleave when formatting). If you had the wrong interleave, performance suffered; however, I also recall that the drive made less noise. I think I even ran that drive at a suboptimal interleave for a while simply because it was quieter. You could say it was an early, indirect form of AAM (acoustic management).

To put that drive capacity and transfer rate into comparison with a modern drive: you could theoretically fill the 20 MB drive in 20/0.625 = 32 seconds. A 500 GB (base 10) SATA2 drive (WD5000AAKS) has an average write rate of 68 MB/sec, so 466*1024/68 ≈ 7017 seconds to fill. Capacity growth is significantly outpacing read/write performance, which I've seen summed up as: modern drives are becoming like the tapes of yesteryear.

Those data recovery tools took advantage of the filesystem's design: FAT only erased the index entry (sometimes only a single character in the filename). When NTFS came out, it took a few years for unerase and general-purpose NTFS recovery to be possible. This was actually a concern of mine and one reason I delayed using NTFS by default on several Windows 2000/XP systems. I waited until good recovery tools were available before I committed to the new filesystem (in spite of it being journaled, there initially just weren't any recovery tools available in case things went horribly wrong; Live CDs were not yet available, and there weren't any read/write NTFS tools available for DOS or Linux).

In short, graceful degradation and the availability of recovery tools are important in selecting a filesystem, particularly when used on a desktop that may not have regular backups. 
Thank you! This is exactly what I was looking for. Although this is ZFS (not a Windows FAT), the time it takes to create a new pool (instantaneous) means all the data is still there and only the table of contents was maybe erased. As Unix directories are files, I suspect even the old structure may be available; it just created a new file for the new pool. I will read the ZFS docs and report in this thread what I find out.
I've got the same problem. Did you find any solution?
None of the file recovery tools work with ZFS. TestDisk is the most advanced, and the author is looking at incorporating ZFS, but when that will happen nobody knows.

I want to try with dd. Can anybody give me an example of how to read bytes cylinder by cylinder? Filtering the output is easy, and I will gladly share a small Java utility that calls dd (or whatever is suitable for raw disk access) and dumps recognized text.
Kees,

Can you provide an example of how to read from dd cylinder by cylinder? Also, if a file is fragmented, is there a marker at the end of the first piece telling where the second is?

Thank you,
Stephen
stephen bond wrote:
> can you provide an example of how to read from dd cylinder by cylinder?

What's a cylinder? That's a meaningless term these days. You dd byte ranges; pick whatever byte range you want. If you want mythical cylinders, fetch the cylinder size from "format" and use that as your block size for dd. But the disks all lie about that, and remap sectors anyway, so I don't see why you would possibly care...

-- Carson
Carson,

Please provide an example of how to read bytes. I talk about cylinders because I don't know better. I need to read from a partition which shows as /dev/hda3 under GParted, with starting sector xxxx and ending sector zzzzz. Under Solaris I think it becomes /dev/dsk/c0d0p3.

I tried

  dd if=/dev/dsk/c0d0p3 ibs=256

hoping to read the first 256 bytes, but it started streaming everything and beeping non-stop; even ^C could not stop the beeping.

Thank you,
Stephen
On 11 July, 2009 - stephen bond sent me these 0,6K bytes:

> Carson,
>
> please provide an example how to read bytes. I talk about cylinder because I don't know better.
> I need to read from a partition which shows as /dev/hda3 under Gparted with starting sector xxxx ending sector zzzzz.
>
> under solaris I think it becomes /dev/dsk/c0d0p3
>
> I tried
> dd if=/dev/dsk/c0d0p3 ibs=256
> hoping to read the first 256 bytes but it started streaming everything
> and beeping non-stop. even ^C could not stop the beeping.

Add of=/where/you/want/the/data unless you want the raw dump to your screen, which is what you just got. Add count=1 if you just want one block, etc.

/Tomas
-- 
Tomas Ögren, stric at acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
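[Editor's note: putting Tomas's flags together — bs sets the block size, count limits how many blocks are read, skip jumps over input blocks, and of= sends the bytes to a file instead of the terminal. The demo below substitutes an ordinary file for /dev/dsk/c0d0p3 so it is safe to run anywhere; the 4-byte block size is purely for illustration.]

```shell
# Stand-in for the raw partition: four 4-byte "blocks".
printf 'AAAABBBBCCCCDDDD' > disk.img

# Read exactly one block (no endless stream to the terminal).
dd if=disk.img of=first.bin bs=4 count=1 2>/dev/null

# Read the third block: skip two input blocks, then take one.
dd if=disk.img of=third.bin bs=4 skip=2 count=1 2>/dev/null

# Inspect bytes as a hex/char dump rather than beeping them at a terminal.
od -c third.bin
```

On the real device you would use a sensible block size (e.g. bs=512 or bs=128k) and step skip= through the partition; piping one block at a time through od -c or a text filter avoids the raw-bytes-to-terminal problem Stephen hit.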