Hi,

I'm looking into forensic aspects of ZFS, in particular ways to use ZFS tools to investigate ZFS file systems without writing to the pools. I'm working on a test suite of file system images within VTOC partitions. At the moment these have only one file system per pool per VTOC partition, for simplicity's sake, and I'm using Solaris 10 6/06, which may not be the most up-to-date. Details of the tests are at the bottom.

The problem: I was not able to use a loopback device on a file system image (see the TEST section).

Some questions:

* Am I missing something?
* Is there support for lofiadm in a more recent version of ZFS?
* Is there any other way to safely mount a file system image?

Thanks for your help.

Regards
Mark

GOOD NEWS

It looks as if the zfs mount options can stop updates of file system metadata (i.e. mount times etc.) and file metadata (no writing of file access times). Quote from man zfs, 25 Apr 2006, p. 11 ("Temporary Mount Point Properties"):

    ... these options can be set on a per-mount basis using the -o
    option, without affecting the property that is stored on disk.
    The values specified on the command line will override the
    values stored in the dataset. The -nosuid option is an alias
    for "nodevices,nosetuid". These properties are reported as
    "temporary" by the "zfs get" command.

TEST 26.07.2007

Forensic mounting of ZFS file systems. The loopback device does not seem to work with ZFS, using either "zfs mount" or legacy "mount". However, temporary command-line options can prevent mounts from writing to a file system.

MAKE COPY

root at sol10 /export/home# cp t1_fs1.dd t1_fs1.COPY.dd

CHECKSUMS

root at sol10 /export/home# gsha1sum t1*
5c08a7edfe3d04f5fff6d37c6691e85c3745629f  t1_fs1.COPY.dd
5c08a7edfe3d04f5fff6d37c6691e85c3745629f  t1_fs1.dd

CHECKSUM RAW DEV FOR FS1

root at sol10 /export/home# gsha1sum /dev/dsk/c0t1d0s1
5c08a7edfe3d04f5fff6d37c6691e85c3745629f  /dev/dsk/c0t1d0s1
root at sol10 /export/home#

PREPARE LOOPBACK DEVICE (note: the full path to the file is needed)

root at sol10 /export/home# lofiadm -a /export/home/t1_fs1.COPY.dd
/dev/lofi/1
root at sol10 /export/home# lofiadm
Block Device             File
/dev/lofi/1              /export/home/t1_fs1.COPY.dd
root at sol10 /export/home#

ZFS MOUNT OF LOOPBACK DEVICE DOESN'T WORK

root at sol10 /export/home# zfs mount -o noexec,nosuid,noatime,nodevices,ro /dev/lofi/1 /fs1
too many arguments
usage: [...]
root at sol10 /export/home# zfs mount -o ro,noatime /dev/lofi/1
cannot open '/dev/lofi/1': invalid filesystem name

NOR DOES LEGACY MOUNT

root at sol10 /export/home# mount -F zfs -o noexec,nosuid,noatime,nodevices,ro /dev/lofi/1 /fs1
cannot open '/dev/lofi/1': invalid filesystem name

TRY MOUNT OF NORMAL FS

root at sol10 /export/home# mount -o noexec,nosuid,noatime,nodevices,ro fs1 /fs1
root at sol10 /export/home# ls -lR /fs1
/fs1:
total 520
-rw-r--r--   1 mark     staff     234179 Jul 17 20:17 gutenberg.org_martin_luther_treatise_on_good_works_with_intro_gwork10.txt
drwxr-xr-x   3 root     root           5 Jul 26 14:12 level_1

/fs1/level_1:
total 1822
-rwxr-xr-x   1 mark     staff     834236 Jul 17 20:16 imgp2219.jpg
-rw-r--r--   1 mark     staff       1388 Jul 17 20:15 imgp2219.jpg.head.tail.xxd
drwxr-xr-x   2 root     root           5 Jul 26 14:12 level_2

/fs1/level_1/level_2:
total 1038
-rw-r--r--   1 mark     staff     234179 Jul 17 20:17 gutenberg.org_martin_luther_treatise_on_good_works_with_intro_gwork10.txt
-rw-r--r--   1 mark     staff     173713 Jul 17 20:15 imgp2219.small.jpg
-rw-r--r--   1 mark     staff       1388 Jul 17 20:15 imgp2219.small.jpg.head.tail.xxd

MUCK AROUND A BIT

root at sol10 /export/home# file /fs1/gutenberg.org_martin_luther_treatise_on_good_works_with_intro_gwork10.txt
/fs1/gutenberg.org_martin_luther_treatise_on_good_works_with_intro_gwork10.txt: ascii text
root at sol10 /export/home# head /fs1/gutenberg.org_martin_luther_treatise_on_good_works_with_intro_gwork10.txt
*****The Project Gutenberg Etext of A treatise on Good Works*****
#2 in our series by Dr. Martin Luther

Copyright laws are changing all over the world, be sure to check
the copyright laws for your country before posting these files!

Please take a look at the important information in this header.
We encourage you to keep this file on your own disk, keeping an
electronic path open for the next readers. Do not remove this.

root at sol10 /export/home# rm /fs1/gutenberg.org_martin_luther_treatise_on_good_works_with_intro_gwork10.txt
rm: /fs1/gutenberg.org_martin_luther_treatise_on_good_works_with_intro_gwork10.txt: override protection 644 (yes/no)? y
rm: /fs1/gutenberg.org_martin_luther_treatise_on_good_works_with_intro_gwork10.txt not removed: Read-only file system
root at sol10 /export/home# ls -la /fs1/
total 543
drwxr-xr-x   3 root     sys            4 Jul 26 14:13 .
drwxr-xr-x  26 mark     staff        512 Jul 26 14:06 ..
-rw-r--r--   1 mark     staff     234179 Jul 17 20:17 gutenberg.org_martin_luther_treatise_on_good_works_with_intro_gwork10.txt
drwxr-xr-x   3 root     root           5 Jul 26 14:12 level_1
root at sol10 /export/home#

UNMOUNT

root at sol10 /export/home# umount /fs1
root at sol10 /export/home#

CHECKSUM RAW DEV AGAIN: MATCHES (NO DATA WRITTEN)

root at sol10 /export/home# gsha1sum /dev/dsk/c0t1d0s1
5c08a7edfe3d04f5fff6d37c6691e85c3745629f  /dev/dsk/c0t1d0s1
root at sol10 /export/home#
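The verification loop in the TEST section can be scripted. A minimal sketch, reusing the device, dataset, and mountpoint names from the transcript above (/dev/dsk/c0t1d0s1, fs1, /fs1) and the same gsha1sum tool; adjust for your own setup:

    #!/bin/sh
    # Checksum the raw slice, mount the dataset read-only with
    # temporary options, browse, unmount, and re-checksum.
    DEV=/dev/dsk/c0t1d0s1
    BEFORE=`gsha1sum $DEV | awk '{print $1}'`
    mount -o noexec,nosuid,noatime,nodevices,ro fs1 /fs1
    ls -lR /fs1 > /dev/null    # examine the evidence here
    umount /fs1
    AFTER=`gsha1sum $DEV | awk '{print $1}'`
    if [ "$BEFORE" = "$AFTER" ]; then
        echo "no writes to $DEV"
    else
        echo "WARNING: $DEV changed"
    fi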
Mark Furner wrote:
> PREPARE LOOPBACK DEVICE (note: the full path to the file is needed)
>
> root at sol10 /export/home# lofiadm -a /export/home/t1_fs1.COPY.dd
> /dev/lofi/1
> root at sol10 /export/home# lofiadm
> Block Device             File
> /dev/lofi/1              /export/home/t1_fs1.COPY.dd
> root at sol10 /export/home#
>
> ZFS MOUNT OF LOOPBACK DEVICE DOESN'T WORK
>
> root at sol10 /export/home# zfs mount -o noexec,nosuid,noatime,nodevices,ro /dev/lofi/1 /fs1
> too many arguments

That is the wrong level of abstraction for ZFS. The file you have is a pool, not a filesystem. That means you need to import the pool into the system before you can see the filesystems. For example:

oversteer:pts/1# zpool create -f test c0t0d0s7
oversteer:pts/1# zpool export test
oversteer:pts/1# dd if=/dev/dsk/c0t0d0s7 of=/var/tmp/c0t0d0s7.copy
144585+0 records in
144585+0 records out
oversteer:pts/1# zpool import -d /var/tmp
  pool: test
    id: 8033455162408061258
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        test                      ONLINE
          /var/tmp/c0t0d0s7.copy  ONLINE

You don't need to use lofiadm at all, but you do need to first import the pool using zpool import.

--
Darren J Moffat
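To finish the example (a sketch, reusing Darren's pool name "test" and directory /var/tmp): once zpool import -d has located the pool inside the image file, it can be imported by name, inspected, and released again.

    zpool import -d /var/tmp test   # import the pool found in the image file
    zfs list -r test                # the pool's filesystems are now visible
    zpool export test               # release the pool when finished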
Thanks for the clarification, Darren, and sorry for cross-posting.

OK: physical device -> pool -> file system(s).

Some questions:

1) zpool import allows options similar to zfs. Can I set the same or similar read-only (RO) options for the whole pool (noexec,nosuid,noatime,nodevices,ro)?

2) What changes are made to a pool's data by exporting and importing it? My guess is that something gets written to zpool.cache, but also to the pool's vdev labels themselves... while the file system data is probably untouched. From man zpool:

    zpool export [-f] pool ...

        Exports the given pools from the system. All devices are
        marked as exported, [...] Before exporting the pool, all
        datasets within the pool are unmounted.

        For pools to be portable, you must give the zpool command
        whole disks, not just slices, so that ZFS can label the
        disks with portable EFI labels. Otherwise, disk drivers on
        platforms of different endianness will not recognize the
        disks.

3) In a large pool with several file systems, is there any way to image a single file system?

Regards
Mark

On Friday 27 July 2007 11:37, Darren J Moffat <darrenm at opensolaris.org> may have written:
> That is the wrong level of abstraction for ZFS. The file you have is
> a pool, not a filesystem. That means you need to import the pool into
> the system before you can see the filesystems.
> [...]
> You don't need to use lofiadm at all, but you do need to first import
> the pool using zpool import.
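One way to approach Q2 empirically (a sketch, assuming the pool image from the earlier transcript, and assuming the front label region occupies the first 4 MB; two further labels sit at the end of the device): checksum the front label region and the remainder of the image separately, before and after an import/export cycle, to localize what the pool operations write.

    # front region: vdev labels L0/L1 plus reserved area (first 4 MB)
    dd if=/export/home/t1_fs1.COPY.dd bs=1024k count=4 2>/dev/null | gsha1sum
    # everything after the front region (includes the two trailing labels)
    dd if=/export/home/t1_fs1.COPY.dd bs=1024k skip=4 2>/dev/null | gsha1sum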
Mark Furner wrote:
> Thanks for the clarification, Darren, and sorry for cross-posting.
>
> OK: physical device -> pool -> file system(s).
>
> Some questions:
>
> 1) zpool import allows options similar to zfs. Can I set the same or
> similar read-only (RO) options for the whole pool
> (noexec,nosuid,noatime,nodevices,ro)?

Not at this time, no.

> 3) In a large pool with several file systems, is there any way to
> image a single file system?

Take a snapshot (using zfs snapshot), then use 'zfs send' to archive that into an "image". Note that this isn't quite the same as doing a dd of a ufs filesystem, since you can't "mount" that image on its own; you would need to 'zfs recv' it into another pool if you want to look at it using normal filesystem tools.

--
Darren J Moffat
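Spelled out, that suggestion looks something like this (a sketch; the pool, dataset, and path names tank/fs1, scratch, and /case are placeholders, not from the thread):

    zfs snapshot tank/fs1@evidence                  # freeze the filesystem state
    zfs send tank/fs1@evidence > /case/fs1.zfssend  # archive the snapshot as a stream
    zfs receive scratch/fs1 < /case/fs1.zfssend     # restore into a scratch pool
    zfs set readonly=on scratch/fs1                 # keep the restored copy read-only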
Thanks, Darren.

I found Examples 10 and 11 in man zfs as a quick how-to:

    Example 10: Remotely Replicating ZFS Data
    Example 11: Using the zfs receive -d Option

As for Q2, about the changes to the pool data and zpool.cache: I can find this out by testing.

Regards
Mark

On Friday 27 July 2007 15:06, Darren J Moffat <darrenm at opensolaris.org> may have written:
> Take a snapshot (using zfs snapshot), then use 'zfs send' to archive
> that into an "image". Note that this isn't quite the same as doing a
> dd of a ufs filesystem, since you can't "mount" that image on its own;
> you would need to 'zfs recv' it into another pool if you want to look
> at it using normal filesystem tools.
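The gist of those two man page examples, as a sketch (the host and pool names here are placeholders):

    # Example 10's pattern: replicate a snapshot to another host
    zfs send pool/fs@today | ssh examiner zfs receive -d casepool
    # the -d flag (Example 11) recreates the sent dataset's path under casepool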
Mark Furner wrote:
> Thanks for the clarification, Darren, and sorry for cross-posting.
>
> OK: physical device -> pool -> file system(s).
>
> Some questions:
>
> 1) zpool import allows options similar to zfs. Can I set the same or
> similar read-only (RO) options for the whole pool
> (noexec,nosuid,noatime,nodevices,ro)?

"zpool import -o ro <pool>" -- see zpool(1m).

--matt
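Putting that together with the earlier import-from-a-directory example gives a read-only import of a pool held in an image file (a sketch; the directory and pool name are placeholders):

    # locate the pool in the image files, then import it read-only
    zpool import -d /export/home -o ro test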
Thanks!

Mark

On Saturday 28 July 2007 00:35, Matthew Ahrens <Matthew.Ahrens at sun.com> may have written:
> "zpool import -o ro <pool>" -- see zpool(1m).
Hi,

Despite having done the RTFM with the draft On-Disk Specification document, I'm still trying to get a picture of how ZFS gets laid out on disk. The document is rather too high-level and too interested in explaining the abstractions to say much about where in the file system things actually get put. The PDF on the data structures of a single file is a bit more helpful, but still abstract. What I want to know is how to reconstruct a ZFS pool from an image file as raw data: which byte goes where. It looks as if I'll have to write up a description myself.

AFAIK, the first 4 MB of a ZFS file system are taken up by two vdev labels and 3.5 MB of space reserved for future use. But what happens after that? I found what looks like the meta object set (MOS) right after the 4 MB boundary, and later on what looks like a copy of the MOS. (This alone seems to contradict the On-Disk Specification.) Where does the reserved space of the ZFS partition end and the user data begin? Are there block groups? Is the user data also contained within dnodes? The On-Disk Specification explains znodes for file metadata, and it looks as if objects have their own metadata stored in a dsl_dataset_phys_t structure. Where is the file data?

Can you point me in the direction of the relevant code passages? Any help would save time with the hex editor...

Thanks a lot

Mark

On Saturday 28 July 2007 00:35, Matthew Ahrens <Matthew.Ahrens at sun.com> may have written:
> "zpool import -o ro <pool>" -- see zpool(1m).
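For poking at these structures, zdb(1M) can save a lot of hex editor time. A sketch, assuming a pool named "test" is imported; zdb is a mostly undocumented, version-sensitive tool, so the flags and output formats may differ on a Solaris 10 6/06 build:

    zdb -l /dev/dsk/c0t1d0s1   # print the vdev labels (two at the front, two at the back)
    zdb -uuu test              # print the uberblock(s), which point at the MOS
    zdb -dddd test             # walk datasets and dump dnodes with their block pointers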
Erm, sorry for that bit of a rant. After some fiddling with the hex editor I found out a bit more, and I'll try to be more specific...

1) How are files laid out on disk, and how are they encapsulated as objects? (This is not clear from the On-Disk Specification or from "datastructures_for_single_file.pdf".) I guess I don't understand the way they are wrapped as objects on the disk.

2) Are there block groups or an equivalent?

3) Are there extent pointers?

I've got a ZFS test image with a large text file (234,179 bytes) that is pleasantly contiguous. I wonder if there are no block groups at all... I can't quite see how it is encapsulated as an object, either, since it is larger than a 128 KB block.

Any tips on which code files to read would be appreciated.

Regards
Mark

On Monday 30 July 2007 20:51, Mark Furner <mark.furner at gmx.net> may have written:
> AFAIK, the first 4 MB of a ZFS file system are taken up by two vdev
> labels and 3.5 MB of space reserved for future use. But what happens
> after that? [...] Where is the file data?
>
> Can you point me in the direction of the relevant code passages? Any
> help would save time with the hex editor...
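On question 1, one way to see the encapsulation is to trace a single file's blocks with zdb (a sketch under the same caveats about zdb as above; "test/fs1" and object number 12 are placeholders). A file larger than one 128 KB block is referenced through an indirect block of block pointers, so dumping the file's object should show a dnode plus the DVAs (on-disk offsets) of each data block:

    zdb -dddd test/fs1        # list the dataset's objects; note the file's object number
    zdb -dddddd test/fs1 12   # dump object 12: dnode, indirect block pointers, DVAs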