Some time ago there was some discussion on zfs send | recv TO A FILE. Apart
from the disadvantages, which I now know, someone mentioned a CHECK to be at
least sure that the file itself was OK (without one or more bits that fell
over). I lost this reply and would love to hear about this check again. In
other words, how can I be sure of the validity of the received file in the
next command line:

# zfs send -Rv rpool@090902 > /backup/snaps/rpool.090902

I only want to know how to check the integrity of the received file.

-- 
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2010.02 B121
+ All that's really worth doing is what we do for others (Lewis Carrol)
"zfs recv -vn < file" will check the integrity of the zfs stream in file. However, this is only a one-time check; if the data is corrupted later the stream will not be recoverable. You might consider using something like par2 [1] to generate parity: while true: zfs send fs at snap > file generate par2 at desired strength if zfs recv -vn < file succeeds: break The reason for generating parity before checking if the recv succeeds is then you can be reasonably sure the stream that the parity was generated for is a valid stream. Then store the stream and the parity files together, and use the parity files to recover from media damage over time. Will [1]: http://parchive.sourceforge.net/ On Wed, Sep 2, 2009 at 13:31, Dick Hoogendijk<dick at nagual.nl> wrote:> > Some time ago there was some discussion on zfs send | rcvd TO A FILE. > Apart form the disadvantages which I now know someone mentioned a CHECK to > be at least sure that the file itself was OK (without one or more bits that > felt over). I lost this reply and would love to hear this check again. In > other words how can I be sure of the validity of the received file in the > next command line: > > # zfs send -Rv rpool at 090902 > /backup/snaps/rpool.090902 > > I only want to know how to check the integrity of the received file. > > -- > Dick Hoogendijk -- PGP/GnuPG key: 01D2433D > + http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2010.02 B121 > + All that''s really worth doing is what we do for others (Lewis Carrol) > _______________________________________________ > zfs-discuss mailing list > zfs-discuss at opensolaris.org > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss >
On Wed, 2 Sep 2009, Will Murnane wrote:
> "zfs recv -vn < file" will check the integrity of the zfs stream in
> file. However, this is only a one-time check [...]

The most commonly available program which self-validates data is 'gzip',
but 'lzop' is faster. Besides self-validation, there is the advantage that
the data is smaller due to compression. Nothing prevents validating the
self-verifying archive file via this "zfs recv -vn" technique as well.

Bob

-- 
Bob Friesenhahn
bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
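As a concrete illustration of the self-validation Bob describes: gzip stores
a CRC in the archive that 'gzip -t' checks later. Dummy data stands in for a
real zfs stream below; in practice the first line would be something like
'zfs send -R rpool@090902 | gzip -c > file.gz':

```shell
# Dummy data stands in for a real 'zfs send' stream; gzip's stored CRC
# is checked the same way either way.
printf 'pretend this is a zfs send stream' | gzip -c > /tmp/stream.gz
gzip -t /tmp/stream.gz && echo "archive CRC OK"
# To verify the ZFS stream itself later: gunzip -c /tmp/stream.gz | zfs recv -vn
```

'gzip -t' detects bit rot in the stored file, but (per the thread) only a
zfs recv can vouch for the ZFS-level validity of the stream inside it.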
Dick Hoogendijk wrote:
> In other words how can I be sure of the validity of the received file [...]
>
> # zfs send -Rv rpool@090902 > /backup/snaps/rpool.090902

You should be able to generate a sha1sum/md5sum of the zfs send stream on
the fly with 'tee':

# zfs send -R rpool@090902 | tee /backup/snaps/rpool.090902 | sha1sum

and compare the output of that with the sha1sum of the file on disk:

# sha1sum /backup/snaps/rpool.090902

This only guarantees that the file contains the exact same bits as the zfs
send stream. It does not verify the ZFS format/integrity of the stream;
the only way to do that is to zfs recv the stream into ZFS.

-- 
Dave
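Dave's two steps can be folded into one helper that checksums the stream
while writing it and compares against the file afterward (a sketch; the
function name is made up here, and the example pipeline is illustrative):

```shell
# Sketch: sha1sum a stream on the fly (via tee) and compare with the
# on-disk copy. Reads the stream from stdin, writes it to $1.
write_and_verify() {
  file=$1
  live=$(tee "$file" | sha1sum | awk '{print $1}')   # checksum while writing
  disk=$(sha1sum "$file" | awk '{print $1}')         # checksum what landed on disk
  [ "$live" = "$disk" ] && echo "file matches stream" || echo "MISMATCH"
}
# e.g.: zfs send -R rpool@090902 | write_and_verify /backup/snaps/rpool.090902
```

As Dave notes, a matching checksum only proves the file holds the same bits
the send produced, not that the stream itself is a valid ZFS stream.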
On Wed, 2 Sep 2009 13:06:35 -0500 (CDT)
Bob Friesenhahn <bfriesen@simple.dallas.tx.us> wrote:
> Nothing prevents validating the self-verifying archive file via this
> "zfs recv -vn" technique.

Does this verify the ZFS format/integrity of the stream?
Or is the only way to do that to zfs recv the stream into ZFS?
On 09/03/09 14:21, dick hoogendijk wrote:
> Does this verify the ZFS format/integrity of the stream?
> Or is the only way to do that to zfs recv the stream into ZFS?

The -n option does some verification. It verifies that the record headers
distributed throughout the stream are syntactically valid. Since each
record header contains a length field which allows the next header to be
found, one bad header will cause the processing of the stream to abort.
But it doesn't verify the content of the data associated with each record.

We might want to implement an option to enhance zfs recv -n to calculate a
checksum of each dataset's records as it's reading the stream, and then
verify the checksum when the dataset's "END" record is seen.

I'm looking at integrating a utility which allows the metadata in a stream
to be dumped for debugging purposes (zstreamdump). It also verifies that
the data in the stream agrees with the checksum.

lori
On Thu, 03 Sep 2009 14:32:27 -0600 Lori Alt <Lori.Alt@Sun.COM> wrote:
> I'm looking at integrating a utility which allows the metadata in a
> stream to be dumped for debugging purposes (zstreamdump). It also
> verifies that the data in the stream agrees with the checksum.

This sounds nice. Any chance this will make it into U8?
On 09/03/09 15:25, dick hoogendijk wrote:
> This sounds nice. Any chance this will make it into U8?

No, it won't be in U8.

lori
Lori Alt wrote:
> The -n option does some verification. [...] But it doesn't verify the
> content of the data associated with each record.

So, storing the stream in a zfs-received filesystem is the better option.
Alas, it is also the most difficult one. Storing to a file with "zfs send
-Rv" is easy: the result is just a file, and if you reboot the system all
is OK. However, if I "zfs receive -Fdu" into a zfs filesystem, I'm in
trouble when I reboot the system. I get confusion over mountpoints! Let me
explain:

Some time ago I backed up my rpool and my /export and /export/home to
/backup/snaps (with zfs receive -Fdu). All was OK, because the newly
created zfs filesystems stay unmounted 'till the next reboot(!). When I
rebooted my system (due to a kernel upgrade), the system would not boot,
because it had mounted the zfs FS "backup/snaps/export" on /export and
"backup/snaps/export/home" on /export/home. The system itself had those
filesystems too, of course, so there was a mix-up. It would be nice if the
backup filesystems would not be mounted (canmount=noauto), but I cannot
give this option when I do the zfs send | receive, can I? And giving this
option later on is very difficult, because "canmount" is NOT recursive,
and I don't want to set it manually on all those backed-up filesystems.

I wonder how other people overcome this mountpoint issue.
On 09/04/09 09:41, dick hoogendijk wrote:
> It would be nice if the backup filesystems would not be mounted
> (canmount=noauto), but I cannot give this option when I do the zfs
> send | receive, can I? [...]
>
> I wonder how other people overcome this mountpoint issue.

The -u option to zfs recv (which was just added to support flash archive
installs, but it's useful for other reasons too) suppresses all mounts of
the received file systems. So you can mount them yourself afterward in
whatever order is appropriate, or do a 'zfs mount -a'.

lori
Lori Alt wrote:
> The -u option to zfs recv [...] suppresses all mounts of the received
> file systems. So you can mount them yourself afterward in whatever
> order is appropriate, or do a 'zfs mount -a'.

You misunderstood my problem. It is very convenient that the filesystems
are not mounted; I only wish they could stay that way! Alas, they ARE
mounted (even if I don't want them to be) when I *reboot* the system, and
THAT's when things get ugly. I then have different zfs filesystems using
the same mountpoints: the backed-up ones have the same mountpoints as
their origin :-/ The only way to stop it is to *export* the "backup"
zpool, OR to change *manually* the zfs property "canmount=noauto" on all
backed-up snapshots/filesystems.

As I understand it, I cannot give this "canmount=noauto" to the zfs
receive command:

# zfs send -Rv rpool@0909 | zfs receive -Fdu backup/snaps
On 09/04/09 10:17, dick hoogendijk wrote:
> As I understand it, I cannot give this "canmount=noauto" to the zfs
> receive command:
>
> # zfs send -Rv rpool@0909 | zfs receive -Fdu backup/snaps

There is an RFE to allow zfs recv to assign properties, but I'm not sure
whether it would help in your case. I would have thought that
"canmount=noauto" would already have been set on the sending side,
however. In that case, the property should be preserved when the stream
is received. But if for some reason you're not setting that property on
the sending side, but want it set on the receiving side, you might have
to write a script to set the properties for all those datasets after
they are received.

lori
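Such a script could be as small as the following sketch (the function name
and backup root are illustrative; it assumes the datasets have already been
received, and works around "canmount" not being recursive by listing every
dataset under the root and setting the property on each):

```shell
# Sketch: set canmount=noauto on every dataset under a received backup
# root, since 'zfs set canmount' is not recursive. Root name is an example.
fix_canmount() {
  root=$1
  zfs list -H -r -o name -t filesystem "$root" |
  while IFS= read -r ds; do
    zfs set canmount=noauto "$ds"
  done
}
# e.g.: fix_canmount backup/snaps
```

Run once after each receive, this keeps the backup filesystems from being
mounted over the live ones at the next reboot.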
Lori Alt wrote:
> I would have thought that "canmount=noauto" would already have been set
> on the sending side, however. In that case, the property should be
> preserved when the stream is received.

Well, I checked again today. This is what happens:

NAME              PROPERTY  VALUE   SOURCE
tank/ROOT/daffy   canmount  on      default

NAME              PROPERTY  VALUE   SOURCE
rpool/ROOT/daffy  canmount  noauto  local

As you can see, the original dataset (rpool/ROOT/daffy) has
canmount=noauto set. However, the received dataset (zfs send
rpool/ROOT/daffy@090905 | zfs receive -Fdu /tank) has this property
changed(!) into canmount=on. So, what you state is not true: the property
is NOT preserved. Is this a bug?