FULL backup to a file

zfs snapshot -r rpool@0908
zfs send -Rv rpool@0908 > /net/remote/rpool/snaps/rpool.0908

INCREMENTAL backup to a file

zfs snapshot -i rpool@0908 rpool@090822
zfs send -Rv rpool@090822 > /net/remote/rpool/snaps/rpool.090822

As I understand it, the latter gives a file with the changes between 0908 and 090822. Is this correct?

How do I restore those files? I know how to recreate the root pool and how to restore the first one (.../snaps/rpool.0908), but what is the exact zfs syntax to restore the second file on top of the first one, containing the differences between the two?

--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2009.06 rel
+ All that's really worth doing is what we do for others (Lewis Carroll)
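A sketch of the restore, using the file names above. This assumes the second file really contains an incremental stream, i.e. that it was generated with "zfs send -i" as shown in the examples later in the thread; an incremental can only be received on top of the full stream:

# after recreating the pool, restore the full stream first
zfs receive -Fd rpool < /net/remote/rpool/snaps/rpool.0908
# then apply the incremental on top of it
zfs receive -d rpool < /net/remote/rpool/snaps/rpool.090822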
dick hoogendijk <dick at nagual.nl> wrote:

> FULL backup to a file
> zfs snapshot -r rpool@0908
> zfs send -Rv rpool@0908 > /net/remote/rpool/snaps/rpool.0908
>
> INCREMENTAL backup to a file
> zfs snapshot -i rpool@0908 rpool@090822
> zfs send -Rv rpool@090822 > /net/remote/rpool/snaps/rpool.090822
>
> As I understand it, the latter gives a file with the changes between
> 0908 and 090822. Is this correct?

What do you understand by "incremental backup"?

If you would like to be able to restore single files, I recommend using "star" for the incrementals.

Jörg

--
EMail: joerg at schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
       js at cs.tu-berlin.de (uni)
       joerg.schilling at fokus.fraunhofer.de (work)
Blog: http://schily.blogspot.com/
URL: http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
On Sun, 23 Aug 2009 13:15:37 +0200
Joerg.Schilling at fokus.fraunhofer.de (Joerg Schilling) wrote:

> dick hoogendijk <dick at nagual.nl> wrote:
> [...]
> What do you understand by "incremental backup"?

I do not want to run the full zfs send every time I make a backup of my root pool. It simply takes too long and too much space. I do, however, want to be able to restore my root pool in case of a disaster, as completely and as recently as possible.

> If you would like to be able to restore single files, I recommend
> using "star" for the incrementals.

I have no need to restore single files; I use star / rsync for that already. I want to be able to restore my root pool in case of disk failure. So, I can always do a zfs send of the whole root, but I thought it might be possible to do this once, followed by incremental differences.

--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2009.06 rel
+ All that's really worth doing is what we do for others (Lewis Carroll)
OK, you *can* do this, but "zfs send" is *not* a backup mechanism. That warning is all over the documentation, and for good reason. The zfs send stream is not a guaranteed stable format; it is quite possible that upgrades will leave you unable to receive that file, essentially leaving it unreadable.

Also, any errors encountered during the receive will leave you with no data. ZFS receive is an all-or-nothing event: if so much as a single bit gets corrupted for any reason, your entire backup is toast.

If you really want to store a backup, create another ZFS filesystem somewhere and do a send/receive into it. Please don't try to dump zfs send to a file and store the results.

--
This message posted from opensolaris.org
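For example, a minimal sketch of that send/receive, with a hypothetical second pool named "backup" on a remote host (all names illustrative; -u, where the zfs version has it, keeps the received copies unmounted so they cannot shadow the live filesystems):

zfs snapshot -r rpool@0908
ssh remotehost zfs create backup/rpool
zfs send -R rpool@0908 | ssh remotehost zfs receive -Fdu backup/rpool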
On Sun, 23 Aug 2009 09:54:07 PDT
Ross <myxiplx at googlemail.com> wrote:

> If you really want to store a backup, create another ZFS filesystem
> somewhere and do a send/receive into it. Please don't try to dump
> zfs send to a file and store the results.

If this is true, then WHY does Sun advise creating a zfs send to a file somewhere? "ZFS Root Pool Recovery" from the ZFS Troubleshooting Guide clearly mentions the creation of a -file-:

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#ZFS_Root_Pool_Recovery

--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2009.06 rel
+ All that's really worth doing is what we do for others (Lewis Carroll)
God knows, I've just checked the docs online and they make no mention of it either:
http://docs.sun.com/app/docs/doc/819-5461/gbchx?a=view

This looks to me like a serious omission in the documentation. Saving a send stream to a file goes contrary to all the advice I've ever seen posted on these forums.

What I would say is that if you do follow this advice, find a way to save the files that allows you to checksum and test them as they are stored, with some kind of redundancy, so that any errors that occur in the remote file can be fixed without risking your backups.

--
This message posted from opensolaris.org
On Sun, 23 Aug 2009, Ross wrote:

> God knows, I've just checked the docs online and they make no mention
> of it either:
> http://docs.sun.com/app/docs/doc/819-5461/gbchx?a=view
>
> This looks to me like a serious omission in the documentation.
> Saving a send stream to a file goes contrary to all the advice I've
> ever seen posted on these forums.

There is absolutely nothing wrong with saving a send stream to a file. Using the files for long-term backups is another matter entirely.

> What I would say is that if you do follow this advice, find a way to
> save the files that allows you to checksum and test them as they are
> stored, with some kind of redundancy, so that any errors that occur
> in the remote file can be fixed without risking your backups.

Save the data to a file stored in zfs. Then you are covered. :-)

Bob

--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
> Save the data to a file stored in zfs. Then you are
> covered. :-)

Only if the stream was also separately covered in transit. While you want in-transit protection regardless, "zfs recv"-ing the stream into a pool validates that it was not damaged in transit, as well as giving you at-rest protection thereafter.

In the context of storage, end-to-end means writer-to-reader across a time gap. ZFS gives us this nicely, for regular files, locally. For streams, it's send-to-recv; introducing a time gap with intermediate storage is fine, but you need to recognise that a stored stream is still "in transit" and has not yet been validated. This recognition helps you decide things like whether it's yet safe to rely on it as a backup (e.g. destroy the original copy in a migration).

As an aside, there are other ways to validate integrity if you want to store a stream as a file: you could gpg-sign the stream and verify the signature once it is stored at the other end. That still leaves you with the version-skew exposures of stream storage. Those version-skew issues can be a hindrance regardless of time; even a piped stream between old and new systems can cause problems when trying to migrate and upgrade.

The problem here is not so much how the system works; for better or worse, once that's understood, people can design their solutions accordingly, perhaps with a bit of grumbling about reduced convenience. The immediate issue here is that various documentation, examples and guides provide mixed and even contradictory advice, which hinders or hides that understanding. This can lead to surprises and anger when a recommended solution relies on recovery from a saved stream that has not been validated.

Userland tools to read and verify a stream, without having to play it into a pool (seek and io overhead), could really help here.

--
This message posted from opensolaris.org
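A rough sketch of that gpg idea (the file names are illustrative; gpg wraps and compresses the stream, so it must be unwrapped again before it can be received):

zfs send -R rpool@0908 | gpg --sign > /backup/rpool.0908.gpg
gpg --verify /backup/rpool.0908.gpg                           # check the embedded signature
gpg --decrypt /backup/rpool.0908.gpg | zfs receive -Fd rpool  # unwrap and restore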
On Sun, 23 Aug 2009, Daniel Carosone wrote:

> Userland tools to read and verify a stream, without having to play
> it into a pool (seek and io overhead), could really help here.

This assumes that the problem is data corruption of the stream, which could occur anywhere, even on the originating host. The system where the data is stored may not support zfs-specific tools, so portable OS-independent tools are desirable.

A simple way to create a self-validating stream is to pipe the data through gzip (or lzop) on the originating host. Gzip (or lzop) can then be used to verify that the data received/read is correct. This offers the additional benefit of some compression as well. In particular, lzop is quite fast and the compressor may actually help send or write performance.

Bob

--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
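For instance, a sketch using the file names from the start of the thread:

zfs send -Rv rpool@0908 | gzip > /net/remote/rpool/snaps/rpool.0908.gz
gzip -t /net/remote/rpool/snaps/rpool.0908.gz                             # test the stored file's CRCs
gunzip -c /net/remote/rpool/snaps/rpool.0908.gz | zfs receive -Fd rpool   # restore

Note that "gzip -t" only proves the stored .gz file was not damaged or truncated after gzip wrote it; as raised in the next message, it cannot tell whether the send itself completed.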
> zfs send -Rv rpool@0908 > /net/remote/rpool/snaps/rpool.0908

The recommended thing is to "zfs send | zfs receive" ... or, more likely, "zfs send | ssh somehost 'zfs receive'".

You should ensure the source and destination OSes are precisely the same version, because then you're assured the zfs send & receive are compatible with each other. By directly piping the send to the receive (and it completing successfully), you're guaranteeing the checksum integrity of the filesystem in transit, and enabling both the restore of the whole filesystem and of individual files, if you would ever care about that.

If you "zfs send > somefile" and then you need to do a restore, you're only assured of being able to restore onto precisely the same version of the OS that originated the send file. And as other people mentioned, you're only able to restore the whole thing, and you're at risk if there's even a single bit of data corruption.

It should not be difficult to directly pipe a send to a receive; the only obstacle would be if the destination filesystem is not ZFS. Hopefully you can overcome that obstacle.

To answer the original question (how to do an incremental), please see this example:

## Create full snapshot and send it
zfs snapshot sourcefs@uniqueSnapName
zfs send sourcefs@uniqueSnapName > somefile-Full
(or: zfs send sourcefs@uniqueSnapName | ssh somehost 'zfs receive -F targetfs@uniqueSnapName')

## Create incremental snap and send it
zfs snapshot sourcefs@IncrementalSnap1
zfs send -i sourcefs@uniqueSnapName sourcefs@IncrementalSnap1 > somefile-incremental
(or: zfs send -i sourcefs@uniqueSnapName sourcefs@IncrementalSnap1 | ssh somehost 'zfs receive targetfs@IncrementalSnap1')

## Create yet another incremental and send it
zfs snapshot sourcefs@IncrementalSnap2
zfs send -i sourcefs@IncrementalSnap1 sourcefs@IncrementalSnap2 > somefile-incremental2
(or: zfs send -i sourcefs@IncrementalSnap1 sourcefs@IncrementalSnap2 | ssh somehost 'zfs receive targetfs@IncrementalSnap2')
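For completeness, restoring from those files would be the mirror image (a sketch using the same names; the incrementals must be received in order, on top of the full stream, and the target filesystem must not be modified in between):

zfs receive -F targetfs < somefile-Full
zfs receive targetfs < somefile-incremental
zfs receive targetfs < somefile-incremental2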
> On Sun, 23 Aug 2009, Daniel Carosone wrote:
>> Userland tools to read and verify a stream, without having to play
>> it into a pool (seek and io overhead), could really help here.
>
> This assumes that the problem is data corruption of the stream, which
> could occur anywhere, even on the originating host.

Yes, exactly. Consider, for example, a truncated stream because the send was interrupted.

> The system where the data is stored may not support zfs-specific tools,
> so portable OS-independent tools are desirable.

That's part of what I was implying by "userland", but could have emphasised. Open source should imply "portable, OS-independent" software that could be run on other systems.

> A simple way to create a self-validating stream is to pipe the data
> through gzip (or lzop) on the originating host.

How does this validate the truncated send example above?

--
This message posted from opensolaris.org
On Sun, 23 Aug 2009 22:05:15 -0400
Edward Ned Harvey <solaris at nedharvey.com> wrote:

>> zfs send -Rv rpool@0908 > /net/remote/rpool/snaps/rpool.0908
>
> The recommended thing is to "zfs send | zfs receive" [...]

[cut the rest of the reply]

I want to thank everyone for the insights shared on this matter. I learned a lot and will change the procedure to a send/recv. The receiving system is on the exact same level of ZFS, so that's fine.

I -DO- think, however, that the advice in the mentioned link should be rewritten to describe this procedure, or that it should at least be clearly mentioned as a way to go. (CINDY?)

--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2009.06 rel
+ All that's really worth doing is what we do for others (Lewis Carroll)
On Sun, 23 Aug 2009 22:05:15 -0400
Edward Ned Harvey <solaris at nedharvey.com> wrote:

>> zfs send -Rv rpool@0908 > /net/remote/rpool/snaps/rpool.0908
>
> The recommended thing is to "zfs send | zfs receive"

I have a zpool named backup for this purpose (mirrored).

Do I create a separate FS (backup/FS) in it, or can I use your example like:

zfs send rpool@0908 | zfs receive -Fd backup@0908

--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2009.06 rel
+ All that's really worth doing is what we do for others (Lewis Carroll)
dick hoogendijk <dick at nagual.nl> wrote:

> I have a zpool named backup for this purpose (mirrored).
>
> Do I create a separate FS (backup/FS) in it, or can I use your example
> like: zfs send rpool@0908 | zfs receive -Fd backup@0908

Unless this second pool is in a different physical location, this is not a backup. A real backup is able to survive a fire, theft or similar problems.

Jörg

--
EMail: joerg at schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
       js at cs.tu-berlin.de (uni)
       joerg.schilling at fokus.fraunhofer.de (work)
Blog: http://schily.blogspot.com/
URL: http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily
Joerg Schilling wrote:

> dick hoogendijk <dick at nagual.nl> wrote:
>> Do I create a separate FS (backup/FS) in it, or can I use your example
>> like: zfs send rpool@0908 | zfs receive -Fd backup@0908
>
> Unless this second pool is in a different physical location, this is
> not a backup.

That depends on what "backup" means in this particular environment and what the risk model is. If fire, theft or other things that normally require an offsite copy aren't part of this person's risk model, then it may well be a perfectly sufficient backup for them.

> A real backup is able to survive a fire, theft or similar problems.

Not all "secondary/offline copies" of data need to survive those risks.

This particular case could be, for example: if "backup" is a pool made from a disk (or set of disks) that are either physically removed or otherwise protected from fire and/or theft, then it is a backup by your definition. The drives may get detached when the receive is finished and put into a firesafe. Or the site may be sufficiently physically secure for the security threat model anyway.

You are making assumptions about the physical environment based on a command being run. By that logic, "star cf - | star xf -" isn't a backup either.

--
Darren J Moffat
On Aug 23, 2009, at 8:12 PM, Daniel Carosone wrote:

>> This assumes that the problem is data corruption of the stream, which
>> could occur anywhere, even on the originating host.
>
> yes, exactly. consider, for example, a truncated stream because the
> send was interrupted.

You can validate a stream stored as a file at any time using the "zfs receive -n" option. Personally, I prefer to use -n and -u, but -u is a relatively new option.

Therefore, the procedure we've used for decades still works:
1. make backup
2. verify backup
3. breathe easier
 -- richard
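A sketch of that check against the file from the start of the thread (per the discussion that follows, -n appears to read the stream to its end without writing anything; the target name is only notional here, and -v prints the dataset names the receive would create):

zfs receive -vn backup/rpool < /net/remote/rpool/snaps/rpool.0908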
On Aug 23, 2009, at 11:17 AM, dick hoogendijk wrote:

> On Sun, 23 Aug 2009 09:54:07 PDT
> Ross <myxiplx at googlemail.com> wrote:
>> If you really want to store a backup, create another ZFS filesystem
>> somewhere and do a send/receive into it. Please don't try to dump
>> zfs send to a file and store the results.
>
> If this is true, then WHY does Sun advise creating a zfs send to a
> file somewhere? "ZFS Root Pool Recovery" from the ZFS Troubleshooting
> Guide clearly mentions the creation of a -file-:
>
> http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#ZFS_Root_Pool_Recovery

Nit: solarisinternals.com is not Sun. solarisinternals.com is part of a community and has some contributors from Sun. Official Sun docs are hosted somewhere on sun.com.

The reason that zfs send/receive is not positioned as an enterprise backup solution is that it does not have many of the features of enterprise backup solutions, and people were getting confused. zfs send/receive replicates datasets. Many people would associate this sort of replication with block-level replicators, not with enterprise backup solutions. I think it is fair to say that there really wasn't anything quite like zfs send/receive before, so it is not surprising that there is some confusion surrounding it. IMHO, it is a useful part of a backup strategy, especially for high-volume, highly available systems.

http://richardelling.blogspot.com/2009/08/backups-for-file-systems-with-millions.html
 -- richard
On Mon, 24 Aug 2009 16:36:13 +0100
Darren J Moffat <darrenm at opensolaris.org> wrote:

> This particular case could be, for example: if "backup" is a pool
> made from a disk (or set of disks) that are either physically removed
> or otherwise protected from fire and/or theft, then it is a backup by
> your definition. The drives may get detached when the receive is
> finished and put into a firesafe.

Thank you for the analysis. That is exactly the case: the drives are stored somewhere else after the backup has been made.

--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2009.06 rel
+ All that's really worth doing is what we do for others (Lewis Carroll)
On Sun, 23 Aug 2009 22:05:15 -0400
Edward Ned Harvey <solaris at nedharvey.com> wrote:

> ## Create full snapshot and send it
> zfs send sourcefs@uniqueSnapName | ssh somehost 'zfs receive -F
> targetfs@uniqueSnapName'

This is what I want to do. However, I want a recursive backup of the root pool. From the Solaris docs I understand I have to use this line:

# zfs send -Rv rpool@0908 | zfs receive -Fd backup/server/rpool@0908

I'm not quite sure about the -Fd options of "receive". Is this correct?

--
Dick Hoogendijk -- PGP/GnuPG key: 01D2433D
+ http://nagual.nl/ | SunOS 10u7 5/09 | OpenSolaris 2009.06 rel
+ All that's really worth doing is what we do for others (Lewis Carroll)
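For what it's worth, with -d the target is normally given as a filesystem rather than a snapshot, because -d derives the dataset and snapshot names from the stream, while -F allows the target to be rolled back or overwritten. So a sketch of the corrected line, assuming the backup/server dataset already exists, would be:

# zfs send -Rv rpool@0908 | zfs receive -Fd backup/server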
> You can validate a stream stored as a file at any
> time using the "zfs receive -n" option.

Interesting. Maybe it's just a documentation issue, but the man page doesn't make it clear that this command verifies much more than the names in the stream, and suggests that the rest of the data could just be skipped over. If indeed this command does thoroughly process and validate the stream, without actually writing anything to disk, that would be very useful and should be advertised clearly.

> Personally, I prefer to use -n and -u,
> but -u is a relatively new option.

I don't get how they combine, from the descriptions. It seems to me that with -n there's no filesystem being created for -u to then not mount. Again, maybe this is the result of misleading descriptions.

> Therefore, the procedure we've used for decades still works:
> 1. make backup
> 2. verify backup
> 3. breathe easier

That's what I want, of course. The best/only way I have found is to store the backup recv'd in a pool. This gives me:

* validation of correct transfer, which I don't get any other way that I've found so far.
* version upgrade compatibility guarantees. The zfs on-disk format is the only one for which this is presently true, and which preserves properties, metadata, etc. I actually like this: one well-tested historical compatibility path is possibly better than maintaining multiple formats, each with compatibility quirks.
* redundancy, compression, and other zfs goodness for backup media.
* the ability to manage backup cycles and space to the size of the destination, thus detecting problems before the time-consuming part when writing out media.
* the ability to browse and explore content, or restore individual files if needed, though this is of less immediate concern (that's what snapshots are for, at least in the common case).

However, I do get the attraction of storing backups as files. I just use a different file format:

I have taken to making backup pools out of files the size of whatever removable media I plan on storing the backup on. When the backup pool is ready, I can export it and gpg the files as they're written out, as an archive copy of the backup pool. Then I reimport the pool and keep sending backups to it. This is for home, and this scheme lets me separate the "making a second copy" from the "making an offsite archive" parts of the cycle, to suit my available time.

*Then* I breathe easier. :-)

I got burnt (thankfully only in testing) by a previous attempt to use mirrors and resilvering with such files. They're ~useless once detached. The downside is the need to completely re-write the offsite copies (no smart resilver, but irrelevant for dvd or tapes), and the need to read all files back in before restoring. I only plan on needing that for a full post-disaster rebuild, so no biggie there.

--
This message posted from opensolaris.org
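A sketch of that file-backed pool scheme (paths, sizes and the pool name are illustrative; mkfile is the usual Solaris way to create the backing files):

mkfile 4g /backup/img0 /backup/img1              # one file per planned piece of removable media
zpool create bkpool /backup/img0 /backup/img1
zfs send -R rpool@0908 | zfs receive -Fd bkpool  # send backups into the file-backed pool
zpool export bkpool                              # now gpg and write img0/img1 out to the media
zpool import -d /backup bkpool                   # reimport and keep sending to it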
On Aug 24, 2009, at 5:22 PM, Daniel Carosone wrote:

>> You can validate a stream stored as a file at any
>> time using the "zfs receive -n" option.
>
> Interesting. Maybe it's just a documentation issue, but the man page
> doesn't make it clear that this command verifies much more than the
> names in the stream, and suggests that the rest of the data could
> just be skipped over.

If you think about it, the stream may be concatenated, so the end is when the stream ends. You have to read the stream to find the end.

> If indeed this command does thoroughly process and validate the
> stream, without actually writing anything to disk, that would be
> very useful and should be advertised clearly.

In my testing, yes, this is what happens. I am not sure if it is designed to be used as such. Someone from the zfs team should be able to answer that question.

>> Personally, I prefer to use -n and -u,
>> but -u is a relatively new option.
>
> I don't get how they combine, from the descriptions. It seems to me
> that with -n there's no filesystem being created for -u to then not
> mount. Again, maybe this is the result of misleading descriptions.

Without -u, you need a receiving filesystem, even if you don't actually receive anything.

> That's what I want, of course. The best/only way I have found is to
> store the backup recv'd in a pool. This gives me:
> * validation of correct transfer, which I don't get any other way
> that I've found so far.
> * version upgrade compatibility guarantees. [...]

Since it is open source, the worst case is that you have to dig up an antique to receive to, and then transfer again.

> * redundancy, compression, and other zfs goodness for backup media.

This is a different topic, but yes, I agree.

> * the ability to manage backup cycles and space to the size of the
> destination, thus detecting problems before the time-consuming part
> when writing out media.

Can't do this with a stream (pipe). This is a feature for an enterprise backup system and, repeat after me, zfs send/receive is not a substitute for an enterprise backup solution.

> * the ability to browse and explore content, or restore individual
> files if needed, though this is of less immediate concern (that's
> what snapshots are for, at least in the common case).

send streams are dataset objects, not files. If you want file-level backups and restorations, then use an enterprise backup solution. FWIW, amanda is open source and what I consider an enterprise backup solution.
 -- richard
On Mon, 24 Aug 2009, Daniel Carosone wrote:

> I got burnt (thankfully only in testing) by a previous attempt to
> use mirrors and resilvering with such files. They're ~useless once
> detached. The downside is the need to completely re-write the
> [...]

How about if you don't 'detach' them? Just unplug the backup device in the pair, plug in the temporary replacement, and tell zfs to replace the device. Of course this requires an easily removable device.

Bob

--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
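In other words, something along these lines (a sketch; the pool and device names are hypothetical):

# physically unplug the off-site half of the mirror, plug in its replacement, then:
zpool replace backup c1t1d0 c1t2d0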
> How about if you don't 'detach' them? Just unplug
> the backup device in the pair, plug in the
> temporary replacement, and tell zfs to
> replace the device.

Hm. I had tried a variant: a three-way mirror, with one device missing most of the time. The annoyance of that was that the pool complained most of the time that one device was missing.

> Of course this requires an easily removable device.

It also requires some other things, which I don't miss as constraints:
- enough easily removable devices, and (more scarce) connection ports, for them all to be connected at once. I don't know what happens if you progressively resilver one mirror vdev after another; you'll have a "backup" that's skewed over time.
- a pool made of mirror vdevs, not any other type.
- removable devices that are disks (not tapes, or blobs in a storage service, or ...)

--
This message posted from opensolaris.org
On Tue, 25 Aug 2009 21:42 UTC, Cindy.Swearingen at Sun.COM wrote:
Hi Dick,

I'm testing root pool recovery from remotely stored snapshots rather than from files. I can send the snapshots to a remote pool easily enough. The problem I'm having is getting the snapshots back while the local system is booted from the miniroot, to simulate a root pool recovery. I don't know how to configure ssh in this scenario.

It might be easier to export the remote pool, move the disks, and import it on the local system to access the snapshots, as described already.

Two snapshot experts are offsite this week, so I will get back to this thread later or file a CR if it is too difficult.

Thanks,

Cindy

On 08/24/09 13:13, dick hoogendijk wrote:

> This is what I want to do. However, I want a recursive backup of the
> root pool. From the Solaris docs I understand I have to use this line:
>
> # zfs send -Rv rpool@0908 | zfs receive -Fd backup/server/rpool@0908
>
> I'm not quite sure about the -Fd options of "receive".
> Is this correct?
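The export/move/import alternative she mentions would look roughly like this (a sketch; the pool and snapshot names follow the thread's examples and are otherwise hypothetical):

# on the remote system
zpool export backup
# physically move the disks to the local system (booted from the miniroot), then:
zpool import backup
zfs send -R backup/server@0908 | zfs receive -Fd rpool   # play the snapshots back into the recreated root pool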