What is the best way to back up a ZFS pool for recovery? Recovering the entire pool, or individual files from a pool... Would you use snapshots and clones?

I would like to move the "backup" to a different disk and not use tapes.

Suggestions? TIA

--Kenny
On Fri, January 15, 2010 13:47, Kenny wrote:
> What is the best way to back up a zfs pool for recovery? Recover entire
> pool or files from a pool... Would you use snapshots and clones?
>
> I would like to move the "backup" to a different disk and not use tapes.
>
> suggestions??

What I'm trying to do is:

1) Make regular snapshots on the live filesystems. So long as nothing goes wrong, people can recover individual files from those easily.

2) Back up the live filesystems to one or more backup pools, with all snapshots. This can be restored to the live filesystem if there's a total disaster, or mounted and individual files retrieved if necessary.

This does take up more space in the live filesystem; if one eliminated all the old snapshots there, it would be smaller. Since the big things in this environment tend to stick around once they appear, I don't mind this too much.

To accomplish 2, I'm trying to use zfs send/receive. I'm not going to archive the stream, just use it to create / update the backup filesystem. So far, I'm running into frequent problems: I can't get incrementals to work, and the last time I made a full backup, I couldn't export the pool afterwards.

I had a previous system using rsync working fine, but that didn't handle ZFS ACLs properly, and when I went from Samba to CIFS, that became an issue.

-- 
David Dyer-Bennet, dd-b at dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info
I have a simple rolling ZFS replication script: http://dpaste.com/145790/

-- 
bda
cyberpunk is dead. long live cyberpunk.
> What is the best way to back up a zfs pool for recovery? Recover
> entire pool or files from a pool... Would you use snapshots and
> clones?
>
> I would like to move the "backup" to a different disk and not use
> tapes.

Personally, I use "zfs send | zfs receive" to an external disk. Initially a full image, and later incrementals. This way, you've got the history of what previous snapshots you've received on the external disk, it's instantly available if you connect it to a new computer, and you can restore either the whole FS, or a single file if you want.
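The full-then-incremental workflow described here might look like the following sketch, assuming a source dataset `tank/data` and a backup pool `backup` on the external disk (all dataset and snapshot names are illustrative):

```shell
# Initial full replication to the external disk.
zfs snapshot tank/data@2010-01-15
zfs send tank/data@2010-01-15 | zfs receive backup/data

# Later: snapshot again and send only the changes since the last snapshot.
zfs snapshot tank/data@2010-01-16
zfs send -i tank/data@2010-01-15 tank/data@2010-01-16 | zfs receive backup/data
```

A single file can then be copied back out of the mounted backup filesystem, or the whole stream sent back the other way for a full restore.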
On Sat, 2010-01-16 at 07:24 -0500, Edward Ned Harvey wrote:
> Personally, I use "zfs send | zfs receive" to an external disk. Initially a
> full image, and later incrementals.

Do these incrementals go into the same filesystem that received the original zfs stream?
>>> Personally, I use "zfs send | zfs receive" to an external disk.
>>> Initially a full image, and later incrementals.
>>
>> Do these incrementals go into the same filesystem that received the
>> original zfs stream?

Yes. In fact, I think that's the only way possible. The end result is: on my external disk, I have a ZFS filesystem, with snapshots. Each snapshot corresponds to each incremental send|receive.

Personally, I like to start with a fresh "full" image once a month, and then do daily incrementals for the rest of the month.

There is one drawback: if I have a >500 GB filesystem to back up, and I have 1 TB target media, then once per month I have to "zpool destroy" the target media before I can write a new full backup onto it. This leaves a gap where the old backup has been destroyed and the new image has yet to be written.

To solve this problem, I have more than one external disk, and occasionally rotate them. So there's still another offline backup available, if something were to happen to my system during the moment when the backup was being destroyed once per month.
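The monthly cycle described here could be sketched like this (the pool, dataset, and device names are made up, and a real script would want error checking before the destroy):

```shell
# Once a month: recreate the external pool and write a fresh full image.
# NOTE: between the destroy and the completed receive there is no backup,
# which is the gap described above.
zpool destroy backup
zpool create backup c2t0d0           # the external disk (illustrative device)
zfs snapshot tank/data@full-2010-01
zfs send tank/data@full-2010-01 | zfs receive backup/data
```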
On Jan 17, 2010, at 2:38 AM, Edward Ned Harvey wrote:
>>>> Personally, I use "zfs send | zfs receive" to an external disk.
>>>> Initially a full image, and later incrementals.
>>>
>>> Do these incrementals go into the same filesystem that received the
>>> original zfs stream?
>
> Yes. In fact, I think that's the only way possible. The end result is: on my external disk, I have a ZFS filesystem, with snapshots. Each snapshot corresponds to each incremental send|receive.
>
> Personally, I like to start with a fresh "full" image once a month, and then do daily incrementals for the rest of the month.

This doesn't buy you anything. ZFS isn't like traditional backups.

> There is one drawback: If I have >500G filesystem to backup, and I have 1Tb target media ... Once per month, I have to "zpool destroy" the target media before I can write a new full backup onto it. This leaves a gap where the backup has been destroyed and the new image has yet to be written.

Just make a rolling snapshot. You can have different policies for destroying snapshots on the primary and each backup tier.
 -- richard

> To solve this problem, I have more than one external disk, and occasionally rotate them. So there's still another offline backup available, if something were to happen to my system during the moment when the backup was being destroyed once per month.

_______________________________________________
zfs-discuss mailing list
zfs-discuss at opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
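Rolling snapshots with per-tier retention policies might be sketched as below. This is not the poster's actual script; the dataset names and retention counts are illustrative, it assumes no child datasets under the ones shown, and the pruning pipeline assumes a GNU userland (`head -n -N`, `xargs -r`):

```shell
#!/bin/sh
# Daily rolling replication: incremental send, then prune old snapshots
# with different retention on the primary and the backup tier.
SRC=tank/data
DST=backup/data
KEEP_SRC=7      # snapshots retained on the primary
KEEP_DST=90     # snapshots retained on the backup tier

TODAY=$(date +%Y-%m-%d)
# Newest existing snapshot on the source is the incremental base.
PREV=$(zfs list -H -t snapshot -o name -s creation -r "$SRC" \
       | tail -1 | cut -d@ -f2)

zfs snapshot "$SRC@$TODAY"
zfs send -i "@$PREV" "$SRC@$TODAY" | zfs receive "$DST"

# Destroy everything older than the newest KEEP_* snapshots on each tier.
zfs list -H -t snapshot -o name -s creation -r "$SRC" \
  | head -n -"$KEEP_SRC" | xargs -r -n1 zfs destroy
zfs list -H -t snapshot -o name -s creation -r "$DST" \
  | head -n -"$KEEP_DST" | xargs -r -n1 zfs destroy
```

Because the backup is never destroyed and recreated, there is no monthly window without a backup.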
On 17 Jan 2010, at 11:38, Edward Ned Harvey wrote:
>>>> Personally, I use "zfs send | zfs receive" to an external disk.
>>>> Initially a full image, and later incrementals.
>>>
>>> Do these incrementals go into the same filesystem that received the
>>> original zfs stream?
>
> Yes. In fact, I think that's the only way possible.
>
> Personally, I like to start with a fresh "full" image once a month,
> and then do daily incrementals for the rest of the month.
>
> There is one drawback: If I have >500G filesystem to backup, and I
> have 1Tb target media ... Once per month, I have to "zpool destroy"
> the target media before I can write a new full backup onto it. This
> leaves a gap where the backup has been destroyed and the new image
> has yet to be written.
>
> To solve this problem, I have more than one external disk, and
> occasionally rotate them.

ZFS can check the pool and make sure that there is no error. Running 'zpool scrub' on the two pools from time to time - let's say every month - should give you a similar level of protection without the need for a full backup.

Even when backing up with rsync plus zfs snapshots, a full copy every month may not be required. An rsync run with the --checksum option every month may be good enough. It forces a read of the full data on both sides, but at least it avoids the network transfer if the pools are on different hosts, and it avoids increasing the space used by the snapshots.
Gaëtan

-- 
Gaëtan Lehmann
Biologie du Développement et de la Reproduction
INRA de Jouy-en-Josas (France)
tel: +33 1 34 65 29 66    fax: 01 34 65 29 09
http://voxel.jouy.inra.fr  http://www.itk.org
http://www.mandriva.org  http://www.bepo.fr
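Both suggestions in the message above, written out as commands (pool names, paths, and the remote host are illustrative):

```shell
# Monthly integrity check on both pools; no full re-send needed.
# A scrub reads and verifies every block against its checksum.
zpool scrub tank
zpool scrub backup
zpool status -x      # reports only pools with problems

# rsync alternative: once a month, force a checksum comparison of all
# data instead of relying on size/mtime to decide what to transfer.
rsync -a --checksum /tank/data/ remotehost:/backup/data/
```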
On Sun, Jan 17, 2010 at 08:05:27AM -0800, Richard Elling wrote:
>> Personally, I like to start with a fresh "full" image once a month,
>> and then do daily incrementals for the rest of the month.
>
> This doesn't buy you anything.

... as long as you scrub both the original pool and the backup pool with the same regularity. Sending a full backup from the source is basically the same as a scrub of the source.

If a scrub ever finds an error on your backup pool, you will need to re-send the snapshots as a full stream from scratch (or at least from a snapshot from before where the bad blocks are referenced). You can't just copy the damaged file into the top filesystem on the backup media, because if you write to that filesystem you will no longer be able to recv new incremental snapshots into it (without rolling back with zfs recv -F).

>> To solve this problem, I have more than one external disk, and
>> occasionally rotate them.

That's a good idea regardless, with one on-site to be used regularly, and one off-site in case of theft/fire/etc. If you rotate, say, once a month, and can keep at least a month-and-a-day's worth of snapshots on the primary pool, then you can fully catch up the month-old disk after a changeover.

> ZFS isn't like normal backups

Hooray!

-- 
Dan.
On Mon, 18 Jan 2010, Daniel Carosone wrote:
> ... as long as you scrub both the original pool and the backup pool
> with the same regularity. Sending a full backup from the source is
> basically the same as a scrub of the source.

This is not quite true. The send only reads/verifies as much as it needs to send the data. It won't read a redundant copy if it does not have to. It won't traverse metadata that it does not have to. A scrub reads/verifies all data and metadata.

Bob
-- 
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
On Sun, Jan 17, 2010 at 04:38:03PM -0600, Bob Friesenhahn wrote:
> On Mon, 18 Jan 2010, Daniel Carosone wrote:
>> ... as long as you scrub both the original pool and the backup pool
>> with the same regularity. Sending a full backup from the source is
>> basically the same as a scrub of the source.
>
> This is not quite true. The send only reads/verifies as much as it
> needs to send the data. It won't read a redundant copy if it does not
> have to. It won't traverse metadata that it does not have to. A scrub
> reads/verifies all data and metadata.

Sure, but I was comparing to not doing scrubs at all, since the more dangerous interpretation is that always-incremental sends are fully equivalent to the OP's method. I was pointing out the lack of a scrub-like side effect in that method. I shouldn't have glossed over the differences with "basically".

If one was not doing scrubs, and switched from sending full streams monthly to continuous replication streams, old data might go unread and unreadable over time. We all agree scrubs and incrementals are the way to go, but don't do either alone.

-- 
Dan.
>> Personally, I like to start with a fresh "full" image once a month,
>> and then do daily incrementals for the rest of the month.
>
> This doesn't buy you anything. ZFS isn't like traditional backups.

If you never send another full, then eventually the delta from the original to the present will become large. Not a problem, you're correct, as long as your destination media is sufficiently large.

Unless I am mistaken, I believe the following is not possible:

  On the source, create snapshot "1"
  Send snapshot "1" to destination
  On the source, create snapshot "2"
  Send incremental, from "1" to "2", to the destination
  On the source, destroy snapshot "1"
  On the destination, destroy snapshot "1"

I think, since snapshot "2" was derived from "1", you can't destroy "1" unless you've already destroyed "2".

Am I wrong?
On 18 Jan 2010, at 09:24, Edward Ned Harvey wrote:
>>> Personally, I like to start with a fresh "full" image once a month,
>>> and then do daily incrementals for the rest of the month.
>>
>> This doesn't buy you anything. ZFS isn't like traditional backups.
>
> If you never send another full, then eventually the delta from the
> original to the present will become large. Not a problem, you're
> correct, as long as your destination media is sufficiently large.
>
> Unless I am mistaken, I believe the following is not possible:
>
>   On the source, create snapshot "1"
>   Send snapshot "1" to destination
>   On the source, create snapshot "2"
>   Send incremental, from "1" to "2", to the destination
>   On the source, destroy snapshot "1"
>   On the destination, destroy snapshot "1"
>
> I think, since snapshot "2" was derived from "1", you can't destroy "1"
> unless you've already destroyed "2"

This is definitely possible with zfs. Just try!

Gaëtan

-- 
Gaëtan Lehmann
Biologie du Développement et de la Reproduction
INRA de Jouy-en-Josas (France)
tel: +33 1 34 65 29 66    fax: 01 34 65 29 09
http://voxel.jouy.inra.fr  http://www.itk.org
http://www.mandriva.org  http://www.bepo.fr
Edward Ned Harvey wrote:
> Unless I am mistaken, I believe the following is not possible:
>
>   On the source, create snapshot "1"
>   Send snapshot "1" to destination
>   On the source, create snapshot "2"
>   Send incremental, from "1" to "2", to the destination
>   On the source, destroy snapshot "1"
>   On the destination, destroy snapshot "1"
>
> I think, since snapshot "2" was derived from "1", you can't destroy "1"
> unless you've already destroyed "2"
>
> Am I wrong?

Yes - what you describe is exactly how I maintain my remote backups!

-- 
Ian.
On Mon, Jan 18, 2010 at 03:24:19AM -0500, Edward Ned Harvey wrote:
> Unless I am mistaken, I believe the following is not possible:
>
>   On the source, create snapshot "1"
>   Send snapshot "1" to destination
>   On the source, create snapshot "2"
>   Send incremental, from "1" to "2", to the destination
>   On the source, destroy snapshot "1"
>   On the destination, destroy snapshot "1"
>
> I think, since snapshot "2" was derived from "1", you can't destroy "1"
> unless you've already destroyed "2"
>
> Am I wrong?

As noted already, yes you are. Indeed, if you specify zfs recv -F, you only need to destroy @1 at the source. When you later send -R, snapshots destroyed at the source will also be destroyed at the receiver. That's not always what you want, so be careful, but if it is what you want it's useful.

-- 
Dan.
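The sequence Edward asked about, written out as commands (dataset names illustrative), does work as the replies describe:

```shell
zfs snapshot tank/data@1
zfs send tank/data@1 | zfs receive backup/data

zfs snapshot tank/data@2
zfs send -i tank/data@1 tank/data@2 | zfs receive backup/data

# Once @2 exists on both sides, @1 is no longer needed as a base:
zfs destroy tank/data@1
zfs destroy backup/data@1
```

And as Dan notes, with replication streams (`zfs send -R ... | zfs receive -F`) a snapshot destroyed only on the source is also destroyed on the receiver at the next send, so the explicit destroy on the destination can be skipped if that behavior is what you want.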