I have an OpenSolaris snv_105 server at home that holds my photos, docs, music, etc., in a ZFS pool. I back up my laptops with rsync to the OpenSolaris server, so all of my important data is in one place, on the OpenSolaris server. I want to back up this data. I want to protect against losing my data, and I would also like to recover previous versions of files when I make mistakes.

* I do not have a tape drive, nor do I want to purchase one.
* I would like to back up to Amazon's S3 service. I have an account.

I did some searching and it seems that instead of creating a bunch of tar files (what I do now), I should create regular ZFS snapshots and back those up with Amanda. Does that sound like a viable backup solution? Then, I was thinking about storing those backups on S3.

* Has anyone tried this?
* Are there any problems I may run into?
* Are there better ways to back up my ZFS pool without purchasing expensive software?

Thanks in advance.
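The snapshot half of this plan is easy to sketch. A hedged example follows; the pool name `tank` and the date-based naming scheme are hypothetical stand-ins, not anything from this thread:

```shell
# Hypothetical sketch of a daily recursive snapshot on the fileserver.
# 'tank' and the naming convention are assumptions for illustration.
zfs snapshot -r tank@backup-$(date +%Y-%m-%d)

# Verify the snapshots exist before handing them to a backup tool:
zfs list -t snapshot -r tank
```

Whatever tool then reads the data (Amanda, tar, rsync), pointing it at the snapshot rather than the live filesystem gives it a frozen, consistent view.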
On Feb 17, 2009, at 17:56, Joe S wrote:

> Does that sound like a viable backup solution?

It has been explicitly stated numerous times that the output of 'zfs send' comes with no guarantees and is undocumented. From zfs(1M):

> The format of the [zfs send] stream is evolving. No backwards
> compatibility is guaranteed. You may not be able to receive your
> streams on future versions of ZFS.

http://docs.sun.com/app/docs/doc/819-2240/zfs-1m

If you want to do backups of your file system, use a documented utility (tar, cpio, pax, zip, etc.).

> Then, I was thinking about storing those backups on S3.
>
> * Has anyone tried this?
> * Are there any problems I may run into?
> * Are there better ways backup my zfs pool without purchasing
> expensive software?

The best way would probably be to purchase an external drive and send updates to it, then export the drive and take it offsite. Rotate between two drives. Personally I recommend using FireWire whenever possible.

There are various utilities for using S3 as a backup service, but I'm not sure this is the proper forum: http://www.google.com/search?q=amazon+s3+backups
On Tue, Feb 17, 2009 at 3:35 PM, David Magda <dmagda at ee.ryerson.ca> wrote:

> On Feb 17, 2009, at 17:56, Joe S wrote:
>
>> Does that sound like a viable backup solution?
>
> It has been explicitly stated numerous times that the output of 'zfs send'
> has no guarantees and it is undocumented. From zfs(1M):
>
>> The format of the [zfs send] stream is evolving. No backwards
>> compatibility is guaranteed. You may not be able to receive your streams on
>> future versions of ZFS.
>
> http://docs.sun.com/app/docs/doc/819-2240/zfs-1m

Yes, this would be a huge problem. Thanks for the reference.

> If you want to do back ups of your file system use a documented utility
> (tar, cpio, pax, zip, etc.).

I'm going to try to use Amanda and back up my data (not snapshots).
On February 17, 2009 6:35:12 PM -0500 David Magda <dmagda at ee.ryerson.ca> wrote:

> On Feb 17, 2009, at 17:56, Joe S wrote:
>
>> Does that sound like a viable backup solution?
>
> It has been explicitly stated numerous times that the output of 'zfs
> send' has no guarantees and it is undocumented. From zfs(1M):
>
>> The format of the [zfs send] stream is evolving. No backwards
>> compatibility is guaranteed. You may not be able to receive your
>> streams on future versions of ZFS.
>
> http://docs.sun.com/app/docs/doc/819-2240/zfs-1m
>
> If you want to do back ups of your file system use a documented utility
> (tar, cpio, pax, zip, etc.).

I see lots of recent discussion on this, and I think folks tend to focus on the wrong part of the problem, which is understandable since the documentation is only concerned with what is really insignificant for practical purposes. If you go to a future version of ZFS, simply replace all your "full" filesystem streams with new ones, and then of course start new incrementals. Any reasonable backup procedure probably involves starting new full backups at regular intervals anyway, and more frequently than you update your filesystem ...

The real problem here is the lack of checksums and the inability to restore the good parts of a stream if there is just one bad part. Storing the stream on ZFS would help, but if you can do that you can of course just receive the stream into a replica filesystem.

There was a lengthy couple of posts within the past couple of weeks on the merits of tar, cpio, etc. I suggest (to Joe) reading it.

> The best way would probably to purchase an external drive and send updates
> to it, then export the drive, and take it offsite.

That's my thought as well. My own take on it is that your backup should be at least as resistant to bit rot (etc.) as the original filesystem, and when your starting point is ZFS that's a tall order.
I guess if zfs itself is not an option for storing the backups, then someone could add an error-correcting checksum code to the stream (out of band).

-frank
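The out-of-band idea can be sketched with stock tools. In the demo below a plain file stands in for a 'zfs send' stream, and all paths are invented; `sha256sum` is GNU coreutils (Solaris would use digest(1)). This gives detection per chunk only; real error *correction* would need something like par2, which is an assumption beyond what was said here.

```shell
#!/bin/sh
# Sketch: chunk a stream and keep a per-chunk checksum sidecar, so a
# single bad block invalidates one chunk instead of the whole stream.
set -e
mkdir -p /tmp/bkdemo && cd /tmp/bkdemo
dd if=/dev/zero of=stream.bin bs=1024 count=100 2>/dev/null   # stand-in for the send stream
split -b 10240 stream.bin chunk.                              # ten 10 KiB chunks: chunk.aa .. chunk.aj
sha256sum chunk.* > chunks.sha256                             # out-of-band sidecar file
sha256sum -c --quiet chunks.sha256 && echo "all chunks OK"
```

On restore you would verify the sidecar first, re-fetch only the chunks that fail, and then `cat chunk.* | zfs receive ...`.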
On February 17, 2009 3:57:34 PM -0800 Joe S <js.lists at gmail.com> wrote:

> On Tue, Feb 17, 2009 at 3:35 PM, David Magda <dmagda at ee.ryerson.ca> wrote:
>> If you want to do back ups of your file system use a documented utility
>> (tar, cpio, pax, zip, etc.).
>
> I'm going to try to use Amanda and backup my data (not snapshots).

You missed the point, which is not to avoid snapshots, but to avoid saving the stream as a backup. Backing up a snapshot is typically preferred to backing up a "live" filesystem.

-frank
Once again, I find I have to correct myself:

> If you go to a future version of zfs, simply replace all your "full"
> filesystem streams with new ones, and then of course start new
> incrementals. Any reasonable backup procedure probably involves starting
> new full backups at regular intervals anyway, and more frequently than
> you update your filesystem ...

Of course many (most?) backup schemes will want archival backups, where new full backups do not REPLACE older full backups. So if you have older full backups and the stream format changes, you'd need an older system to restore to. If you have a lot of data or a lot of archives, this could range from a nuisance to infeasible.

There doesn't seem to be much reason to save the stream as opposed to saving a pax/tar/xxx archive anyway. With a replication-type stream (zfs send -R), sure, you get the filesystem properties, but those are easily stashed away separately or even as a text file within the backed-up filesystem itself. And at least if you use the Solaris tar you get all the ACLs, etc.

I do wonder if backing up a mirror/raidz filesystem to a non-replicated disk, to be taken offsite, is a good practice. If we agree that the lack of a checksum on the zfs stream itself is the major problem, then it would seem that the lack of a parity or mirror disk in a backup filesystem is also problematic. I guess ditto blocks solve that problem?

-frank
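Stashing the properties separately, as suggested above, might look like the following sketch; the dataset name and paths are invented for illustration:

```shell
# Sketch: record locally-set properties as plain text next to a tar
# archive, instead of relying on a 'zfs send -R' stream to carry them.
# 'tank/data' and the /backup paths are hypothetical.
zfs get -H -o name,property,value -s local all tank/data > /backup/tank-data.props
tar cf /backup/tank-data.tar -C /tank/data .
```

A restore would then recreate the dataset, `zfs set` each recorded property, and extract the tar.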
On Tue, February 17, 2009 16:56, Joe S wrote:

> I have an OpenSolaris snv_105 server at home that holds my photos,
> docs, music, etc, in a zfs pool. I backup my laptops with rsync to the
> OpenSolaris server. All of my important data is in one place, on the
> OpenSolaris server. I want to backup this data. I want to protect
> against losing my data, and I would also like to recover previous
> versions of files when I make mistakes.
>
> * I do not have a tape drive, nor do I want to purchase one.

Tape used to be great :-(. These days it's too expensive for home users, I agree.

> * I would like to backup to Amazon's S3 service. I have an account.

Hmm, that's the big difference from my approach. I haven't looked at S3, but I know that it breaks the basis for my current scheme. The obvious benefit is that you get your backups offsite. The obvious problem is that your outbound bandwidth from home is probably low (mine is under 1Mb). Also, $0.15 per GB per month is not tolerable for backup storage for me. For my (roughly) 400GB, that's a cost of $60/month, plus upload and download charges. I can buy a third external drive and rotate one through a desk drawer at work or something for off-site for what the first month would cost including the upload charge, I believe (currently I'm burning optical disks of new photos and storing one copy of those off-site).

> I did some searching and it seems that instead of creating a bunch of
> tar files (what I do now), I should create regular zfs snapshots and
> back those up with Amanda. Does that sound like a viable backup
> solution? Then, I was thinking about storing those backups on S3.
>
> * Has anyone tried this?
> * Are there any problems I may run into?
> * Are there better ways backup my zfs pool without purchasing
> expensive software?

My previous, highly satisfactory, scheme: back up the fileserver via rsync to USB external drives. I had a perl script to do that, and two drives.
This was easy because my external drives are bigger than my pool; if you have a big pool this won't work in exactly this form for you.

Advantages include: I can mount the backups on the server (or any Solaris system, including the livecd) and access the individual files easily.

Disadvantages include: I don't believe rsync preserves extended attributes or ACLs. I was previously using Samba, which doesn't use ACLs, so I didn't care, but after a hardware problem I've ended up upgrading Solaris and I'm running CIFS now, and now I do care about ACLs, so I'm changing my backup scheme just a little.

My revised scheme is to use zfs send/receive to get incremental backups to the external drives. All the same advantages, and it cancels the disadvantage. My experience over the years with backup products has left me very suspicious of creating huge backup files allegedly containing my valuable data; and I extend that to zip and tar, not just the proprietary formats that Acronis or NTI use. It's one more thing to go wrong. If drives don't get big fast enough (and my pool growth has leveled off at this point), I can go to using external eSATA racks or something for backup volumes.

Generally speaking, making a snapshot in zfs and then backing that up, by whatever method, is a good way to go; the snapshot is atomic (including across sub-filesystems), and it gives you a clear-cut thing for the backup to be equivalent to. Also, keeping snapshots around locally lets you recover accidentally deleted files without reference to your backup media, which can be convenient.

--
David Dyer-Bennet, dd-b at dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info
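The revised send/receive scheme above could be sketched as follows; the pool names (`tank` for the fileserver, `backup` for a pool created on the external drive) and snapshot names are assumptions for illustration:

```shell
# Hedged sketch: replicate the delta between yesterday's and today's
# recursive snapshots into a pool on the external drive. All names
# are invented; -R carries descendant filesystems and properties,
# -F rolls the target back to the last common snapshot, -d maps the
# sent names under the 'backup' pool.
zfs snapshot -r tank@2009-02-18
zfs send -R -i tank@2009-02-17 tank@2009-02-18 | zfs receive -Fd backup
```

Because the stream is consumed immediately by `zfs receive` rather than stored, the stream-format compatibility warnings elsewhere in this thread don't apply: what sits on the external drive is an ordinary pool.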
David Dyer-Bennet
2009-Feb-18 16:11 UTC
[zfs-discuss] Firewire (was Re: Backing up ZFS snapshots)
On Tue, February 17, 2009 17:35, David Magda wrote:

> Personally I recommend using FireWire whenever
> possible.

I haven't tested performance on Solaris. But I bought into the Firewire hype, and put firewire cards into my two Linux servers for access to my external hard drives, and also used it on my Windows desktop and laptop boxes. That's at least three different external enclosure models and three different controllers.

None of them seemed to be noticeably faster than USB. So I've stopped buying firewire.

--
David Dyer-Bennet, dd-b at dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info
On Wed, Feb 18, 2009 at 10:11 AM, David Dyer-Bennet <dd-b at dd-b.net> wrote:

> On Tue, February 17, 2009 17:35, David Magda wrote:
>
>> Personally I recommend using FireWire whenever
>> possible.
>
> I haven't tested performance on Solaris.
>
> But I bought into the Firewire hype, and put firewire cards into my two
> Linux servers for access to my external hard drives, and also used it on
> my Windows desktop and laptop boxes. That's at least three different
> external enclosure models and three different controllers.
>
> None of them seemed to be noticeably faster than USB.
>
> So I've stopped buying firewire.

Odd, my firewire enclosure transfers are north of 50MB/sec, while the same drive in a USB enclosure is lucky to break 25MB/sec. You sure your local disk isn't just dog slow?

--Tim
David Dyer-Bennet
2009-Feb-18 17:47 UTC
[zfs-discuss] Firewire (was Re: Backing up ZFS snapshots)
On Wed, February 18, 2009 11:19, Tim wrote:

> Odd, my firewire enclosure transfers are north of 50MB/sec, while the same
> drive in a USB enclosure is lucky to break 25MB/sec. You sure your local
> disk isn't just dog slow?

I can easily see 90MB/sec (and that's production load, not benchmark conditions) on the local disk. I sometimes see 30MB/sec on the USB disk, though 20 is more normal. I haven't used the firewire in long enough that I can't quote a number, just that I stopped making a point of firewire.

--
David Dyer-Bennet, dd-b at dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info
>>>>> "fc" == Frank Cusack <fcusack at fcusack.com> writes:
>>>>> "dd" == David Dyer-Bennet <dd-b at dd-b.net> writes:

    fc> If you go to a future version of zfs, simply replace all your
    fc> "full" filesystem streams with new ones,

I still think you should not be storing these streams at all, for reasons you describe later.

    fc> The real problem here is lack of checksums and inability to
    fc> restore the good parts of a stream if there is just one bad
    fc> part.

There ARE checksums in 'zfs send' streams. That's the mechanism through which single bit flips manage to invalidate the entire stream. From my quick testing, 'tar' does not have checksums, and some versions of the 'cpio' stream DO have checksums.

    dd> put firewire cards into my two Linux servers for access to my
    dd> external hard drives, and also used it on my Windows desktop
    dd> and laptop boxes. That's at least three different external
    dd> enclosure models and three different controllers.

    dd> None of them seemed to be noticeably faster than USB.

In general I don't notice too much how fast things are. I just walk away and come back when they're done. So, I believe you.

    dd> So I've stopped buying firewire.

Fine, that's reasonable. The cost premium is excessive. But it's not only speed: it is also less crappy than USB. Especially around here, where problems get blamed on dropped SYNC CACHE commands or buggy FTLs or whatever, maybe you care about general crappyness-level. OTOH, if you manage to get a working enclosure, which even if you pay for firewire you can still never prove, the problems will just get blamed on the disk drive or Solaris's USB driver or some war story about a low-quality two-foot USB cable, so maybe it makes sense to get the crappiest, cheapest case possible.

I think there is only one kind of host controller on Firewire, like how on USB there are only two kinds of host controller. There were some weird ones before, but now it is all OHCI.
Though several companies make chips that conform to OHCI, I've not heard of any relevant differences among them. There are many firewire-to-PATA/SATA bridges, and they DO have relevant differences in both speed and correctness. You need to get the Oxford brand because they're the ones who have delivered fast and non-buggy firmware since the beginning.

The BSD firewire stack took like a decade to become usable after its first release, while the BSD USB stack was working well pretty soon after its release, so I wonder if Linux and Solaris have similar problems. If they do, it could nullify Firewire's supposed advantage (except on Mac OS, where the stack is fine), or maybe you just need to try again with a newer Linux/Solaris kernel.
I appreciate the feedback. I've decided to:

* create daily ZFS snapshots and zfs send these to separate external disks (via eSATA).
* create monthly full backups via rsync, tar, or amanda on separate external disks.

I'm not going to store everything on S3; it is too expensive. However, I will keep an encrypted copy of my critical docs and items on S3 in case my house burns down or my fileserver is stolen.
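The monthly-full half of this plan could be sketched as below. Dataset, mountpoint, and target paths are all hypothetical, and the tar flags are illustrative only; check tar(1) on your release for the options that preserve ACLs and extended attributes, which is the reason the thread favors Solaris tar:

```shell
# Sketch: take a snapshot, then archive its read-only .zfs view with
# tar, so the monthly full is in a documented format rather than a
# 'zfs send' stream. All names here are invented.
MONTH=$(date +%Y-%m)
zfs snapshot tank/data@monthly-$MONTH
tar cf /backup/full-$MONTH.tar -C /tank/data/.zfs/snapshot/monthly-$MONTH .
```

Archiving from the snapshot directory rather than the live filesystem gives the tar a consistent point-in-time view.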
On Wed, 18 Feb 2009 11:27:38 -0800, Joe S <js.lists at gmail.com> wrote:

> I appreciate the feedback.
>
> I've decided to:
>
> * create daily ZFS snapshots and zfs send these to separate external
> disks (via esata).

I'll be interested in hearing anything you learn about eSATA on Solaris. I haven't used eSATA anywhere yet, but it sounds good.

> * create monthly full backups via rsync, tar, or amanda on separate
> external disks.

I'm keeping daily snapshots on both my fileserver pool and the backup pools; I'm not sure separating your incremental and full backups like this is beneficial in the ZFS environment. "Incremental" in this case is just the quickest way to update the backup pool.

> I'm not going to store everything on S3, it is too expensive. However,
> I will keep an encrypted copy of my critical docs and items on S3 in
> case my house burns or fileserver is stolen.

Sounds rational.

I once had the development system for a project I was working on stolen out of the lab. It's a good thing they didn't grab all the nearby floppy disks, because those included the backups (nothing outside that room; not good practice). This was in the era where backups were being made on floppies, so it was a somewhat painstaking manual process, and my backups were 3 days out of date. Luckily I had a lab notebook telling in some detail what I'd done during those three days, and it only took about half a day to do it again from the notes. But no manager there had ever said a thing to me about backing up that box; I just did it on my own because it was obviously necessary. And I really should have taken the disks back to my desk, which was not in that room.

Yeah, off-site backups are very important.
David Magda <dmagda <at> ee.ryerson.ca> writes:

>> The format of the [zfs send] stream is evolving. No backwards
>> compatibility is guaranteed. You may not be able to receive your
>> streams on future versions of ZFS.
>
> http://docs.sun.com/app/docs/doc/819-2240/zfs-1m
>
> If you want to do back ups of your file system use a documented
> utility (tar, cpio, pax, zip, etc.).

Well understood. But does anyone know the long-term intentions of the ZFS developers in this area? The one big disadvantage of the recommended approaches shows up when you start taking advantage of ZFS to clone filesystems without replicating storage. Using "zfs send" will avoid representing the data twice to the backup system (and allow easy reconstruction of the clones), but I don't think the same goes for the other techniques.

It would be nice to know that they're thinking about a way to address these issues.

--
Dave Abrahams
Boostpro Computing
http://boostpro.com
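To make the clone point concrete, here is a hedged sketch (all names invented): an incremental send from a clone's origin snapshot carries only the clone's deltas, and `zfs receive` can recreate the clone relationship when that origin snapshot already exists on the target, whereas a file-level copy (tar, cpio) would store the shared blocks a second time.

```shell
# Hypothetical names throughout. Clones share blocks with their origin:
zfs snapshot tank/base@gold
zfs clone tank/base@gold tank/dev
zfs snapshot tank/dev@s1

# Full send of the origin once, then only the clone's deltas:
zfs send tank/base@gold | zfs receive backup/base
zfs send -i tank/base@gold tank/dev@s1 | zfs receive backup/dev
```

A tar of both tank/base and tank/dev would instead present the shared data to the backup system twice, which is the disadvantage described above.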
On Sat, 21 Feb 2009, David Abrahams wrote:

>> If you want to do back ups of your file system use a documented
>> utility (tar, cpio, pax, zip, etc.).
>
> Well understood. But does anyone know the long-term intentions of
> the ZFS developers in this area? The one big disadvantage of the
>
> It would be nice to know that they're thinking about a way to address
> these issues.

You are requesting that the ZFS developers be able to predict the future. How can they do that?

Imposing a requirement for backward compatibility will make it more difficult for ZFS to adapt to changing requirements as it evolves.

Bob

--
Bob Friesenhahn bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
On Sat, Feb 21, 2009 at 11:26 AM, Bob Friesenhahn <bfriesen at simple.dallas.tx.us> wrote:

> On Sat, 21 Feb 2009, David Abrahams wrote:
>
>>> If you want to do back ups of your file system use a documented
>>> utility (tar, cpio, pax, zip, etc.).
>>
>> Well understood. But does anyone know the long-term intentions of
>> the ZFS developers in this area?
>>
>> It would be nice to know that they're thinking about a way to address
>> these issues.
>
> You are requesting that the ZFS developers be able to predict the future.
> How can they do that?
>
> Imposing a requirement for backward compatibility will make it more
> difficult for ZFS to adapt to changing requirements as it evolves.
>
> Bob

No, he's not asking them to predict the future. Don't be a dick. He's asking if they can share some of their intentions based on their current internal roadmap. If you're telling me Sun doesn't have a 1yr/2yr/3yr roadmap for ZFS, I'd say we're all in some serious trouble. "We make it up as we go along" does NOT inspire the least bit of confidence, and I HIGHLY doubt that's how they're operating.

--Tim
On Sat, 21 Feb 2009, Tim wrote:

> No, he's not asking them to predict the future. Don't be a dick. He's
> asking if they can share some of their intentions based on their current
> internal roadmap. If you're telling me Sun doesn't have a 1yr/2yr/3yr
> roadmap for ZFS I'd say we're all in some serious trouble.

ZFS is principally already developed. It is now undergoing feature improvement, performance, and stability updates. Perhaps entries in the OpenSolaris bug tracking system may reveal what is requested to be worked on.

In the current economy, I think that "We make it up as we go along" is indeed the best plan. That is what most of us are doing now. Multi-year roadmaps are continually being erased and restarted due to changes (reductions) in staff, funding, and customer base. Yes, most of us are in some serious trouble, and if you are not, then you are somehow blessed.

Bob

--
Bob Friesenhahn bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
On Sat, Feb 21, 2009 at 12:18 PM, Bob Friesenhahn <bfriesen at simple.dallas.tx.us> wrote:

> ZFS is principally already developed. It is now undergoing feature
> improvement, performance, and stability updates. Perhaps entries in the
> OpenSolaris bug tracking system may reveal what is requested to be worked
> on.
>
> In the current economy, I think that "We make it up as we go along" is
> indeed the best plan. That is what most of us are doing now. Multi-year
> roadmaps are continually being erased and restarted due to changes
> (reductions) in staff, funding, and customer base. Yes, most of us are in
> some serious trouble, and if you are not, then you are somehow blessed.
>
> Bob

Well, given that I *KNOW* Sun isn't making shit up as they go along, and I have *SEEN* some of their plans under NDA, I'll just outright call bullshit. I was trying to be nice about it. If you're making stuff up as you go along, that's likely why you're struggling. Modifying plans is one thing. Not having any is another thing entirely.

--Tim
On Feb 21, 2009, at 13:27, Tim wrote:

> Well given that I *KNOW* Sun isn't making shit up as they go along,
> and I have *SEEN* some of their plans under NDA, I'll just outright call
> bullshit. I was trying to be nice about it. If you're making stuff up
> as you go along that's likely why you're struggling. Modifying plans is
> one thing. Not having any is another thing entirely.

In preparing for battle I have always found that plans are useless, but planning is indispensable. -- Dwight D. Eisenhower
David Abrahams wrote:

> David Magda <dmagda <at> ee.ryerson.ca> writes:
>
>>> The format of the [zfs send] stream is evolving. No backwards
>>> compatibility is guaranteed. You may not be able to receive your
>>> streams on future versions of ZFS.
>>
>> http://docs.sun.com/app/docs/doc/819-2240/zfs-1m
>>
>> If you want to do back ups of your file system use a documented
>> utility (tar, cpio, pax, zip, etc.).
>
> Well understood. But does anyone know the long-term intentions of
> the ZFS developers in this area? The one big disadvantage of the
> recommended approaches shows up when you start taking advantage
> of ZFS to clone filesystems without replicating storage. Using "zfs send"
> will avoid representing the data twice to the backup system (and allow
> easy reconstruction of the clones), but I don't think the same goes for
> the other techniques.

I wouldn't have any serious concerns about backing up snapshots provided the stream version was on the tape label and I had a backup of the Solaris release (or a virtual machine) that produced them.

-- Ian.
On Sat, 21 Feb 2009, Tim wrote:

> Well given that I *KNOW* Sun isn't making shit up as they go along, and I
> have *SEEN* some of their plans under NDA, I'll just outright call bullshit.
> I was trying to be nice about it. If you're making stuff up as you go
> along that's likely why you're struggling. Modifying plans is one thing.
> Not having any is another thing entirely.

If plans are important to you, then I suggest that you write to your local congressman and express your concern. Otherwise, plans are stymied by a severely faltering economy, a failing banking system, and an unpredictable government response. At this point we should be happy that Sun has reiterated its support for OpenSolaris and ZFS during these difficult times.

Bob

--
Bob Friesenhahn bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
>>>>> "da" == David Abrahams <dave at boostpro.com> writes:
>>>>> "ic" == Ian Collins <ian at ianshome.com> writes:

    da> disadvantage of the recommended approaches shows up when you
    da> start taking advantage of ZFS to clone filesystems without
    da> replicating storage. Using "zfs send" will avoid representing
    da> the data twice

Two or three people wanted S3 support, but IMVHO maybe S3 is too expensive and is better applied to getting another decade out of aging, reliable operating systems, and ZFS architecture should work towards replacing S3, not toward pandering to it.

Many new ZFS users are convinced to try ZFS because they want to back up non-ZFS filesystems onto zpools because it's better than tape, so that's not a crazy idea. It's plausible to want one backup server with a big slow pool, deliberately running a Solaris release newer than anything else in your lab, and then have tens of Solarises of various tooth-length 'zfs send'ing from many pools toward one pool on the backup server.

The obvious way to move a filesystem rather than a pool from older Solaris to newer is 'zfs send | zfs recv'. The obvious problem: this doesn't always work. The less obvious problem: how do you restore? It's one thing to say, ``I want it to always work to zfs send from an older system to a newer,'' which we are NOT saying yet. To make restore work, we need to promise more: ``the format of the 'zfs send' stream depends only on the version number of the ZFS filesystem being sent, not on the zpool version and not on the build of the sending OS.'' That's a more aggressive compatibility guarantee than anyone's suggested so far, never mind what we have.
At least it's more regression-testable than the weaker compatibility promises: you can 'zfs send' a hundred stored test streams from various old builds toward the system under test, then 'zfs send' them back to simulate a restore, and modulo some possible headers you could strip off, they should be bit-for-bit identical when they come out as when they went in. zpool becomes a non-fragile way of storing fragile 'zfs send' streams.

And to make this comparable in trustworthiness to pre-ZFS backup systems, we need a THIRD thing---a way to TEST the restore without disrupting the old-Solaris system in production, a restore test we are convinced will expose the problems we know 'zfs recv' sometimes has, including lazy-panic problems---and I think the send|recv architecture has painted us into a corner in terms of getting that, since gobs of kernel code are involved in receiving streams, so there's no way to fully test a recv other than to make some room for it, recv it, then 'zfs destroy'.

So... yeah. I guess the lack of 'zfs send' stream compatibility does make into shit my answer ``just use another zpool for backup. Tape's going out of fashion anyway.'' And when you add compatibility problems to the scenario, storing backups in zpool form rather than 'zfs send' format no longer resolves the problem I raised before with the lack of a recv-test. I guess the only thing we really have for backup is rsync --in-place --no-whole-file.

    ic> I wouldn't have any serious concerns about backing up
    ic> snapshots provided the stream version was on the tape label
    ic> and I had a backup of the Solaris release (or a virtual
    ic> machine) that produced them.

I would have serious concerns doing that because of the numerous other problems I always talk about that you haven't mentioned. But, I don't wish for 'zfs send' to become a backup generator. I like it as is. Here are more important problems:

 * are zfs send and zfs recv fast enough now, post-b105?
 * endian-independence (fixed b105?)

 * toxic streams that panic the receiving system (AFAIK unfixed)

Though, if I had to add one more wish to that list, the next one would probably be more stream format compatibility across Solaris releases.

Understand the limitations of your VM approach. Here is the way you get access to your data through it:

 * attach a huge amount of storage to the VM and create a zpool on it
   inside the VM

 * pass the streams through the VM and onto the pool, hoping none are
   corrupt or toxic, since they're now stored and you no longer have
   the chance to re-send them. but nevermind that problem for now.

 * export the pool, shut down the VM

   [this is the only spot where backward compatibility is guaranteed,
   and where it seems trustworthy so far]

 * import the pool on a newer Solaris

 * upgrade the pool and the filesystems in it

So, you have to assign disks to the VM, zpool export, zpool import. If what you're trying to restore is tiny, you can make a file vdev. And if it's Everything, then you can destroy the production pool, recreate it inside the VM, u.s.w. No problem. But what if you're trying to restore something that uses 12 disks' worth of space on your 48-disk production pool? You have free space for it on the production pool, but (1) you do not have 12 unassigned disks sitting around nor anywhere to mount all 12 at once, and (2) you do not have twice enough free space for it on the production pool so that you could use iSCSI or a file vdev on NFS; you only have one times enough space for it. wtf are you supposed to do?

With traditional backup methods your situation is workable because you can restore straight onto the free space in the production pool, but with ZFS you must convert your backup into disk images with the VM first, then restore it a second time to its actual destination. And with the ponderously large pool sizes ZFS encourages, providing these ``disk images'' to the restore VM could mean doubling your hardware investment.
This breaks the filesystems-within-pools aggregation feature of ZFS. Filesystems are supposedly the correct unit of backup, not pools---this is pitched as one reason filesystems exist in the first place. But in this example, compatibility issues effectively make pools the unit of backup, because it is only a pool that you can reliably pass from an older Solaris to a newer one, not a filesystem.

You're right to say ``a VM'' though. A backup of the Solaris DVD used to do the 'zfs send' is not worth much when hardware is becoming unobtainable on a 6mo cycle---you won't be able to run it anywhere.
Miles Nordin wrote:
> ic> I wouldn't have any serious concerns about backing up
> ic> snapshots provided the stream version was on the tape label
> ic> and I had a backup of the Solaris release (or a virtual
> ic> machine) that produced them.
>
> I would have serious concerns doing that because of the numerous other
> problems I always talk about that you haven't mentioned.
>
> But, I don't wish for 'zfs send' to become a backup generator.  I like
> it as is.  Here are more important problems:
>
> * are zfs send and zfs recv fast enough now, post-b105?
>
> * endian-independence (fixed b105?)
>
> * toxic streams that panic the receiving system (AFAIK unfixed)

We should see a resolution for this soon: I have a support case open
and I now have a reproducible test case.  I haven't been able to panic
any recent SXCE builds with the streams that panic Solaris 10.

> though, if I had to add one more wish to that list, the next one would
> probably be more stream format compatibility across Solaris releases.

Luckily for us, they haven't broken it yet on a production release.
They would give themselves a massive headache if they did.  One point
that has been overlooked is replication: I'm sure I'm not alone in
sending older stream formats to newer staging servers.

> Understand the limitations of your VM approach.  Here is the way you
> get access to your data through it:
>
> * attach a huge amount of storage to the VM and create a zpool on it
>   inside the VM

I currently use iSCSI.

> * pass the streams through the VM and onto the pool, hoping none are
>   corrupt or toxic since they're now stored and you no longer have
>   the chance to re-send them.  but nevermind that problem for now.
I receive the stream as well as archive it.

> * export the pool, shut down the VM
>
>   [this is the only spot where backward-compatibility is guaranteed,
>   and where it seems trustworthy so far]
>
> * import the pool on a newer Solaris
>
> * upgrade the pool and the filesystems in it

Not necessary.

> so, you have to assign disks to the VM, zpool export, zpool import.
> If what you're trying to restore is tiny, you can make a file vdev.
> And if it's Everything, then you can destroy the production pool,
> recreate it inside the VM, u.s.w.  No problem.  But what if you're
> trying to restore something that uses 12 disks worth of space on your
> 48-disk production pool?  You have free space for it on the production
> pool, but (1) you do not have 12 unassigned disks sitting around nor
> anywhere to mount all 12 at once, and (2) you do not have twice enough
> free space for it on the production pool so that you could use iSCSI
> or a file vdev on NFS; you only have one times enough space for it.

I don't do this for "handy" backups.  We only do this to archive a
filesystem.

-- 
Ian.
on Sat Feb 21 2009, Miles Nordin <carton-AT-Ivy.NET> wrote:

>>>>>> "da" == David Abrahams <dave at boostpro.com> writes:
>>>>>> "ic" == Ian Collins <ian at ianshome.com> writes:
>
>     da> disadvantage of the recommended approaches shows up when you
>     da> start taking advantage of ZFS to clone filesystems without
>     da> replicating storage.  Using "zfs send" will avoid representing
>     da> the data twice
>
> Two or three people wanted S3 support,

Amazon S3 support directly in ZFS?  I'd like that, but I'm not sure
what it looks like.  There are already tools that will send / receive
ZFS to Amazon S3.  Is there something you can only do well if you own
the filesystem code?

> but IMVHO maybe S3 is too expensive and is better applied to getting
> another decade out of aging, reliable operating systems, and ZFS
> architecture should work towards replacing S3, not toward pandering
> to it.

Replacing S3?

> Many new ZFS users are convinced to try ZFS because they want to back
> up non-ZFS filesystems onto zpools because it's better than tape, so
> that's not a crazy idea.

Not crazy, unless you need to get the backups off-site.

> It's plausible to want one backup server with a big slow pool,
> deliberately running a Solaris release newer than anything else in
> your lab.  Then have tens of Solaris installs of various tooth-length
> 'zfs send'ing from many pools toward one pool on the backup server.
> The obvious way to move a filesystem rather than a pool from older
> Solaris to newer is 'zfs send | zfs recv'.
>
> The obvious problem: this doesn't always work.

Because they might break the send/recv format across versions.

> The less obvious problem: how do you restore?  It's one thing to say,
> ``I want it to always work to zfs send from an older system to a
> newer,'' which we are NOT saying yet.
> To make restore work, we need to promise more: ``the format of the
> 'zfs send' stream depends only on the version number of the ZFS
> filesystem being sent, not on the zpool version and not on the build
> of the sending OS.''  That's a more aggressive compatibility
> guarantee than anyone's suggested so far, never mind what we have.

Sure.  But maybe send/recv aren't the right tools for this problem.
I'm just looking for *a way* to avoid storing lots of backup copies of
cloned filesystems; I'm not asking that it be called send/recv.

-- 
Dave Abrahams
BoostPro Computing
http://www.boostpro.com
on Wed Feb 18 2009, Frank Cusack <fcusack-AT-fcusack.com> wrote:

> On February 17, 2009 3:57:34 PM -0800 Joe S <js.lists at gmail.com> wrote:
>> On Tue, Feb 17, 2009 at 3:35 PM, David Magda <dmagda at ee.ryerson.ca> wrote:
>>> If you want to do back ups of your file system use a documented
>>> utility (tar, cpio, pax, zip, etc.).
>>
>> I'm going to try to use Amanda and back up my data (not snapshots).
>
> You missed the point, which is not to avoid snapshots, but to avoid
> saving the stream as a backup.  Backing up a snapshot is typically
> preferred to backing up a "live" filesystem.

Has anyone here noticed that
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
suggests in several places that zfs send streams be stored for backup?

-- 
Dave Abrahams
BoostPro Computing
http://www.boostpro.com
I think that's legitimate so long as you don't change ZFS versions.

Personally, I'm more comfortable doing a 'zfs send | zfs recv' than I
am storing the send stream itself.  The problem I have with the stream
is that I may not be able to receive it in a future version of ZFS,
while I'm pretty sure that I can upgrade an actual pool/fs pretty
easily.

On Sun, Feb 22, 2009 at 4:48 PM, David Abrahams <dave at boostpro.com> wrote:
>
> on Wed Feb 18 2009, Frank Cusack <fcusack-AT-fcusack.com> wrote:
>
>> On February 17, 2009 3:57:34 PM -0800 Joe S <js.lists at gmail.com> wrote:
>>> On Tue, Feb 17, 2009 at 3:35 PM, David Magda <dmagda at ee.ryerson.ca> wrote:
>>>> If you want to do back ups of your file system use a documented
>>>> utility (tar, cpio, pax, zip, etc.).
>>>
>>> I'm going to try to use Amanda and back up my data (not snapshots).
>>
>> You missed the point, which is not to avoid snapshots, but to avoid
>> saving the stream as a backup.  Backing up a snapshot is typically
>> preferred to backing up a "live" filesystem.
>
> Has anyone here noticed that
> http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
> suggests in several places that zfs send streams be stored for backup?
>
> --
> Dave Abrahams
> BoostPro Computing
> http://www.boostpro.com
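The 'zfs send | zfs recv' approach preferred here, as opposed to
archiving the raw stream, can be sketched as below.  The pool, dataset,
and snapshot names are made up; this requires a live ZFS system with a
second pool attached and is only an illustration.

```shell
# Sketch: replicate a filesystem into a backup pool with send/recv,
# so the backup lives as a normal (upgradeable) filesystem rather
# than an opaque stream file.

# take a dated snapshot of the source filesystem
zfs snapshot tank/data@20090223

# first run: full send into the backup pool
zfs send tank/data@20090223 | zfs recv -d backup

# later runs: incremental send from the previous snapshot
zfs snapshot tank/data@20090224
zfs send -i tank/data@20090223 tank/data@20090224 | zfs recv -d backup
```

Because the receiving side ends up with a real filesystem, a future
'zpool upgrade' / 'zfs upgrade' on the backup pool sidesteps the stream
format compatibility worry discussed in this thread.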
On Mon, Feb 23, 2009 at 11:33 AM, Blake <blake.irvin at gmail.com> wrote:
> I think that's legitimate so long as you don't change ZFS versions.
>
> Personally, I'm more comfortable doing a 'zfs send | zfs recv' than I
> am storing the send stream itself.  The problem I have with the stream
> is that I may not be able to receive it in a future version of ZFS,
> while I'm pretty sure that I can upgrade an actual pool/fs pretty
> easily.

Worst case, keep a copy of the install ISO with your backed-up streams
and you can restore into a virtual machine.  I've got backup floppies
from 10 years ago which are pretty much useless.  VMs open up the
choices.

Nicholas
>>>>> "da" == David Abrahams <dave at boostpro.com> writes:
>>>>> "b" == Blake <blake.irvin at gmail.com> writes:

    da> Has anyone here noticed that
    da> http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
    da> suggests in several places that zfs send streams be stored for
    da> backup?

Yup.  I requested a wiki account to fix it and was ignored.

     b> I think that's legitimate so long as you don't change ZFS
     b> versions.

Well, fine, but there's certainly not a consensus on that, which makes
it not a ``best practice.''  There are other problems besides the
versioning.
Agreed - I don't think that archiving simply the send stream is a
smart idea (yet, until the stream format is stabilized in some way).
I'd much rather archive to a normal ZFS filesystem.  With ZFS's
enormous pool capacities, it's probably the closest thing we have
right now to a future-proof filesystem.

On Sun, Feb 22, 2009 at 6:58 PM, Miles Nordin <carton at ivy.net> wrote:
> well fine, but there's certainly not a consensus on that, which makes
> it not a ``best practice.''  There are other problems besides the
> versioning.
>>>>> "b" == Blake <blake.irvin at gmail.com> writes:

     c> There are other problems besides the versioning.

     b> Agreed - I don't think that archiving simply the send stream
     b> is a smart idea (yet, until the stream format is stabilized

*there* *are* *other* *problems* *besides* *the* *versioning*!
Hello David,

Saturday, February 21, 2009, 10:33:05 PM, you wrote:

DA> on Sat Feb 21 2009, Miles Nordin <carton-AT-Ivy.NET> wrote:

>> Many new ZFS users are convinced to try ZFS because they want to back
>> up non-ZFS filesystems onto zpools because it's better than tape, so
>> that's not a crazy idea.

DA> Not crazy, unless you need to get the backups off-site.

In that case you can have another box at another location/site and
replicate data to it.  Alternatively, you can install a backup client
on such a ZFS server and back up your data from there - depending on
the environment it could make a lot of sense and even save you money
on licensing alone (one backup client for data coming from many
servers).

See
http://milek.blogspot.com/2009/02/disruptive-backup-platform.html
http://milek.blogspot.com/2009/02/backup-tool.html

-- 
Best regards,
 Robert Milkowski
 http://milek.blogspot.com
on Mon Feb 23 2009, Robert Milkowski <milek-AT-task.gda.pl> wrote:

> Hello David,
>
> Saturday, February 21, 2009, 10:33:05 PM, you wrote:
>
> DA> on Sat Feb 21 2009, Miles Nordin <carton-AT-Ivy.NET> wrote:
>
>>> Many new ZFS users are convinced to try ZFS because they want to back
>>> up non-ZFS filesystems onto zpools because it's better than tape, so
>>> that's not a crazy idea.
>
> DA> Not crazy, unless you need to get the backups off-site.
>
> In that case you can have another box at another location/site and
> replicate data to it.  Alternatively, you can install a backup client
> on such a ZFS server and back up your data from there - depending on
> the environment it could make a lot of sense and even save you money
> on licensing alone (one backup client for data coming from many
> servers).
>
> See
> http://milek.blogspot.com/2009/02/disruptive-backup-platform.html
> http://milek.blogspot.com/2009/02/backup-tool.html

Yeah, maybe someday.  I'm running a small distributed business out of
my home with an outgoing pipe that's "fast" (right -- as cable modems
go), I have one server running FreeBSD 6.2 hosted elsewhere that runs
our web presence, and I am setting up a local server running RAIDZ2
for my critical data.  I don't have lots of scratch for more offsite
boxen, although I do have lots of idle boxen with weak hardware --
mostly old laptops -- lying around the house.

I just got my system backing the ZFS up to S3.  I believe I can afford
that, for now.  The economics of this approach may change as the
server fills up, of course.

-- 
Dave Abrahams
BoostPro Computing
http://www.boostpro.com
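For concreteness, a ZFS-to-S3 backup like the one described above might
look something like the sketch below.  This assumes a generic S3 upload
tool such as s3cmd; the dataset, snapshot, and bucket names are made up,
and it requires a live ZFS system plus S3 credentials.  Note the caveat
raised repeatedly in this thread: a stored send stream may not be
receivable by a future ZFS version.

```shell
# Sketch: compress a send stream and push it to S3 with s3cmd.
# Hypothetical names throughout; not meant to be run as-is.

zfs snapshot tank/photos@20090223
zfs send tank/photos@20090223 | gzip > /var/tmp/photos-20090223.zfs.gz
s3cmd put /var/tmp/photos-20090223.zfs.gz s3://my-backup-bucket/
rm /var/tmp/photos-20090223.zfs.gz
```

Keeping the matching install ISO alongside the uploaded streams, as
suggested upthread, hedges against stream format changes at the cost of
a VM-based restore.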
I'm sure that's true.  My point was that, given the choice between a
zfs send/recv from one set of devices to another, where the target is
another pool, and sending a zfs stream to a tarball, I'd sooner choose
a solution that's all live filesystems.

If backups are *really* important, then it's certainly better to use a
product with commercial support.  I think Amanda is zfs-aware now?

On Mon, Feb 23, 2009 at 12:16 PM, Miles Nordin <carton at ivy.net> wrote:
>>>>>> "b" == Blake <blake.irvin at gmail.com> writes:
>
>     c> There are other problems besides the versioning.
>
>     b> Agreed - I don't think that archiving simply the send stream
>     b> is a smart idea (yet, until the stream format is stabilized
>
> *there* *are* *other* *problems* *besides* *the* *versioning*!