Hi all,

What is the current preferred method for backing up ZFS data pools, preferably using free ($0.00) software, and assuming that access to individual files (a la ufsbackup/ufsrestore) is required?

TIA,

-- 
Rich Teer, SCSA, SCNA, SCSECA, OGB member
CEO, My Online Home Inventory
URLs: http://www.rite-group.com/rich
      http://www.linkedin.com/in/richteer
      http://www.myonlinehomeinventory.com
Rich Teer <rich.teer at rite-group.com> wrote:
> Hi all,
>
> What is the current preferred method for backing up ZFS data pools,
> preferably using free ($0.00) software, and assuming that access to
> individual files (a la ufsbackup/ufsrestore) is required?

If you still want to do incremental backups, I recommend star.

Jörg

-- 
EMail: joerg at schily.isdn.cs.tu-berlin.de (home)  Jörg Schilling  D-13353 Berlin
       js at cs.tu-berlin.de (uni)
       schilling at fokus.fraunhofer.de (work)
Blog:  http://schily.blogspot.com/
URL:   http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily
On 02/21/08 16:31, Rich Teer wrote:
> What is the current preferred method for backing up ZFS data pools,
> preferably using free ($0.00) software, and assuming that access to
> individual files (a la ufsbackup/ufsrestore) is required?

For home use I am making very successful use of zfs incremental send and receive. A script decides which filesystems to back up (based on a user property retrieved by zfs get) and snapshots the filesystem; it then looks for the last snapshot that the pool I'm backing up and the pool I'm backing up to have in common, and does a zfs send -i | zfs receive over that. Backups are pretty quick since there is not a huge amount of churn in the filesystems, and on my backup disks I have browsable access to a snapshot of my data from every backup I have run.

Gavin
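A minimal sketch of this approach (this is not Gavin's actual script; the dataset names tank/home and backup/home, the snapshot naming scheme, and running both pools on one host are my assumptions, and it of course only runs on a system with ZFS):

```shell
#!/bin/bash
# Hypothetical sketch: incrementally replicate tank/home to backup/home.
# Assumes snapshot names sort chronologically (timestamp names do).
SRC=tank/home
DST=backup/home
SNAP="backup-$(date +%Y%m%d-%H%M%S)"

zfs snapshot "$SRC@$SNAP"

# Find the most recent snapshot name both sides have in common:
# list each side's snapshot names, keep the intersection, take the last.
common=$(comm -12 \
    <(zfs list -H -o name -t snapshot -r "$SRC" | sed 's/.*@//' | sort) \
    <(zfs list -H -o name -t snapshot -r "$DST" | sed 's/.*@//' | sort) |
  tail -1)

if [ -n "$common" ]; then
    zfs send -i "@$common" "$SRC@$SNAP" | zfs receive -F "$DST"  # incremental
else
    zfs send "$SRC@$SNAP" | zfs receive -F "$DST"                # first full send
fi
```

The common-snapshot selection is just the set intersection of the two snapshot lists; everything ZFS-specific is in the send/receive pipe.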
> For home use I am making very successful use of zfs incremental send
> and receive. A script decides which filesystems to backup (based
> on a user property retrieved by zfs get) and snapshots the filesystem;
> it then looks for the last snapshot that the pool I'm backing
> up and the pool I'm backing up to have in common, and
> does a zfs send -i | zfs receive over that. Backups are pretty
> quick since there is not huge amount of churn in the filesystems,
> and on my backup disks I have browsable access to snapshot of
> my data from every backup I have run.

As far as I know zfs send/receive is still not reliable unless both ends are at the same version. This is a stupid decision on the part of Sun as far as I am concerned, so I hope that I'm wrong and they've realised the error of their ways.

> Gavin

Julian
-- 
Julian King
Computer Officer, University of Cambridge, Unix Support
On Thu, 2008-02-21 at 21:00 +0000, Gavin Maltby wrote:
> On 02/21/08 16:31, Rich Teer wrote:
>
> > What is the current preferred method for backing up ZFS data pools,
> > preferably using free ($0.00) software, and assuming that access to
> > individual files (a la ufsbackup/ufsrestore) is required?
>
> For home use I am making very successful use of zfs incremental send
> and receive. A script decides which filesystems to backup (based
> on a user property retrieved by zfs get) and snapshots the filesystem;
> it then looks for the last snapshot that the pool I'm backing
> up and the pool I'm backing up to have in common, and
> does a zfs send -i | zfs receive over that.

We're using a perl script which uses zfs incremental send/recv, which works pretty well for our purposes. However I hear [1] that these commands will only run on an idle thread, so get enough cores in the boxes at both ends to handle any processing demands whilst they are running.

> Backups are pretty
> quick since there is not huge amount of churn in the filesystems,
> and on my backup disks I have browsable access to snapshot of
> my data from every backup I have run.

I also leave the snapshots visible (zfs set snapdir=visible) on the fileservers so that users can retrieve old versions of their files if they need to.

HTH,
Chris

[1] http://www.joyeur.com/2008/01/22/bingodisk-and-strongspace-what-happened
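For reference, the property Chris mentions looks like this (the dataset name tank/home is a made-up example):

```shell
# Make the per-filesystem snapshot directory show up in directory listings:
zfs set snapdir=visible tank/home

# Snapshots are then browsable (read-only) under the mountpoint, e.g.:
#   /tank/home/.zfs/snapshot/<snapshot-name>/...
# so users can copy back old versions of their files themselves.
```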
Jörg Schilling wrote:
> If you like to still do incremental backups, I recommend star.
>
> Jörg

Can star backup and restore ZFS ACLs and extended attributes?

Nick

This message posted from opensolaris.org
Nicholas Brealey wrote:
> Jörg Schilling wrote:
>> If you like to still do incremental backups, I recommend star.
>>
>> Jörg
>
> Can star backup and restore ZFS ACLs and extended attributes?

Including the new Windows ones that the CIFS server attaches?

-Kyle
Nicholas Brealey <nick at brealey.org> wrote:
> Jörg Schilling wrote:
>> If you like to still do incremental backups, I recommend star.
>>
>> Jörg
>
> Can star backup and restore ZFS ACLs and extended attributes?

If star had appeared in Solaris earlier (see PSARC 480/2004), it would most likely support them by now. ZFS ACL support has been planned for the time after star-1.5. Star-1.5-final is on hold to allow minor changes for the integration to be done beforehand.

Jörg
Kyle McDonald <KMcDonald at Egenera.COM> wrote:
> Nicholas Brealey wrote:
>> Can star backup and restore ZFS ACLs and extended attributes?
>
> Including the new Windows ones that the CIFS server attaches?

Where do you see a difference?

Jörg
On advice of Joerg Schilling and not knowing what 'star' was, I decided to install it for testing. Star uses a very unorthodox build and install approach, so the person building it has very little control over what it does. Unfortunately I made the mistake of installing it under /usr/local, where it decided to remove the GNU tar I had installed there. Star does not support traditional tar command line syntax, so it can't be used with existing scripts. Performance testing showed that it was no more efficient than the 'gtar' which comes with Solaris. It seems that 'star' does not support an 'uninstall' target, so now I am forced to manually remove it from my system.

It seems that the best way to deal with star is to install it into its own directory so that it does not interfere with existing software.

Bob
=====================================
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
On Fri, 22 Feb 2008, Bob Friesenhahn wrote:
> where it decided to remove the GNU tar I had installed there. Star
> does not support traditional tar command line syntax so it can't be
> used with existing scripts. Performance testing showed that it was no
> more efficient than the 'gtar' which comes with Solaris. It seems

There is something I should clarify in the above. Star is a stickler for POSIX command line syntax, so syntax like 'tar -cvf foo.tar' or 'tar cvf foo.tar' does not work, but 'tar -c -v -f foo.tar' does work.

Testing with Star, GNU tar, and Solaris cpio showed that Star and GNU tar were able to archive the content of my home directory with no complaint, whereas Solaris cpio required specification of the 'ustar' format in order to deal with long file and path names, as well as large inode numbers. Solaris cpio complained about many things with my files (e.g. unresolved passwd and group info), but managed to produce the highest throughput when archiving to a disk file.

I cannot attest to the ability of these tools to deal with ACLs since I don't use them.

Bob
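To make the syntax point concrete, here is the separated-option form that works across implementations (file names are invented for the demonstration; GNU tar also happens to accept the bundled forms):

```shell
cd /tmp
echo hello > demo-file.txt

# Each single-letter option stands alone, with the archive name
# immediately following -f:
tar -c -f demo.tar demo-file.txt

# List the archive contents the same way:
tar -t -f demo.tar    # lists: demo-file.txt
```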
Bob Friesenhahn <bfriesen at simple.dallas.tx.us> wrote:
> On advice of Joerg Schilling and not knowing what 'star' was, I
> decided to install it for testing. Star uses a very unorthodox build
> and install approach so the person building it has very little control
> over what it does.

This is of course wrong:

- A lot of software uses an ancient build system introduced by Stallman that is based on the ideas of the Open Source movement from the 1970s. This build system needs a lot of manual intervention because it is not very automated. It is also non-modular, uses multiple copies of the same code, does not allow you to simply add a software package to an existing tree without replicating code, and results in extremely hard to maintain makefiles. It does not support dependencies and by default overwrites compile results from other platforms. This is why I call this system a throw-away compile environment: get tar archive -> unpack it -> repeat: compile until you found the right manual parameters -> install -> throw everything away :-( It is used by people who don't think globally....

- My build system started in 1992 (close to the time the RMS system was introduced) and was inspired by the new ideas from Plan 9 from the mid 1980s. It is based on new features from the Plan 9 make program (e.g. including other makefiles) and the SunPro make (1986) pattern matching macro expansions. It is highly modular, does not repeat code, and works on more platforms than the one mentioned above. It automatically maintains dependencies and it allows simultaneous compilations on all platforms if you NFS mount the source. It allows the author to use the same environment as the "user". Similar approaches are used by FreeBSD (and the other *BSDs), Solaris ON, and David Korn. The implementations found in FreeBSD, the other *BSDs, and Solaris ON are single platform (non-portable); the implementation from David Korn is highly portable, as is the Schily makefile system.
It is interesting to see that David Korn (although he uses a different approach, based on a completely rewritten make program, "nmake") arrives at nearly identical "leaf makefiles", so it seems that there are common wishes to simplify portability. The most important advantage of my makefile system is that it needs _much_ less user input in order to work. This is why people may believe it gives less control....in fact, it gives better control over the important parameters.

> Unfortunately I made the mistake of installing it under /usr/local

It seems that you did not read the documentation, which explains how to control the install location; the standard install location is /opt/schily.

> where it decided to remove the GNU tar I had installed there. Star

Well, if you make the mistake of installing GNU tar as "tar" in a global "dump yard" for software, you need to be prepared that other similar software reuses the name "tar", especially if the other software (star) is closer to the tar command line standard than GNU tar.

> does not support traditional tar command line syntax so it can't be

This is _definitely_ wrong: GNU tar does not correctly implement the tar CLI standard while star does! If you have problems with scripts that depend on the non-compliant GNU tar CLI, you get a problem. I recommend informing the authors of these scripts about their non-compliance.

> used with existing scripts. Performance testing showed that it was no
> more efficient than the 'gtar' which comes with Solaris. It seems
> that 'star' does not support an 'uninstall' target so now I am forced
> to manually remove it from my system.

It seems that you did not make real performance tests. I usually receive thanks for the enhanced star performance. If your performance is already limited by the background storage or by the filesystem, star of course cannot help..... Star typically needs 1/4 to 1/3 of the CPU time needed by GNU tar, and it uses two processes to do the work in parallel.
If you found a case where star is not faster than GNU tar and where the speed is not limited by the filesystem or the I/O devices, this is a bug that will be fixed if you provide the needed information to repeat it.

Jörg
Bob Friesenhahn <bfriesen at simple.dallas.tx.us> wrote:
> On Fri, 22 Feb 2008, Bob Friesenhahn wrote:
> > where it decided to remove the GNU tar I had installed there. Star
> > does not support traditional tar command line syntax so it can't be
> > used with existing scripts. Performance testing showed that it was no
> > more efficient than the 'gtar' which comes with Solaris. It seems
>
> There is something I should clarify in the above. Star is a stickler
> for POSIX command line syntax so syntax like 'tar -cvf foo.tar' or
> 'tar cvf foo.tar' does not work, but 'tar -c -v -f foo.tar' does work.

Not true: GNU tar has many deviations from the historical tar command line syntax that is described in the SUSv2 standard. Star, when called "tar", is 100% compatible with SUSv2. Star (called star) is still very close to SUSv2, but it disallows all constructs that are a security risk. Many people believe that star is not compliant because they compare it to the non-compliant GNU tar.

Jörg
On Sat, 23 Feb 2008, Joerg Schilling wrote:
> Star typically needs 1/4 .. 1/3 of the CPU time needed by GNU tar and it
> uses two processes to do the work in parallel. If you found a case where
> star is not faster than GNU tar and where the speed is not limited by the
> filesystem or the I/O devices, this is a bug that will be fixed if you
> provide the needed information to repeat it.

I re-ran my little test today and do see that 'star' does produce somewhat reduced overall run time, but it does not consume less CPU than GNU tar. This is just a test of the time to archive the files in my home directory. My home directory is in a zfs filesystem. The output is written to a file in the same storage pool but a different filesystem. This time around I used default block sizes rather than 128K. Overall throughput seems on the order of 40MB/second.

gtar -cf gtar.tar /home/bfriesen  6.42s user 128.27s system 12% cpu 17:19.66 total
-rw-r--r--  1 bfriesen home  37G Feb 23 10:55 gtar.tar

star -c -f star.tar /home/bfriesen  4.11s user 142.65s system 15% cpu 16:03.41 total
-rw-r--r--  1 bfriesen home  37G Feb 23 11:15 star.tar

find /home/bfriesen -depth -print  0.55s user 3.52s system 6% cpu 1:01.61 total
cpio -o -H ustar -O cpio.tar  11.47s user 122.28s system 11% cpu 18:38.97 total
-rwxr-xr-x  1 bfriesen home  37G Feb 23 11:40 cpio.tar*

Notice that Sun's cpio marks its output file as executable, which is clearly a bug.

Clearly none of these tools are adequate to deal with the massive data storage made easy with zfs storage pools. Zfs requires similarly innovative backup solutions to deal with it.

Bob
On Sat, 23 Feb 2008, Bob Friesenhahn wrote:
> I re-ran my little test today and do see that 'star' does produce
> somewhat reduced overall run time but does not consume less CPU than
> GNU tar. This is just a test of the time to archive the files in my
> home directory. My home directory is in a zfs filesystem. The output
> is written to a file in the same storage pool but a different
> filesystem. This time around I used default block sizes rather than
> 128K. Overall throughput seems on the order of 40MB/second.

Cool. Can one selectively restore files from an archive created by Star? For example, if I archive everything under /home/rich, can I just restore /home/rich/some/random/file? What about with Star's competitors, tar, gtar, pax, and cpio? (I guess I should investigate each of those tools one day!)

Rich
Bob Friesenhahn <bfriesen at simple.dallas.tx.us> wrote:
> On Sat, 23 Feb 2008, Joerg Schilling wrote:
> >
> > Star typically needs 1/4 .. 1/3 of the CPU time needed by GNU tar and it
> > uses two processes to do the work in parallel. If you found a case where
> > star is not faster than GNU tar and where the speed is not limited by the
> > filesystem or the I/O devices, this is a bug that will be fixed if you
> > provide the needed information to repeat it.
>
> I re-ran my little test today and do see that 'star' does produce
> somewhat reduced overall run time but does not consume less CPU than

If you found this, I am not sure what you measured. gtar needs the same amount of system time as star does; this is not really reducible. You may reduce the system time a bit with star if you tell star to use a bigger FIFO size.

Approx. 90% of the user CPU time of a tar implementation is spent in the creation of the tar archive headers. This is done much more efficiently by star than by GNU tar. If you want to compare, you should know the different archive formats. If you want to do backups, you need to check archive formats that include a sufficient amount of file meta data. If you want to do backups, you need a POSIX.1-2001 based tar archive that allows extensions. If you create this archive type, you need 2x+ the amount of user CPU time compared with vanilla tar, unless you have an optimized algorithm. Star only needs 1.7x the amount of user CPU time, and star in POSIX.1-2001 mode still needs less CPU time than GNU tar with vanilla old tar archives.

> Clearly none of these tools are adequate to deal with the massive data
> storage made easy with zfs storage pools. Zfs requires similarly
> innovative backup solutions to deal with it.

You did not test backups yet, and you can't if you include the other tools in your test.... Star implements true incremental backups. This is missing in cpio, and it is announced but not working with GNU tar.
Jörg
Rich Teer <rich.teer at rite-group.com> wrote:
> Cool. Can one selectively restore files from an archive created by
> Star? For example, if I archive everything under /home/rich, can I
> just restore /home/rich/some/random/file? What about with Star's
> competitors, tar, gtar, pax, and cpio? (I guess I should investigate
> each of those tools one day!)

Star is the only portable and non fs-dependent archiver that supports incremental dumps, so I see no competition.... Well, there is a program from David Korn that does not support ACLs and that does _differential_ backups. But you cannot easily restore single files from a differential backup.

Jörg
On Sat, 23 Feb 2008, Joerg Schilling wrote:
> Star is the only portable and non fs-dependent archiver that supports
> incremental dumps, so I see no competition....

Incremental backups aren't what I'm talking about. I'm talking about the ability to retrieve one or more distinct files from an archive, without having to restore the whole archive, like one can do with ufsrestore.

Rich
Rich Teer <rich.teer at rite-group.com> wrote:
> On Sat, 23 Feb 2008, Joerg Schilling wrote:
>
> > Star is the only portable and non fs-dependent archiver that supports
> > incremental dumps, so I see no competition....
>
> Incremental backups aren't what I'm talking about. I'm talking about
> the ability to retrieve one or more distinct files from an archive,
> without having to restore the whole archive, like one can do with
> ufsrestore.

The OP was interested in incremental backups AFAIR. People who like to backup usually also like to do incremental backups. Why don't you?

Jörg
Rich Teer wrote:
> On Sat, 23 Feb 2008, Joerg Schilling wrote:
>
>> Star is the only portable and non fs-dependent archiver that supports
>> incremental dumps, so I see no competition....
>
> Incremental backups aren't what I'm talking about. I'm talking about
> the ability to retrieve one or more distinct files from an archive,
> without having to restore the whole archive, like one can do with
> ufsrestore.

that's been in tar since I can remember; from the man page of tar(1):

     x    Extract or restore. The named files are extracted from
          the tarfile and written to the directory specified in
          the tarfile, relative to the current directory.

HTH
Michael
-- 
Michael Schuster    http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
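A quick demonstration of that selective extraction, using a throwaway archive (all paths here are invented for the example):

```shell
mkdir -p /tmp/restore-demo/home/rich/some/random
echo important > /tmp/restore-demo/home/rich/some/random/file
cd /tmp/restore-demo

tar -c -f backup.tar home                 # archive everything under home/
rm -rf home                               # "accidentally" lose the data

# Name just the member you want; tar recreates its directories as needed.
tar -x -f backup.tar home/rich/some/random/file
cat home/rich/some/random/file            # -> important
```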
On Sun, 24 Feb 2008, Joerg Schilling wrote:
> > Incremental backups aren't what I'm talking about. I'm talking about
> > the ability to retrieve one or more distinct files from an archive,
> > without having to restore the whole archive, like one can do with
> > ufsrestore.
>
> The OP was interested in incremental backups AFAIR.

I'm the OP... :-)

> People who like to backup usually also like to do incremental backups.
> Why don't you?

I do like incremental backups. But the ability to do incremental backups and restore arbitrary files from an archive are two different things. An incremental backup backs up files that have changed since the most recent backup, so suppose my home directory contains 1000 files, 100 of which have changed since my last backup. I perform an incremental backup of my home directory, and the resulting archive contains those 100 files.

Now suppose that I accidentally delete a couple of those files; it is very desirable to be able to restore just a certain named subset of the files in an archive rather than having to restore the whole archive. I'm looking for a tool that can do that.

Rich
On Sun, 24 Feb 2008, michael schuster wrote:
> that's been in tar since I can remember; from the man page of tar(1):
>
>      x    Extract or restore. The named files are extracted from
>           the tarfile and written to the directory specified in
>           the tarfile, relative to the current directory.

Ah ha! Excellent! Thanks for the pointer.

Rich
Rich Teer <rich.teer at rite-group.com> wrote:
> > People who like to backup usually also like to do incremental backups.
> > Why don't you?
>
> I do like incremental backups. But the ability to do incremental backups
> and restore arbitrary files from an archive are two different things. An
> incremental backup backs up files that have changed since the most recent
> backup, so suppose my home directory contains 1000 files, 100 of which have
> changed since my last backup. I perform an incremental backup of my home
> directory, and the resulting archive contains those 100 files.
>
> Now suppose that I accidentally delete a couple of those files; it is very
> desirable to be able to restore just a certain named subset of the files
> in an archive rather than having to restore the whole archive. I'm looking
> for a tool that can do that.

Why do you believe that an incremental backup prevents you from extracting single files? The nice fact with tar based backups is that you have a tar archive with additional properties. Anything you can do with a tar archive applies to a full or incremental backup made by star.

Jörg
Joerg Schilling wrote:
> Rich Teer <rich.teer at rite-group.com> wrote:
>
>> I do like incremental backups. But the ability to do incremental backups
>> and restore arbitrary files from an archive are two different things.
>>
>> Now suppose that I accidentally delete a couple of those files; it is very
>> desirable to be able to restore just a certain named subset of the files
>> in an archive rather than having to restore the whole archive. I'm looking
>> for a tool that can do that.
>
> Why do you believe that an incremental backup disallows to extract single files

Rich never said so. He said "the ability to do incremental backups and restore arbitrary files from an archive are two different things." You were addressing an issue he never brought up.

Michael
-- 
Michael Schuster    Sun Microsystems, Inc.
recursion, n: see 'recursion'
Rich Teer wrote:
> Now suppose that I accidentally delete a couple of those files; it is very
> desirable to be able to restore just a certain named subset of the files
> in an archive rather than having to restore the whole archive. I'm looking
> for a tool that can do that.

Now if Joerg weren't so terse in his replies, he could have told you that star is actually more comfortable than the usual tar in this regard. Thanks to the builtin find, you may even restore files you accidentally deleted but whose exact location you don't recall. Now Joerg, be helpful and give a few examples, please?

For anyone who has ever streamed through a tar archive once just to retrieve the filenames with their paths in their glorious length and correct spelling (wait, did that start at ./export/home... or at /home??), before starting a second run of tar to actually retrieve that file, this news should be quite welcome.

Another feature I use rather often is the option to diff a directory against its tar archive to find out what has been added/deleted/modified. (Or two directories against each other, as in

  (cd /tmp; star -c whatever) | star -diff diffopts=not,times,id

Note that used like this, I assume that another directory called "whatever" is right under cwd's feet at the time of calling.)

Both the diff and the find abilities of star are well worth investigating. I haven't enough experience using the builtin find yet, but the diff has several times been a real life saver, detecting corrupt or muddled files, or just telling apart those ugly duplicated directories.

/Tatjana
Can we take further discussion of star to star-discuss at opensolaris.org please unless it really has something to do with ZFS. Thanks. -- Darren J Moffat
michael schuster <Michael.Schuster at Sun.COM> wrote:
> > Why do you believe that an incremental backup disallows to extract single files
>
> Rich never said so. He said "the ability to do incremental backups and
> restore arbitrary files from an archive are two different things." You were
> addressing an issue he never brought up.

I would like to know why you and he believe so. I believe this is something that cannot be looked at independently. Well, looking at e.g. Amanda shows that the Amanda people use an incompatible nomenclature. Amanda is e.g. doing time based backups but (as it prefers GNU tar for backups) is unable to do a complete restore from scratch.

Maybe you and Rich could describe what you are interested in, as you seem to have an unusual understanding of what might be of interest.

Jörg
Joerg Schilling wrote:
> michael schuster <Michael.Schuster at Sun.COM> wrote:
>
>>> Why do you believe that an incremental backup disallows to extract single files
>> Rich never said so. He said "the ability to do incremental backups and
>> restore arbitrary files from an archive are two different things." You were
>> addressing an issue he never brought up.
>
> I would like to know why you and he believe so.
>
> Maybe you and Rich could describe what you are interested in as you seem to
> have an unusual understanding of what might be of interest.

PLEASE take this OFF the ZFS discussion alias - this has nothing to do with ZFS any more.

-- 
Darren J Moffat
Tatjana S Heuser <theuser at orbit.in-berlin.de> wrote:
> Rich Teer wrote
>
> > Now suppose that I accidentally delete a couple of those files; it is very
> > desirable to be able to restore just a certain named subset of the files
> > in an archive rather than having to restore the whole archive. I'm looking
> > for a tool that can do that.
>
> Now if Joerg wasn't so terse in his replies, he could have told you that star
> is actually a more-comfortable-than-the-usual-tar in this regard. Since

I thought that people first read the man page and then ask....

> the builtin find, you may even restore files you accidentally deleted, but
> don't recall the exact location. Now Joerg, be helpful and give a few
> examples, please?

An important feature of star (when called "star") is that it does not extract files from the archive if they are not newer than the file on disk. Together with an interactive mode and the ability to specify many patterns, this reduces the number of manual interactions needed in the interactive mode. Together with the built-in find command, this helps a lot to minimize the amount of typing to get a file back (think e.g. of using the find syntax to specify a time (file age) range for files).

> Another feature I am using rather often is the option to diff a directory
> against its tar archive to find out what has been added/deleted/modified.
> (Or two directories against each other, as in
>   (cd /tmp; star -c whatever) | star -diff diffopts=not,times,id
> Note that used like this, I assume that another directory called "whatever"
> is right under cwd's feet at the time of calling.
> Both the diff and the find abilities of star are well worth investigating.
> I haven't enough experience using the builtin find yet, but the diff has
> several times been a real life saver, detecting corrupt or muddled files,
> or just telling apart those ugly duplicated directories.

People who know find(1) by heart, and who understand where and how the libfind code is used inside star, will be able to use the built-in find to make life easier. find(1) does not use a really simple language, but once you understand it, you are able to do even complex things easily.

Jörg

--
EMail: joerg at schily.isdn.cs.tu-berlin.de (home)  Jörg Schilling  D-13353 Berlin
       js at cs.tu-berlin.de (uni)  schilling at fokus.fraunhofer.de (work)
Blog: http://schily.blogspot.com/
URL: http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily
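[Editor's note: a self-contained sketch of the "do not clobber newer files on restore" behaviour Jörg describes. Since star may not be installed everywhere, this uses GNU/BSD tar's --keep-newer-files flag as a widely available stand-in; star applies this check by default when invoked as "star", with no extra option.]

```shell
#!/bin/sh
# Demonstrate "do not overwrite newer files on restore".
# tar's --keep-newer-files flag is a stand-in for star's default behaviour.
set -e
work=$(mktemp -d)
cd "$work"
mkdir data
echo "original" > data/notes.txt
tar -cf backup.tar data                      # take a backup of the tree
sleep 1                                      # ensure the next edit gets a newer mtime
echo "edited after backup" > data/notes.txt  # file changes after the backup
# Restore: the on-disk file is newer than the archive copy, so it is skipped.
tar -xf backup.tar --keep-newer-files 2>/dev/null || true
cat data/notes.txt                           # the newer edit survives the restore
```

Everything happens in a throwaway temp directory, so the sketch is safe to run as an ordinary user.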
Darren J Moffat <darrenm at opensolaris.org> wrote:
> Can we take further discussion of star to star-discuss at opensolaris.org
> please unless it really has something to do with ZFS.

Do you have a problem with a backup-related discussion concerning ZFS? The original question from the OP was ZFS-related and it has not yet been solved. I believe it is a good idea to continue the discussion at least until the base question is clear.

Jörg
Darren J Moffat <Darren.Moffat at Sun.COM> wrote:
> Joerg Schilling wrote:
>> michael schuster <Michael.Schuster at Sun.COM> wrote:
>>>>> Why do you believe that an incremental backup disallows extracting single files?
>>>> Rich never said so. He said "the ability to do incremental backups and
>>>> restore arbitrary files from an archive are two different things." You were
>>>> addressing an issue he never brought up.
>>
>> I would like to know why you and he believe so.
>>
>> I believe this is something that cannot be looked at independently.
>> Looking at e.g. Amanda shows that the Amanda people use an incompatible
>> nomenclature: Amanda does time-based backups but (as it prefers
>> GNU tar for backups) is unable to do a complete restore from scratch.
>>
>> Maybe you and Rich could describe what you are interested in, as you seem to
>> have an unusual understanding of what might be of interest.
>
> PLEASE take this OFF the ZFS discussion alias; this has nothing to do
> with ZFS any more.

I am sorry to see that you don't like a ZFS-related discussion in this list. Please just read what I have written in the mail you replied to.

Jörg
Joerg Schilling wrote:
> Darren J Moffat <Darren.Moffat at Sun.COM> wrote:
>> Joerg Schilling wrote:
>>> michael schuster <Michael.Schuster at Sun.COM> wrote:
>>>>> Why do you believe that an incremental backup disallows extracting single files?
>>>> Rich never said so. He said "the ability to do incremental backups and
>>>> restore arbitrary files from an archive are two different things." You were
>>>> addressing an issue he never brought up.
>>> I would like to know why you and he believe so.
>>>
>>> Maybe you and Rich could describe what you are interested in, as you seem to
>>> have an unusual understanding of what might be of interest.
>> PLEASE take this OFF the ZFS discussion alias; this has nothing to do
>> with ZFS any more.
>
> I am sorry to see that you don't like a ZFS-related discussion in this list.
> Please just read what I have written in the mail you replied to.

ZFS discussion is fine, but the thread has gone into non-ZFS-related, generic backup territory. If there are ZFS specifics (like the question about extended attributes), then I think this is a reasonable place to discuss them. Discussion about the nomenclature of Amanda, when it does not concern ZFS, is not appropriate here.

--
Darren J Moffat
Rich Teer <rich.teer at rite-group.com> wrote:
>> People who like to backup usually also like to do incremental backups.
>> Why don't you?
>
> I do like incremental backups. But the ability to do incremental backups
> and restore arbitrary files from an archive are two different things. An
> incremental backup backs up files that have changed since the most recent
> backup, so suppose my home directory contains 1000 files, 100 of which have
> changed since my last backup. I perform an incremental backup of my home
> directory, and the resulting archive contains those 100 files.
>
> Now suppose that I accidentally delete a couple of those files; it is very
> desirable to be able to restore just a certain named subset of the files
> in an archive rather than having to restore the whole archive. I'm looking
> for a tool that can do that.

Hi Rich, I asked you a question that you did not yet answer:

Are you interested only in full backups and in the ability to restore single files from that type of backup?

Or are you interested in incremental backups that _also_ allow you to reduce the daily backup size but still give you the ability to restore single files?

I am asking this because there are some backup programs that do not fit into the list above: the Amanda people, for example, call something an "incremental backup" that does not allow you to restore to an empty disk up to the state of the last incremental. Amanda in this case suffers from the problem that GNU tar cannot restore to an empty disk if someone renamed directories in a way that triggers the conceptual problems in GNU tar.

So it seems important to me to first find out what kind of backup you are interested in. Please answer my questions!
Jörg
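[Editor's note: Rich's scenario above (a full backup, an incremental containing only the changed files, then restoring a couple of named files) can be made concrete with GNU tar's --listed-incremental mode. This is only an illustration of the workflow, not the thread's recommendation; as noted later, GNU tar's incremental restores have a directory-rename caveat that star avoids. Requires GNU tar.]

```shell
#!/bin/sh
# Full backup + incremental backup + single-file restore, sketched with
# GNU tar's --listed-incremental (-g) mode.
set -e
work=$(mktemp -d)
cd "$work"
mkdir home
echo "v1" > home/a.txt
echo "v1" > home/b.txt
tar -cf full.tar -g snapshot.snar home   # level 0: full backup, creates snapshot.snar
sleep 1
echo "v2" > home/b.txt                   # only b.txt changes
tar -cf incr.tar -g snapshot.snar home   # level 1: contains the changed file(s)
rm home/b.txt                            # the "accidental" deletion
tar -xf incr.tar home/b.txt              # restore just that one named file
cat home/b.txt                           # back to the post-change contents, "v2"
```

Extracting a named member with plain -x (without -g) pulls the single file out of the incremental archive without replaying the incremental's directory bookkeeping.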
michael schuster <Michael.Schuster at Sun.COM> wrote:
> Rich never said so. He said "the ability to do incremental backups and
> restore arbitrary files from an archive are two different things." You were
> addressing an issue he never brought up.

I really don't understand why you did not answer my question. It is obvious that there is some confusion in the question, and it is not possible to continue the discussion if you do not try to help solve this problem.

Jörg
Darren J Moffat <Darren.Moffat at Sun.COM> wrote:
> ZFS discussion is fine, but the thread has gone into non-ZFS-related,
> generic backup territory. If there are ZFS specifics (like the question
> about extended attributes), then I think this is a reasonable place to
> discuss them. Discussion about the nomenclature of Amanda, when it does
> not concern ZFS, is not appropriate here.

You are welcome to create a mailing list for generic backup stuff.... The discussion here seems to have been started by people who are looking for a backup suitable for ZFS.

Jörg
On Tue, 26 Feb 2008, Joerg Schilling wrote:

> Hi Rich, I asked you a question that you did not yet answer:

Hi Jörg,

> Are you interested only in full backups and in the ability to restore single
> files from that type of backup?
>
> Or are you interested in incremental backups that _also_ allow you to reduce the
> daily backup size but still give you the ability to restore single files?

Both: I'd like to be able to restore single files from both a full and an incremental backup of a ZFS file system.

--
Rich Teer, SCSA, SCNA, SCSECA, OGB member
On Feb 26, 2008, at 10:23 AM, Rich Teer wrote:
> On Tue, 26 Feb 2008, Joerg Schilling wrote:
>> Hi Rich, I asked you a question that you did not yet answer:
>
> Hi Jörg,
>
>> Are you interested only in full backups and in the ability to restore single
>> files from that type of backup?
>>
>> Or are you interested in incremental backups that _also_ allow you to reduce the
>> daily backup size but still give you the ability to restore single files?
>
> Both: I'd like to be able to restore single files from both a full and an
> incremental backup of a ZFS file system.

A zfs-aware NDMP daemon would be really neat.

-Andy
Rich Teer <rich.teer at rite-group.com> wrote:
>> Are you interested only in full backups and in the ability to restore single
>> files from that type of backup?
>>
>> Or are you interested in incremental backups that _also_ allow you to reduce the
>> daily backup size but still give you the ability to restore single files?
>
> Both: I'd like to be able to restore single files from both a full and an
> incremental backup of a ZFS file system.

OK, then the only filesystem-independent program I know of that would be able to do what you like is star.

- The solution from David Korn's site does differential backups and is thus unable to easily restore single files.
- GNU tar fails with incremental restores if there was a certain kind of directory rename between two incrementals.
- Other programs do not support incrementals.

Jörg