Euan Thoms
2010-Apr-29 05:02 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
I'm looking for a way to back up my entire system, the rpool ZFS pool, to an external HDD so that it can be recovered in full if the internal HDD fails. Previously, with Solaris 10 on UFS, I would use ufsdump and ufsrestore, which worked so well that I was very confident with it. ZFS doesn't have an exact replacement for this, so I need to find a best practice to replace it.

I'm guessing that I can format the external HDD as a pool called 'backup' and "zfs send -R ... | zfs receive ..." to it. What I'm not sure about is how to restore. Back in the days of UFS, I would boot off the Solaris 10 CD into the single-user-mode command prompt, partition the HDD with the correct slices, format it, mount it, and ufsrestore the entire filesystem. With ZFS, I don't know what I'm doing. Can I just make a pool called rpool and zfs send/receive it back?

-- This message posted from opensolaris.org
Edward Ned Harvey
2010-Apr-29 11:06 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
> From: zfs-discuss-bounces@opensolaris.org [mailto:zfs-discuss-bounces@opensolaris.org] On Behalf Of Euan Thoms
>
> I'm looking for a way to back up my entire system, the rpool ZFS pool,
> to an external HDD so that it can be recovered in full if the internal
> HDD fails. Previously, with Solaris 10 on UFS, I would use ufsdump
> and ufsrestore, which worked so well that I was very confident with it.
> ZFS doesn't have an exact replacement for this, so I need to find a
> best practice to replace it.
>
> I'm guessing that I can format the external HDD as a pool called
> 'backup' and "zfs send -R ... | zfs receive ..." to it. What I'm not
> sure about is how to restore. Back in the days of UFS, I would boot off
> the Solaris 10 CD into the single-user-mode command prompt, partition
> the HDD with the correct slices, format it, mount it, and ufsrestore
> the entire filesystem. With ZFS, I don't know what I'm doing. Can I
> just make a pool called rpool and zfs send/receive it back?

An excellent question, and one which many people would never bother to explore, but important nonetheless. I have not tested this, so I'll encourage you to test it and come back to say how it went:

I would install Solaris or OpenSolaris just as you did the first time. That way the bootloader, partition tables, etc. are all configured for you automatically. (Just restoring the filesystem is not enough.) Then I'd boot from the CD and "zfs send | zfs receive" from the external backup disk to the actual rpool, thus replacing the entire filesystem.

You should test this, because I am only about 90% certain it will work.
Cindy Swearingen
2010-Apr-29 14:24 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
Hi Euan,

For full root pool recovery, see the ZFS Administration Guide here:

http://docs.sun.com/app/docs/doc/819-5461/ghzvz?l=en&a=view

Recovering the ZFS Root Pool or Root Pool Snapshots

Additional scenarios and details are provided in the ZFS troubleshooting wiki. The link is below, but the site is not responding at the moment, so check back later today:

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

Thanks,

Cindy

On 04/28/10 23:02, Euan Thoms wrote:
> I'm looking for a way to back up my entire system, the rpool ZFS pool, to an external HDD so that it can be recovered in full if the internal HDD fails. Previously, with Solaris 10 on UFS, I would use ufsdump and ufsrestore, which worked so well that I was very confident with it. ZFS doesn't have an exact replacement for this, so I need to find a best practice to replace it.
>
> I'm guessing that I can format the external HDD as a pool called 'backup' and "zfs send -R ... | zfs receive ..." to it. What I'm not sure about is how to restore. Back in the days of UFS, I would boot off the Solaris 10 CD into the single-user-mode command prompt, partition the HDD with the correct slices, format it, mount it, and ufsrestore the entire filesystem. With ZFS, I don't know what I'm doing. Can I just make a pool called rpool and zfs send/receive it back?
Edward Ned Harvey
2010-Apr-30 03:42 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
> From: zfs-discuss-bounces@opensolaris.org [mailto:zfs-discuss-bounces@opensolaris.org] On Behalf Of Cindy Swearingen
>
> For full root pool recovery, see the ZFS Administration Guide here:
>
> http://docs.sun.com/app/docs/doc/819-5461/ghzvz?l=en&a=view
>
> Recovering the ZFS Root Pool or Root Pool Snapshots

Unless I misunderstand, I think the intent of the OP's question is how to do bare metal recovery after some catastrophic failure. In this situation, recovery is much more complex than what the ZFS Admin Guide describes above. You would need to boot from CD, partition and format the disk, create a pool, create a filesystem, "zfs send | zfs receive" into that filesystem, and finally install the boot blocks. Only some of these steps are described in the ZFS Admin Guide, because simply expanding the rpool is a fundamentally easier thing to do.

Even though I think I could do that, I don't have a lot of confidence in it, and I can certainly imagine some pesky little detail being a problem. This is why I suggested this technique: reinstall the OS just as you did when you first built your machine, before the catastrophe. It doesn't even matter if you make the same selections you made before (IP address, package selection, authentication method, etc.), as long as you partition and install the bootloader as you did before. This way, you're sure the partitions, format, pool, filesystem, and bootloader are all configured properly. Then boot from CD again, and "zfs send | zfs receive" to overwrite your existing rpool.

As far as I know, that will take care of everything. But I only feel about 90% confident that it would work.
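The reinstall-then-overwrite idea above might be sketched as commands like the following. This is untested, and the pool name backup-pool, the snapshot name @latest, and the exact receive flags are illustrative assumptions, not something confirmed in the thread:

```shell
# Untested sketch of the reinstall-then-overwrite restore.
# Step 1: reinstall the OS from CD, so partitioning, rpool, and the
#         bootloader are recreated automatically by the installer.
# Step 2: boot from the CD again, attach the backup disk, then:
pfexec zpool import -f rpool                 # the freshly installed root pool
pfexec zpool import backup-pool              # the external backup pool (name assumed)
pfexec zfs send -R backup-pool/rpool@latest | pfexec zfs receive -Fu rpool
pfexec zpool export backup-pool
# Step 3: reboot from the internal disk.
```

The -F on the receive forcibly rolls the throw-away install back so the backup stream can overwrite it, and -u keeps the received filesystems unmounted while still booted from the CD.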
Euan Thoms
2010-Apr-30 07:36 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
Thanks, Cindy, for the links. I see that this could possibly be a replacement for ufsdump/ufsrestore, but unless a further snapshot can be appended to the file containing the recursive root pool snapshot, it would still be a regression from the incremental backup that ufsdump has. It would take a long time to run every night, but on the plus side, an in-situ backup without having to stop services is an improvement over the UFS days. I haven't tried it yet; it sounds a bit more complicated than I had hoped for.
Euan Thoms
2010-Apr-30 07:59 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
Thanks, Edward, you understood me perfectly. Your suggestion sounds very promising. I like the idea of letting the installation CD set everything up; that way some hardware/drivers could possibly be updated and it would still work. On top of bare metal recovery, I would like to leverage the incredible power of ZFS snapshots; I love the way zfs send / receive works. It's the root pool and BE complexities that worry me.

My ideal solution would be to have the data accessible from the backup media (external HDD) as well as usable for a full system restore. Below is what I would consider ideal:

1.) Create a pool on an external HDD called backup-pool
2.) Send the whole rpool (all filesystems within) to the backup pool.
3.) Be able to browse the backup pool starting from /backup-pool
4.) Be able to export the backup pool and import it on PC2 to browse the files there
5.) Be able to create another snapshot of rpool and "zfs send -i rpool@first-snapshot rpool@next-snapshot | zfs receive backup-pool/rpool" (send the increment to the backup pool/drive)
6.) Be able to browse the latest snapshot data on the backup drive, whilst able to clone an older snapshot
7.) Be able to 'zfs send' the latest backup snapshot to a fresh installation, thus getting it back to exactly how it was before the disaster.

At the moment I have successfully achieved 1-4 and I'm very impressed. I am currently trying to get 5-6 working, mildly confident that it will work; I've done it in part, but I got errors with the /export/home filesystem and subsequently the pool failed to import/export. It's just copying over again after wiping the backup pool and starting again. I hope build 134 is a good build to test this on. However, it's step 7 that I have no idea will work. Edward, your post gives me promise; 90% confidence is a good start. Watch this space for my results.
Euan Thoms
2010-Apr-30 11:47 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
Well, I'm so impressed with ZFS at the moment! I just got steps 5 and 6 (from my last post) to work, and they work well. Not only does it send the increment over to the backup drive, the latest increment/snapshot appears in the mounted filesystem. In Nautilus I can browse an exact copy of my PC, from / to the deepest parts of my home folder. And it will back up my entire system in 1-2 minutes. AMAZING!!

Below are the steps; try it for yourself on a spare USB HDD:

>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

# Create backup storage pool on drive c12t0d0
pfexec zpool create backup-pool c12t0d0

# Recursively snapshot the root pool (rpool)
pfexec zfs snapshot -r rpool@first

# Send the entire pool with all its snapshots to the backup pool, without mounting
pfexec zfs send rpool@first | pfexec zfs receive -u backup-pool/rpool
pfexec zfs send rpool/ROOT@first | pfexec zfs receive -u backup-pool/rpool/ROOT
pfexec zfs send rpool/ROOT/OpenSolaris-2009.06-134@first | pfexec zfs receive -u backup-pool/rpool/ROOT/OpenSolaris-2009.06-134
pfexec zfs send rpool/dump@first | pfexec zfs receive -u backup-pool/rpool/dump
pfexec zfs send rpool/swap@first | pfexec zfs receive -u backup-pool/rpool/swap
pfexec zfs send rpool/webspace@first | pfexec zfs receive -u backup-pool/rpool/webspace
pfexec zfs send rpool/export@first | pfexec zfs receive -u backup-pool/rpool/export
pfexec zfs send rpool/export/home@first | pfexec zfs receive -u backup-pool/rpool/export/home
pfexec zfs send rpool/export/home/euan@first | pfexec zfs receive -u backup-pool/rpool/export/home/euan
pfexec zfs send rpool/export/home/euan/Downloads@first | pfexec zfs receive -u backup-pool/rpool/export/home/euan/Downloads
pfexec zfs send rpool/export/home/euan/VBOX-HDD@first | pfexec zfs receive -u backup-pool/rpool/export/home/euan/VBOX-HDD

# Change mount points to the correct structure
pfexec zfs set mountpoint=legacy backup-pool/rpool/ROOT
pfexec zfs set mountpoint=/backup-pool/opensolaris backup-pool/rpool/ROOT/OpenSolaris-2009.06-134
pfexec zfs set mountpoint=/backup-pool/opensolaris/rpool backup-pool/rpool
pfexec zfs set mountpoint=/backup-pool/opensolaris/opt/webspace backup-pool/rpool/webspace
pfexec zfs set mountpoint=/backup-pool/opensolaris/export backup-pool/rpool/export
pfexec zfs set mountpoint=/backup-pool/opensolaris/export/home backup-pool/rpool/export/home
pfexec zfs set mountpoint=/backup-pool/opensolaris/export/home/euan backup-pool/rpool/export/home/euan
pfexec zfs set mountpoint=/backup-pool/opensolaris/export/home/euan/Downloads backup-pool/rpool/export/home/euan/Downloads
pfexec zfs set mountpoint=/backup-pool/opensolaris/export/home/euan/VBOX-HDD backup-pool/rpool/export/home/euan/VBOX-HDD

# Now we can mount the backup pool filesystems
pfexec zfs mount backup-pool/rpool/ROOT/OpenSolaris-2009.06-134
pfexec zfs mount backup-pool/rpool
pfexec zfs mount backup-pool/rpool/webspace
pfexec zfs mount backup-pool/rpool/export
pfexec zfs mount backup-pool/rpool/export/home
pfexec zfs mount backup-pool/rpool/export/home/euan
pfexec zfs mount backup-pool/rpool/export/home/euan/Downloads
pfexec zfs mount backup-pool/rpool/export/home/euan/VBOX-HDD

# Take a second snapshot at a later point in time
pfexec zfs snapshot -r rpool@second

# Send the increments to the backup pool
pfexec zfs send -i rpool/ROOT@first rpool/ROOT@second | pfexec zfs recv -F backup-pool/rpool/ROOT
pfexec zfs send -i rpool/ROOT/OpenSolaris-2009.06-134@first rpool/ROOT/OpenSolaris-2009.06-134@second | pfexec zfs recv -F backup-pool/rpool/ROOT/OpenSolaris-2009.06-134
pfexec zfs send -i rpool@first rpool@second | pfexec zfs recv -F backup-pool/rpool
pfexec zfs send -i rpool/dump@first rpool/dump@second | pfexec zfs recv -F backup-pool/rpool/dump
pfexec zfs send -i rpool/swap@first rpool/swap@second | pfexec zfs recv -F backup-pool/rpool/swap
pfexec zfs send -i rpool/webspace@first rpool/webspace@second | pfexec zfs recv -F backup-pool/rpool/webspace
pfexec zfs send -i rpool/export@first rpool/export@second | pfexec zfs recv -F backup-pool/rpool/export
pfexec zfs send -i rpool/export/home@first rpool/export/home@second | pfexec zfs recv -F backup-pool/rpool/export/home
pfexec zfs send -i rpool/export/home/euan@first rpool/export/home/euan@second | pfexec zfs recv -F backup-pool/rpool/export/home/euan
pfexec zfs send -i rpool/export/home/euan/Downloads@first rpool/export/home/euan/Downloads@second | pfexec zfs recv -F backup-pool/rpool/export/home/euan/Downloads
pfexec zfs send -i rpool/export/home/euan/VBOX-HDD@first rpool/export/home/euan/VBOX-HDD@second | pfexec zfs recv -F backup-pool/rpool/export/home/euan/VBOX-HDD

#pfexec zfs umount backup-pool/rpool/export/home/euan/VBOX-HDD
#pfexec zfs umount backup-pool/rpool/export/home/euan/Downloads
#pfexec zfs umount backup-pool/rpool/export/home/euan
#pfexec zfs umount backup-pool/rpool/export/home
#pfexec zfs umount backup-pool/rpool/export
#pfexec zfs umount backup-pool/rpool/webspace
#pfexec zfs umount backup-pool/rpool
#pfexec zfs umount backup-pool/rpool/ROOT/OpenSolaris-2009.06-134

# Export the pool so we can unplug the USB HDD
pfexec zpool export backup-pool

# Import the pool again to test
pfexec zpool import
pfexec zpool import backup-pool

# Test that the files are still there
ls /backup-pool/opensolaris

<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

So, now for how to do a full-system recovery from the backup after installing a fresh copy of OpenSolaris. Any suggestions??
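For comparison, the per-filesystem send/receive pairs above can in principle be collapsed into a recursive replication stream. This is an untested sketch, not part of the tested steps; the -u flag keeps the received filesystems unmounted, since their copied mountpoints would otherwise clash with the live rpool and still need adjusting as shown in the listing:

```shell
# Untested sketch: the same backup with one recursive stream per pass.
pfexec zfs snapshot -r rpool@first
pfexec zfs send -R rpool@first | pfexec zfs receive -Fu backup-pool/rpool
# ...later...
pfexec zfs snapshot -r rpool@second
pfexec zfs send -R -i rpool@first rpool@second | pfexec zfs receive -Fu backup-pool/rpool
```

Note that -F on the incremental receive rolls the backup copy back to @first before applying the increment, discarding any changes made on the backup side.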
Edward Ned Harvey
2010-Apr-30 12:09 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
> From: zfs-discuss-bounces@opensolaris.org [mailto:zfs-discuss-bounces@opensolaris.org] On Behalf Of Euan Thoms
>
> My ideal solution would be to have the data accessible from the backup
> media (external HDD) as well as usable for a full system restore. Below
> is what I would consider ideal:
>
> 1.) Create a pool on an external HDD called backup-pool
> 2.) Send the whole rpool (all filesystems within) to the backup pool.
> 3.) Be able to browse the backup pool starting from /backup-pool
> 4.) Be able to export the backup pool and import it on PC2 to browse the
> files there
> 5.) Be able to create another snapshot of rpool and "zfs send -i
> rpool@first-snapshot rpool@next-snapshot | zfs receive backup-pool/rpool"
> (send the increment to the backup pool/drive)
> 6.) Be able to browse the latest snapshot data on the backup drive,
> whilst able to clone an older snapshot
> 7.) Be able to 'zfs send' the latest backup snapshot to a fresh
> installation, thus getting it back to exactly how it was before the disaster.

Yes, all of the above are possible. This is what I personally do.

> However, it's step 7 that I have no idea will work. Edward, your
> post gives me promise; 90% confidence is a good start.

The remaining 10% is: although I know for sure you can do all your backups as described above, I have not attempted the bare metal restore. Although I believe I understand all that's needed about partitions, boot labels, etc., I must acknowledge some uncertainty about precisely the best method of doing the bare metal restore.
Edward Ned Harvey
2010-Apr-30 12:14 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
> From: zfs-discuss-bounces@opensolaris.org [mailto:zfs-discuss-bounces@opensolaris.org] On Behalf Of Euan Thoms
>
> pfexec zfs send rpool@first | pfexec zfs receive -u backup-pool/rpool
> pfexec zfs send rpool/ROOT@first | pfexec zfs receive -u backup-pool/rpool/ROOT
> pfexec zfs send rpool/ROOT/OpenSolaris-2009.06-134@first | pfexec zfs
> receive -u backup-pool/rpool/ROOT/OpenSolaris-2009.06-134
> pfexec zfs send rpool/dump@first | pfexec zfs receive -u backup-pool/rpool/dump

(and so on)

I notice you have many ZFS filesystems inside other ZFS filesystems. While this is common practice, I will personally advise against it in general, unless you can name a reason why you want to do that. Here is one reason not to: if you're working in some directory and want to access a snapshot of some file, you have to go up to the root of the filesystem you're currently in. If you go up too far and find a .zfs directory in some filesystem above your current filesystem, then you can't find your snapshots; you have to know precisely which .zfs directory is the right one. Also, as you've demonstrated, it makes your backup scripts much longer.

> #pfexec zfs umount backup-pool/rpool/export/home/euan/VBOX-HDD
> #pfexec zfs umount backup-pool/rpool/export/home/euan/Downloads

Instead of mounting and unmounting the individual external ZFS filesystems, I would recommend importing and exporting the external zpool. There is no need to mount/unmount; it happens automatically with zpool import/export.
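That advice amounts to a much shorter workflow for the removable disk, roughly (pool name as in the thread):

```shell
# Let zpool export/import handle all mounting of the backup pool.
pfexec zpool export backup-pool   # unmounts every filesystem in the pool
# ...unplug the USB disk, optionally move it to another machine...
pfexec zpool import backup-pool   # filesystems remount at their saved mountpoints
```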
Cindy Swearingen
2010-Apr-30 14:46 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
Hi Ned,

Unless I misunderstand what bare metal recovery means, the following procedure describes how to boot from CD, recreate the root pool, and restore the root pool snapshots:

http://docs.sun.com/app/docs/doc/819-5461/ghzur?l=en&a=view

I retest this process at every Solaris release.

Thanks,

Cindy

On 04/29/10 21:42, Edward Ned Harvey wrote:
>> From: zfs-discuss-bounces@opensolaris.org [mailto:zfs-discuss-bounces@opensolaris.org] On Behalf Of Cindy Swearingen
>>
>> For full root pool recovery see the ZFS Administration Guide, here:
>>
>> http://docs.sun.com/app/docs/doc/819-5461/ghzvz?l=en&a=view
>>
>> Recovering the ZFS Root Pool or Root Pool Snapshots
>
> Unless I misunderstand, I think the intent of the OP's question is how to do
> bare metal recovery after some catastrophic failure. In this situation,
> recovery is much more complex than what the ZFS Admin Guide says above. You
> would need to boot from CD, partition and format the disk, create a
> pool, create a filesystem, "zfs send | zfs receive" into that
> filesystem, and finally install the boot blocks. Only some of these steps
> are described in the ZFS Admin Guide, because simply expanding the rpool is
> a fundamentally easier thing to do.
>
> Even though I think I could do that, I don't have a lot of confidence in
> it, and I can certainly imagine some pesky little detail being a problem.
>
> This is why I suggested the technique of:
> Reinstall the OS just like you did when you first built your machine, before
> the catastrophe. It doesn't even matter if you make the same selections you
> made before (IP address, package selection, authentication method, etc.), as
> long as you're choosing to partition and install the bootloader like you did
> before.
>
> This way, you're sure the partitions, format, pool, filesystem, and
> bootloader are all configured properly.
> Then boot from CD again, and "zfs send | zfs receive" to overwrite your
> existing rpool.
>
> And as far as I know, that will take care of everything. But I only feel
> like 90% confident that would work.
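The documented procedure Cindy links to boils down to roughly the following sketch. The linked guide is authoritative; the device name, BE name, and stream file here are illustrative assumptions, and this is untested as written:

```shell
# Rough sketch (untested) of root pool recovery after booting from the
# install CD; c1t0d0s0, the BE name, and /mnt/rpool.snap are assumptions.
pfexec zpool create -f -o failmode=continue -R /a -m legacy rpool c1t0d0s0
pfexec zfs receive -Fdu rpool < /mnt/rpool.snap   # stream made with: zfs send -Rv rpool@snap
pfexec zpool set bootfs=rpool/ROOT/opensolaris rpool
pfexec installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0   # x86; SPARC uses installboot
```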
erik.ableson
2010-Apr-30 15:40 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
On 30 avr. 2010, at 13:47, Euan Thoms wrote:

> Well, I'm so impressed with ZFS at the moment! I just got steps 5 and 6 (from my last post) to work, and they work well. Not only does it send the increment over to the backup drive, the latest increment/snapshot appears in the mounted filesystem. In Nautilus I can browse an exact copy of my PC, from / to the deepest parts of my home folder. And it will back up my entire system in 1-2 minutes. AMAZING!!
>
> Below are the steps; try it for yourself on a spare USB HDD:
>
> # Create backup storage pool on drive c12t0d0
> pfexec zpool create backup-pool c12t0d0
> # Recursively snapshot the root pool (rpool)
> pfexec zfs snapshot -r rpool@first
>
> # Send the entire pool with all its snapshots to the backup pool, without mounting
> pfexec zfs send rpool@first | pfexec zfs receive -u backup-pool/rpool
> [snip]
> pfexec zfs send rpool/export/home/euan/VBOX-HDD@first | pfexec zfs receive -u backup-pool/rpool/export/home/euan/VBOX-HDD
>
> # Take a second snapshot at a later point in time
> pfexec zfs snapshot -r rpool@second
>
> # Send the increments to the backup pool
> pfexec zfs send -i rpool/ROOT@first rpool/ROOT@second | pfexec zfs recv -F backup-pool/rpool/ROOT
> [snip]
> pfexec zfs send -i rpool/export/home/euan/Downloads@first rpool/export/home/euan/Downloads@second | pfexec zfs recv -F backup-pool/rpool/export/home/euan/Downloads

Just a quick comment on the send/recv operations: adding -R makes them recursive, so you only need one line to send rpool and all descendant filesystems. I use the send/recv operations for all sorts of backup operations.
For the equivalent of a "full backup" of my boot volumes:

NOW=`date +%Y-%m-%d_%H-%M-%S`
pfexec /usr/sbin/zfs snapshot -r rpool@$NOW
pfexec /usr/sbin/zfs send -R rpool@$NOW | /usr/bin/gzip > /mnt/backups/rpool.$NOW.zip
pfexec /usr/sbin/zfs destroy -r rpool@$NOW

But for any incremental transfers, it's better to recv to an actual filesystem that you can scrub and confirm that the stream made it over OK.

Cheers,

Erik
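The reverse direction of that full backup might look like this; an untested sketch, with the archive filename an illustrative assumption:

```shell
# Untested sketch: restoring the gzip'd full-backup stream from above
# into a freshly created root pool (filename is an assumption).
/usr/bin/gzcat /mnt/backups/rpool.2010-04-30_12-00-00.zip | \
    pfexec /usr/sbin/zfs receive -Fdu rpool
```

Here -F overwrites any existing datasets in the target, -d recreates the descendant filesystem names under rpool, and -u avoids mounting them, which matters when booted from CD.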
Bob Friesenhahn
2010-Apr-30 17:39 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
On Thu, 29 Apr 2010, Edward Ned Harvey wrote:

> This is why I suggested the technique of:
> Reinstall the OS just like you did when you first built your machine, before
> the catastrophe. It doesn't even matter if you make the same selections you

With the new Oracle policies, it seems unlikely that you will be able to reinstall the OS and achieve what you had before. An exact recovery method (dd of partition images, or recreating the pool with 'zfs receive') seems like the only way to be assured of recovery moving forward.

Bob
--
Bob Friesenhahn
bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
Edward Ned Harvey
2010-May-01 04:46 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
> From: Cindy Swearingen [mailto:cindy.swearingen@oracle.com]
> Sent: Friday, April 30, 2010 10:46 AM
>
> Hi Ned,
>
> Unless I misunderstand what bare metal recovery means, the following
> procedure describes how to boot from CD, recreate the root pool, and
> restore the root pool snapshots:
>
> http://docs.sun.com/app/docs/doc/819-5461/ghzur?l=en&a=view
>
> I retest this process at every Solaris release.

You are awesome. ;-) When I said I was 90% certain, it turns out that was a spot-on assessment of my own knowledge: I did not know about setting the "bootfs" property.

I see that you are apparently storing the "zfs send" datastream in a file. Of course that's discouraged, but no problem as long as it's no problem. I personally prefer to "zfs send | zfs receive" directly onto removable storage.

One more really important gotcha. Let's suppose the version of ZFS on the CD supports up to zpool version 14, and suppose your "live" system had been fully updated before the crash and its zpool had been upgraded to version 15. Wouldn't that mean it's impossible to restore your rpool using the CD? Wouldn't it mean it's impossible to restore the rpool using anything other than a fully installed, and at least moderately updated, on-hard-disk OS? Maybe you could fully install onto disk 2 of the system, upgrade it, and then use that OS to restore the rpool onto disk 1 of the system...

Would that be fuel to recommend to people, "Never upgrade your version of zpool or zfs on your rpool"?
Edward Ned Harvey
2010-May-01 04:52 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
> From: Bob Friesenhahn [mailto:bfriesen@simple.dallas.tx.us]
> Sent: Friday, April 30, 2010 1:40 PM
>
> With the new Oracle policies, it seems unlikely that you will be able
> to reinstall the OS and achieve what you had before. An exact
> recovery method (dd of partition images or recreate pool with 'zfs
> receive') seems like the only way to be assured of recovery moving
> forward.

What??? The confusing parts: "the new Oracle policies" and "unlikely that you will be able to reinstall the OS and achieve what you had before".

Didn't you see Cindy's post? Would you like to point out any specific flaws in what was written, which I guess she probably wrote? In particular, I found the following to be very valuable:

> From: Cindy Swearingen [mailto:cindy.swearingen@oracle.com]
>
> the following
> procedure describes how to boot from CD, recreate the root pool, and
> restore the root pool snapshots:
>
> http://docs.sun.com/app/docs/doc/819-5461/ghzur?l=en&a=view
>
> I retest this process at every Solaris release.
Bob Friesenhahn
2010-May-01 15:06 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
On Sat, 1 May 2010, Edward Ned Harvey wrote:

> Would that be fuel to recommend people, "Never upgrade your version of zpool
> or zfs on your rpool?"

It does seem to be a wise policy not to update the pool and filesystem versions unless you require a new pool or filesystem feature. Then you would update to the minimum version required to support that feature. Note that if the default filesystem version changes and you create a new filesystem, this may also cause problems (I have been bitten by that before).

Bob
--
Bob Friesenhahn
bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
Peter Tribble
2010-May-01 21:57 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
On Fri, Apr 30, 2010 at 6:39 PM, Bob Friesenhahn
<bfriesen@simple.dallas.tx.us> wrote:

> On Thu, 29 Apr 2010, Edward Ned Harvey wrote:
>>
>> This is why I suggested the technique of:
>> Reinstall the OS just like you did when you first built your machine,
>> before the catastrophe. It doesn't even matter if you make the same
>> selections you
>
> With the new Oracle policies, it seems unlikely that you will be able to
> reinstall the OS and achieve what you had before.

And what policies have Oracle introduced that mean you can't reinstall your system?

--
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
Ian Collins
2010-May-01 22:00 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
On 05/01/10 04:46 PM, Edward Ned Harvey wrote:

> One more really important gotcha. Let's suppose the version of ZFS on the
> CD supports up to zpool version 14. Let's suppose your "live" system had
> been fully updated before the crash, and the zpool had been upgraded to
> version 15. Wouldn't that mean it's impossible to restore your rpool
> using the CD?

Just make sure you have an up-to-date live CD when you upgrade your pool. It's seldom wise to upgrade a pool too soon after an OS upgrade; you may find an issue and have to revert to a previous BE.

--
Ian.
Bob Friesenhahn
2010-May-01 23:06 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
On Sat, 1 May 2010, Peter Tribble wrote:

>> With the new Oracle policies, it seems unlikely that you will be able to
>> reinstall the OS and achieve what you had before.
>
> And what policies have Oracle introduced that mean you can't reinstall
> your system?

The main concern is that you might not be able to get back the same OS install you had before, due to loss of patch access after your service contract has expired and Oracle has arbitrarily decided not to grant you a new one. Maybe if you are able to overwrite the pool with the original pristine state, rather than rely on an "install", then you would be OK.

Bob
--
Bob Friesenhahn
bfriesen@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
Edward Ned Harvey
2010-May-02 04:23 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
> From: Bob Friesenhahn [mailto:bfriesen@simple.dallas.tx.us]
> Sent: Saturday, May 01, 2010 7:07 PM
>
> On Sat, 1 May 2010, Peter Tribble wrote:
>>>
>>> With the new Oracle policies, it seems unlikely that you will be
>>> able to reinstall the OS and achieve what you had before.
>>
>> And what policies have Oracle introduced that mean you can't
>> reinstall your system?
>
> The main concern is that you might not be able to get back the same OS
> install you had before due to loss of patch access after your service
> contract has expired and Oracle arbitrarily decided not to grant you a
> new one.

It's as if you didn't even read this thread. In the proposed answers to Euan's question, there is no need to apply any patches or to have any service contract. As long as you still have your OS install CD, or *any* OS install CD, you install a throw-away OS just for the sake of letting the installer create the partitions, boot record, boot properties, etc. Then you immediately obliterate and overwrite rpool using your backup image.

Since this restoration process puts the filesystem back into the exact state it was in before the failure, all the patches you previously had are restored, and everything is restored just as it was before the crash. There is nothing anywhere which indicates any reason you couldn't do this, even in the future. So you're totally spreading BS on this one.
Cindy Swearingen
2010-May-03 16:58 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
Hi Ned,

Yes, I agree that it is a good idea not to update your root pool version
before restoring your existing root pool snapshots.

If you are using a later Solaris OS to recover your pool and root pool
snapshots, you can always create the pool with a specific version, like this:

# zpool create -o version=19 rpool c1t3d0s0

I will add this info to the root pool recovery process.

Thanks for the feedback...

Cindy

On 04/30/10 22:46, Edward Ned Harvey wrote:
>> From: Cindy Swearingen [mailto:cindy.swearingen at oracle.com]
>> Sent: Friday, April 30, 2010 10:46 AM
>>
>> Hi Ned,
>>
>> Unless I misunderstand what bare metal recovery means, the following
>> procedure describes how to boot from CD, recreate the root pool, and
>> restore the root pool snapshots:
>>
>> http://docs.sun.com/app/docs/doc/819-5461/ghzur?l=en&a=view
>>
>> I retest this process at every Solaris release.
>
> You are awesome. ;-)
> When I said I was 90% certain, it turns out, that was a spot-on assessment
> of my own knowledge. I did not know about setting the "bootfs" property.
>
> I see that you are apparently storing the "zfs send" datastream in a file.
> Of course, discouraged, but no problem as long as it's no problem. I
> personally prefer to "zfs send | zfs receive" directly onto removable
> storage.
>
> One more really important gotcha. Let's suppose the version of zfs on the
> CD supports up to zpool 14. Let's suppose your "live" system had been fully
> updated before crash, and let's suppose the zpool had been upgraded to zpool
> 15. Wouldn't that mean it's impossible to restore your rpool using the CD?
> Wouldn't it mean it's impossible to restore the rpool using anything other
> than a fully installed, and at least moderately updated, on-hard-disk OS?
> Maybe you could fully install onto hard disk 2 of the system, and then
> upgrade, and then use that OS to restore the rpool onto disk 1 of the
> system...
>
> Would that be fuel to recommend people, "Never upgrade your version of zpool
> or zfs on your rpool"?
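For readers who don't want to chase the docs.sun.com link above, the bare-metal recovery procedure under discussion boils down to roughly the following sketch. The device name c1t0d0s0, the boot environment name s10be, the pinned version, and the backup file path are all placeholders, and exact flags vary by release, so treat this as an outline rather than a recipe:

```sh
# Boot from installation media into a single-user shell, then:

# Recreate the root pool on the replacement disk (SMI label, slice 0).
# Pin the pool version if the media's ZFS is newer than the backup's.
zpool create -f -o failmode=continue -R /a -m legacy \
    -o version=14 rpool c1t0d0s0

# Restore the recursive snapshot stream from the backup.
zfs receive -Fdu rpool < /mnt/rpool.backup.snap

# Mark the dataset to boot from, then reinstall the boot blocks.
zpool set bootfs=rpool/ROOT/s10be rpool
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk \
    /dev/rdsk/c1t0d0s0          # SPARC; use installgrub on x86
```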
Edward Ned Harvey
2010-May-03 21:22 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
> From: Cindy Swearingen [mailto:cindy.swearingen at oracle.com]
> Sent: Monday, May 03, 2010 12:58 PM
>
> Hi Ned,
>
> Yes, I agree that it is a good idea not to update your root pool
> version before restoring your existing root pool snapshots.
>
> If you are using a later Solaris OS to recover your pool and root pool
> snapshots, you can always create the pool with a specific version, like
> this:
>
> # zpool create -o version=19 rpool c1t3d0s0
>
> I will add this info to the root pool recovery process.
>
> Thanks for the feedback...

But if you unfortunately had a necessity to upgrade your rpool version ...
Such as I recently did, when my replacement log device was 1MB smaller than
the device it was intended to mirror ... The most graceful way I can think
of to handle that zpool upgrade would be to also install
solaris/opensolaris to a removable disk, and *test* that you can boot from
it. This way, when you update your primary OS and then update the zpool,
you can also update the removable OS, and rest assured you've got bootable
removable media which supports the necessary zpool version, so you actually
have some option available to restore rpool in the event of failure.

But this technique sounds like it leaves a lot of possible failure modes ...
Such as ... Once you register your original Solaris 10 OS for updates, are
you unable to get updates on the removable OS? Does SPARC support booting
removable hard disks? And even on x86, which supports booting removable
media, I've certainly seen little "gotchas" such as "Well, it *should*
work..." And I recently had a server where I was able to install Solaris 10,
but couldn't install opensolaris due to driver incompatibility. So while
that's presumably unusual, it's certainly nonzero.
So I think the take-away knowledge here is: if you *must* upgrade your
rpool version, and you want to be able to restore your rpool from a backup,
it's recommended (perhaps critical) that you also maintain a removable OS
which supports the necessary hardware and rpool version, because without
it, you might have no viable restore procedure. Test it.
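One way to catch the version mismatch described above before it matters is to record the versions your backup will require and compare them against what the rescue environment supports. A sketch, with placeholder pool names:

```sh
# On the live system, note what a restore of this pool will require:
zpool get version rpool        # pool version, e.g. 15
zfs get -r version rpool       # filesystem versions matter too

# In the rescue environment (CD or removable OS), list what it supports:
zpool upgrade -v               # highest pool version this ZFS understands
zfs upgrade -v                 # likewise for filesystem versions

# If the backup's versions exceed the rescue environment's support, the
# restore will fail -- so compare these before you need them.
```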
Richard Elling
2010-May-03 21:53 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
more below...

On May 3, 2010, at 2:22 PM, Edward Ned Harvey wrote:
>> From: Cindy Swearingen [mailto:cindy.swearingen at oracle.com]
>> Sent: Monday, May 03, 2010 12:58 PM
>>
>> Hi Ned,
>>
>> Yes, I agree that it is a good idea not to update your root pool
>> version before restoring your existing root pool snapshots.
>>
>> If you are using a later Solaris OS to recover your pool and root pool
>> snapshots, you can always create the pool with a specific version, like
>> this:
>>
>> # zpool create -o version=19 rpool c1t3d0s0
>>
>> I will add this info to the root pool recovery process.
>>
>> Thanks for the feedback...
>
> But if you unfortunately had a necessity to upgrade your rpool version ...
> Such as I recently did, when my replacement log device was 1MB smaller than
> the device it was intended to mirror ... The most graceful way I can think
> of to handle that zpool upgrade would be to also install
> solaris/opensolaris to a removable disk, and *test* that you can boot from
> it. This way, when you update your primary OS and then update the zpool,
> you can also update the removable OS, and rest assured you've got bootable
> removable media which supports the necessary zpool version, so you
> actually have some option available to restore rpool in the event of
> failure.
>
> But this technique sounds like it leaves a lot of possible failures ...
> Such as ... Once you register your original Solaris 10 OS for updates, are
> you unable to get updates on the removable OS?

This is not a problem on Solaris 10. It can affect OpenSolaris, though.

> Does SPARC support booting removable hard disks?

Yes, though it is trivial (and easier) to boot from the net.

> And even on x86, which supports booting removable media, I've certainly
> seen little "gotchas" such as "Well, it *should* work..."

well... maybe you get what you pay for? :-)

> And I recently had a server where I was able to install Solaris 10, but
> couldn't install opensolaris due to driver incompatibility. So while
> that's presumably unusual, it's certainly nonzero.

Be sure to file a bug. That said, there has been a move recently to EOF
some old drivers. If you file a bug against archaic hardware drivers,
don't be surprised if they are EOF.

> So I think the take-away knowledge here is: if you *must* upgrade your
> rpool version, and you want to be able to restore your rpool from a
> backup, it's recommended (perhaps critical) that you also maintain a
> removable OS which supports the necessary hardware and rpool version,
> because without it, you might have no viable restore procedure. Test it.

This is SOP, no?
 -- richard

ZFS storage and performance consulting at http://www.RichardElling.com
Edward Ned Harvey
2010-May-04 02:55 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
> From: Richard Elling [mailto:richard.elling at gmail.com]
>
>> Once you register your original Solaris 10 OS for updates, are you
>> unable to get updates on the removable OS?
>
> This is not a problem on Solaris 10. It can affect OpenSolaris, though.

That's precisely the opposite of what I thought. Care to explain?

If you have a primary OS disk, and you apply OS updates ... in order to
access those updates in Sol10, you need a registered account and login,
with paid Solaris support. Then, if you boot a removable hard disk, and you
wish to apply updates to keep it at the same rev as the primary OS, you've
got to once again enter your Sol10 update download credentials, and I don't
presume it works, or will always work, for a 2nd installation of Sol10.
Aren't you supposed to pay for support on each OS installation? Doesn't
that mean you'd have to pay a separate support contract for the removable
boot hard drive?

But in OpenSolaris, updates are free and don't require any login
credentials. So if you update your primary OS, I see nothing to prevent you
from booting your removable disk and applying the same updates to the 2nd
OS.
Richard Elling
2010-May-04 03:39 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
On May 3, 2010, at 7:55 PM, Edward Ned Harvey wrote:
>> From: Richard Elling [mailto:richard.elling at gmail.com]
>>
>>> Once you register your original Solaris 10 OS for updates, are you
>>> unable to get updates on the removable OS?
>>
>> This is not a problem on Solaris 10. It can affect OpenSolaris, though.
>
> That's precisely the opposite of what I thought. Care to explain?

In Solaris 10, you are stuck with LiveUpgrade, so the root pool is
not shared with other boot environments.
 -- richard

> If you have a primary OS disk, and you apply OS updates ... in order to
> access those updates in Sol10, you need a registered account and login,
> with paid Solaris support. Then, if you boot a removable hard disk, and
> you wish to apply updates to keep it at the same rev as the primary OS,
> you've got to once again enter your Sol10 update download credentials, and
> I don't presume it works, or will always work, for a 2nd installation of
> Sol10. Aren't you supposed to pay for support on each OS installation?
> Doesn't that mean you'd have to pay a separate support contract for the
> removable boot hard drive?
>
> But in OpenSolaris, updates are free and don't require any login
> credentials. So if you update your primary OS, I see nothing to prevent
> you from booting your removable disk and applying the same updates to the
> 2nd OS.

-- 
ZFS storage and performance consulting at http://www.RichardElling.com
Ian Collins
2010-May-04 03:47 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
On 05/ 4/10 03:39 PM, Richard Elling wrote:
> On May 3, 2010, at 7:55 PM, Edward Ned Harvey wrote:
>>> From: Richard Elling [mailto:richard.elling at gmail.com]
>>>
>>>> Once you register your original Solaris 10 OS for updates, are you
>>>> unable to get updates on the removable OS?
>>>
>>> This is not a problem on Solaris 10. It can affect OpenSolaris, though.
>>
>> That's precisely the opposite of what I thought. Care to explain?
>
> In Solaris 10, you are stuck with LiveUpgrade, so the root pool is
> not shared with other boot environments.

All the LU BEs live in the root pool.

-- 
Ian.
Bob Friesenhahn
2010-May-04 14:50 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
On Mon, 3 May 2010, Edward Ned Harvey wrote:
> That's precisely the opposite of what I thought. Care to explain?
>
> If you have a primary OS disk, and you apply OS updates ... in order to
> access those updates in Sol10, you need a registered account and login,
> with paid Solaris support. Then, if you boot a removable hard disk, and
> you wish to apply updates to keep it at the same rev as the primary OS,
> you've got to once again enter your Sol10 update download credentials, and
> I don't presume it works, or will always work, for a 2nd installation of
> Sol10. Aren't you supposed to pay for support on each OS installation?
> Doesn't that mean you'd have to pay a separate support contract for the
> removable boot hard drive?

The Solaris 10 licensing situation has changed dramatically in recent
months. It used to be that anyone was always eligible for security updates,
and the core kernel was always marked as a security update. Now the only
eligibility for use of Solaris 10 is either via an existing service
contract, or an interim 90-day period (with registration) intended for
product evaluation.

It is pretty common for the Solaris 10 installation from media to support
an older version of zfs than the kernel now running on the system (which
was updated via a patch). Due to the new Solaris 10 license and the
potential need to download and apply a patch, issues emerge if this
maintenance needs to be done after a service contract (or the 90-day eval
entitlement) has expired.

As a result, it is wise for Solaris 10 users to maintain a local repository
of licensed patches in case their service contract should expire.

Bob
-- 
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
Bob Friesenhahn
2010-May-04 14:55 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
On Mon, 3 May 2010, Richard Elling wrote:
>>> This is not a problem on Solaris 10. It can affect OpenSolaris, though.
>>
>> That's precisely the opposite of what I thought. Care to explain?
>
> In Solaris 10, you are stuck with LiveUpgrade, so the root pool is
> not shared with other boot environments.

Richard,

You have fallen out of touch with Solaris 10, which is still a moving
target. While the Live Upgrade commands you are familiar with in Solaris 10
still mostly work as before, they *do* take advantage of zfs's features,
and boot environments do share the same root pool, just like in
OpenSolaris. Solaris 10 Live Upgrade is dramatically improved in
conjunction with zfs boot. I am not sure how far behind it is from the new
OpenSolaris boot administration tools, but under zfs its function cannot be
terribly different.

Bob
-- 
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
Richard Elling
2010-May-05 17:32 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
On May 4, 2010, at 7:55 AM, Bob Friesenhahn wrote:
> On Mon, 3 May 2010, Richard Elling wrote:
>>>> This is not a problem on Solaris 10. It can affect OpenSolaris, though.
>>>
>>> That's precisely the opposite of what I thought. Care to explain?
>>
>> In Solaris 10, you are stuck with LiveUpgrade, so the root pool is
>> not shared with other boot environments.
>
> Richard,
>
> You have fallen out of touch with Solaris 10, which is still a moving
> target. While the Live Upgrade commands you are familiar with in Solaris
> 10 still mostly work as before, they *do* take advantage of zfs's
> features, and boot environments do share the same root pool, just like in
> OpenSolaris. Solaris 10 Live Upgrade is dramatically improved in
> conjunction with zfs boot. I am not sure how far behind it is from the
> new OpenSolaris boot administration tools, but under zfs its function
> cannot be terribly different.

Bob and Ian are right. I was trying to remember the last time I installed
Solaris 10, and the best I can recall, it was around late fall 2007. The
fine folks at Oracle have been making improvements to the product since
then, even though no new significant features have been added since that
time :-(
 -- richard

-- 
ZFS storage and performance consulting at http://www.RichardElling.com
Ian Collins
2010-May-05 21:20 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
On 05/ 6/10 05:32 AM, Richard Elling wrote:
> On May 4, 2010, at 7:55 AM, Bob Friesenhahn wrote:
>> On Mon, 3 May 2010, Richard Elling wrote:
>>>>> This is not a problem on Solaris 10. It can affect OpenSolaris,
>>>>> though.
>>>>
>>>> That's precisely the opposite of what I thought. Care to explain?
>>>
>>> In Solaris 10, you are stuck with LiveUpgrade, so the root pool is
>>> not shared with other boot environments.
>>
>> Richard,
>>
>> You have fallen out of touch with Solaris 10, which is still a moving
>> target. While the Live Upgrade commands you are familiar with in Solaris
>> 10 still mostly work as before, they *do* take advantage of zfs's
>> features, and boot environments do share the same root pool, just like
>> in OpenSolaris. Solaris 10 Live Upgrade is dramatically improved in
>> conjunction with zfs boot. I am not sure how far behind it is from the
>> new OpenSolaris boot administration tools, but under zfs its function
>> cannot be terribly different.
>
> Bob and Ian are right. I was trying to remember the last time I installed
> Solaris 10, and the best I can recall, it was around late fall 2007.
> The fine folks at Oracle have been making improvements to the product
> since then, even though no new significant features have been added since
> that time :-(

ZFS boot?

-- 
Ian.
Simon Breden
2010-May-05 21:42 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
Hi Euan,

You might find some of this useful:

http://breden.org.uk/2009/08/29/home-fileserver-mirrored-ssd-zfs-root-boot/
http://breden.org.uk/2009/08/30/home-fileserver-zfs-boot-pool-recovery/

I backed up the rpool to a single file, which I believe is frowned upon due
to the consequences of an error occurring within the sent stream, but
sending to a filesystem instead will fix this aspect, and you may still
find the rest of it useful.

Cheers,
Simon
-- 
This message posted from opensolaris.org
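To spell out the trade-off Simon mentions: a stream saved to a file is compact, but it is only fully verified when it is received, whereas receiving straight into a pool checksums the data immediately and leaves the files browsable. A sketch, with hypothetical pool and file names:

```sh
# Option 1: stream to a file. A single error in the file can make the
# whole stream unrestorable, so at least dry-run it against a scratch
# pool to confirm the stream is structurally intact:
zfs snapshot -r rpool@backup
zfs send -R rpool@backup > /backup/rpool.backup.snap
zfs receive -nv -Fdu scratch < /backup/rpool.backup.snap   # -n: dry run

# Option 2: receive into a pool on the external disk. The stream is
# verified on receipt, and the datasets remain directly accessible.
zfs send -R rpool@backup | zfs receive -Fdu backup/rpool
```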
Bob Friesenhahn
2010-May-05 23:31 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
On Thu, 6 May 2010, Ian Collins wrote:
>> Bob and Ian are right. I was trying to remember the last time I installed
>> Solaris 10, and the best I can recall, it was around late fall 2007.
>> The fine folks at Oracle have been making improvements to the product
>> since then, even though no new significant features have been added since
>> that time :-(
>
> ZFS boot?

I think that Richard is referring to the fact that the PowerPC/Cell
Solaris 10 port for the Sony Playstation III never emerged. ;-)

Other than desktop features, as a Solaris 10 user I have seen OpenSolaris
kernel features continually percolate down to Solaris 10, so I don't feel
as left out as Richard would like me to feel.

From a zfs standpoint, Solaris 10 does not seem to be behind the currently
supported OpenSolaris release.

Bob
-- 
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
Ray Van Dolson
2010-May-05 23:33 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
On Wed, May 05, 2010 at 04:31:08PM -0700, Bob Friesenhahn wrote:
> On Thu, 6 May 2010, Ian Collins wrote:
>>> Bob and Ian are right. I was trying to remember the last time I
>>> installed Solaris 10, and the best I can recall, it was around late
>>> fall 2007. The fine folks at Oracle have been making improvements to
>>> the product since then, even though no new significant features have
>>> been added since that time :-(
>>
>> ZFS boot?
>
> I think that Richard is referring to the fact that the PowerPC/Cell
> Solaris 10 port for the Sony Playstation III never emerged. ;-)
>
> Other than desktop features, as a Solaris 10 user I have seen
> OpenSolaris kernel features continually percolate down to Solaris 10,
> so I don't feel as left out as Richard would like me to feel.
>
> From a zfs standpoint, Solaris 10 does not seem to be behind the
> currently supported OpenSolaris release.
>
> Bob

Well, being able to remove ZIL devices is one important feature missing.
Hopefully in U9. :)

Ray
Bob Friesenhahn
2010-May-06 00:03 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
On Wed, 5 May 2010, Ray Van Dolson wrote:
>> From a zfs standpoint, Solaris 10 does not seem to be behind the
>> currently supported OpenSolaris release.
>
> Well, being able to remove ZIL devices is one important feature
> missing. Hopefully in U9. :)

While the development versions of OpenSolaris are clearly well beyond
Solaris 10, I don't believe that the supported version of OpenSolaris (a
year old already) has this feature yet either, and Solaris 10 has been
released several times since then already. When the forthcoming OpenSolaris
release emerges in 2011, the situation will be far different. Solaris 10
can then play catch-up with the release of U9 in 2012.

Bob
-- 
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
Erik Trimble
2010-May-06 00:09 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
On Wed, 2010-05-05 at 19:03 -0500, Bob Friesenhahn wrote:
> On Wed, 5 May 2010, Ray Van Dolson wrote:
>>> From a zfs standpoint, Solaris 10 does not seem to be behind the
>>> currently supported OpenSolaris release.
>>
>> Well, being able to remove ZIL devices is one important feature
>> missing. Hopefully in U9. :)
>
> While the development versions of OpenSolaris are clearly well beyond
> Solaris 10, I don't believe that the supported version of OpenSolaris
> (a year old already) has this feature yet either, and Solaris 10 has
> been released several times since then already. When the forthcoming
> OpenSolaris release emerges in 2011, the situation will be far
> different. Solaris 10 can then play catch-up with the release of U9
> in 2012.
>
> Bob

Pessimist. ;-)

s/2011/2010/
s/2012/2011/

-- 
Erik Trimble
Java System Support
Mailstop: usca22-123
Phone: x17195
Santa Clara, CA
Timezone: US/Pacific (GMT-0800)
Ray Van Dolson
2010-May-06 00:11 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
On Wed, May 05, 2010 at 05:09:40PM -0700, Erik Trimble wrote:
> On Wed, 2010-05-05 at 19:03 -0500, Bob Friesenhahn wrote:
>> On Wed, 5 May 2010, Ray Van Dolson wrote:
>>>> From a zfs standpoint, Solaris 10 does not seem to be behind the
>>>> currently supported OpenSolaris release.
>>>
>>> Well, being able to remove ZIL devices is one important feature
>>> missing. Hopefully in U9. :)
>>
>> While the development versions of OpenSolaris are clearly well beyond
>> Solaris 10, I don't believe that the supported version of OpenSolaris
>> (a year old already) has this feature yet either, and Solaris 10 has
>> been released several times since then already. When the forthcoming
>> OpenSolaris release emerges in 2011, the situation will be far
>> different. Solaris 10 can then play catch-up with the release of U9
>> in 2012.
>>
>> Bob
>
> Pessimist. ;-)
>
> s/2011/2010/
> s/2012/2011/

Yeah, U9 in 2012 makes me very sad. I would really love to see the
hot-removable ZILs this year. Otherwise I'll need to rebuild a few
zpools :)

Ray
Edward Ned Harvey
2010-May-06 03:09 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
> From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-
> bounces at opensolaris.org] On Behalf Of Bob Friesenhahn
>
> From a zfs standpoint, Solaris 10 does not seem to be behind the
> currently supported OpenSolaris release.

I'm sorry, I'll have to disagree with you there. In Solaris 10, fully
updated, you can only get up to zpool version 15. This lacks many later
features ... For me in particular, zpool 19 is when "zpool remove log" was
first supported.
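For reference, the log device removal mentioned here looks like the following on a pool at version 19 or later. A sketch only; the pool name tank and the device name are placeholders:

```sh
# Attach a separate log (ZIL) device, then remove it again.
# "zpool remove" of a log vdev requires pool version >= 19.
zpool add tank log c2t0d0
zpool remove tank c2t0d0
zpool status tank        # the log vdev should no longer be listed
```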
Edward Ned Harvey
2010-May-06 03:12 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
> From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-
> bounces at opensolaris.org] On Behalf Of Ray Van Dolson
>
> Well, being able to remove ZIL devices is one important feature
> missing. Hopefully in U9. :)

I did have a support rep confirm for me that both log device removal and
the ability to mirror slightly smaller devices will be present in U9. But
he couldn't say when that would be. And if I happen to remember my facts
wrong (or not remember my facts when I think I do) ... please throw no
stones. ;-)
Cindy Swearingen
2010-May-06 18:58 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
Hi Bob,

You can review the latest Solaris 10 and OpenSolaris release dates here:

http://www.oracle.com/ocom/groups/public/@ocom/documents/webcontent/059542.pdf

Solaris 10 release, CY2010
OpenSolaris release, 1st half CY2010

Thanks,

Cindy

On 05/05/10 18:03, Bob Friesenhahn wrote:
> On Wed, 5 May 2010, Ray Van Dolson wrote:
>>> From a zfs standpoint, Solaris 10 does not seem to be behind the
>>> currently supported OpenSolaris release.
>>
>> Well, being able to remove ZIL devices is one important feature
>> missing. Hopefully in U9. :)
>
> While the development versions of OpenSolaris are clearly well beyond
> Solaris 10, I don't believe that the supported version of OpenSolaris (a
> year old already) has this feature yet either, and Solaris 10 has been
> released several times since then already. When the forthcoming
> OpenSolaris release emerges in 2011, the situation will be far
> different. Solaris 10 can then play catch-up with the release of U9 in
> 2012.
>
> Bob
Euan Thoms
2010-May-10 07:24 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
> Just a quick comment for the send/recv operations, adding -R makes it
> recursive so you only need one line to send the rpool and all descendant
> filesystems.

Yes, I am aware of that, but it does not work when you are sending them
loose to an existing pool. I can't remember the error message, but it
didn't work for me. It seems to work fine when redirecting to standard
output ( > somefile.bck) or, in your case, piping it through gzip and then
outputting to a file.
-- 
This message posted from opensolaris.org
Euan Thoms
2010-May-10 07:49 UTC
[zfs-discuss] Best practice for full system backup - equivalent of ufsdump/ufsrestore
erik.ableson said: "Just a quick comment for the send/recv operations,
adding -R makes it recursive so you only need one line to send the rpool
and all descendant filesystems."

Yes, I know of the -R flag, but it doesn't seem to work with sending loose
snapshots to the backup pool. It obviously works when piped to a file.
Sorry, I can't remember what the error message was when I tried
'send -R | receive backup-pool/rpool'; it does work if done individually
though.
-- 
This message posted from opensolaris.org
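A possible explanation for the failure described here (an assumption, since the exact error message is not recorded): a recursive -R stream carries the whole dataset tree, and zfs receive needs -d to graft that tree under an existing pool, plus -F to roll back or replace datasets that already exist there. Something like the following, with placeholder pool names:

```sh
# The stream names its datasets rpool, rpool/ROOT, and so on; -d tells
# the receive side to recreate those names under the target, -F forces
# a rollback of any conflicting existing datasets, and -u leaves the
# received filesystems unmounted.
zfs snapshot -r rpool@backup
zfs send -R rpool@backup | zfs receive -d -F -u backup-pool
```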