hello, can I create an image of ZFS with the dd command? When I work with Linux I use partimage to create an image of one partition and store it on another, so I can restore it if there is an error. partimage does not work with ZFS, so I must use the dd command. I think like this:

  dd if=/dev/sda1 of=/backup/image

Can I create an image this way, and restore it the other way:

  dd if=/backup/image of=/dev/sda1

When I have two partitions with ZFS, can I boot from the live CD and mount one partition to use it as the backup target? Or is it possible to create an ext2 partition and use a Linux rescue CD to back up the ZFS partition with dd?

This message posted from opensolaris.org
Hans wrote:
> can i create a image from ZFS with the DD command?
> [...]
> DD IF=/dev/sda1 OF=/backup/image
> can i create an image this way, and restore it the other:
> DD IF=/backup/image OF=/dev/sda1

How about using a ZFS snapshot instead?

> when i have two partitions with zfs, can i boot from the live cd, mount one partition to use it as backup target?
> or is it possible to create a ext2 partition and use a linux rescue cd to backup the zfs partition with dd ?

Have a look at the 2008.05 release for some ideas on how to do this sort of thing.

James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog
Hans wrote:
> hello,
> can i create a image from ZFS with the DD command?

You're probably looking for "zfs send" - have a go at the man page and see whether that serves the purpose.

HTH
Michael
--
Michael Schuster  http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
On Wed, 7 May 2008, Hans wrote:
> can i create a image from ZFS with the DD command?
> [...]
> or is it possible to create a ext2 partition and use a linux rescue cd to backup the zfs partition with dd ?

While the methods you describe are not the zfs way of doing things, they should work. The zfs pool would need to be offlined (taken completely out of service, via zpool export) before backing it up via raw devices with dd. Every raw device in the pool would need to be backed up at that time in order to make a valid restore possible. Once the devices in the pool have been copied, the pool can be re-imported to activate it.

This approach is quite a lot of work, and the pool is not available during this time. It is much better to do things the zfs way, since then the pool can still be completely active. Taking a snapshot takes less than a second. Then you can send the filesystems to be backed up to a file or to another system.

Bob
======================================
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
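[A quick way to convince yourself that the dd image/restore cycle round-trips bit-for-bit is to try it on an ordinary file standing in for a partition. The file names below are made up for illustration; on a real pool you would dd the actual raw devices, and only after "zpool export":]

```shell
# Stand-in "partition": 4 MiB of random data instead of a real /dev/sda1.
dd if=/dev/urandom of=disk.img bs=1M count=4 2>/dev/null
# Back it up the way the dd approach would (dd if=/dev/sda1 of=/backup/image).
dd if=disk.img of=backup.img bs=1M 2>/dev/null
# Restore it the other way (dd if=/backup/image of=/dev/sda1).
dd if=backup.img of=restored.img bs=1M 2>/dev/null
# The restored copy is byte-identical to the original.
cmp -s disk.img restored.img && echo "images match"
```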
thank you for your posting. Well, I still have problems understanding how a pool works. When I have one partition with ZFS like this:

  /dev/sda1 -> ZFS
  /dev/sda2 -> ext2

the only pool is on the sda1 device. In this way I can back it up with the dd command. Now I try to understand: when I have 2 ZFS partitions like this:

  /dev/sda1 -> ZFS
  /dev/sda2 -> ZFS
  /dev/sda3 -> ext2

I cannot copy only sda1 with dd and leave sda2, because I would destroy the pool then. Is it possible to separate two partitions in this way, so that I can back up one separately? The normal Linux way is that every partition is mounted into the file-system tree, but each one stores its data in its own way. So on Linux you can mount an ext3 and a reiserfs together into one file-system tree. ZFS is different: it spreads data over the partitions in whatever way is best for ZFS. Maybe I can compare it a little with a RAID 0, where data is spread over several hard disks. On a RAID 0 it is impossible to back up one hard disk and restore it; in the same way I cannot back up one ZFS partition and leave the other ZFS partitions. Well, I think a snapshot is not what I want. I want an image that I can use for any problem, so I can install a new version of Solaris, install software, and then say... not good, restore the image. Or whatever I want.

This message posted from opensolaris.org
Hans,

> hello,
> can i create a image from ZFS with the DD command?

Yes, with restrictions.

First, a ZFS storage pool must be in the "zpool export" state to be copied, so that a write-order consistent set of data exists in the copy. ZFS does an excellent job of detecting inconsistencies in the volumes making up a single ZFS storage pool, so a copy of an imported storage pool is sure to be inconsistent, and thus unusable by ZFS.

Although there are various means to copy ZFS (actually, to copy the individual vdevs in a single ZFS storage pool), one cannot "zpool import" this copy on the same node as the original ZFS storage pool. Unlike other Solaris filesystems, ZFS maintains metadata on each vdev that is used to reconstruct a ZFS storage pool at "zpool import" time. The logic within "zpool import" processing will correctly find all constituent volumes (vdevs) of a single ZFS storage pool, but ultimately hides / excludes other volumes (the copies) from being considered as part of the current or any other "zpool import" operation. Only the original, not its copy, can be seen or utilized by "zpool import".

If possible, the ZFS copy can be moved or accessed (using dual-ported disks, FC SAN, iSCSI SAN, Availability Suite, etc.) from another host, and only there can the ZFS copy undergo a successful "zpool import".

As a slight segue, Availability Suite (AVS) can create an instantly accessible copy of the constituent volumes (vdevs) of a ZFS storage pool (in lieu of using dd, which can take minutes or hours). This is the Point-in-Time Copy, or II (Instant Image), part of AVS. This copy can also be replicated to a remote Solaris host where it can be imported. This is the Remote Copy, or SNDR (Network Data Replicator), part of AVS. AVS also supports the ability to synchronously or asynchronously replicate the actual ZFS storage pool to another host (no local copy needed), and then "zpool import" the replica remotely.
See: opensolaris.org/os/project/avs/, plus the demos.

> when i work with linux i use partimage to create an image from one
> partitino and store it on another. so i can restore it if an error.
> [...]

--
Jim Dunham
Engineering Manager
Storage Platform Software Group
Sun Microsystems, Inc.
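[Jim's raw-copy procedure, sketched as commands. The pool name "tank" and the device path are hypothetical, and every vdev in the pool must be copied the same way:]

```shell
# Quiesce the pool so a write-order consistent on-disk state exists.
zpool export tank
# Raw-copy each constituent vdev (repeat for every device in the pool).
dd if=/dev/dsk/c0t1d0s0 of=/backup/tank-vdev0.img bs=1M
# Bring the original pool back into service.
zpool import tank
# As described above, the copy can only be imported on a DIFFERENT host,
# e.g. (pointing import at the directory holding the images):
#   zpool import -d /backup tank
```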
Hi Hans,

I think what you are looking for would be a combination of a snapshot and zfs send/receive; that would give you an archive you can use to recreate your ZFS filesystems on your zpool at will at a later time. So you can do something like this:

Create the archive:

  zfs snapshot -r mypool@archive
  zfs send -R mypool@archive > mypool_archive.zfs

Restore from the archive:

  zpool create mynewpool disk1 disk2
  zfs receive -d -F mynewpool < mypool_archive.zfs

Doing this will create an archive that contains all descendent file systems of mypool and that can be restored at a later time, without depending on how the zpool is organized.

/peter

On May 7, 2008, at 23:31, Hans wrote:
> thank you for your posting.
> well i still have problems understanding how a pool works.
> [...]
> well i think a snapshot is not what i want.
> i want a image that i can use at any problems. so i can install an new version of solaris, installing software. and then say... not good. restore image.
hello, thank you for your postings. I try to understand, but my English is not so good. :-)

For exporting a ZFS I must use a special command like "zpool export"; this makes the filesystem ready to export. But I think like this: when I boot from the live CD without mounting/activating the file system, the filesystem doesn't know about the backup/restore with dd. Because dd copies each sector, it is transparent to the file system. Is this correct?

When I create 2 file systems during install, like /dev/sda1 and /dev/sda2, and tell the OpenSolaris installer... use as Solaris, I think they get formatted with ZFS. Now, when I boot from a live CD, can I copy /dev/sda1 with dd into a file on /dev/sda2, or is the data of the 2 partitions mixed, so that I cannot copy only one partition?

Sorry again for my bad English and the problems understanding ZFS. It is difficult for a Linux user that has never used such a file system or a RAID....

This message posted from opensolaris.org
| Think what you are looking for would be a combination of a snapshot
| and zfs send/receive, that would give you an archive that you can use
| to recreate your zfs filesystems on your zpool at will at later time.

Talking of using zfs send/receive for backups and archives: the Solaris 10U4 zfs manpage contains some blood-curdling warnings about there being no cross-version compatibility promises for the output of 'zfs send'. Can this be ignored in practice, or is it a real issue?

(Speaking as a sysadmin, I certainly hope that it is a misplaced warning. Even ignoring backups and archives, imagine the fun if you cannot use 'zfs send | zfs receive' to move a ZFS filesystem from an old but reliable server running a stable old Solaris to your new, just-installed server running the latest version of Solaris.)

- cks
On May 14, 2008, at 10:39 AM, Chris Siebenmann wrote:
> Talking of using zfs send/recieve for backups and archives: the
> Solaris 10U4 zfs manpage contains some blood-curdling warnings about
> there being no cross-version compatability promises for the output
> of 'zfs send'. Can this be ignored in practice, or is it a real issue?

It's real! You can't send and receive between versions of ZFS.

> (Speaking as a sysadmin, I certainly hope that it is a misplaced
> warning. Even ignoring backups and archives, imagine the fun if you
> cannot use 'zfs send | zfs receive' to move a ZFS filesystem from an
> old but reliable server running a stable old Solaris to your new, just
> installed server running the latest version of Solaris.)

If you use an external storage array attached via FC, iSCSI, SAS, etc., you can just do a 'zpool export', disconnect the storage from the old server, attach it to the new server, then run 'zpool import' - and then do a 'zpool upgrade'. Unfortunately this doesn't help the thumpers so much :(

-Andy
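[Andy's export/import path, sketched as commands; the pool name "tank" is hypothetical:]

```shell
# On the old server: cleanly release the pool and its on-disk state.
zpool export tank
# ...disconnect the array and attach it to the new server...
# On the new server: pick the pool up again.
zpool import tank
# Optionally (and irreversibly) bring the pool up to the new
# release's on-disk version.
zpool upgrade tank
```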
Andy Lubel wrote:
> On May 14, 2008, at 10:39 AM, Chris Siebenmann wrote:
>> Talking of using zfs send/recieve for backups and archives: the
>> Solaris 10U4 zfs manpage contains some blood-curdling warnings about
>> there being no cross-version compatability promises for the output
>> of 'zfs send'. Can this be ignored in practice, or is it a real issue?
>
> It's real! You cant send and receive between versions of ZFS.

The warning is a little scary, but in practice it's not such a big deal. The man page says this:

    The format of the stream is evolving. No backwards
    compatibility is guaranteed. You may not be able to
    receive your streams on future versions of ZFS.

To date, the only incompatibility is with send streams created prior to Nevada build 36 (there probably aren't very many of those; ZFS was introduced in Nevada build 27), which cannot be received by "zfs receive" on Nevada build 89 and later. Note that this incompatibility doesn't affect Solaris 10 at all: all S10 releases use the new stream format.

More details (and instructions on how to resurrect any pre-build-36 streams) can be found here:

    http://opensolaris.org/os/community/on/flag-days/pages/2008042301

-Chris
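[When in doubt about compatibility between two systems, the tools themselves can report which versions they support; exact output varies by release, and "tank" below is a hypothetical pool name:]

```shell
# List the ZFS filesystem versions this release supports.
zfs upgrade -v
# List the zpool on-disk versions this release supports.
zpool upgrade -v
# Show the version a particular pool is currently running at.
zpool get version tank
```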
Andy Lubel wrote:
> On May 14, 2008, at 10:39 AM, Chris Siebenmann wrote:
>> Talking of using zfs send/recieve for backups and archives: the
>> Solaris 10U4 zfs manpage contains some blood-curdling warnings about
>> there being no cross-version compatability promises for the output
>> of 'zfs send'. Can this be ignored in practice, or is it a real issue?
>
> It's real! You cant send and receive between versions of ZFS.

Caveat: we had to break with very, very, very old ZFS (NV b35, circa Feb 2006). See http://www.opensolaris.org/os/community/on/flag-days/pages/2008042301

> If you use external storage array attached via FC,iscsi,SAS etc, you
> can just do a 'zpool export', disconnect the storage from the old
> server, attach it to the new server then run 'zpool import' - and then
> do a 'zpool upgrade'. Unfortunately this doesn't help the thumpers so
> much :(

Eh? I can do this with a thumper, too! You might want to watch Constantin's CSI:Munich video on YouTube, where they do a ZFS pool on USB flash drives and then do the shuffle.

blogs.sun.com/constantin/entry/csi_munich_how_to_save

-- richard