Hello,

I am attempting to move a bunch of ZFS filesystems from one pool to another. Mostly this is working fine, but one collection of file systems is causing me problems, and repeated re-reading of "man zfs" and the ZFS Administration Guide is not helping. I would really appreciate some help/advice.

Here is the scenario. I have a nested hierarchy of ZFS file systems, and some of the deeper filesystems have snapshots. All of this exists on the source zpool. First I recursively snapshotted the whole subtree:

zfs snapshot -r naspool@xfer-11292010

Here is a subset of the source zpool:

# zfs list -r naspool
NAME                                      USED  AVAIL  REFER  MOUNTPOINT
naspool                                  1.74T  42.4G  37.4K  /naspool
naspool@xfer-11292010                        0      -  37.4K  -
naspool/openbsd                           113G  42.4G  23.3G  /naspool/openbsd
naspool/openbsd@xfer-11292010                0      -  23.3G  -
naspool/openbsd/4.4                      21.6G  42.4G  2.33G  /naspool/openbsd/4.4
naspool/openbsd/4.4@xfer-11292010            0      -  2.33G  -
naspool/openbsd/4.4/ports                 592M  42.4G   200M  /naspool/openbsd/4.4/ports
naspool/openbsd/4.4/ports@patch000       52.5M      -   169M  -
naspool/openbsd/4.4/ports@patch006       54.7M      -   194M  -
naspool/openbsd/4.4/ports@patch007       54.9M      -   194M  -
naspool/openbsd/4.4/ports@patch013       55.1M      -   194M  -
naspool/openbsd/4.4/ports@patch016       35.1M      -   200M  -
naspool/openbsd/4.4/ports@xfer-11292010      0      -   200M  -

Now I want to send this whole hierarchy to a new pool:

# zfs create npool/openbsd
# zfs send -R naspool/openbsd@xfer-11292010 | zfs receive -Fv npool/openbsd
receiving full stream of naspool/openbsd@xfer-11292010 into npool/openbsd@xfer-11292010
received 23.5GB stream in 883 seconds (27.3MB/sec)
cannot receive new filesystem stream: destination has snapshots (eg. npool/openbsd@xfer-11292010)
must destroy them to overwrite it

What am I doing wrong? What is the proper way to accomplish my goal here?

And I have a follow-up question: I had to snapshot the source zpool filesystems in order to zfs send them. Once they are received on the new zpool, I really don't need nor want this "snapshot" on the receiving side. Is it OK to zfs destroy that snapshot?

I've been pounding my head against this problem for a couple of days, and I would definitely appreciate any tips/pointers/advice.

Don
Edward Ned Harvey
2010-Dec-02 02:25 UTC
[zfs-discuss] zfs send & receive problem/questions
> From: zfs-discuss-bounces@opensolaris.org [mailto:zfs-discuss-bounces@opensolaris.org] On Behalf Of Don Jackson
>
> # zfs send -R naspool/openbsd@xfer-11292010 | zfs receive -Fv npool/openbsd
> receiving full stream of naspool/openbsd@xfer-11292010 into npool/openbsd@xfer-11292010
> received 23.5GB stream in 883 seconds (27.3MB/sec)
> cannot receive new filesystem stream: destination has snapshots (eg. npool/openbsd@xfer-11292010)
> must destroy them to overwrite it

Somewhere in the ZFS admin guide, the ZFS troubleshooting guide, or the ZFS best practices guide, I vaguely recall that there was a bug with -R prior to some version of zpool, and the solution was to send each filesystem individually. Prior to Solaris 10u9, I simply assume -R is broken, and I always do individual filesystems. 10u9 is not a magic number, and maybe it was fixed earlier; I'm just saying that due to black magic and superstition, I never trusted -R until 10u9.

I notice your mention of OpenBSD. I presume you're running an old version of ZFS.

> What am I doing wrong? What is the proper way to accomplish my goal here?

You might not be doing anything wrong. But I will suggest doing the filesystems individually anyway (see the rough loop at the end of this message). You might get a different (more successful) result.

> Once they are received on the new zpool, I really don't need nor want this
> "snapshot" on the receiving side.
> Is it OK to zfs destroy that snapshot?

Yes. It is safe to destroy snapshots, and you don't lose the filesystem. When I script this, I just grep for the presence of '@' in the thing which is scheduled for destruction, and then I know I can't possibly destroy the latest version of the filesystem.
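For what it's worth, here is a rough per-filesystem sketch (untested, ksh/bash syntax; it assumes every filesystem under naspool/openbsd carries the @xfer-11292010 snapshot, that the matching datasets do not already exist under npool, and that moving only that one snapshot per filesystem is enough; the older @patchNNN snapshots would need follow-up incremental sends with zfs send -i or -I):

  # Send each filesystem on its own. zfs list prints parents before
  # children, so the parent datasets are created on npool first.
  for fs in $(zfs list -H -o name -r naspool/openbsd); do
      zfs send "${fs}@xfer-11292010" | zfs receive -v "npool/${fs#naspool/}"
  done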
Here is some more info on my system:

This machine is running Solaris 10 U9, with all the patches as of 11/10/2010.

The source zpool I am attempting to transfer from was originally created on an older OpenSolaris (specifically Nevada) release; I think it was build 111. I did a zpool export on that zpool, physically transferred those drives to the new machine, did a zpool import there, and then upgraded the ZFS version on the imported zpool, so now:

# zpool upgrade
This system is currently running ZFS pool version 22.
All pools are formatted using this version.

The reference to OpenBSD in the directory paths in the listings I provided refers only to the data stored therein; the actual OS I am running here is Solaris 10.

# zpool status naspool npool
  pool: naspool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        naspool     ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0

errors: No known data errors

  pool: npool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        npool       ONLINE       0     0     0
          raidz3-0  ONLINE       0     0     0
            c0t4d0  ONLINE       0     0     0
            c0t5d0  ONLINE       0     0     0
            c0t6d0  ONLINE       0     0     0
            c0t7d0  ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
            c1t6d0  ONLINE       0     0     0
            c1t7d0  ONLINE       0     0     0

errors: No known data errors
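For reference, the pool move itself was roughly this sequence (I am reconstructing the exact invocations from memory, so the details may be slightly off):

  # zpool export naspool       (on the old OpenSolaris machine)
  (physically move the drives to the new machine)
  # zpool import naspool       (on the new Solaris 10 machine)
  # zpool upgrade naspool      (bring the pool format up to version 22)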
Hi Don,

I'm no snapshot expert, but I think you will have to remove the previous receiving-side snapshots, at least.

I created a file system hierarchy that includes a lower-level snapshot, created a recursive snapshot of that hierarchy, and sent it over to a backup pool. Then I did the same steps again. See the example below.

You can see from my example that this process fails if I don't remove the existing snapshots first. And, because I didn't remove the original recursive snapshots on the sending side, the snapshots accumulate on the receiving side (both @snap2 and @snap3 end up on bpool). I'm sure someone else has better advice.

I had an example of sending root pool snapshots on the ZFS troubleshooting wiki, but it was removed, so I will try to restore that example.

Thanks,

Cindy

# zfs list -r tank/home
NAME                           USED  AVAIL  REFER  MOUNTPOINT
tank/home                     1.12M  66.9G    25K  /tank/home
tank/home@snap2                   0      -    25K  -
tank/home/anne                 280K  66.9G   280K  /tank/home/anne
tank/home/anne@snap2              0      -   280K  -
tank/home/bob                  280K  66.9G   280K  /tank/home/bob
tank/home/bob@snap2               0      -   280K  -
tank/home/cindys               561K  66.9G   281K  /tank/home/cindys
tank/home/cindys@snap2            0      -   281K  -
tank/home/cindys/dir1          280K  66.9G   280K  /tank/home/cindys/dir1
tank/home/cindys/dir1@snap1       0      -   280K  -
tank/home/cindys/dir1@snap2       0      -   280K  -
# zfs send -R tank/home@snap2 | zfs recv -d bpool
# zfs list -r bpool/home
NAME                            USED  AVAIL  REFER  MOUNTPOINT
bpool/home                     1.12M  33.2G    25K  /bpool/home
bpool/home@snap2                   0      -    25K  -
bpool/home/anne                 280K  33.2G   280K  /bpool/home/anne
bpool/home/anne@snap2              0      -   280K  -
bpool/home/bob                  280K  33.2G   280K  /bpool/home/bob
bpool/home/bob@snap2              0      -   280K  -
bpool/home/cindys              561K  33.2G   281K  /bpool/home/cindys
bpool/home/cindys@snap2           0      -   281K  -
bpool/home/cindys/dir1         280K  33.2G   280K  /bpool/home/cindys/dir1
bpool/home/cindys/dir1@snap1      0      -   280K  -
bpool/home/cindys/dir1@snap2      0      -   280K  -
# zfs snapshot -r tank/home@snap3
# zfs send -R tank/home@snap3 | zfs recv -dF bpool
cannot receive new filesystem stream: destination has snapshots (eg. bpool/home@snap2)
must destroy them to overwrite it
# zfs destroy -r bpool/home@snap2
# zfs destroy bpool/home/cindys/dir1@snap1
# zfs send -R tank/home@snap3 | zfs recv -dF bpool
# zfs list -r bpool
NAME                            USED  AVAIL  REFER  MOUNTPOINT
bpool                          1.35M  33.2G    23K  /bpool
bpool/home                     1.16M  33.2G    25K  /bpool/home
bpool/home@snap2                   0      -    25K  -
bpool/home@snap3                   0      -    25K  -
bpool/home/anne                 280K  33.2G   280K  /bpool/home/anne
bpool/home/anne@snap2              0      -   280K  -
bpool/home/anne@snap3             0      -   280K  -
bpool/home/bob                  280K  33.2G   280K  /bpool/home/bob
bpool/home/bob@snap2              0      -   280K  -
bpool/home/bob@snap3              0      -   280K  -
bpool/home/cindys              582K  33.2G   281K  /bpool/home/cindys
bpool/home/cindys@snap2           0      -   281K  -
bpool/home/cindys@snap3           0      -   281K  -
bpool/home/cindys/dir1         280K  33.2G   280K  /bpool/home/cindys/dir1
bpool/home/cindys/dir1@snap1      0      -   280K  -
bpool/home/cindys/dir1@snap2      0      -   280K  -
bpool/home/cindys/dir1@snap3      0      -   280K  -

On 12/01/10 11:30, Don Jackson wrote:
> Hello,
>
> I am attempting to move a bunch of zfs filesystems from one pool to another.
>
> Mostly this is working fine, but one collection of file systems is causing me problems, and repeated re-reading of "man zfs" and the ZFS Administrators Guide is not helping. I would really appreciate some help/advice.
>
> Here is the scenario.
> I have a nested (hierarchy) of zfs file systems.
> Some of the deeper fs are snapshotted.
> [...]
On Wed, Dec 1, 2010 at 10:30 AM, Don Jackson <don.jackson@gmail.com> wrote:
>
> # zfs send -R naspool/openbsd@xfer-11292010 | zfs receive -Fv npool/openbsd
> receiving full stream of naspool/openbsd@xfer-11292010 into npool/openbsd@xfer-11292010
> received 23.5GB stream in 883 seconds (27.3MB/sec)
> cannot receive new filesystem stream: destination has snapshots (eg. npool/openbsd@xfer-11292010)
> must destroy them to overwrite it
>
> What am I doing wrong? What is the proper way to accomplish my goal here?

Try using the -d option to zfs receive. The ability to do "zfs send -R ... | zfs receive [without -d]" was added relatively recently, and you may be encountering a bug that is specific to receiving a send of a whole pool.

> And I have a follow-up question:
>
> I had to snapshot the source zpool filesystems in order to zfs send them.
>
> Once they are received on the new zpool, I really don't need nor want this "snapshot" on the receiving side.
> Is it OK to zfs destroy that snapshot?

Yes, that will work just fine. If you delete the snapshot you will not be able to receive any incremental streams starting from that snapshot, but you may not care about that.

--matt
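P.S. Something along these lines (just a sketch; with -d, zfs receive strips the pool name from the sent snapshot's path and appends the rest beneath the receive target, and this assumes npool/openbsd does not already exist on the receiving pool):

  # zfs send -R naspool/openbsd@xfer-11292010 | zfs receive -d npool

And if you do keep the @xfer-11292010 snapshot on both sides, a later incremental update could look roughly like this (@xfer-later is a hypothetical newer recursive snapshot, not one from your listing):

  # zfs snapshot -r naspool@xfer-later
  # zfs send -R -i @xfer-11292010 naspool/openbsd@xfer-later | zfs receive -dF npool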
> Try using the -d option to zfs receive. The ability to do "zfs send
> -R ... | zfs receive [without -d]" was added relatively recently, and
> you may be encountering a bug that is specific to receiving a send of
> a whole pool.

I just tried this; it didn't work, and I got a new error:

# zfs send -R naspool/openbsd@xfer-11292010 | zfs recv -d npool/openbsd
cannot receive new filesystem stream: out of space

The destination pool is much larger (by several TB) than the source pool, so I don't see how it can not have enough disk space:

# zfs list -r npool/openbsd
NAME                                                     USED  AVAIL  REFER  MOUNTPOINT
npool/openbsd                                           82.5G  7.18T  23.5G  /npool/openbsd
npool/openbsd@xfer-11292010                                 0      -  23.5G  -
npool/openbsd/openbsd                                   59.0G  7.18T  23.5G  /npool/openbsd/openbsd
npool/openbsd/openbsd@xfer-11292010                         0      -  23.5G  -
npool/openbsd/openbsd/4.5                               22.3G  7.18T  1.54G  /npool/openbsd/openbsd/4.5
npool/openbsd/openbsd/4.5@xfer-11292010                     0      -  1.54G  -
npool/openbsd/openbsd/4.5/packages                      18.7G  7.18T  18.7G  /npool/openbsd/openbsd/4.5/packages
npool/openbsd/openbsd/4.5/packages@xfer-11292010            0      -  18.7G  -
npool/openbsd/openbsd/4.5/packages-local                49.7K  7.18T  49.7K  /npool/openbsd/openbsd/4.5/packages-local
npool/openbsd/openbsd/4.5/packages-local@xfer-11292010      0      -  49.7K  -
npool/openbsd/openbsd/4.5/ports                          288M  7.18T   259M  /npool/openbsd/openbsd/4.5/ports
npool/openbsd/openbsd/4.5/ports@patch000                47.2K      -  49.7K  -
npool/openbsd/openbsd/4.5/ports@patch005                29.0M      -   261M  -
npool/openbsd/openbsd/4.5/ports@xfer-11292010               0      -   259M  -
npool/openbsd/openbsd/4.5/release                        462M  7.18T   462M  /npool/openbsd/openbsd/4.5/release
npool/openbsd/openbsd/4.5/release@xfer-11292010             0      -   462M  -
npool/openbsd/openbsd/4.5/src                            728M  7.18T   703M  /npool/openbsd/openbsd/4.5/src
npool/openbsd/openbsd/4.5/src@patch000                  47.2K      -  49.7K  -
npool/openbsd/openbsd/4.5/src@patch005                  25.1M      -   709M  -
npool/openbsd/openbsd/4.5/src@xfer-11292010                 0      -   703M  -
npool/openbsd/openbsd/4.5/xenocara                       572M  7.18T   565M  /npool/openbsd/openbsd/4.5/xenocara
npool/openbsd/openbsd/4.5/xenocara@patch000             47.2K      -  49.7K  -
npool/openbsd/openbsd/4.5/xenocara@patch005             6.52M      -   565M  -
npool/openbsd/openbsd/4.5/xenocara@xfer-11292010            0      -   565M  -
npool/openbsd/openbsd/4.8                               13.2G  7.18T   413M  /npool/openbsd/openbsd/4.8
npool/openbsd/openbsd/4.8@xfer-11292010                     0      -   413M  -
npool/openbsd/openbsd/4.8/packages                      11.9G  7.18T  11.9G  /npool/openbsd/openbsd/4.8/packages
npool/openbsd/openbsd/4.8/packages@xfer-11292010            0      -  11.9G  -
npool/openbsd/openbsd/4.8/packages-local                49.7K  7.18T  49.7K  /npool/openbsd/openbsd/4.8/packages-local
npool/openbsd/openbsd/4.8/packages-local@xfer-11292010      0      -  49.7K  -
npool/openbsd/openbsd/4.8/ports                          277M  7.18T   277M  /npool/openbsd/openbsd/4.8/ports
npool/openbsd/openbsd/4.8/ports@patch000                47.2K      -  49.7K  -
npool/openbsd/openbsd/4.8/ports@xfer-11292010               0      -   277M  -
npool/openbsd/openbsd/4.8/release                        577M  7.18T   577M  /npool/openbsd/openbsd/4.8/release
npool/openbsd/openbsd/4.8/release@xfer-11292010             0      -   577M  -
npool/openbsd/openbsd/4.8/src                           96.9K  7.18T  49.7K  /npool/openbsd/openbsd/4.8/src
npool/openbsd/openbsd/4.8/src@patch000                  47.2K      -  49.7K  -
npool/openbsd/openbsd/4.8/src@xfer-11292010                 0      -  49.7K  -
npool/openbsd/openbsd/4.8/xenocara                      96.9K  7.18T  49.7K  /npool/openbsd/openbsd/4.8/xenocara
npool/openbsd/openbsd/4.8/xenocara@patch000             47.2K      -  49.7K  -
npool/openbsd/openbsd/4.8/xenocara@xfer-11292010            0      -  49.7K  -
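In case it is relevant, these are the kinds of checks I can run on the receiving side to see whether a quota or reservation could be involved (just a sketch):

  # zpool list npool
  # zfs get -r quota,refquota,reservation,refreservation npool/openbsd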
Edward Ned Harvey
2010-Dec-03 14:28 UTC
[zfs-discuss] zfs send & receive problem/questions
> From: zfs-discuss-bounces@opensolaris.org [mailto:zfs-discuss-bounces@opensolaris.org] On Behalf Of Don Jackson
>
> # zfs send -R naspool/openbsd@xfer-11292010 | zfs recv -d npool/openbsd
> cannot receive new filesystem stream: out of space
>
> The destination pool is much larger (by several TB) than the source pool, so I
> don't see how it can not have enough disk space:

Oh. Fortunately this is an easy one to answer.

Since zfs receive is an atomic operation (all or nothing), you can't overwrite a filesystem unless there is enough disk space for *both* the old version of the filesystem and the new one. It essentially takes a snapshot of the present filesystem, then creates the new received version, and only after successfully receiving the new one does it delete the old one. That's why, despite your failed receive, you have not lost any information in your receiving filesystem.

If you know you want to do this, and you clearly don't have enough disk space to hold both the old and new filesystems at the same time, you'll have to destroy the old filesystem in order to overwrite it.
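Roughly like this (a sketch only; obviously make sure nothing under npool/openbsd is still needed before destroying it, since zfs destroy -r is not recoverable, and the exact receive target depends on whether you stick with -d):

  # zfs destroy -r npool/openbsd
  # zfs send -R naspool/openbsd@xfer-11292010 | zfs receive -d npool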