Andrew Daugherity
2009-Jul-07 20:41 UTC
[zfs-discuss] possible to override/inherit mountpoint on received snapshots?
I attempted to migrate data from one zfs pool to another, larger one (both pools are currently mounted on the same host) using the snapshot send/receive functionality. Of course, I could use something like rsync/cpio/tar instead, but I'd have to first manually create all the target FSes, and send/receive seemed like a better option.

Unfortunately, this trips up on filesystems that have mountpoints explicitly set: after receiving the snapshot, it attempts to mount the target FS at the same mountpoint as the source FS, which obviously fails. The "zfs receive" command aborts here and doesn't try to receive any other FSes.

The source pool:
===
andrew@imsfs-mirror:~$ zfs list -t filesystem -r ims_pool_mirror
NAME                                 USED  AVAIL  REFER  MOUNTPOINT
ims_pool_mirror                     3.78T   498G  46.0K  /ims_pool_mirror
ims_pool_mirror/backup              1.99T   498G   749M  /export/backup
ims_pool_mirror/backup/bushlibrary  54.7G   498G  54.7G  /export/backup/bushlibrary
ims_pool_mirror/backup/bvcnet        429M   498G   429M  /export/backup/bvcnet
ims_pool_mirror/backup/isc           129G   498G   129G  /export/backup/isc
[several more FSes under ims_pool_mirror/backup omitted for brevity]
ims_pool_mirror/ims                 1.79T   498G  1.79T  /export/ims
ims_pool_mirror/ims/webroot         62.4M   498G  60.3M  /export/ims/webroot
===

I took a recursive snapshot ("@0702") and attempted to copy it to the new pool (named "ims_mirror_new" for now) and ran into the above problem. I hit the same issue when dropping down a level to ims_pool_mirror/backup, but not when going down a level below that (e.g. to ims_pool_mirror/backup/bushlibrary). In a certain way this makes sense, since it's backup that has the mountpoint set, and the FSes under it just inherit their mountpoints, but it isn't immediately obvious.
What's more confusing is that doing a dry run succeeds (for all FSes), but removing the '-n' flag causes it to fail after the first FS:
===
root@imsfs-mirror:/# zfs send -R ims_pool_mirror/backup@0702 | zfs receive -nv -F -d ims_mirror_new
would receive full stream of ims_pool_mirror/backup@0702 into ims_mirror_new/backup@0702
would receive full stream of ims_pool_mirror/backup/bvcnet@0702 into ims_mirror_new/backup/bvcnet@0702
would receive full stream of ims_pool_mirror/backup/zeo@0702 into ims_mirror_new/backup/zeo@0702
would receive full stream of ims_pool_mirror/backup/jira@0702 into ims_mirror_new/backup/jira@0702
would receive full stream of ims_pool_mirror/backup/zope@0702 into ims_mirror_new/backup/zope@0702
would receive full stream of ims_pool_mirror/backup/ocfsweb@0702 into ims_mirror_new/backup/ocfsweb@0702
would receive full stream of ims_pool_mirror/backup/mysql@0702 into ims_mirror_new/backup/mysql@0702
would receive full stream of ims_pool_mirror/backup/isc@0702 into ims_mirror_new/backup/isc@0702
would receive full stream of ims_pool_mirror/backup/bushlibrary@0702 into ims_mirror_new/backup/bushlibrary@0702
would receive full stream of ims_pool_mirror/backup/purgatory@0702 into ims_mirror_new/backup/purgatory@0702
would receive full stream of ims_pool_mirror/backup/ldap@0702 into ims_mirror_new/backup/ldap@0702
would receive full stream of ims_pool_mirror/backup/thesis@0702 into ims_mirror_new/backup/thesis@0702
would receive full stream of ims_pool_mirror/backup/pgsql@0702 into ims_mirror_new/backup/pgsql@0702
root@imsfs-mirror:/# zfs send -R ims_pool_mirror/backup@0702 | zfs receive -v -F -d ims_mirror_new
receiving full stream of ims_pool_mirror/backup@0702 into ims_mirror_new/backup@0702
cannot mount '/export/backup': directory is not empty
===

Of course it can't, since that mountpoint is used by the source pool!
Doing the child FSes (which inherit) one at a time works just fine:
===
root@imsfs-mirror:/# zfs send -R ims_pool_mirror/backup/bvcnet@0702 | zfs receive -v -d ims_mirror_new
receiving full stream of ims_pool_mirror/backup/bvcnet@0702 into ims_mirror_new/backup/bvcnet@0702
received 431MB stream in 10 seconds (43.1MB/sec)
===

Now that I understand the behavior, I have to ask:

(1) Where is this documented? I didn't see anything in the zfs admin guide saying "receiving a snapshot with a local mountpoint set will attempt to mount it there, and the receive operation will fail if it can't." I see it stated that the target FS name must not exist, but the examples use inherited mountpoints (e.g. the "users2" example on p. 202), which I thought I was following, until I realized the local mountpoint issue was tripping me up.

(2) (More importantly) Is there a workaround to force snapshots received with "zfs receive -d" to inherit their mountpoint from the pool they are imported into, and/or explicitly override it?

Thanks,

Andrew Daugherity
Systems Analyst
Division of Research & Graduate Studies
Texas A&M University
Richard Elling
2009-Jul-07 21:05 UTC
[zfs-discuss] possible to override/inherit mountpoint on received snapshots?
You need the zfs receive -u option.
 -- richard
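Applied to the transfer from the original post, that would look something like the following (a sketch, not taken from the thread; it assumes your zfs build already supports the -u flag):

```shell
# Receive the whole hierarchy without mounting anything (-u), so the
# received datasets' mountpoints don't collide with the source pool's
# still-mounted filesystems.
zfs send -R ims_pool_mirror/backup@0702 | zfs receive -u -v -F -d ims_mirror_new
```

The received filesystems still carry the source's mountpoint properties; they just aren't mounted, so nothing fails during the receive.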
Lori Alt
2009-Jul-07 21:27 UTC
[zfs-discuss] possible to override/inherit mountpoint on received snapshots?
To elaborate, the -u option to zfs receive suppresses all mounts. The datasets you extract will STILL have mountpoints that might not work on the local system, but at least you can unpack the entire hierarchy of datasets and then modify mountpoints as needed to make the file systems mountable. It's not a complete solution to your problem, but it should let you construct one.

The -u option is not documented partly because (1) it was added for the specific purpose of enabling flash archive installation support and (2) it's not a complete solution to the problem of moving dataset hierarchies from one system to another. However, it's turning out to be useful enough that we should probably document it.

Lori
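For instance, after an unmounted receive of the hierarchy above, the top-level received FS could be repointed or allowed to inherit from the new pool, and the children that inherit will follow. A sketch (the /export/backup-new path is a hypothetical example, not from the thread):

```shell
# Assumes the hierarchy was already received with:
#   zfs send -R ims_pool_mirror/backup@0702 | zfs receive -u -F -d ims_mirror_new

# Option 1: give the received copy an explicit, non-conflicting mountpoint
zfs set mountpoint=/export/backup-new ims_mirror_new/backup

# Option 2: clear the local setting so it inherits from ims_mirror_new instead
zfs inherit mountpoint ims_mirror_new/backup

# Then mount all mountable filesystems in one step
zfs mount -a
```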
Cindy.Swearingen at Sun.COM
2009-Jul-07 22:12 UTC
[zfs-discuss] possible to override/inherit mountpoint on received snapshots?
FYI... The -u option is described in the ZFS admin guide and the ZFS troubleshooting wiki in the areas of restoring root pool snapshots.

It is also described in the zfs.1m man page starting in the b115 release:

http://docs.sun.com/app/docs/doc/819-2240/zfs-1m

Cindy