Is it possible to send an entire pool (including all its zfs filesystems) to a zfs filesystem in a different pool on another host? Or must I send each zfs filesystem one at a time?

Thanks!
jlc
On Wed 29/07/09 10:09 , "Joseph L. Casale" JCasale at activenetwerx.com sent:

> Is it possible to send an entire pool (including all its zfs filesystems)
> to a zfs filesystem in a different pool on another host? Or must I send each
> zfs filesystem one at a time?

Yes, use -R on the sending side and -d on the receiving side.

--
Ian.
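A minimal sketch of the command form Ian is describing, assuming the snapshot already exists on the sending pool; the pool, snapshot, user, and host names here are placeholders, not anything from the thread:

  # replicate a pool-level snapshot, with descendants and properties, to another host
  zfs send -R sourcepool@backup | ssh user@otherhost "zfs recv -d destpool"

-R builds a replication stream that includes the snapshotted descendant filesystems and their properties, and -d on the receiving side strips the source pool name so the same dataset layout is recreated under destpool without naming each filesystem explicitly.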
> Yes, use -R on the sending side and -d on the receiving side.

I tried that first, going from Solaris 10 to osol 0906:

# zfs send -vR mypool2@snap | ssh joe@catania "pfexec /usr/sbin/zfs recv -dF mypool/somename"

didn't create any of the zfs filesystems under mypool2?

Thanks!
jlc
Try send/receive to the same host (ssh localhost). I used this when trying send/receive, as it removes ssh "problems" between hosts.

The on-disk format of ZFS has changed (there is something about it in the man pages, from memory), so I don't think you can go S10 -> OpenSolaris without doing an upgrade, but I could be wrong!

Joseph L. Casale wrote:
> > Yes, use -R on the sending side and -d on the receiving side.
>
> I tried that first, going from Solaris 10 to osol 0906:
>
> # zfs send -vR mypool2@snap | ssh joe@catania "pfexec /usr/sbin/zfs recv -dF mypool/somename"
>
> didn't create any of the zfs filesystems under mypool2?
>
> Thanks!
> jlc
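As a sketch of that local test, using placeholder pool names on whichever host has room for a scratch copy (the -F roll-back means the target should be disposable):

  # same pipeline, but looped back through localhost to rule out ssh/host issues
  zfs send -vR srcpool@snap | ssh localhost "zfs recv -dF scratchpool/copy"

If the descendant filesystems still fail to appear locally, the problem is in the stream itself rather than in the ssh transport.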
On Wed 29/07/09 10:49 , "Joseph L. Casale" JCasale at activenetwerx.com sent:

> > Yes, use -R on the sending side and -d on the receiving side.
>
> I tried that first, going from Solaris 10 to osol 0906:
>
> # zfs send -vR mypool2@snap | ssh joe@catania "pfexec /usr/sbin/zfs recv -dF mypool/somename"
>
> didn't create any of the zfs filesystems under mypool2?

What happens if you try it on the local host, where you can just pipe from the send to the receive (no need for ssh)?

zfs send -R mypool2@snap | zfs recv -d -n -v newpool/somename

Another thing to try is using "-n -v" on the receive end to see what would be created if -n were omitted.

I find -v more useful on the receiving side than on the send.

--
Ian
I apologize for replying in the middle of this thread, but I never saw the initial snapshot syntax of mypool2, which needs to be recursive (zfs snapshot -r mypool2@snap) to snapshot all the datasets in mypool2. Then, use zfs send -R to pick up and restore all the dataset properties.

What was the original snapshot syntax?

Cindy

----- Original Message -----
From: Ian Collins <ian at ianshome.com>
Date: Tuesday, July 28, 2009 5:53 pm
Subject: Re: [zfs-discuss] zfs send/recv syntax
To: "zfs-discuss at opensolaris.org" <zfs-discuss at opensolaris.org>, "Joseph L. Casale" <JCasale at activenetwerx.com>

> On Wed 29/07/09 10:49 , "Joseph L. Casale" JCasale at activenetwerx.com sent:
>
> > > Yes, use -R on the sending side and -d on the receiving side.
> >
> > I tried that first, going from Solaris 10 to osol 0906:
> >
> > # zfs send -vR mypool2@snap | ssh joe@catania "pfexec /usr/sbin/zfs recv -dF mypool/somename"
> >
> > didn't create any of the zfs filesystems under mypool2?
>
> What happens if you try it on the local host, where you can just pipe
> from the send to the receive (no need for ssh)?
>
> zfs send -R mypool2@snap | zfs recv -d -n -v newpool/somename
>
> Another thing to try is using "-n -v" on the receive end to see what
> would be created if -n were omitted.
>
> I find -v more useful on the receiving side than on the send.
>
> --
> Ian
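Put together, a sketch of the sequence Cindy is describing, reusing the dataset and host names from Joe's earlier command:

  # take a recursive snapshot first: this creates @snap on mypool2 and on every descendant dataset
  zfs snapshot -r mypool2@snap

  # the -R replication stream can then carry those descendant snapshots and their properties
  zfs send -vR mypool2@snap | ssh joe@catania "pfexec /usr/sbin/zfs recv -dF mypool/somename"

Without the recursive snapshot there is no @snap on the child datasets for -R to pick up, which would explain why nothing appeared under the receiving filesystem.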
> I apologize for replying in the middle of this thread, but I never
> saw the initial snapshot syntax of mypool2, which needs to be
> recursive (zfs snapshot -r mypool2@snap) to snapshot all the
> datasets in mypool2. Then, use zfs send -R to pick up and
> restore all the dataset properties.
>
> What was the original snapshot syntax?

Cindy,
You figured it out! I forgot the -r :)

I don't have the room to try to send locally, so I reran the ssh and it's showing what would get transferred with Ian's syntax.

I just ran the following:

zfs send -vR mypool2@snap | ssh joe@host "pfexec /usr/sbin/zfs recv -Fdnv mypool/zfsname"

Looking at the man page, it doesn't explicitly state the behavior I am noticing, but looking at the switches, I can see a _lot_ of traffic going from the sending host to the receiving host. Does the -n just not write it but allow it to be sent? The command has not returned...

Thanks everyone!
jlc
Joseph L. Casale wrote:
> > I apologize for replying in the middle of this thread, but I never
> > saw the initial snapshot syntax of mypool2, which needs to be
> > recursive (zfs snapshot -r mypool2@snap) to snapshot all the
> > datasets in mypool2. Then, use zfs send -R to pick up and
> > restore all the dataset properties.
> >
> > What was the original snapshot syntax?
>
> Cindy,
> You figured it out! I forgot the -r :)
>
> I don't have the room to try to send locally, so I reran the ssh
> and it's showing what would get transferred with Ian's syntax.
>
> I just ran the following:
> zfs send -vR mypool2@snap | ssh joe@host "pfexec /usr/sbin/zfs recv -Fdnv mypool/zfsname"
>
> Looking at the man page, it doesn't explicitly state the behavior I am
> noticing, but looking at the switches, I can see a _lot_ of traffic going
> from the sending host to the receiving host. Does the -n just not write it
> but allow it to be sent? The command has not returned...

Correct, the sending side will be happily sending into a void. Kill it and re-run without the -n.

--
Ian.
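For completeness, a sketch of the re-run Ian suggests: the same pipeline Joe last ran (names as he wrote them), with only the -n dropped, so the receive actually writes the datasets while the receive-side -v should still report each one as it arrives:

  zfs send -vR mypool2@snap | ssh joe@host "pfexec /usr/sbin/zfs recv -Fdv mypool/zfsname"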