Trevor Watson
2007-Feb-08 17:23 UTC
[zfs-discuss] Peculiar behaviour of snapshot after zfs receive
I am seeing what I think is very peculiar behaviour of ZFS after sending a full stream to a remote host - the upshot being that I can't send an incremental stream afterwards.

What I did was this:

host1 is Solaris 10 Update 2 SPARC
host2 is Solaris 10 Update 2 x86

host1 # zfs snapshot work/home@snap1
host1 # zfs send work/home@snap1 | ssh host2 zfs recv export/home
host1 # ssh host2
host2 # zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
export/home         1.02G  47.8G  1.02G  /export/home
export/home@snap1   70.5K      -  1.02G  -
host2 #

Note that the snapshot on the remote system is showing changes to the underlying filesystem, even though it is not accessed by any application on host2.

Now, I try to send an incremental stream:

host1 # zfs snapshot work/home@snap2
host1 # zfs send -i work/home@snap1 work/home@snap2 | ssh host2 zfs recv export/home
cannot receive: destination has been modified since most recent snapshot -- use 'zfs rollback' to discard changes

Am I using send/recv incorrectly, or is there something else going on here that I am missing?

Thanks,
Trev
Robert Milkowski
2007-Feb-08 18:53 UTC
[zfs-discuss] Peculiar behaviour of snapshot after zfs receive
Hello Trevor,

Thursday, February 8, 2007, 6:23:21 PM, you wrote:

TW> Am I using send/recv incorrectly, or is there something else going on
TW> here that I am missing?

It's a known bug.

Unmount and roll back the file system on host2. You should then see 0 used
space on the snapshot, and the incremental receive should work.

--
Best regards,
 Robert                          mailto:rmilkowski@task.gda.pl
                                 http://milek.blogspot.com
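
[Editor's note: a minimal sketch of that workaround on host2, using the dataset and snapshot names from the original post; the zfs list figures shown are illustrative, not from the thread:

host2 # zfs umount export/home
host2 # zfs rollback export/home@snap1
host2 # zfs list export/home@snap1
NAME                USED  AVAIL  REFER  MOUNTPOINT
export/home@snap1      0      -  1.02G  -

Once the snapshot's USED drops to 0, the destination no longer counts as modified and the incremental stream can be received.]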
Wade.Stuart@fallon.com
2007-Feb-08 19:00 UTC
[zfs-discuss] Peculiar behavior of snapshot after zfs receive
> TW> Am I using send/recv incorrectly, or is there something else going on
> TW> here that I am missing?
>
> It's a known bug.
>
> Unmount and roll back the file system on host2. You should then see 0 used
> space on the snapshot, and the incremental receive should work.

Bug ID? Is it related to atime changes?

-Wade
Robert Milkowski
2007-Feb-08 21:20 UTC
[zfs-discuss] Peculiar behavior of snapshot after zfs receive
Hello Wade,

Thursday, February 8, 2007, 8:00:40 PM, you wrote:

WSfc> Bug ID? Is it related to atime changes?

It has to do with the delete queue being processed when the fs is mounted.

The bug id is 6343779:
http://bugs.opensolaris.org/view_bug.do?bug_id=6343779

--
Best regards,
 Robert                          mailto:rmilkowski@task.gda.pl
                                 http://milek.blogspot.com
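
[Editor's note: putting the workaround and the incremental send together, a sketch of a receive-side refresh driven from host1, with the same pool and snapshot names as in the original post; this exact sequence is an assumption, not something posted in the thread:

host1 # ssh host2 zfs umount export/home
host1 # ssh host2 zfs rollback export/home@snap1
host1 # zfs send -i work/home@snap1 work/home@snap2 | ssh host2 zfs recv export/home

Rolling back to the last received snapshot discards whatever the mounted filesystem changed on host2, so the incremental stream then applies cleanly.]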
eric kustarz
2007-Feb-08 21:40 UTC
[zfs-discuss] Peculiar behaviour of snapshot after zfs receive
On Feb 8, 2007, at 10:53 AM, Robert Milkowski wrote:

> It's a known bug.
>
> Unmount and roll back the file system on host2. You should then see 0 used
> space on the snapshot, and the incremental receive should work.

And with snv_48 (s10u4 when it becomes available), you can use 'zfs recv -F'
to force the rollback.

eric
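
[Editor's note: on builds that have the -F option, the rollback can be folded into the receive itself; a sketch with the same dataset names as above:

host1 # zfs send -i work/home@snap1 work/home@snap2 | ssh host2 zfs recv -F export/home

The -F flag rolls the receiving filesystem back to its most recent snapshot before applying the incremental stream, so no separate umount/rollback step is needed.]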
Wade.Stuart@fallon.com
2007-Feb-08 21:55 UTC
[zfs-discuss] Peculiar behavior of snapshot after zfs receive
> It has to do with the delete queue being processed when the fs is mounted.
>
> The bug id is 6343779:
> http://bugs.opensolaris.org/view_bug.do?bug_id=6343779

Robert,

Thanks! This is good to know. I was having issues with one of my boxes and
zfs send/receive that may very well have been this bug.

-Wade
Trevor Watson
2007-Feb-09 12:49 UTC
[zfs-discuss] Peculiar behavior of snapshot after zfs receive
Thanks Robert, that did the trick for me!

Robert Milkowski wrote:

> It has to do with the delete queue being processed when the fs is mounted.
>
> The bug id is 6343779:
> http://bugs.opensolaris.org/view_bug.do?bug_id=6343779