[root@milk:~]# zfs list -r export/zone/www/html
NAME                     USED  AVAIL  REFER  MOUNTPOINT
export/zone/www/html    20.9M   225G  10.4M  /export/zone/www/html
export/zone/www/html@0  10.4M      -  10.4M  -
export/zone/www/html@1      0      -  10.4M  -
export/zone/www/html@2      0      -  10.4M  -
export/zone/www/html@3      0      -  10.4M  -
export/zone/www/html@4      0      -  10.4M  -
export/zone/www/html@5      0      -  10.4M  -
[root@milk:~]# zfs send export/zone/www/html@4 | ssh cookies zfs recv export/zone/www/html@4
[root@milk:~]# zfs send -i export/zone/www/html@4 export/zone/www/html@5 | ssh cookies zfs recv export/zone/www/html
cannot receive: destination has been modified since most recent snapshot -- use 'zfs rollback' to discard changes
[root@milk:~]#

The machine cookies is idle.  S10_U2/x86 patched (18855-19).

I was going to try deleting all snaps and start over with a new snap, but I thought someone might be interested in figuring out what's going on here.

-frank
Frank Cusack wrote:
> [root@milk:~]# zfs send -i export/zone/www/html@4 export/zone/www/html@5
> | ssh cookies zfs recv export/zone/www/html
> cannot receive: destination has been modified since most recent snapshot --
> use 'zfs rollback' to discard changes
>
> I was going to try deleting all snaps and start over with a new snap but I
> thought someone might be interested in figuring out what's going on here.

That should not be necessary!

I assume that you already followed the suggestion of doing 'zfs rollback', and you got the same message after trying the incremental recv again.  If not, try that first.

There are a couple of things that could cause this.  One is that some process is inadvertently modifying the destination (eg. by reading something, causing the atime to be updated).  You can get around this by making the destination fs readonly=on.

Another possibility is that you're hitting 6343779 "ZPL's delete queue causes 'zfs restore' to fail".

In either case, you can fix the problem by using "zfs recv -F", which will do the rollback for you and make sure nothing happens between the rollback and the recv.  You need to be running build 48 or later to use 'zfs recv -F'.

If you can't run build 48 or later, then you can work around the problem by not mounting the filesystem in between the 'rollback' and the 'recv':

cookies# zfs set mountpoint=none export/zone/www/html
cookies# zfs rollback export/zone/www/html@4
milk# zfs send -i @4 export/zone/www/html@5 | ssh cookies zfs recv export/zone/www/html

Let me know if one of those options works for you.

--matt
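For reference, on build 48 or later the forced receive Matt describes collapses the workaround into a single step.  A sketch using the host and dataset names from this thread (not verified on this exact setup):

milk# zfs send -i export/zone/www/html@4 export/zone/www/html@5 | \
        ssh cookies zfs recv -F export/zone/www/html

The -F flag has the receiving side roll the destination back to its most recent snapshot itself, so nothing can dirty the filesystem between the rollback and the receive.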
On October 6, 2006 2:34:36 PM -0700 Matthew Ahrens <Matthew.Ahrens@sun.com> wrote:
> Frank Cusack wrote:
>> [root@milk:~]# zfs send -i export/zone/www/html@4 export/zone/www/html@5
>> | ssh cookies zfs recv export/zone/www/html
>> cannot receive: destination has been modified since most recent snapshot
>> -- use 'zfs rollback' to discard changes
>
>> I was going to try deleting all snaps and start over with a new snap but
>> I thought someone might be interested in figuring out what's going on
>> here.
>
> That should not be necessary!
>
> I assume that you already followed the suggestion of doing 'zfs
> rollback', and you got the same message after trying the incremental recv
> again.  If not, try that first.

Yup, tried that.

> There are a couple of things that could cause this.  One is that some
> process is inadvertently modifying the destination (eg. by reading
> something, causing the atime to be updated).  You can get around this by
> making the destination fs readonly=on.
>
> Another possibility is that you're hitting 6343779 "ZPL's delete queue
> causes 'zfs restore' to fail".
>
> In either case, you can fix the problem by using "zfs recv -F" which will
> do the rollback for you and make sure nothing happens between the
> rollback and the recv.  You need to be running build 48 or later to use
> 'zfs recv -F'.
>
> If you can't run build 48 or later, then you can work around the problem
> by not mounting the filesystem in between the 'rollback' and the 'recv':
>
> cookies# zfs set mountpoint=none export/zone/www/html
> cookies# zfs rollback export/zone/www/html@4
> milk# zfs send -i @4 export/zone/www/html@5 | ssh cookies zfs recv
> export/zone/www/html
>
> Let me know if one of those options works for you.

Setting mountpoint=none works, but once I set the mountpoint option back it fails again.  That is, I successfully send the incremental, reset the mountpoint option, rollback and send and it fails.

So I guess there is a filesystem access somewhere somehow immediately after the rollback.  I can't run b48 (any idea if -F will be in 11/06?).  What can I do to find out what is changing the fs?

I notice that it takes about 5s for the USED column to change from 0 to non-0 on the snapshot, after rollback, when doing it interactively.  However, I really do this via a script which does a rollback then immediately does the send.  This script always fails.  It's lucky if it takes 0.1s between rollback and send.

readonly=on doesn't help.  That is,

cookies# zfs set readonly=on export/zone/www/html
cookies# zfs rollback export/zone/www/html@4
milk# zfs send ...
... destination has been modified ...

-frank
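One rough way to watch for the modification Frank describes is to poll the USED value of the most recent snapshot after a rollback, since it climbs above zero as soon as the live filesystem diverges from that snapshot.  A sketch with the thread's dataset names and an arbitrary one-second poll:

cookies# zfs rollback export/zone/www/html@4
cookies# while true; do zfs get used export/zone/www/html@4; sleep 1; done

If USED goes non-zero with no process touching the mount, that points at something internal to the filesystem (such as the delete-queue processing in 6343779) rather than a stray reader.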
Frank Cusack wrote:
>> If you can't run build 48 or later, then you can work around the problem
>> by not mounting the filesystem in between the 'rollback' and the 'recv':
>>
>> cookies# zfs set mountpoint=none export/zone/www/html
>> cookies# zfs rollback export/zone/www/html@4
>> milk# zfs send -i @4 export/zone/www/html@5 | ssh cookies zfs recv
>> export/zone/www/html
>>
>> Let me know if one of those options works for you.
>
> Setting mountpoint=none works, but once I set the mountpoint option back
> it fails again.  That is, I successfully send the incremental, reset the
> mountpoint option, rollback and send and it fails.

I don't follow... could you list the exact sequence of commands you used and their output?  I think you're saying that you were able to successfully receive the @4-@5 incremental, but when you tried the @5-@6 incremental without doing mountpoint=none, the recv failed.  So you're saying that you need mountpoint=none for any incremental recv's, not just @4-@5?

> So I guess there is a filesystem access somewhere somehow immediately after
> the rollback.  I can't run b48 (any idea if -F will be in 11/06?).

I don't think so.  Look for it in Solaris 10 update 4.

> However, I really do this via
> a script which does a rollback then immediately does the send.  This script
> always fails.

It sounds like the mountpoint=none trick works for you, so can't you just incorporate it into your script?  Eg:

while (want to send snap) {
    zfs set mountpoint=none destfs
    zfs rollback destfs@mostrecentsnap
    zfs send -i @bla fs@snap | ssh desthost zfs recv bla
    zfs inherit mountpoint destfs
    sleep ...
}

> readonly=on doesn't help.  That is,
>
> cookies# zfs set readonly=on export/zone/www/html
> cookies# zfs rollback export/zone/www/html@4
> milk# zfs send ...
> ... destination has been modified ...

This implies that you are hitting 6343779 (or some other bug) which is causing your fs to be modified, rather than some spurious process.  But I would expect that to be rare, so it would be surprising if you see this happening with many different snapshots.

--matt
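A rough shell rendering of the loop Matt sketches, run from the sending host with this thread's names filled in.  The snapshot numbering, the creation of each new snapshot, and the sleep interval are placeholders, not anything from Frank's actual script:

#!/bin/sh
# Sketch of an incremental replication loop from milk to cookies.
# Assumes the destination already holds the @$PREV snapshot.
FS=export/zone/www/html
PREV=4
while true; do
        NEXT=`expr $PREV + 1`
        zfs snapshot $FS@$NEXT
        # Keep the destination unmounted from the rollback through the recv
        # so nothing can dirty it in between.
        ssh cookies zfs set mountpoint=none $FS
        ssh cookies zfs rollback $FS@$PREV
        zfs send -i @$PREV $FS@$NEXT | ssh cookies zfs recv $FS
        ssh cookies zfs inherit mountpoint $FS
        PREV=$NEXT
        sleep 3600
done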
On October 6, 2006 3:09:09 PM -0700 Matthew Ahrens <Matthew.Ahrens@sun.com> wrote:
> Frank Cusack wrote:
>>> If you can't run build 48 or later, then you can work around the problem
>>> by not mounting the filesystem in between the 'rollback' and the 'recv':
>>>
>>> cookies# zfs set mountpoint=none export/zone/www/html
>>> cookies# zfs rollback export/zone/www/html@4
>>> milk# zfs send -i @4 export/zone/www/html@5 | ssh cookies zfs recv
>>> export/zone/www/html
>>>
>>> Let me know if one of those options works for you.
>>
>> Setting mountpoint=none works, but once I set the mountpoint option back
>> it fails again.  That is, I successfully send the incremental, reset the
>> mountpoint option, rollback and send and it fails.
>
> I don't follow... could you list the exact sequence of commands you used
> and their output?  I think you're saying that you were able to
> successfully receive the @4-@5 incremental, but when you tried the @5-@6
> incremental without doing mountpoint=none, the recv failed.  So you're
> saying that you need mountpoint=none for any incremental recv's, not just
> @4-@5?

No, I just tried the @4-@5 incremental again.  I didn't think to try another incremental.  So I was basically doing the mountpoint=none trick, then trying @4-@5 again without doing mountpoint=none.  Now, if I try @5-@6, it does work.

> It sounds like the mountpoint=none trick works for you, so can't you just
> incorporate it into your script?  Eg:

Sure.  I was just trying to identify the problem correctly, in case this isn't just another instance of an already-known problem.  mountpoint=none is really suboptimal for me though, it means I cannot have services running on the receiving host.  I was hoping readonly=on would do the trick.

>> readonly=on doesn't help.  That is,
>>
>> cookies# zfs set readonly=on export/zone/www/html
>> cookies# zfs rollback export/zone/www/html@4
>> milk# zfs send ...
>> ... destination has been modified ...
>
> This implies that you are hitting 6343779 (or some other bug) which is
> causing your fs to be modified, rather than some spurious process.  But I
> would expect that to be rare, so it would be surprising if you see this
> happening with many different snapshots.

It's all existing snapshots on that one filesystem.  If I take a new snapshot (@6) and send it, it works.  Which seems weird to me.  It seems to be something to do with the sending host, not the receiving host.

-frank
Frank Cusack wrote:
> No, I just tried the @4-@5 incremental again.  I didn't think to try
> another incremental.  So I was basically doing the mountpoint=none trick,
> then trying @4-@5 again without doing mountpoint=none.

Again, seeing the exact sequence of commands you ran would make it quicker for me to diagnose this.  I think you're saying that you ran:

zfs set mountpoint=none destfs
zfs rollback destfs@prevsnap
zfs send -i @4 bla@5 | zfs recv ...    -> success
zfs inherit mountpoint destfs
zfs rollback -r destfs@4
zfs send -i @4 bla@5 | zfs recv ...    -> failure

This would be consistent with hitting bug 6343779.

>> It sounds like the mountpoint=none trick works for you, so can't you just
>> incorporate it into your script?  Eg:
>
> Sure.  I was just trying to identify the problem correctly, in case
> this isn't just another instance of an already-known problem.
> mountpoint=none is really suboptimal for me though, it means I cannot
> have services running on the receiving host.  I was hoping readonly=on
> would do the trick.

Really?  I find it hard to believe that mountpoint=none causes any more problems than 'zfs recv' by itself, since 'zfs recv' of an incremental stream always unmounts the destination fs while the recv is taking place.

> It's all existing snapshots on that one filesystem.  If I take a new
> snapshot (@6) and send it, it works.  Which seems weird to me.  It seems
> to be something to do with the sending host, not the receiving host.

From the information you've provided, my best guess is that the problem is associated with your @4 snapshot, and you are hitting 6343779.  Here is the bug description:

Even when not accessing a filesystem, it can become dirty due to the zpl's delete queue.  This means that even if you are just 'zfs restore'-ing incremental backups into the filesystem, it may fail because the filesystem has been modified.

One possible solution would be to make filesystems created by 'zfs restore' be readonly by default, and have the zpl not process the delete queue if it is mounted readonly.

*** (#1 of 2): 2005-10-31 03:31:02 PST matthew.ahrens@sun.com

Note, currently even if you manually set the filesystem to be readonly, the ZPL will still process the delete queue, making it particularly difficult to ensure there are no changes since a most recent snapshot which has entries in the delete queue.  The only workaround I could find is to not mount the filesystem.

*** (#2 of 2): 2005-10-31 03:34:56 PST matthew.ahrens@sun.com

--matt
On October 6, 2006 3:42:48 PM -0700 Matthew Ahrens <Matthew.Ahrens@sun.com> wrote:
> Frank Cusack wrote:
>> No, I just tried the @4-@5 incremental again.  I didn't think to try
>> another incremental.  So I was basically doing the mountpoint=none trick,
>> then trying @4-@5 again without doing mountpoint=none.
>
> Again, seeing the exact sequence of commands you ran would make it
> quicker for me to diagnose this.
>
> I think you're saying that you ran:
>
> zfs set mountpoint=none destfs
> zfs rollback destfs@prevsnap
> zfs send -i @4 bla@5 | zfs recv ...    -> success
> zfs inherit mountpoint destfs
> zfs rollback -r destfs@4
> zfs send -i @4 bla@5 | zfs recv ...    -> failure
>
> This would be consistent with hitting bug 6343779.

That's right.  Sorry for not being explicit.

>>> It sounds like the mountpoint=none trick works for you, so can't you
>>> just incorporate it into your script?  Eg:
>>
>> Sure.  I was just trying to identify the problem correctly, in case
>> this isn't just another instance of an already-known problem.
>> mountpoint=none is really suboptimal for me though, it means I cannot
>> have services running on the receiving host.  I was hoping readonly=on
>> would do the trick.
>
> Really?  I find it hard to believe that mountpoint=none causes any more
> problems than 'zfs recv' by itself, since 'zfs recv' of an incremental
> stream always unmounts the destination fs while the recv is taking place.

You're right.  I forgot I was having problems with this anyway.

>> It's all existing snapshots on that one filesystem.  If I take a new
>> snapshot (@6) and send it, it works.  Which seems weird to me.  It seems
>> to be something to do with the sending host, not the receiving host.
>
> From the information you've provided, my best guess is that the problem
> is associated with your @4 snapshot, and you are hitting 6343779.

Well, all existing snapshots (@0, @1 ... @4).  I will add changing of the mountpoint property to my script.

thanks
-frank
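Concretely, that addition amounts to wrapping the existing incremental send in the mountpoint dance from earlier in the thread.  A brief sketch with this thread's names, run from milk, not Frank's actual script:

milk# ssh cookies zfs set mountpoint=none export/zone/www/html
milk# ssh cookies zfs rollback export/zone/www/html@4
milk# zfs send -i @4 export/zone/www/html@5 | ssh cookies zfs recv export/zone/www/html
milk# ssh cookies zfs inherit mountpoint export/zone/www/html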
Frank Cusack wrote:
>> Really?  I find it hard to believe that mountpoint=none causes any more
>> problems than 'zfs recv' by itself, since 'zfs recv' of an incremental
>> stream always unmounts the destination fs while the recv is taking place.
>
> You're right.  I forgot I was having problems with this anyway.

You'd probably be interested in RFE 6425096 "want online (read-only) 'zfs recv'".  Unfortunately this isn't a priority at the moment.

>>> It's all existing snapshots on that one filesystem.  If I take a new
>>> snapshot (@6) and send it, it works.  Which seems weird to me.  It seems
>>> to be something to do with the sending host, not the receiving host.
>>
>> From the information you've provided, my best guess is that the problem
>> is associated with your @4 snapshot, and you are hitting 6343779.
>
> Well, all existing snapshots (@0, @1 ... @4).  I will add changing of
> the mountpoint property to my script.

That's a bit surprising, but I'm glad we have a workaround for you.  'zfs recv -F' will make this a bit smoother once you have it.

--matt