I followed the formula in zfs(1M) to replicate a zfs filesystem remotely.
The initial transfer worked as I expected (and as documented): the
filesystems were created on the remote side and the data transferred. The
man page is difficult to interpret as to whether the data ends up in the
filesystem or only in the snapshot, but in any case it did what I wanted.

Then I added a file to one of the filesystems, took a second snapshot, and
sent it with 'zfs send -i'. No errors from the 'zfs recv', but the data
does not appear. There *is* a new snapshot on the remote side, but it's
not accessible. Also, the first snapshot is now no longer accessible; in
fact, all of .zfs is no longer accessible. This is a zoned filesystem.

Here you can see they contain the same amount of data:

[root@milk:~]# zfs list -r export/zone/smb/share/tmp
NAME                            USED  AVAIL  REFER  MOUNTPOINT
export/zone/smb/share/tmp        48K   225G  25.5K  /share/tmp
export/zone/smb/share/tmp@7    22.5K      -  24.5K  -
export/zone/smb/share/tmp@7.1      0      -  25.5K  -
[root@milk:~]#

[root@cookies:~]# zfs list -r export/zone/smb/share/tmp
NAME                            USED  AVAIL  REFER  MOUNTPOINT
export/zone/smb/share/tmp        48K   225G  25.5K  /share/tmp
export/zone/smb/share/tmp@7    22.5K      -  24.5K  -
export/zone/smb/share/tmp@7.1      0      -  25.5K  -
[root@cookies:~]#

But when I log in to zone smb and cd to /share/tmp/.zfs, I get 'no such
file or directory'. The .zfs directory does exist for other filesystems,
like /zone/eng/.zfs.

Hmm, ok: if I reboot the zone, .zfs shows up and the filesystem itself
(not just the snapshot) contains the new data. Curiously, the 7.1
snapshot contains data now:

NAME                            USED  AVAIL  REFER  MOUNTPOINT
export/zone/smb/share/tmp      69.5K   225G  25.5K  /share/tmp
export/zone/smb/share/tmp@7    22.5K      -  24.5K  -
export/zone/smb/share/tmp@7.1  21.5K      -  25.5K  -
[root@cookies:~]#

So, questions:

1. Is the need to reboot a bug? Certainly having the other snapshots go
   missing seems like a bug.
2. Why does the 7.1 snapshot have the extra space?

This is S10 06/06 on x86.

thanks
-frank
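(The post doesn't show the exact commands; the sequence below is a
minimal sketch of the zfs(1M) replication recipe being followed, using
the host and dataset names from the listings above. The ssh transport
and the snapshot ordering are assumptions.)

# On milk (the "local" side): initial full replication. The destination
# filesystem must not yet exist on cookies, but its parent
# (export/zone/smb/share) must.
zfs snapshot export/zone/smb/share/tmp@7
zfs send export/zone/smb/share/tmp@7 | \
    ssh cookies zfs recv export/zone/smb/share/tmp@7

# Later, after adding a file: take a second snapshot and send only the
# delta between @7 and @7.1.
zfs snapshot export/zone/smb/share/tmp@7.1
zfs send -i export/zone/smb/share/tmp@7 export/zone/smb/share/tmp@7.1 | \
    ssh cookies zfs recv export/zone/smb/share/tmp@7.1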
On August 19, 2006 7:06:06 PM -0700 Matthew Ahrens <ahrens@eng.sun.com> wrote:

> On Sat, Aug 19, 2006 at 06:31:47PM -0700, Frank Cusack wrote:
>> But when I log in to zone smb and cd to /share/tmp/.zfs, I get 'no such
>> file or directory'. This does exist for other filesystems like
>> /zone/eng/.zfs.
>
> My guess is that the filesystem is not mounted. It should be remounted
> after the 'zfs recv', but perhaps that is not happening correctly. You
> can see if it's mounted by running 'df' or 'zfs list -o name,mounted'.

You are right, it's not mounted.

> Did the 'zfs recv' print any error messages?

nope.

> Are you able to reproduce this behavior?

easily.

>> 1. Is the need to reboot a bug? Certainly having the other snapshots
>> go missing seems like a bug.
>
> Yes, it is a bug for the filesystem to not be remounted after the 'zfs
> recv'. FYI, you should be able to get it mounted again by running 'zfs
> mount -a'; you don't have to reboot the entire zone.

yay, that works.

>> 2. Why does the 7.1 snapshot have the extra space?
>
> If the filesystem (export/zone/smb/share/tmp) is modified, then some of
> the data that it shared with the snapshot (@7.1) will not be shared any
> longer, so it will become unique to the snapshot and show up as "used"
> space. Even if you didn't explicitly change anything in the filesystem,
> with the default settings, simply reading files in the filesystem will
> cause their atimes to be modified.

ah ok.

Note that if I do 'zfs send; zfs send -i' on the "local" side, then do
'zfs list; zfs mount -a' on the "remote" side, I still show space used
in the @7.1 snapshot, even though I didn't touch anything. I guess
mounting accesses the mount point and updates the atime.

On the "local" side, how come after I take the 7.1 snapshot and then
'ls', the 7.1 snapshot doesn't start using up space? Shouldn't my ls of
the mountpoint update the atime also?

-frank
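(Spelling out the check and the workaround from this exchange, with the
dataset and mountpoint names taken from the earlier listings; these run
on the receiving side, cookies.)

# After the incremental 'zfs recv': is the filesystem actually mounted?
zfs list -o name,mounted -r export/zone/smb/share/tmp
df /share/tmp        # from inside the smb zone; fails if not mounted

# Remount everything that should be mounted, instead of rebooting the zone:
zfs mount -a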
On Sat, Aug 19, 2006 at 07:21:52PM -0700, Frank Cusack wrote:
> On August 19, 2006 7:06:06 PM -0700 Matthew Ahrens <ahrens@eng.sun.com>
> wrote:
> > My guess is that the filesystem is not mounted. It should be remounted
> > after the 'zfs recv', but perhaps that is not happening correctly. You
> > can see if it's mounted by running 'df' or 'zfs list -o name,mounted'.
>
> You are right, it's not mounted.
>
> > Did the 'zfs recv' print any error messages?
>
> nope.
>
> > Are you able to reproduce this behavior?
>
> easily.

Hmm, I think there must be something special about your filesystems or
configuration; I'm not able to reproduce it.

One possible cause for trouble is if you are doing the 'zfs receive' into
a filesystem which has descendent filesystems (e.g., you are doing 'zfs
recv pool/fs@snap' and pool/fs/child exists). This isn't handled
correctly now, but you should get an error message in that case. (This
will be fixed by some changes Noel is going to putback next week.)

Could you send me the output of 'truss zfs recv ...', and of 'zfs list'
and 'zfs get -r all <pool>' on both the source and destination systems?

> ah ok. Note that if I do 'zfs send; zfs send -i' on the "local" side,
> then do 'zfs list; zfs mount -a' on the "remote" side, I still show
> space used in the @7.1 snapshot, even though I didn't touch anything.
> I guess mounting accesses the mount point and updates the atime.

Hmm, maybe. I'm not sure that's exactly what's happening, because
mounting and unmounting a filesystem doesn't seem to update the atime
for me. Does the @7.1 snapshot show used space before you do the
'zfs mount -a'?

> On the "local" side, how come after I take the 7.1 snapshot and then
> 'ls', the 7.1 snapshot doesn't start using up space? Shouldn't my ls
> of the mountpoint update the atime also?

I believe what's happening here is that although we update the in-core
atime, we sometimes defer pushing it to disk. You can force the atime to
be pushed to disk by unmounting the filesystem.

--matt
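(A sketch of the follow-ups Matt suggests, under a couple of assumptions:
@7.2 is a hypothetical next snapshot used only so the receive has
something new to do, and /tmp/recv.truss is an arbitrary output file.)

# Capturing the truss output of the receive, by wrapping the remote side
# of a fresh incremental send in truss:
zfs snapshot export/zone/smb/share/tmp@7.2    # hypothetical next snapshot
zfs send -i export/zone/smb/share/tmp@7.1 export/zone/smb/share/tmp@7.2 | \
    ssh cookies 'truss -o /tmp/recv.truss zfs recv export/zone/smb/share/tmp@7.2'

# The other diagnostics requested, on both milk and cookies:
zfs list
zfs get -r all export

# One way to observe the deferred-atime behavior Matt describes: read the
# filesystem, then force the in-core atimes to disk by unmounting.
ls /share/tmp
zfs list -r export/zone/smb/share/tmp   # @7.1 may still show 0 used
zfs unmount export/zone/smb/share/tmp
zfs mount export/zone/smb/share/tmp
zfs list -r export/zone/smb/share/tmp   # @7.1 now shows unique space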