Hi, I am facing some problems after rolling back a snapshot created on a pool.

Environment:

bash-3.00# uname -a
SunOS hostname 5.10 Generic_118833-17 sun4u sparc SUNW,Sun-Blade-100

ZFS version:

bash-3.00# zpool upgrade
This system is currently running ZFS version 2.
All pools are formatted using this version.

I have a zpool called "testpol" with 10G. This is the initial status of the pool:

bash-3.00# zpool list
NAME      SIZE   USED  AVAIL   CAP  HEALTH  ALTROOT
testpol  9.94G    90K  9.94G    0%  ONLINE  -

bash-3.00# zfs list
NAME      USED  AVAIL  REFER  MOUNTPOINT
testpol    84K  9.78G  24.5K  /testpol

Now I run the following commands:

bash-3.00# mkfile 10m /testpol/10megfile
bash-3.00# zfs create testpol/fs1
bash-3.00# mkfile 20m /testpol/fs1/20megfile
bash-3.00# zfs snapshot testpol@snap
bash-3.00# zfs create testpol/fs2
bash-3.00# mkfile 30m /testpol/fs2/30megfile
bash-3.00# mkfile 15m /testpol/15megfile

Output of zfs list after running the above commands (it shows that all of the above commands executed successfully):

bash-3.00# zfs list
NAME           USED  AVAIL  REFER  MOUNTPOINT
testpol       75.2M  9.71G  25.0M  /testpol
testpol@snap  23.5K      -  10.0M  -
testpol/fs1   20.0M  9.71G  20.0M  /testpol/fs1
testpol/fs2   30.0M  9.71G  30.0M  /testpol/fs2

The following are the file/file system entries under /testpol:

bash-3.00# ls -lR /testpol
/testpol:
total 51222
-rw------T   1 root     root     10485760 Jan 29 13:32 10megfile
-rw------T   1 root     root     15728640 Jan 29 13:34 15megfile
drwxr-xr-x   2 root     sys             3 Jan 29 13:33 fs1
drwxr-xr-x   2 root     sys             3 Jan 29 13:34 fs2

/testpol/fs1:
total 40977
-rw------T   1 root     root     20971520 Jan 29 13:33 20megfile

/testpol/fs2:
total 61461
-rw------T   1 root     root     31457280 Jan 29 13:34 30megfile

Everything shows up correctly until I roll back to the snapshot testpol@snap:

bash-3.00# zfs rollback testpol@snap
bash-3.00# zfs list
NAME           USED  AVAIL  REFER  MOUNTPOINT
testpol       60.2M  9.72G  10.0M  /testpol
testpol@snap      0      -  10.0M  -
testpol/fs1   20.0M  9.72G  20.0M  /testpol/fs1
testpol/fs2   30.0M  9.72G  30.0M  /testpol/fs2

bash-3.00# ls -lR /testpol/
/testpol/:
total 20490
-rw------T   1 root     root     10485760 Jan 29 13:32 10megfile
drwxr-xr-x   2 root     root            2 Jan 29 13:32 fs1

fs1 is now treated as a normal directory: "rm fs1" will succeed, which would fail in the case of a file system.

/testpol/fs1:
total 0

fs1 is empty. As expected, fs2 (which was created after the snapshot testpol@snap was taken) is not listed among the directories.

Issues after rolling back:

1. Before the snapshot was taken, "fs1" contained "20megfile", which is not present after the snapshot is rolled back.
2. Though the file system "fs2" is not present on disk, zfs list still shows "fs2".
3. The size reported for the file system "fs1" is incorrect.
4. After the rollback operation, "fs1" is no longer treated as a file system.

bash-3.00# mkfile 45m /testpol/fs1/45megfile
bash-3.00# zfs list
NAME           USED  AVAIL  REFER  MOUNTPOINT
testpol        105M  9.68G  55.0M  /testpol
testpol@snap  23.5K      -  10.0M  -
testpol/fs1   20.0M  9.68G  20.0M  /testpol/fs1
testpol/fs2   30.0M  9.68G  30.0M  /testpol/fs2

You can see that the 45m file got added to /testpol, not to fs1.

Did I do something that I shouldn't be doing? Can anyone please explain what is wrong with this behavior?

-Abishek
-- This message posted from opensolaris.org
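One way to check in advance what a rollback of testpol@snap would restore is to browse the snapshot through the .zfs control directory. This is a minimal sketch, assuming the default snapdir setting (the directory is hidden but still accessible) and the dataset names used above:

bash-3.00# ls -lR /testpol/.zfs/snapshot/snap
# Only 10megfile and an empty fs1 mountpoint directory show up here:
# testpol@snap covers the top-level file system only, so the contents of
# testpol/fs1 (and anything created later, such as 15megfile and fs2)
# are not part of this snapshot.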
Snapshots are not on a per-pool basis but a per-file-system basis. Thus, when you took a snapshot of "testpol", you didn't actually snapshot the pool; rather, you took a snapshot of the top-level file system (which has an implicit name matching that of the pool).

Thus, you haven't actually affected file systems fs1 or fs2 at all.

However, apparently you were able to roll back the file system, which either unmounted or broke the mounts to fs1 and fs2. This probably shouldn't have been allowed. (I wonder what would happen with an explicit non-ZFS mount on a ZFS directory which is removed by a rollback?)

Your fs1 and fs2 file systems still exist, but they're not attached to their old names any more. Maybe they got unmounted. You could probably mount them, either on the fs1 directory and on a new fs2 directory if you create one, or at a different point in your file system hierarchy.

Anton
-- This message posted from opensolaris.org
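Since the snapshot covered only the top-level file system, here is a sketch of how the whole hierarchy could be snapshotted instead, using the dataset names from this thread; the snapshot name "snap2" is just an illustration, and the -r (recursive) flag may not be available in a zfs release as old as the one shown here, in which case each file system has to be snapshotted individually:

bash-3.00# zfs snapshot testpol/fs1@snap      # snapshot each child file system individually...
bash-3.00# zfs snapshot testpol/fs2@snap
bash-3.00# zfs snapshot -r testpol@snap2      # ...or, where -r is supported, snapshot the
                                              #    whole hierarchy under one new snapshot name
bash-3.00# zfs list -t snapshot               # confirm which datasets actually have snapshots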
> Snapshots are not on a per-pool basis but a per-file-system basis. Thus,
> when you took a snapshot of "testpol", you didn't actually snapshot the
> pool; rather, you took a snapshot of the top-level file system (which has
> an implicit name matching that of the pool).
>
> Thus, you haven't actually affected file systems fs1 or fs2 at all.
>
> However, apparently you were able to roll back the file system, which
> either unmounted or broke the mounts to fs1 and fs2. This probably
> shouldn't have been allowed. (I wonder what would happen with an explicit
> non-ZFS mount on a ZFS directory which is removed by a rollback?)

Yes, taking snapshots directly on the pool should not be allowed.

> Your fs1 and fs2 file systems still exist, but they're not attached to
> their old names any more. Maybe they got unmounted. You could probably
> mount them, either on the fs1 directory and on a new fs2 directory if you
> create one, or at a different point in your file system hierarchy.

You are right, they got unmounted.

zfs get mounted testpol/fs1 ---------> says no
zfs get mounted testpol/fs2 ---------> says no

I understand that "mounted" is a read-only property of a ZFS file system. I tried to mount fs1 and fs2, but I was unsuccessful. Is there a specific way to mount ZFS file systems?

I have observed another strange behavior: I created the same pool structure as described in my previous post. When I roll back the snapshot the first time, everything seems to work perfectly, and I can see that the file systems fs1 and fs2 are not affected. However, when I roll back the snapshot a second time, the file systems are unmounted. Any ideas?
-- This message posted from opensolaris.org
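On the mounting question, a sketch of what could be tried, assuming the datasets still have their default mountpoints (/testpol/fs1 and /testpol/fs2); if the rollback left a plain, non-empty fs1 directory behind, zfs mount may refuse until that directory is emptied or removed:

bash-3.00# zfs get -r mountpoint,mounted testpol   # where each dataset should be mounted, and whether it is
bash-3.00# zfs mount testpol/fs1                   # mount a single file system at its mountpoint property
bash-3.00# zfs mount testpol/fs2
bash-3.00# zfs mount -a                            # or mount every ZFS file system that is not yet mounted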
If creating a snapshot on the top-level file system is allowed, then rolling back that snapshot must take care not to disturb the other file systems created under it.

-Abishek
-- This message posted from opensolaris.org
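Until rollback behaves that way, a workaround sketch, assuming per-file-system snapshots like those in the earlier example were taken, is to roll back each dataset separately and then check that the children are still mounted:

bash-3.00# zfs rollback testpol@snap
bash-3.00# zfs rollback testpol/fs1@snap        # only if testpol/fs1@snap was actually created
bash-3.00# zfs list -o name,mounted,mountpoint  # verify the child file systems are still mounted
bash-3.00# zfs mount -a                         # remount anything the rollback left unmounted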