Let's say I have a zfs called "pool/backups" and it contains two zfs'es, "pool/backups/server1" and "pool/backups/server2". I have sharenfs=on for pool/backups and it's inherited by the sub-zfs'es. I can then nfs mount pool/backups/server1 or pool/backups/server2, no problem.

If I mount pool/backups on a system running Solaris Express build 81, I can see the contents of pool/backups/server1 and pool/backups/server2 as I'd expect. But when I mount pool/backups on Solaris 10 or Solaris 8, I just see empty directories for server1 and server2. And if I actually write there, the files go into /pool/backups (and they can be seen on the nfs server if I unmount the sub-zfs'es). That's extra bad because if I reboot the nfs server, the sub-zfs'es fail to mount because their mountpoints are not empty, and so it won't come up in multi-user.

(The whole idea here is that I really want just the one nfs mount, but I want to be able to separate the data into separate zfs'es.)

So why does this work with the build 81 nfs client and not others, and is it possible to make it work? Right now the number of sub-zfs'es is only a handful, so I can mount them individually, but that's not the way I want it to work.
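(For concreteness, a minimal sketch of the setup described above; the hostname "nfsserver" and the mount point /mnt are placeholders.)

    # On the NFS server: parent dataset plus two child datasets.
    zfs create pool/backups
    zfs create pool/backups/server1
    zfs create pool/backups/server2

    # Share the parent; sharenfs=on is inherited by the children.
    zfs set sharenfs=on pool/backups

    # On a client, mounting a child directly works on any release:
    mount -F nfs nfsserver:/pool/backups/server1 /mnt

    # Mounting only the parent is where behavior differs: Solaris 8
    # and 10 clients see empty server1/ and server2/ directories.
    mount -F nfs nfsserver:/pool/backups /mnt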
Because of the mirror mount feature that integrated into Solaris Express, build 77. You can read about it on page 20 of the ZFS Admin Guide: http://opensolaris.org/os/community/zfs/docs/zfsadmin.pdf

Cindy
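(A hedged illustration of what mirror mounts look like from a build 77 or later NFSv4 client; hostname and mount point are again placeholders.)

    # Mount only the parent filesystem over NFSv4:
    mount -F nfs -o vers=4 nfsserver:/pool/backups /mnt

    # Crossing into a child directory triggers an automatic
    # "mirror mount" of the matching server-side filesystem:
    ls /mnt/server1

    # The child now shows up as its own NFS mount on the client:
    nfsstat -m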
On Thu, Feb 07, 2008 at 01:54:58PM -0800, Andrew Tefft wrote:
> Let's say I have a zfs called "pool/backups" and it contains two
> zfs'es, "pool/backups/server1" and "pool/backups/server2"
> [...]
> If I mount pool/backups on a system running Solaris Express build 81,

The NFSv3 client, and the NFSv4 client up to some older snv build (I forget which), will *not* follow the sub-mounts that exist on the server side. In recent snv builds the NFSv4 client will follow the sub-mounts that exist on the server side.

If you use the -hosts automount map (/net), then the NFSv3 client and older NFSv4 clients will mount the server-side sub-mounts, but only as they existed when the automount was made.

Nico
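(As a sketch of the /net behavior Nico describes, with a placeholder hostname:)

    # The -hosts map mounts the server-side sub-mounts as they
    # existed at the moment the automount was triggered:
    ls /net/nfsserver/pool/backups/server1

    # A child filesystem created on the server afterwards will not
    # appear under /net until the automounted entry times out and
    # is mounted again.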
Thanks. I guess this makes sense, now that I think about it, since this would be the same behavior when exporting nested ufs filesystems. It just took me by surprise since I was testing by accessing these on my workstation, and then when the jobs ran overnight on the server the behavior was different.

If I were using this setup on a larger scale I'd automount one level deeper. I was just hoping to have the single mount (which is already automounted as it is) along with the benefits of the separate filesystems.
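(One way to "automount one level deeper", sketched as a hypothetical autofs indirect map with a wildcard key; the map name, mount point, and hostname are made up for illustration.)

    # /etc/auto_master entry:
    #   /backups    auto_backups
    #
    # /etc/auto_backups -- the wildcard key gives each child
    # dataset its own NFS mount on first access:
    #   *    nfsserver:/pool/backups/&

    # First access triggers an individual mount of that child:
    ls /backups/server1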