I'm not sure if this is an nfs/autofs problem or a zfs problem, but I'll try here first...

On our server, I've got a zfs directory called "cube/builds/izick/". In this directory I have a number of mountpoints to other zfs file systems. The problem happens when we clone a new zfs file system, say cube/builds/izick/foo: any client system that already has cube/builds/izick mounted can see the new directory foo, but cannot see its contents. It looks like an empty directory on the client systems, while on the server it is fully populated with data. All the zfs file systems are shared. Restarting autofs and nfs/client does nothing. The only way to fix this is to unmount the directory on the client, which can be invasive on a desktop machine.

Could there be a problem because the zfs file systems are nested? Is there a known issue with zfs-nfs interactions where zfs doesn't properly tell nfs that there has been an update anywhere other than at the mountpoint? Thanks...

Tony

This message posted from opensolaris.org
Anthony J. Scarpino wrote:
> I'm not sure if this is an nfs/autofs problem or a zfs problem, but I'll try here first...
>
> On our server, I've got a zfs directory called "cube/builds/izick/". In this directory I have
> a number of mountpoints to other zfs file systems. The problem happens when we clone a new
> zfs file system, say cube/builds/izick/foo: any client system that already has
> cube/builds/izick mounted can see the new directory foo, but cannot see its contents.
> [...]

This is a known limitation - you would need to add entries to your automounter maps to let the client know to do mounts for those 'nested' entries. We're working on it - since the client can see the new directories and detect that they're different filesystems, we could do what we call 'mirror mounts' to make them available. See http://opensolaris.org/os/project/nfs-namespace/ for more on this and other work.

Rob T
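[A sketch of the automounter-map workaround Rob describes, using the Solaris automounter's hierarchical (offset) entry syntax so the nested file systems are mounted along with the parent. The map name and the server name "server" are assumptions for illustration, not from the thread:]

```
# Indirect map, e.g. auto_builds, attached at /builds via auto_master.
# Each offset under the key becomes its own NFS mount on the client,
# so the nested zfs file system is mounted explicitly:
izick   /       server:/cube/builds/izick \
        /foo    server:/cube/builds/izick/foo
```

Until mirror mounts integrate, every newly cloned file system still needs its own offset entry added to the map.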
Robert Thurlow wrote:
> Anthony J. Scarpino wrote:
>> [...]
>
> This is a known limitation - you would need to add entries to your
> automounter maps to let the client know to do mounts for those
> 'nested' entries. We're working on it - since the client can see
> the new directories and detect that they're different filesystems,
> we could do what we call 'mirror mounts' to make them available.
> See http://opensolaris.org/os/project/nfs-namespace/ for more on
> this and other work.
>
> Rob T

Ok... thanks for the link. I'm happy this is known and being worked on. Any targets yet for when this would be integrated? Thanks...

Tony
Anthony Scarpino wrote:
> Robert.Thurlow at sun.com wrote:
>> Anthony J. Scarpino wrote:
>>> [...]
>>
>> This is a known limitation - you would need to add entries to your
>> automounter maps to let the client know to do mounts for those
>> 'nested' entries. [...]
>>
>> Rob T
>
> Ok... thanks for the link. I'm happy this is known and being worked
> on. Any targets yet for when this would be integrated?

Early summer, we hope :-)

Rob T
On Wed, Apr 11, 2007 at 04:23:12PM -0600, Robert.Thurlow at Sun.COM wrote:
> Anthony J. Scarpino wrote:
>>
>> On our server, I've got a zfs directory called "cube/builds/izick/". In
>> this directory I have a number of mountpoints to other zfs file systems.
>> [...]
>
> This is a known limitation - you would need to add entries to your
> automounter maps to let the client know to do mounts for those
> 'nested' entries. We're working on it - since the client can see
> the new directories and detect that they're different filesystems,
> we could do what we call 'mirror mounts' to make them available.
> See http://opensolaris.org/os/project/nfs-namespace/ for more on
> this and other work.

What I'm doing to work around this problem is to use the autocreated -hosts map and lofs mounts of /net/$server/$bla. I.e. the auto_master.org_dir map has the entries:

/net      -hosts   -nosuid,nobrowse
/develop  develop

and develop.org_dir looks like this:

src   software:/export/src
lnf   localhost:/net/software/export/lnf
docs  software:/export/docs/dev

Which results in:

software:/export/lnf on /net/software/export/lnf type nfs remote/read/write/nosetuid/nodevices/xattr/dev=47c01b2 on Fri Apr 13 02:28:19 2007
/net/software/export/lnf on /develop/lnf type lofs read/write/setuid/devices/dev=47c01b2 on Fri Apr 13 02:28:19 2007
software:/export/lnf/i386 on /net/software/export/lnf/i386 type nfs remote/read/write/nosetuid/nodevices/xattr/dev=47c01b3 on Fri Apr 13 02:28:22 2007
software:/export/lnf/sparc on /net/software/export/lnf/sparc type nfs remote/read/write/nosetuid/nodevices/xattr/dev=47c01b4 on Fri Apr 13 02:28:31 2007
software:/export/lnf/linux on /net/software/export/lnf/linux type nfs remote/read/write/nosetuid/nodevices/xattr/dev=47c01b5 on Fri Apr 13 02:28:38 2007
software:/export/src on /develop/src type nfs remote/read/write/setuid/devices/xattr/dev=47c01b6 on Fri Apr 13 02:35:30 2007
software:/export/docs/dev on /develop/docs type nfs remote/read/write/setuid/devices/xattr/dev=47c01b7 on Fri Apr 13 02:50:01 2007

The only problem I encountered with this approach was with pkgmk: if e.g. /develop/lnf/i386 is not mounted when it runs, pkgmk doesn't trigger an automount and thinks the target FS has a size of 0 bytes - no space available. However, a short

cd /develop/lnf/i386 ; cd -

before pkgmk solves that problem.

Have fun,
jel.
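[The cd-and-back workaround above can be wrapped in a small POSIX shell helper; this is a sketch, not from the thread, and the /develop paths in the usage comment are the poster's examples:]

```shell
#!/bin/sh
# Trigger the automounter for each directory argument by performing a
# lookup inside it. Running cd in a subshell leaves the caller's
# working directory unchanged, unlike the manual "cd ... ; cd -" trick.
prime_mounts() {
    for d in "$@"; do
        ( cd "$d" ) || echo "warning: could not enter $d" >&2
    done
}

# Usage before a build, e.g.:
#   prime_mounts /develop/lnf/i386 /develop/lnf/sparc
#   pkgmk ...
```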
PS: For completeness, the related zfs file systems are:

pool1/lnf          2.44G  4.44T  31.5K  /export/lnf
pool1/lnf/i386     1.10G  4.44T  1.10G  /export/lnf/i386
pool1/lnf/linux      26K  4.44T    26K  /export/lnf/linux
pool1/lnf/sparc    1.34G  4.44T  1.34G  /export/lnf/sparc
pool2/src          1.07G  5.30T  1.07G  /export/src
pool1/docs         2.58G  4.44T    34K  /export/docs
pool1/docs/admin    630M  4.44T   630M  /export/docs/admin
pool1/docs/dev     1.70G   498G  1.70G  /export/docs/dev
pool1/docs/iws      269M  4.44T   269M  /export/docs/iws

--
Otto-von-Guericke University     http://www.cs.uni-magdeburg.de/
Department of Computer Science   Geb. 29 R 027, Universitaetsplatz 2
39106 Magdeburg, Germany         Tel: +49 391 67 12768
Jens Elkner wrote:
> The only problem I encountered with this approach was with pkgmk:
> if e.g. /develop/lnf/i386 is not mounted when it runs, pkgmk doesn't
> trigger an automount and thinks the target FS has a size of 0 bytes -
> no space available. However, a short cd /develop/lnf/i386 ; cd -
> before pkgmk solves that problem.

Another trick for this is to use a path of /develop/lnf/i386/. which will trigger an automount because of the lookup of '.'. One of my favorite automounter tricks :-)

Rob T
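[Rob's trailing-dot trick can be used from scripts as well; a minimal sketch, where the helper name is hypothetical and the path in the usage comment is the poster's example:]

```shell
#!/bin/sh
# Force a lookup of '.' inside a directory. On an automounted path this
# compels the automounter to complete the mount before any later command
# (pkgmk, df, ...) inspects the file system; on an ordinary directory it
# is a harmless no-op.
touch_mount() {
    ls -d "$1/." > /dev/null
}

# Example: touch_mount /develop/lnf/i386 && pkgmk ...
```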