I've got my first server deployment with ZFS.
Consolidating a pair of other file servers that used to have
a dozen or so NFS exports in /etc/dfs/dfstab similar to;

/export/solaris/images
/export/tools
/export/ws
..... and so on....

For the new server, I have one large zfs pool;

-bash-3.00# df -hl
bigpool              16T   1.5T    15T    10%    /export

that I am starting to populate. Should I simply share /export,
or should I separately share the individual dirs in /export
like the old dfstab did?

I am assuming that one single command;

# zfs set sharenfs=ro bigpool

would share /export as a read-only NFS point?

Opinions/comments/tutoring?

Thanks,

Neal
Neal Pollack wrote:
> I''ve got my first server deployment with ZFS.
> Consolidating a pair of other file servers that used to have
> a dozen or so NFS exports in /etc/dfs/dfstab similar to;
>
> /export/solaris/images
> /export/tools
> /export/ws
> ..... and so on....
>
> For the new server, I have one large zfs pool;
> -bash-3.00# df -hl
> bigpool              16T   1.5T    15T    10%    /export
>
> that I am starting to populate. Should I simply share /export,
> or should I separately share the individual dirs in /export
> like the old dfstab did?
>
> I am assuming that one single command;
> # zfs set sharenfs=ro bigpool
> would share /export as a read-only NFS point?
>
> Opinions/comments/tutoring?

The only thing I found in docs was page 99 of the admin guide.
So it says I should do;

zfs set sharenfs=on bigpool

to get all sub dirs shared rw via NFS, and then do

zfs set sharenfs=ro bigpool/dirname

for those I want to protect read-only.

Is that the current best practice?

Thanks

> Thanks,
>
> Neal
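A sketch of that approach, assuming the pool is named bigpool as in the df output above (bigpool/dirname is a placeholder for whichever child filesystem you want to protect):

```shell
# Share the whole pool read-write via NFS; the sharenfs property is
# inherited by every descendant filesystem in the pool.
zfs set sharenfs=on bigpool

# Override inheritance on specific children that should be read-only.
zfs set sharenfs=ro bigpool/dirname

# Check what the NFS server is actually exporting.
share
```

Note that sharenfs only applies per ZFS filesystem; plain subdirectories inside one filesystem are covered by their filesystem's share, not shared individually.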
On January 30, 2007 9:59:45 AM -0800 Neal Pollack <Neal.Pollack at Sun.COM> wrote:
> I''ve got my first server deployment with ZFS.
> Consolidating a pair of other file servers that used to have
> a dozen or so NFS exports in /etc/dfs/dfstab similar to;
>
> /export/solaris/images
> /export/tools
> /export/ws
> ..... and so on....
>
> For the new server, I have one large zfs pool;
> -bash-3.00# df -hl
> bigpool              16T   1.5T    15T    10%    /export
>
> that I am starting to populate. Should I simply share /export,
> or should I separately share the individual dirs in /export
> like the old dfstab did?

Just share /export.

> I am assuming that one single command;
> # zfs set sharenfs=ro bigpool
> would share /export as a read-only NFS point?
>
> Opinions/comments/tutoring?

Unless you have different share option requirements for different dirs
(say rw vs ro or different network access rules), just sharing the top
level is probably better (easier to manage). Clients can still mount
subdirs and not the entire pool.

Now, if you create each dir under /export as a zfs filesystem, the
clients will HAVE to mount the individual filesystems. If they just
mount /export they will not traverse the fs mount on the server when
they descend /export. (today)

-frank
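To illustrate the client-side difference Frank describes (hypothetical host name "server" and mount points; Solaris client syntax assumed):

```shell
# Case 1: /export is one shared filesystem. A client can mount just a
# subdirectory of the share without mounting the whole thing.
mount -F nfs server:/export/tools /mnt/tools

# Case 2: each dir under /export is its own ZFS filesystem, each with
# its own NFS share. Then each one must be mounted explicitly (or via
# an automounter) -- mounting server:/export alone will show the
# subdirectories as empty, since the client does not cross the
# server-side filesystem boundaries.
mount -F nfs server:/export/tools /mnt/tools
mount -F nfs server:/export/solaris/images /mnt/images
```

The "(today)" caveat matters: later NFSv4 mirror-mount support lets clients cross those boundaries automatically, but it was not available at the time of this thread.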
On 01/30/07 17:59, Neal Pollack wrote:
> I am assuming that one single command;
> # zfs set sharenfs=ro bigpool
> would share /export as a read-only NFS point?

It will share /export as read-only. The property will also be inherited
by all filesystems below export, so they too will be shared read-only.
You can override and choose different sharenfs property values for
particular filesystems that you want to share differently. Note that a
subdirectory of a directory that is already shared cannot itself be
shared unless it's in a different filesystem (I think that summarizes
the behaviour).

I have arranged filesystems into hierarchies that will share most
property values, including sharenfs. For example I have the following
filesystems:

tank
tank/scratch
tank/scratch/<login>
tank/src
tank/src/Codemgr_wsdata_rw
tank/tools
tank/tools/ON
tank/tools/ON/on10
tank/tools/ON/on28
tank/tools/ON/on297
tank/tools/ON/on81
tank/tools/ON/on998
tank/tools/ON/onnv
tank/tools/ON/onnv/i386
tank/tools/ON/onnv/sparc
tank/tools/rootimages
tank/u
tank/u/<login>

tank itself has sharenfs=off. tank/src has sharenfs=ro so all source is
read-only, but tank/src/Codemgr_wsdata_rw is read-write since Teamware
needs read-write access to Codemgr_wsdata for bringover; so each
workspace (exported ro) symlinks to a
/net/host/tank/src/Codemgr_wsdata_rw/directory for a writable dir.
Similarly tank/scratch is shared with root access (for nightly builds
before Solaris 10).

For seeing what you have specified as local overrides, the -s option
to zfs get is great.
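A minimal sketch of how such a hierarchy is built, using Gavin's tank/src layout as the example (the zfs create/set commands are my illustration, not taken from his setup verbatim):

```shell
# Parent filesystem: read-only share, inherited by all children.
zfs create tank/src
zfs set sharenfs=ro tank/src

# Children created under tank/src inherit sharenfs=ro automatically.
zfs create tank/src/some_workspace

# One child overrides the inherited value to allow writes.
zfs create tank/src/Codemgr_wsdata_rw
zfs set sharenfs=rw tank/src/Codemgr_wsdata_rw

# Show where each value comes from (local vs inherited).
zfs get -r sharenfs tank/src
```

The SOURCE column in the `zfs get -r` output distinguishes values set locally from those inherited from an ancestor, which is what makes this grouping-by-hierarchy approach manageable.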
So here's all my sharenfs etc properties that are not inherited
(excluded quotas to reduce output and used -o to try and make it format
for email):

(gavinm at tb2: ~ )-> zfs get -s local -o name,property,value all | grep -v quota
NAME                        PROPERTY               VALUE
tank                        com.sun.cte.eu:backup  no
tank/scratch                sharenfs               anon=0,sec=sys,rw,root=pod3:pod4
tank/src                    sharenfs               ro
tank/src                    compression            on
tank/src                    com.sun.cte.eu:backup  yes
tank/src/Codemgr_wsdata_rw  mountpoint             /export/src/Codemgr_wsdata_rw
tank/src/Codemgr_wsdata_rw  sharenfs               rw
tank/src/Codemgr_wsdata_rw  compression            on
tank/tools                  com.sun.cte.eu:backup  yes
tank/tools/ON               sharenfs               ro
tank/tools/cluster          sharenfs               ro
tank/tools/rootimages       sharenfs               ro,anon=0,root=pod3,root=pod4
tank/tools/www              sharenfs               ro
tank/u                      sharenfs               rw
tank/u                      com.sun.cte.eu:backup  yes
tank/u/localsrc             mountpoint             /u/localsrc
tank/u/localsrc             sharenfs               on
tank/u/nightly              sharenfs               rw,root=pod3:pod4,anon=0
tank/u/nightly              com.sun.cte.eu:backup  no
tank/u/nightly/system       mountpoint             /u/nightly/system

The com.sun.cte.eu:backup is a local property that determines whether a
filesystem is backed up. A script generates the list of filesystems and
that gets sucked into Networker. Grouping by functionality helps keep
this simple, as most filesystems inherit their backup property from the
parent and I just override at the top of branches that I want to backup
(and possibly exclude some bits further down).

Hope that helps

Gavin
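For anyone wanting to replicate the user-property trick: ZFS user properties are just name=value pairs whose names contain a colon (a reverse-DNS prefix like com.sun.cte.eu avoids collisions with native properties), and they inherit exactly like native ones. A sketch, reusing Gavin's property name:

```shell
# Mark a branch for backup; descendants inherit the value.
zfs set com.sun.cte.eu:backup=yes tank/src

# Exclude one subtree further down by overriding it.
zfs set com.sun.cte.eu:backup=no tank/src/scratch

# Emit the list of filesystems to back up, for feeding to a backup tool.
zfs get -r -o name,value com.sun.cte.eu:backup tank | awk '$2 == "yes" {print $1}'
```

ZFS itself attaches no meaning to user properties; the backup script is what interprets them, which is why this stays a pure naming convention.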