I was wondering if there is a special option to share out a set of nested
directories? Currently if I share out a directory with /pool/mydir1/mydir2
on a system, mydir1 shows up, and I can see mydir2, but nothing in mydir2.
mydir1 and mydir2 are each a zfs filesystem, each shared with the proper
sharenfs permissions.

Did I miss a browse or traverse option somewhere?

-
Cassandra
Unix Administrator

"From a little spark may burst a mighty flame."
-Dante Alighieri
Roy Sigurd Karlsbakk
2010-May-27 20:49 UTC
[zfs-discuss] nfs share of nested zfs directories?
----- "Cassandra Pugh" <cpugh at pppl.gov> skrev: I was wondering if there is a special option to share out a set of nested directories? Currently if I share out a directory with /pool/mydir1/mydir2 on a system, mydir1 shows up, and I can see mydir2, but nothing in mydir2. mydir1 and mydir2 are each a zfs filesystem, each shared with the proper sharenfs permissions. Did I miss a browse or traverse option somewhere? is mydir2 on a separate filesystem/dataset? -- Vennlige hilsener / Best regards roy -- Roy Sigurd Karlsbakk (+47) 97542685 roy at karlsbakk.net http://blogg.karlsbakk.net/ -- I all pedagogikk er det essensielt at pensum presenteres intelligibelt. Det er et element?rt imperativ for alle pedagoger ? unng? eksessiv anvendelse av idiomer med fremmed opprinnelse. I de fleste tilfeller eksisterer adekvate og relevante synonymer p? norsk. -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://mail.opensolaris.org/pipermail/zfs-discuss/attachments/20100527/07d0fed2/attachment.html>
I share filesystems all the time this way, and have never had this
problem. My first guess would be a problem with NFS or directory
permissions. You are using NFS, right?

- Garrett

On 5/27/2010 1:02 PM, Cassandra Pugh wrote:
> I was wondering if there is a special option to share out a set of
> nested directories? Currently if I share out a directory with
> /pool/mydir1/mydir2 on a system, mydir1 shows up, and I can see
> mydir2, but nothing in mydir2.
> [...]
Cassandra,

Which Solaris release is this?

This is working for me between a Solaris 10 server and an OpenSolaris
client.

Nested mount points can be tricky, and I'm not sure whether you are
looking for the mirror mount feature (not available in the Solaris 10
release), where newly created directory contents on the server are
automatically accessible on the client.

See the examples below.

Thanks,

Cindy

On the server:

# zpool create pool c1t3d0
# zfs create pool/myfs1
# cp /usr/dict/words /pool/myfs1/file.1
# zfs create -o mountpoint=/pool/myfs1/myfs2 pool/myfs2
# ls /pool/myfs1
file.1  myfs2
# cp /usr/dict/words /pool/myfs1/myfs2/file.2
# ls /pool/myfs1/myfs2/
file.2
# zfs set sharenfs=on pool/myfs1
# zfs set sharenfs=on pool/myfs2
# share
-    /pool/myfs1         rw   ""
-    /pool/myfs1/myfs2   rw   ""

On the client:

# ls /net/t2k-brm-03/pool/myfs1
file.1  myfs2
# ls /net/t2k-brm-03/pool/myfs1/myfs2
file.2
# mount -F nfs t2k-brm-03:/pool/myfs1 /mnt
# ls /mnt
file.1  myfs2
# ls /mnt/myfs2
file.2

On the server:

# touch /pool/myfs1/myfs2/file.3

On the client:

# ls /mnt/myfs2
file.2  file.3

On 05/27/10 14:02, Cassandra Pugh wrote:
> I was wondering if there is a special option to share out a set of
> nested directories? Currently if I share out a directory with
> /pool/mydir1/mydir2 on a system, mydir1 shows up, and I can see
> mydir2, but nothing in mydir2.
> [...]
On Thu, May 27, 2010 at 1:02 PM, Cassandra Pugh <cpugh at pppl.gov> wrote:
> I was wondering if there is a special option to share out a set of
> nested directories? Currently if I share out a directory with
> /pool/mydir1/mydir2 on a system, mydir1 shows up, and I can see
> mydir2, but nothing in mydir2.
> [...]

What kind of client are you mounting on? Linux clients don't properly
follow nested exports.

-B

--
Brandon High : bhigh at freaks.com
Reshekel Shedwitz
2010-May-28 01:30 UTC
[zfs-discuss] nfs share of nested zfs directories?
Some tips?

(1) Do a zfs mount -a and a zfs share -a, just in case something didn't
get shared out correctly (though that's supposed to happen
automatically, I think).

(2) The Solaris automounter (i.e. in a NIS environment) does not seem to
automatically mount descendant filesystems. For example, if the NIS
automounter has a map for /public pointing to myserver:/mnt/zfs/public,
and on myserver I create a descendant filesystem in
/mnt/zfs/public/folder1, browsing to /public/folder1 on another computer
will just show an empty directory. If you're in that sort of
environment, you need to add another map entry on NIS, as sketched
below.

(3) Try using /net mounts. If you're not aware of how this works, you
can browse to /net/<computer name> to see all the NFS mounts. On
Solaris, /net *will* automatically mount descendant filesystems (unlike
the NIS maps).
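A sketch of that extra map entry in Sun automounter syntax - a
hierarchical (multi-mount) entry that mounts the nested filesystem
together with its parent. The server name and paths are the hypothetical
ones from tip (2), and which map file this belongs in depends on how
/public is configured in auto_master:

/public    /           myserver:/mnt/zfs/public \
           /folder1    myserver:/mnt/zfs/public/folder1

With an entry like this, the automounter mounts both filesystems when
/public is first referenced, so the client never has to cross the nested
mount point on its own.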
On 5/27/2010 9:30 PM, Reshekel Shedwitz wrote:
> Some tips?
>
> (1) Do a zfs mount -a and a zfs share -a. Just in case something
> didn't get shared out correctly (though that's supposed to
> automatically happen, I think)
>
> (2) The Solaris automounter (i.e. in a NIS environment) does not seem
> to automatically mount descendant filesystems [...]

The automounter behaves the same regardless of whether NIS is involved
or not (or LDAP, for that matter). The automounter can be configured
with local files, and that won't change its behavior.

The behavior you're describing has been the behavior of all flavors of
NFS since it was born, and it doesn't have anything to do with the
automounter - it was by design. No automounter I'm aware of is capable
of learning on its own that 'folder1' is a new filesystem (not a new
directory) and mounting it. So this isn't limited to Solaris.

> If you're in that sort of environment, you need to add another map
> entry on NIS.

Your example doesn't specify whether /public is a direct or indirect
mount; being in / kind of implies it's direct, and those mounts can be
more limiting (more so in the past), so most admins avoid using the
auto.direct map for these reasons.

If the example were /import/public, with /import being defined by the
auto.import map, then the solution to this problem is not an entirely
new entry in the map for /import/public/folder1, but to convert the
entry for /import/public to a hierarchical mount entry, specifying the
folder1 sub-mount explicitly. A hierarchical mount can even mount
folder1 from a different server than public came from.

In the past (SunOS 4 and early Solaris timeframe) hierarchical mounts
had some limitations (mainly issues with unmounting them) that made
people wary of them. Most if not all of those have been eliminated. In
general the Solaris automounter is very reliable and flexible and can be
configured to do almost anything you want. Recent Linux automounters
(autofs4?) have come very close to the Solaris ones; earlier ones had
some missing features, buggy features, and some different
interpretations of the maps.

But the issue described in this thread is not an automounter issue; it's
a design issue of NFS - at least for all versions of NFS before v4.
Version 4 has a feature that others have mentioned, called "mirror
mounts", that tries to pass along the information required for the
client to re-create the sub-mount - even if the original fileserver
mounted the sub-filesystem from another server! It's a cool feature, but
NFSv4 support in clients isn't complete yet, so specifying the full
hierarchical mount tree in the automount maps is still required.

> (3) Try using /net mounts. If you're not aware of how this works, you
> can browse to /net/<computer name> to see all the NFS mounts. On
> Solaris, /net *will* automatically mount descendant filesystems
> (unlike the NIS maps).

In general /net mounts are a bad idea. While the automounter will
basically scan the output of 'showmount -e' for everything the server
exports and mount it all, that's not exactly what you always want. It
will only pick up sub-filesystems that are explicitly shared (which
NFSv4 might also only do, I'm not sure), and it will miss branches of
the tree if they are mounted from another server.
Also, most automounters that I'm aware of will only mount all the
exported filesystems at the time of the access to /net/hostname, and
(unless it's unused long enough to be unmounted) will miss all changes
in what is exported on the server until the mount is triggered again.

On top of that, /net/hostname mounts encourage embedding the hostname of
the server in config files, scripts, and binaries (-R path for shared
libraries), and that's not good, since you then can't move a filesystem
from one host to another: you need to maintain that /net/hostname path
forever - or edit many files and recompile programs. (If I recall
correctly, this was once used as one of the arguments against shared
libraries by some.)

Because of this, by using /net/hostname you give up one of the biggest
benefits of the automounter - redirection. By making an auto.import map
that has an entry for 'public', you allow yourself to clone public to a
new server and modify the map to migrate the clients to the new server
over time, as the filesystem is unmounted and remounted.

Lastly, using /net also disables the load-sharing and failover abilities
of read-only automounts, since you are by definition limiting yourself
to one hostname.

That was longer than I expected, but hopefully it will help some. :)

-Kyle
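A sketch of the replicated, read-only automount entry that the
load-sharing and failover abilities mentioned above rely on (server
names and the path are hypothetical):

public    -ro    serverA,serverB:/export/public

The automounter picks a responsive server when the mount is triggered -
exactly the flexibility a hard-wired /net/hostname path gives up.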
Brandon High wrote:
> On Thu, May 27, 2010 at 1:02 PM, Cassandra Pugh <cpugh at pppl.gov> wrote:
>> I was wondering if there is a special option to share out a set of
>> nested directories? [...]
>
> What kind of client are you mounting on? Linux clients don't properly
> follow nested exports.
>
> -B

This behavior is not limited to Linux clients nor to nfs shares. I've
seen it with Windows (SMB) clients and CIFS shares. The CIFS version is
referenced here:

Nested ZFS Filesystems in a CIFS Share
http://mail.opensolaris.org/pipermail/cifs-discuss/2008-June/000358.html
http://bugs.opensolaris.org/view_bug.do?bug_id=6582165

Is there any commonality besides the observed behaviors?
On 05/27/10 09:49 PM, Haudy Kazemi wrote:
> This behavior is not limited to Linux clients nor to nfs shares. I've
> seen it with Windows (SMB) clients and CIFS shares. The CIFS version
> is referenced here:
>
> Nested ZFS Filesystems in a CIFS Share
> http://mail.opensolaris.org/pipermail/cifs-discuss/2008-June/000358.html
> http://bugs.opensolaris.org/view_bug.do?bug_id=6582165
>
> Is there any commonality besides the observed behaviors?

No, the SMB/CIFS share limitation is that we have not yet added support
for child mounts over SMB; this is completely unrelated to any
configuration problems encountered with NFS.

Alan
Edward Ned Harvey
2010-May-29 14:06 UTC
[zfs-discuss] nfs share of nested zfs directories?
> From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-
> bounces at opensolaris.org] On Behalf Of Cassandra Pugh
>
> I was wondering if there is a special option to share out a set of
> nested directories? Currently if I share out a directory with
> /pool/mydir1/mydir2 on a system, mydir1 shows up, and I can see
> mydir2, but nothing in mydir2.
> [...]

My understanding is thus: If you set the sharenfs property, then the
property is inherited by child filesystems, which are consequently
exported automatically. However, if you use the dfstab, you're doing it
yourself manually, and the child filesystems are not automatically
exported.

Furthermore, exporting is only half of the problem. There is still the
question of mounting. I don't know exactly how it works, but my
understanding is that solaris/opensolaris nfs clients automatically
follow nested exports. Linux is a different matter.
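A quick way to see that inheritance on the server - a sketch using the
hypothetical pool and filesystem names from the original post:

# zfs set sharenfs=rw pool/mydir1
# zfs get -r sharenfs pool/mydir1
NAME                 PROPERTY  VALUE  SOURCE
pool/mydir1          sharenfs  rw     local
pool/mydir1/mydir2   sharenfs  rw     inherited from pool/mydir1

Both filesystems end up exported; with dfstab, only the paths explicitly
listed there are shared.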
Thanks for getting back to me!

I am using Solaris 10 10/09 (update 8).

I have created multiple nested zfs filesystems in order to compress some
but not all subdirectories of a directory. I have ensured that they all
have a sharenfs option, as I have done with other shares.

This is a special case to me, since instead of just

#zfs create mypool/mydir

and then using mkdir to make everything thereafter, I have done:

#zfs create mypool/mydir
#zfs create mypool/mydir/dir1
#zfs create mypool/mydir/dir1/compressedir1
#zfs create mypool/mydir/dir1/compressedir2
#zfs create mypool/mydir/dir1/uncompressedir

I had hoped that I would then export this, mount it on the client, and
see:

#ls /mnt/mydir/*

dir1:
compressedir1  compressedir2  uncompressedir

and the files thereafter. However, what I see is:

#ls /mnt/mydir/*

dir1:

My client is linux. I would assume we are using nfs v3. I also notice
that the permissions are not showing through correctly. The mount
options used are our "defaults" (hard,rw,nosuid,nodev,intr,noacl).

I am not sure what this mirror mounting is? Would that help me? Is there
something else I could be doing to approach this better?

Thank you for your insight.

-
Cassandra
Unix Administrator

On Thu, May 27, 2010 at 5:25 PM, Cindy Swearingen
<cindy.swearingen at oracle.com> wrote:
> Which Solaris release is this?
>
> This is working for me between a Solaris 10 server and an OpenSolaris
> client.
>
> Nested mount points can be tricky, and I'm not sure whether you are
> looking for the mirror mount feature (not available in the Solaris 10
> release), where newly created directory contents on the server are
> automatically accessible on the client.
> [...]
On Thu, Jun 3, 2010 at 10:53 AM, Cassandra Pugh <cpugh at pppl.gov> wrote:
> I have ensured that they all have a sharenfs option, as I have done
> with other shares.

You can verify this from your linux client with:

# showmount -e nfs_server

> My client is linux. I would assume we are using nfs v3.
> I also notice that the permissions are not showing through correctly.
> The mount options used are our "defaults"
> (hard,rw,nosuid,nodev,intr,noacl)

Recent Linux distros have support for nfs v4, but it requires mounting
the filesystem as nfs4, not nfs. I'm not sure if there is support for
mirror mounts.

> I am not sure what this mirror mounting is? Would that help me?
> Is there something else I could be doing to approach this better?

This is probably a technically wrong description, but mirror mounts let
the server tell the client to mount a new share. In your example, the
client would automatically mount mypool/mydir/dir1 and
mypool/mydir/dir1/compressedir1, etc.

A possible workaround is to set up an automount map for your client, or
to explicitly state the mounts in /etc/fstab. I'd favor the latter, as
the automounter in Linux can be pretty buggy.

-B

--
Brandon High : bhigh at freaks.com
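A sketch of the explicit /etc/fstab workaround on the Linux client, one
line per exported filesystem, using the hypothetical server name, paths,
and mount options from earlier in the thread:

nfs_server:/mypool/mydir                     /mnt/mydir                     nfs  hard,rw,nosuid,nodev,intr,noacl  0 0
nfs_server:/mypool/mydir/dir1                /mnt/mydir/dir1                nfs  hard,rw,nosuid,nodev,intr,noacl  0 0
nfs_server:/mypool/mydir/dir1/compressedir1  /mnt/mydir/dir1/compressedir1  nfs  hard,rw,nosuid,nodev,intr,noacl  0 0

Each nested zfs filesystem gets its own line, so the client mounts each
one directly instead of trying to cross the export boundary.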
Hi Cassandra,

The mirror mount feature allows the client to access files and dirs that
are newly created on the server, but this doesn't look like your problem
described below.

My guess is that you need to resolve the username/permission issues
before this will work, but some versions of Linux don't support
traversing nested mount points.

I'm no NFS expert and many on this list are, but things to check are:

- I'll assume that hostnames are resolving between systems, since you
can share/mount the resources.

- If you are seeing "nobody" instead of user names, then you need to
make sure the domain name is specified in NFSMAPID_DOMAIN. For example,
add company.com to the /etc/default/nfs file and then restart this
service:

# svcs | grep mapid
online  May_27  svc:/network/nfs/mapid:default
# svcadm restart svc:/network/nfs/mapid:default

- Permissions won't resolve correctly until the above two issues are
cleared.

- You might be able to rule out the Linux client support of nested mount
points by just sharing a simple test dataset, like this:

# zfs create mypool/test
# cp /usr/dict/words /mypool/test/file.1
# zfs set sharenfs=on mypool/test

and see if file.1 is visible on the Linux client.

Thanks,

Cindy

On 06/03/10 11:53, Cassandra Pugh wrote:
> I am using Solaris 10 10/09 (update 8).
>
> I have created multiple nested zfs filesystems in order to compress
> some but not all subdirectories of a directory. I have ensured that
> they all have a sharenfs option, as I have done with other shares.
> [...]
No, usernames are not an issue. I have many shares that work, but they
are single zfs filesystems. The special case here is that I am trying to
traverse NESTED zfs filesystems, for the purpose of having compressed
and uncompressed directories.

-
Cassandra
(609) 243-2413
Unix Administrator

"From a little spark may burst a mighty flame."
-Dante Alighieri

On Thu, Jun 3, 2010 at 3:00 PM, Cindy Swearingen
<cindy.swearingen at oracle.com> wrote:
> The mirror mount feature allows the client to access files and dirs
> that are newly created on the server, but this doesn't look like your
> problem described below.
>
> My guess is that you need to resolve the username/permission issues
> before this will work, but some versions of Linux don't support
> traversing nested mount points.
> [...]
I am trying to set this up as an automount. Currently I am trying to set
mounts for each area, but I have a lot to mount.

When I run showmount -e nfs_server, I do see all of the shared
directories.

-
Cassandra
(609) 243-2413
Unix Administrator

"From a little spark may burst a mighty flame."
-Dante Alighieri

On Thu, Jun 3, 2010 at 2:26 PM, Brandon High <bhigh at freaks.com> wrote:
> showmount -e nfs_server
If your other single ZFS shares are working, then I think the answer is
that the Linux client version doesn't support the nested access feature,
I'm guessing.

You could also test the nested access between your Solaris 10 10/09
server and a Solaris 10 10/09 client, if possible, to be sure this is a
Linux client issue and not a different configuration problem.

Cindy

On 06/03/10 13:50, Cassandra Pugh wrote:
> No, usernames are not an issue. I have many shares that work, but they
> are single zfs filesystems. The special case here is that I am trying
> to traverse NESTED zfs filesystems, for the purpose of having
> compressed and uncompressed directories.
> [...]
On Thu, Jun 3, 2010 at 12:50 PM, Cassandra Pugh <cpugh at pppl.gov> wrote:
> The special case here is that I am trying to traverse NESTED zfs
> filesystems, for the purpose of having compressed and uncompressed
> directories.

Make sure to use "mount -t nfs4" on your linux client. The standard
"nfs" type only supports nfs v2/v3.

-B

--
Brandon High : bhigh at freaks.com
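For example, a sketch using the hypothetical server name and paths from
earlier in the thread:

# mount -t nfs4 nfs_server:/mypool/mydir /mnt/mydir

With an NFSv4 mount, a client that supports mirror mounts can then
follow the nested filesystems under /mnt/mydir without separate mount
commands.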
Cassandra Pugh <cpugh at pppl.gov> writes:
> I am trying to set this up as an automount.
>
> Currently I am trying to set mounts for each area, but I have a lot to
> mount.
>
> When I run showmount -e nfs_server I do see all of the shared
> directories.

I ran into this same problem some months ago. I can't remember where my
research came up with this, but I learned that even with version 4,
linux will not mount nestedly (if that's a word). I resorted to a for
loop to mount my shares, sketched below.

But it sounds like my case is simpler than yours. I have a top level
/projects/, then a subdir for each host on the lan under that, and even
some subdirs for specific projects on a given host, so the deepest nest
would be 3 layers.

I just added the `for loop' to linux boot time startups through `local'
(it comes last on bootup, on gentoo linux).

I suspect there may be all sorts of ugly things that COULD happen with
this setup, but none of them has as yet over several months (knock on
wood).
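A sketch of such a boot-time loop; the server name and the share list
are hypothetical:

#!/bin/sh
# Mount each exported subtree explicitly, since the client will not
# cross into nested exports on its own.
for h in hostA hostB hostC; do
    mount -t nfs fileserver:/projects/$h /projects/$h
done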
>>>>> "cs" == Cindy Swearingen <cindy.swearingen at oracle.com> writes:okay wtf. Why is this thread still alive? cs> The mirror mount feature It''s unclear to me from this what state the feature''s in: http://hub.opensolaris.org/bin/view/Project+nfs-namespace/ It sounds like mirror mounts are done but referrals are not, but I don''t know. Are the client and server *both* done? I assume so, because I don''t know how else it could be tested. Is the bug with ''find'' fixed? It looks like it was fixed, but very recently: http://opensolaris.org/jive/message.jspa?messageID=409895#409895 and it sounds like there could be problems with other programs that have a --one-file-system option like gnutar and rsync because the fix is sort of ad-hoc---it''s done by making changes to the solaris userland. Are all the features described at: http://hub.opensolaris.org/bin/download/Project+nfs-namespace/files/mm-PRS-open.html actually implemented, including automounter overrides, automatic unmounting, recursive unmounting? not sure. Are you even using NFSv4 in Linux? It''s very unlikely. probably you are using NFSv3. People are reporting unresolved problems with NFSv4 with connections bouncing and not properly simulating the ``statelessness'''' that allows servers to reboot when clients don''t: http://mail.opensolaris.org/pipermail/nfs-discuss/2010-April/002087.html granted, ISTR some of the problems are reported by people doing goofy bullshit through firewalls, like bank admins that don''t seem to understand TCP/IP and are flailing around with the blamestick because they are in a CYA environment and don''t have reasonable control of their own systems. but I am not sure it''s worth the trouble! AFAICT you cannot even net-boot opensolaris over NFSv4: ''/'' comes up mounted with NFSv3. It seems to me every time this ``I can''t see subdirectories'''' comes up it''s from someone who doesn''t understand how NFS and Unix works, doesn''t know how to mount ANY filesystem much less NFS, has no idea what version of NFS he is using much less how to determine his NFSv4 idmap domain (answer is: ''cat /var/run/nfs4_domain''). The right answer is ``you need to mount the underlying filesystem. You need one mount command or mount line in /etc/{v,}fstab per one exported filesystem on the server.'''' very simple, very reasonable. But the answer pitched at them is all this convoluted bleeding edge mess about mirror mounts, coming from people who don''t have any experience actually USING mirror mounts, always with the caveat ``I''m not sure if your client supports BUT ...''''!!! But what? Are you even sure if the feature works ANYwhere, if you''ve never used it yourself? It sounds like a simple feature, but it just isn''t. If it actually worked the question would not even exist, so how can it be the answer? It is like ``Q. Can you please help me? / A. You might not even be here. Maybe we are not having this conversation because everything works perfectly. Let me explain to you what `working perfectly'' means and then you can tell me if you are real or not.'''' I would suggest you forget about this noise for the moment and write heirarchical automount maps. This works on both Linux and Solaris, except that you don''t have the full value of the automounter here because you cannot refresh parts of the subtree while the parent is still mounted, which is part of what the automounter is good for. 
It's normal that an automounter won't consider new map data for things
that are already mounted, but for hierarchical automounts, AFAICT you
have to unmount the whole tree before any changes deep inside the tree
will be refreshed from the map. That is less than ideal, but it reflects
the ad-hoc way the automounter's corner cases were slowly semi-fixed,
especially on Linux. There are examples of hierarchical automounts in
the man page, and if you don't understand the examples then simply do
not use the automounter at all.

You do not even need to use the automounter. You can just put your
filesystems into /etc/fstab and walk away from it.

Honestly I think it is crazy that it takes you over a month simply to
get one NFS subdirectory mounted inside another. This should take one
hour. Please just forget about all this newfangled bullshit and mount
the filesystem. See 'man mount' and just DOOOOO it! Like this in
/etc/fstab on Linux:

terabithia:/arrchive        /arrchive        nfs  rw,noacl,nodev  0 0
terabithia:/arrchive/music  /arrchive/music  nfs  rw,noacl,nodev  0 0

*DONE*. There is no NFSv4. It is NFSv3. There is no automounter. There
are no "mirror mounts" and no referrals. If you add more ZFS
filesystems, you add more lines to /etc/fstab on every Linux client.
okay?

If you are afraid you are using NFSv4, stop that from happening by
saying '-o vers=3' on Solaris or '-t nfs' on Linux. But if you're using
Linux, you're not using NFSv4: Solaris uses v4 by default, sometimes but
not others, while Linux uses v3 unless you say '-t nfs4', so just don't
say that.

And if you see '+' signs after your permissions, like this:

getzi:~# ls -ld /etc
drwxr-xr-x+ 76 root root 184 May 30 16:45 /etc

then add '-o noacl' to the client's mount command line to turn off all
this overpresumptuous solaris autoconversion ACL-faking.

It is really sad something so simple has become so polluted with fail.

HTH.
Thank you, when I manually mount using the "mount -t nfs4" option, I am
able to see the entire tree; however, the permissions are set as
nfsnobody:

"Warning: rpc.idmapd appears not to be running.
All uids will be mapped to the nobody uid."

-
Cassandra
(609) 243-2413
Unix Administrator

"From a little spark may burst a mighty flame."
-Dante Alighieri

On Thu, Jun 3, 2010 at 4:33 PM, Brandon High <bhigh at freaks.com> wrote:
> Make sure to use "mount -t nfs4" on your linux client. The standard
> "nfs" type only supports nfs v2/v3.
On Fri, Jun 04, 2010 at 08:43:32AM -0400, Cassandra Pugh wrote:
> Thank you, when I manually mount using the "mount -t nfs4" option, I
> am able to see the entire tree; however, the permissions are set as
> nfsnobody:
> "Warning: rpc.idmapd appears not to be running.
> All uids will be mapped to the nobody uid."

Did you actually read the error message? :)
Finding a solution shouldn't be too difficult after that..

-- Pasi
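A sketch of the fix on the Linux client - the service name and config
path here are typical for Red Hat-style distributions and may differ
elsewhere. Set the client's NFSv4 domain to match the server's
NFSMAPID_DOMAIN (company.com, as in Cindy's earlier example), then start
the mapping daemon:

In /etc/idmapd.conf:

[General]
Domain = company.com

# service rpcidmapd start

Once rpc.idmapd is running with a matching domain, uids should map to
real user names instead of nfsnobody.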
Well, yes, I understand I need to research the issue of running the
idmapd service, but I also need to figure out how to use nfsv4 and
automount.

-
Cassandra
(609) 243-2413
Unix Administrator

"From a little spark may burst a mighty flame."
-Dante Alighieri

On Fri, Jun 4, 2010 at 10:00 AM, Pasi Kärkkäinen <pasik at iki.fi> wrote:
> Did you actually read the error message? :)
> Finding a solution shouldn't be too difficult after that..