Folks,

When using sharenfs, do I really need to NFS export the parent ZFS
filesystem *and* all of its children?  For example, if I have

/zfshome
/zfshome/user1
/zfshome/user1+n

it seems to me like I need to mount each of these exported filesystems
individually on the NFS client.  This scheme doesn't seem optimal.  If I
have tens of thousands of users, each with their own little ZFS
filesystem (necessary because ZFS doesn't do user-based quotas), I don't
want to NFS mount all of these filesystems on a single node.  Also, on
Linux, since anonymous filesystems like NFS do not have a block device
associated with them, I can only have 255 of them mounted on a single
host.

Am I missing something?

--
Robert Petkus
Brookhaven National Laboratory
Physics Dept. - Bldg. 510A
Upton, New York 11973
http://www.bnl.gov/RHIC
http://www.acf.bnl.gov
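To make the layout concrete, here is a minimal sketch of the scheme
being described; the pool name "tank", the server name "nfsserver", the
quota value, and the client paths are placeholders, not anything taken
from this thread:

  # On the server: one ZFS filesystem per user, shared over NFS.
  # sharenfs is inherited, so each child created below is shared as well.
  zfs create -o mountpoint=/zfshome tank/zfshome
  zfs set sharenfs=on tank/zfshome
  zfs create tank/zfshome/user1
  zfs set quota=10G tank/zfshome/user1   # per-filesystem quota in place of a user quota

  # On the client (NFSv3): each child is its own filesystem, so mounting
  # the parent alone only shows empty directories at the child mountpoints.
  mount -t nfs nfsserver:/zfshome/user1 /home/user1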
Robert Petkus wrote:
> Folks,
> When using sharenfs, do I really need to NFS export the parent ZFS
> filesystem *and* all of its children?  For example, if I have
> /zfshome
> /zfshome/user1
> /zfshome/user1+n
> it seems to me like I need to mount each of these exported filesystems
> individually on the NFS client.  This scheme doesn't seem optimal.  If I
> have tens of thousands of users, each with their own little ZFS
> filesystem (necessary because ZFS doesn't do user-based quotas), I don't
> want to NFS mount all of these filesystems on a single node.  Also, on
> Linux, since anonymous filesystems like NFS do not have a block device
> associated with them, I can only have 255 of them mounted on a single
> host.
>
> Am I missing something?

Well, it would seem that one appropriate direction to investigate here
would be to stop using an operating system like Linux that places
arbitrary restrictions on the number of NFS filesystems you can have
mounted :-)

Can you use an automounter on Linux so that it only mounts the NFS
filesystems for users active on the machine?  Or do you have tens of
thousands of concurrent users, or is it something like a mail server,
storing mail in $HOME?

Darren
Robert Petkus wrote:
> When using sharenfs, do I really need to NFS export the parent ZFS
> filesystem *and* all of its children?  For example, if I have
> /zfshome
> /zfshome/user1
> /zfshome/user1+n
> it seems to me like I need to mount each of these exported filesystems
> individually on the NFS client.  This scheme doesn't seem optimal.  If I
> have tens of thousands of users, each with their own little ZFS
> filesystem (necessary because ZFS doesn't do user-based quotas), I don't
> want to NFS mount all of these filesystems on a single node.

Most people use an automounter to mount NFS filesystems on demand to
solve an issue like this.  Getting changes to maps distributed does get
more painful with a lot more distinct filesystems, but being able to
mount only what you need can be quite a good thing.

An alternative is to use NFSv4.  When an NFSv4 client crosses a
mountpoint on the server, it can detect this and mount the filesystem.
It can feel like a "lite" version of the automounter in practice, as
you just have to mount the root and discover the filesystems as needed.
The Solaris NFSv4 client can't do this yet.

Rob T
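For the automounter approach, a wildcard map is usually enough.  A rough
example, assuming the home filesystems live under /zfshome on a server
called "nfsserver" (map, server, and path names are placeholders):

  # auto_master entry: mount home directories under /home via auto_home.
  /home    auto_home

  # auto_home map: the wildcard key matches the user name, so accessing
  # /home/user1 mounts nfsserver:/zfshome/user1 on demand.
  *    nfsserver:/zfshome/&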
Robert Thurlow wrote:
> Robert Petkus wrote:
>
>> When using sharenfs, do I really need to NFS export the parent ZFS
>> filesystem *and* all of its children?  For example, if I have
>> /zfshome
>> /zfshome/user1
>> /zfshome/user1+n
>> it seems to me like I need to mount each of these exported filesystems
>> individually on the NFS client.  This scheme doesn't seem optimal.  If I
>> have tens of thousands of users, each with their own little ZFS
>> filesystem (necessary because ZFS doesn't do user-based quotas), I don't
>> want to NFS mount all of these filesystems on a single node.
>
> Most people use an automounter to mount NFS filesystems on demand
> to solve an issue like this.  Getting changes to maps distributed does
> get more painful with a lot more distinct filesystems, but being able
> to mount only what you need can be quite a good thing.

Unfortunately, I am already using the automounter to ameliorate the
massive NFS mount problem (NFSv4 poses the very same problem).  We have
a compute cluster of more than 3,000 nodes that mounts hundreds of
terabytes from NFS, AFS, and Panasas, with automount maps distributed
via LDAP.  One major issue is that some of our experiment groups
actively use user quotas on many of the NFS-exported filesystems.  If we
were to switch to ZFS, which did fare well in our performance tests, our
problems would be compounded.  Disappointing.

--
Robert Petkus
Brookhaven National Laboratory
Physics Dept. - Bldg. 510A
Upton, New York 11973
http://www.bnl.gov/RHIC
http://www.acf.bnl.gov
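For reference, automount maps of this kind are often stored in LDAP
roughly as sketched below.  This uses the common automountMap/automount
schema; attribute names vary between schemas and directory servers, and
the base DN and server name here are placeholders:

  dn: automountMapName=auto_home,dc=example,dc=org
  objectClass: top
  objectClass: automountMap
  automountMapName: auto_home

  dn: automountKey=*,automountMapName=auto_home,dc=example,dc=org
  objectClass: top
  objectClass: automount
  automountKey: *
  automountInformation: nfsserver:/zfshome/&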
> An alternative is to use NFSv4.  When an NFSv4 client crosses a
> mountpoint on the server, it can detect this and mount the filesystem.
> It can feel like a "lite" version of the automounter in practice, as
> you just have to mount the root and discover the filesystems as needed.
> The Solaris NFSv4 client can't do this yet.

Any news on when the Solaris NFSv4 client will be able to do this?
Chris Gerhard wrote:
>> An alternative is to use NFSv4.  When an NFSv4 client crosses a
>> mountpoint on the server, it can detect this and mount the filesystem.
>> It can feel like a "lite" version of the automounter in practice, as
>> you just have to mount the root and discover the filesystems as needed.
>> The Solaris NFSv4 client can't do this yet.
>
> Any news on when the Solaris NFSv4 client will be able to do this?

We have someone actively working on it, so sooner than later.

eric
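Once a client supports crossing server mountpoints, usage is expected to
look roughly like the sketch below; the server name and paths are
placeholders, and the exact mount options depend on the client:

  # Mount only the root of the exported hierarchy over NFSv4.
  mount -t nfs4 nfsserver:/zfshome /zfshome

  # Crossing into a child filesystem causes the client to notice the
  # server-side mountpoint and mount it automatically; no per-user
  # entries are needed on the client.
  ls /zfshome/user1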