We've been kicking around the question of whether or not zfs root mounts should appear in /etc/vfstab (i.e., be legacy mounts) or use the new zfs approach to mounts. Instead of writing up the issues again, here's a blog entry that I just posted on the subject:

http://blogs.sun.com/lalt/date/20070525

Weigh in if you care.

Lori
> We've been kicking around the question of whether or
> not zfs root mounts should appear in /etc/vfstab (i.e., be
> legacy mounts) or use the new zfs approach to mounts.
> Instead of writing up the issues again, here's a blog
> entry that I just posted on the subject:
>
> http://blogs.sun.com/lalt/date/20070525
>
> Weigh in if you care.

Interesting. Is there an ARC case that is related to some of these issues?

---Bob
> http://blogs.sun.com/lalt/date/20070525
>
> Weigh in if you care.

How about having the vfstab file virtualized, like a special node? Upon reading it, the system would take the actual cloaked vfstab and add the root and other ZFS entries to it on the fly. The same for writing to it, where the system would filter the respective entries out before writing to the cloaked vfstab.

-mg
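[A rough sketch of what the read-side half of that proposal might produce. The "virtualized vfstab" mechanism itself is the hypothetical part; the zfs list invocation and its flags are real:]

    # Emit the real vfstab, then synthesize vfstab-style lines for every
    # ZFS filesystem that has a non-legacy mountpoint.
    cat /etc/vfstab
    zfs list -H -o name,mountpoint -t filesystem | \
        awk '$2 != "legacy" && $2 != "none" {
            # device  fsck-dev  mountpoint  fstype  pass  mount-at-boot  options
            printf "%s\t-\t%s\tzfs\t-\tyes\t-\n", $1, $2
        }'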
Bob Palowoda wrote:
>> We've been kicking around the question of whether or
>> not zfs root mounts should appear in /etc/vfstab (i.e., be
>> legacy mounts) or use the new zfs approach to mounts.
>> Instead of writing up the issues again, here's a blog
>> entry that I just posted on the subject:
>>
>> http://blogs.sun.com/lalt/date/20070525
>>
>> Weigh in if you care.
>
> Interesting. Is there an ARC case that is related to some of these issues?

The ARC case for using zfs as a root file system is PSARC/2006/370, but there isn't much there yet. I'm preparing the documents for the case, and this is one of the issues I wanted to get some feedback on from the external community before I make a proposal for what to do. I don't know of any other ARC cases that would be relevant. I'm not sure how old the getvfsent interface is. If that interface got ARC'd, some of the documents for it might be relevant. I'll check it out.

Lori
On Fri, 25 May 2007, Lori Alt wrote:

> We've been kicking around the question of whether or
> not zfs root mounts should appear in /etc/vfstab (i.e., be
> legacy mounts) or use the new zfs approach to mounts.
> Instead of writing up the issues again, here's a blog
> entry that I just posted on the subject:
>
> http://blogs.sun.com/lalt/date/20070525
>
> Weigh in if you care.

ZFS is a paradigm shift and Nevada has not been released. Therefore I vote for implementing it the "ZFS way" - going forward. Place the burden on the "other" developers to fix their "bugs".

In general, once the user has adopted ZFS and learned the new paradigm, introducing behavior that may make it easier for legacy code to transition to ZFS, or co-exist with ZFS, will have the side effect of "polluting" the ZFS paradigm and will detract from its purity.

IOW (and I hope I'm explaining my thinking) - a brand new OpenSolaris user who just loaded SXCE build 64a [1] should not have to think within the ZFS domain with reference to behavioral warts that can only be explained thoroughly by examining history. And besides, over the long term [2], these "behavioral warts" will only become more obvious and annoying.

[1] quite probably because they *need* ZFS!
[2] with ref to Jeff Bonwick's observation that filesystems tend to be around for decades

Regards,

Al Hopper Logical Approach Inc, Plano, TX. al at logical-approach.com
Voice: 972.379.2133 Fax: 972.379.2134 Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
Hi Lori,

Are there any changes to build 64a that will affect ZFS bootability? Will the conversion script for build 62 still do its magic?

Thanks,

Al Hopper Logical Approach Inc, Plano, TX. al at logical-approach.com
Voice: 972.379.2133 Fax: 972.379.2134 Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
On Fri, May 25, 2007 at 02:50:15PM -0500, Al Hopper wrote:
> On Fri, 25 May 2007, Lori Alt wrote:
>
>> We've been kicking around the question of whether or
>> not zfs root mounts should appear in /etc/vfstab (i.e., be
>> legacy mounts) or use the new zfs approach to mounts.
>> Instead of writing up the issues again, here's a blog
>> entry that I just posted on the subject:
>>
>> http://blogs.sun.com/lalt/date/20070525
>>
>> Weigh in if you care.
>
> ZFS is a paradigm shift and Nevada has not been released. Therefore I
> vote for implementing it the "ZFS way" - going forward. Place the burden
> on the "other" developers to fix their "bugs".

I second Al's point. In fact, I couldn't have said it better myself. :)

-brian
--
"Perl can be fast and elegant as much as J2EE can be fast and elegant. In the hands of a skilled artisan, it can and does happen; it's just that most of the shit out there is built by people who'd be better suited to making sure that my burger is cooked thoroughly." -- Jonathan Patschke
On Fri, 2007-05-25 at 10:20 -0600, Lori Alt wrote:

> We've been kicking around the question of whether or
> not zfs root mounts should appear in /etc/vfstab (i.e., be
> legacy mounts) or use the new zfs approach to mounts.
> Instead of writing up the issues again, here's a blog
> entry that I just posted on the subject:
>
> http://blogs.sun.com/lalt/date/20070525
>
> Weigh in if you care.

IMHO, there should be no need to put any ZFS filesystems in /etc/vfstab, but (this is something of a digression based on discussion kicked up by PSARC 2007/297) it's become clear to me that ZFS filesystems *should* be mounted by mountall and mount -a rather than via a special-case invocation of "zfs mount" at the end of the fs-local method script.

In other words: teach "mount" how to find the list of filesystems in attached pools and mix them into the dependency graph it builds to mount filesystems in the right order, rather than mounting everything-but-zfs first and then zfs later.

- Bill
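[A minimal sketch of the enumeration step Bill describes. Ordering by mountpoint so parents sort before children is my assumption of the simplest dependency rule, not an actual mountall design:]

    # List mountable ZFS filesystems from attached pools, ordered so that
    # /a sorts before /a/b; a real implementation would merge these with
    # the vfstab entries inside mount's own dependency graph.
    zfs list -H -o name,mountpoint -t filesystem |
        awk '$2 != "legacy" && $2 != "none"' |
        sort -k2,2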
Build 64a has bug 6553537 (zfs root fails to boot from a snv_63+zfsboot-pfinstall netinstall image), for which I don't have a ready workaround. So I recommend waiting for build 65 (which should be out soon, I think).

Lori

Al Hopper wrote:
>
> Hi Lori,
>
> Are there any changes to build 64a that will affect ZFS bootability?
> Will the conversion script for build 62 still do its magic?
>
> Thanks,
>
> Al Hopper Logical Approach Inc, Plano, TX. al at logical-approach.com
> Voice: 972.379.2133 Fax: 972.379.2134 Timezone: US CDT
> OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
> http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
Bill Sommerfeld wrote:
> On Fri, 2007-05-25 at 10:20 -0600, Lori Alt wrote:
>
>> We've been kicking around the question of whether or
>> not zfs root mounts should appear in /etc/vfstab (i.e., be
>> legacy mounts) or use the new zfs approach to mounts.
>> Instead of writing up the issues again, here's a blog
>> entry that I just posted on the subject:
>>
>> http://blogs.sun.com/lalt/date/20070525
>>
>> Weigh in if you care.
>
> IMHO, there should be no need to put any ZFS filesystems in /etc/vfstab,
> but (this is something of a digression based on discussion kicked up by
> PSARC 2007/297) it's become clear to me that ZFS filesystems *should* be
> mounted by mountall and mount -a rather than via a special-case
> invocation of "zfs mount" at the end of the fs-local method script.
>
> In other words: teach "mount" how to find the list of filesystems in
> attached pools and mix them into the dependency graph it builds to
> mount filesystems in the right order, rather than mounting
> everything-but-zfs first and then zfs later.

I agree with this. This seems like a necessary response to PSARC/2007/297, and it is also necessary for eliminating legacy mounts for zfs root file systems. The problem of the interaction between legacy and non-legacy mounts will just get worse once we are using non-legacy mounts for the file systems in the BE.

Lori
On Fri, 2007-05-25 at 14:29 -0600, Lori Alt wrote:
> Bill Sommerfeld wrote:
>> IMHO, there should be no need to put any ZFS filesystems in /etc/vfstab,
>> but (this is something of a digression based on discussion kicked up by
>> PSARC 2007/297) it's become clear to me that ZFS filesystems *should* be
>> mounted by mountall and mount -a rather than via a special-case
>> invocation of "zfs mount" at the end of the fs-local method script.
>>
>> In other words: teach "mount" how to find the list of filesystems in
>> attached pools and mix them into the dependency graph it builds to
>> mount filesystems in the right order, rather than mounting
>> everything-but-zfs first and then zfs later.
>
> I agree with this. This seems like a necessary response to
> PSARC/2007/297, and it is also necessary for eliminating
> legacy mounts for zfs root file systems. The problem of
> the interaction between legacy and non-legacy mounts will just
> get worse once we are using non-legacy mounts for the
> file systems in the BE.

Could we also look into why console-login insists on waiting for ALL the zfs mounts to be available? Shouldn't the main file system food groups be mounted and then allow console-login (much like single user or safe-mode)?

Would help in many cases where an admin needs to work on a system but doesn't need, say, 20k users' home directories mounted, to do this work.

-- Mike Dotson
Mike Dotson wrote:
> On Fri, 2007-05-25 at 14:29 -0600, Lori Alt wrote:
>> Bill Sommerfeld wrote:
>>> IMHO, there should be no need to put any ZFS filesystems in /etc/vfstab,
>>> but (this is something of a digression based on discussion kicked up by
>>> PSARC 2007/297) it's become clear to me that ZFS filesystems *should* be
>>> mounted by mountall and mount -a rather than via a special-case
>>> invocation of "zfs mount" at the end of the fs-local method script.
>>>
>>> In other words: teach "mount" how to find the list of filesystems in
>>> attached pools and mix them into the dependency graph it builds to
>>> mount filesystems in the right order, rather than mounting
>>> everything-but-zfs first and then zfs later.
>>
>> I agree with this. This seems like a necessary response to
>> PSARC/2007/297, and it is also necessary for eliminating
>> legacy mounts for zfs root file systems. The problem of
>> the interaction between legacy and non-legacy mounts will just
>> get worse once we are using non-legacy mounts for the
>> file systems in the BE.
>
> Could we also look into why console-login insists on waiting for ALL
> the zfs mounts to be available? Shouldn't the main file system food
> groups be mounted and then allow console-login (much like single user or
> safe-mode)?
>
> Would help in many cases where an admin needs to work on a system but
> doesn't need, say, 20k users' home directories mounted, to do this work.

So single-user mode is not sufficient for this?

Lori
On Fri, 2007-05-25 at 15:50 -0600, Lori Alt wrote:
> Mike Dotson wrote:
>> On Fri, 2007-05-25 at 14:29 -0600, Lori Alt wrote:
>>
>> Would help in many cases where an admin needs to work on a system but
>> doesn't need, say, 20k users' home directories mounted, to do this work.
>
> So single-user mode is not sufficient for this?

Not all work needs to be done in single user. :) And I wouldn't consider a 4+ hour boot time just for mounting file systems a good use of cpu time when an admin could be doing other things - preparation for the next patching, configuring changes to the webserver, etc. Or just monitoring the status of the file system mounts to give an update to management on how many file systems are mounted and how many are left.

Point is, why is console-login dependent on *all* the file systems being mounted in *multi-user*? Does it really need to depend on *all* the file systems being mounted?

--
Thanks...

Mike Dotson
Area System Support Engineer - ACS West
Phone: (503) 343-5157
Mike.Dotson at Sun.Com
> On Fri, 2007-05-25 at 15:50 -0600, Lori Alt wrote:
>> Mike Dotson wrote:
>>> On Fri, 2007-05-25 at 14:29 -0600, Lori Alt wrote:
>>>
>>> Would help in many cases where an admin needs to work on a system but
>>> doesn't need, say, 20k users' home directories mounted, to do this work.
>>
>> So single-user mode is not sufficient for this?
>
> Not all work needs to be done in single user. :) And I wouldn't consider a
> 4+ hour boot time just for mounting file systems a good use of cpu time
> when an admin could be doing other things - preparation for the next
> patching, configuring changes to the webserver, etc. Or just monitoring the
> status of the file system mounts to give an update to management on how
> many file systems are mounted and how many are left.
>
> Point is, why is console-login dependent on *all* the file systems being
> mounted in *multi-user*? Does it really need to depend on *all* the file
> systems being mounted?

Why do we need the filesystems mounted at all, ever, if they are not used? Mounts could be more magic than that.

Casper
On Fri, May 25, 2007 at 03:01:20PM -0700, Mike Dotson wrote:
> On Fri, 2007-05-25 at 15:50 -0600, Lori Alt wrote:
>> Mike Dotson wrote:
>>> On Fri, 2007-05-25 at 14:29 -0600, Lori Alt wrote:
>>>
>>> Would help in many cases where an admin needs to work on a system but
>>> doesn't need, say, 20k users' home directories mounted, to do this work.
>>
>> So single-user mode is not sufficient for this?
>
> Not all work needs to be done in single user. :) And I wouldn't consider a
> 4+ hour boot time just for mounting file systems a good use of cpu time
> when an admin could be doing other things - preparation for the next
> patching, configuring changes to the webserver, etc. Or just monitoring the
> status of the file system mounts to give an update to management on how
> many file systems are mounted and how many are left.
>
> Point is, why is console-login dependent on *all* the file systems being
> mounted in *multi-user*? Does it really need to depend on *all* the file
> systems being mounted?

This has been discussed many times in smf-discuss, for all types of login. Basically, there is no way to say "console login for root only". As long as any user can log in, we need to have all the filesystems mounted because we don't know what dependencies there may be. Simply changing the definition of console-login isn't a solution because it breaks existing assumptions and software.

A much better option is the 'trigger mount' RFE that would allow ZFS to quickly 'mount' a filesystem but not pull all the necessary data off disk until it's first accessed.

- Eric

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
Why not simply have an SMF sequence that does:

early in boot, after / and /usr are mounted:
    create /etc/nologin (contents="coming up, not ready yet")
    enable login

later in boot, when user filesystems are all mounted:
    delete /etc/nologin

Wouldn't this give the desired behavior?

-John

Eric Schrock wrote:
> This has been discussed many times in smf-discuss, for all types of
> login. Basically, there is no way to say "console login for root
> only".
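[In shell terms, John's sequence would be roughly the following - a sketch only, with hypothetical method scripts; the /etc/nologin semantics (login denies non-root users and prints the file's contents) and svcadm are the real pieces:]

    # early-boot method script, run once / and /usr are mounted
    echo "coming up, not ready yet" > /etc/nologin
    svcadm enable svc:/system/console-login:default

    # late-boot method script, run after all user filesystems are mounted
    rm -f /etc/nologin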
On Fri, 2007-05-25 at 15:19 -0700, Eric Schrock wrote:
> This has been discussed many times in smf-discuss, for all types of
> login. Basically, there is no way to say "console login for root
> only". As long as any user can log in, we need to have all the
> filesystems mounted because we don't know what dependencies there may
> be. Simply changing the definition of console-login isn't a
> solution because it breaks existing assumptions and software.

<devils_advocate>
So how are you guaranteeing the NFS server and automount with autofs are up, running and working for the user for console-login?
</devils_advocate>

I don't buy this argument, and you don't have to say "console-login for root only"; you just have to have console-login, and the services available are minimal and may not include *all* services, much like when an NFS server is down, etc.

If the software depends on a file system or all the file systems being mounted, it adds that as a dependency (filesystem/local). console-login does not require this - only non-root users do. (I remember an smf config bug with apache not requiring filesystem/local and failing to start.)

What software is dependent on console-login?

helios(3):> svcs -D console-login
STATE          STIME    FMRI

In fact, console-login depends on filesystem/minimal, which to me means minimal file systems, not all file systems, and there is no software dependent on console-login - where's the disconnect?

From what I see, the problem is that auditd is dependent on filesystem/local, which is where we possibly have the hangup.

> A much better option is the 'trigger mount' RFE that would allow ZFS to
> quickly 'mount' a filesystem but not pull all the necessary data off
> disk until it's first accessed.

Agreed, but there's still the issue with console-login being dependent on all file systems instead of minimal file systems.

-- Mike Dotson
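[For anyone following along, both directions of the dependency graph can be inspected with standard svcs(1) flags:]

    # what console-login itself depends on
    svcs -d svc:/system/console-login:default

    # what depends on console-login (the -D output shown above)
    svcs -D svc:/system/console-login:default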
I didn't mean to imply that it wasn't technically possible, only that there is no "one size fits all" solution for OpenSolaris as a whole. Even getting this to work in an easily tunable form is quite tricky, since you must dynamically determine dependencies in the process (filesystem/minimal vs. filesystem/user). If someone wants to pursue this, I would suggest moving the discussion to smf-discuss.

- Eric

On Fri, May 25, 2007 at 03:32:52PM -0700, John Plocher wrote:
> Why not simply have an SMF sequence that does:
>
> early in boot, after / and /usr are mounted:
>     create /etc/nologin (contents="coming up, not ready yet")
>     enable login
> later in boot, when user filesystems are all mounted:
>     delete /etc/nologin
>
> Wouldn't this give the desired behavior?
>
> -John
>
> Eric Schrock wrote:
>> This has been discussed many times in smf-discuss, for all types of
>> login. Basically, there is no way to say "console login for root
>> only".

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
On Fri, May 25, 2007 at 03:39:11PM -0700, Mike Dotson wrote:
>
> In fact, console-login depends on filesystem/minimal, which to me
> means minimal file systems, not all file systems, and there is no software
> dependent on console-login - where's the disconnect?

You're correct - I thought console-login depended on filesystem/local, not filesystem/minimal. ZFS filesystems are not mounted as part of filesystem/minimal, so remind me what the problem is?

- Eric

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
On Fri, 2007-05-25 at 15:46 -0700, Eric Schrock wrote:
> On Fri, May 25, 2007 at 03:39:11PM -0700, Mike Dotson wrote:
>>
>> In fact, console-login depends on filesystem/minimal, which to me
>> means minimal file systems, not all file systems, and there is no software
>> dependent on console-login - where's the disconnect?
>
> You're correct - I thought console-login depended on filesystem/local,
> not filesystem/minimal. ZFS filesystems are not mounted as part of
> filesystem/minimal, so remind me what the problem is?

Create 20k zfs file systems and reboot. Console login waits for all the zfs file systems to be mounted (on a fully loaded 880, you're looking at about 4 hours, so have some coffee ready).

The *only* place I can see the filesystem/local dependency is in svc:/system/auditd:default; however, on my systems it's disabled. Haven't had a chance to really prune out the dependency tree to find the disconnect, but once /, /var, /tmp and /usr are mounted, the conditions for console-login should be met.

As you mentioned, the best solution for this number of filesystems in zfs land is the *automount* fs option, where it mounts the filesystems as needed to reduce the *boot time*.

--
Thanks...

Mike Dotson
Area System Support Engineer - ACS West
Phone: (503) 343-5157
Mike.Dotson at Sun.Com
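[A scaled-down way to reproduce the effect Mike describes - the pool and filesystem names here are made up:]

    # Create a few thousand datasets, then reboot and time how long
    # svc:/system/filesystem/local (whose method script runs "zfs mount"
    # at the end, per Bill's note above) takes to come online.
    i=0
    while [ $i -lt 5000 ]; do
        zfs create testpool/fs$i
        i=$((i + 1))
    done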
> <devils_advocate>
> So how are you guaranteeing the NFS server and automount with autofs are up,
> running and working for the user for console-login?
> </devils_advocate>

Irrelevant; chances are that when someone boots a system (e.g., a laptop or desktop) he/she is sitting there waiting at the console until the login prompt shows up. And then he/she logs in, only to find out that it does not work.

Autofs is, of course, detectable; the NFS server less so, but there will be a long pause and a message when the NFS server is not present.

Of course, an alternate mechanism could be to create an /etc/nologin file at boot when the login services are enabled (only root can log in) and then remove that when the "multi user mounts" are up.

I've been bitten many a time by a "login after reboot" when it was possible to do so prior to all mounts being present.

Casper
Mike Dotson <Mike.Dotson at Sun.COM> wrote:

> Create 20k zfs file systems and reboot. Console login waits for all the
> zfs file systems to be mounted (on a fully loaded 880, you're looking at
> about 4 hours, so have some coffee ready).

Does this mean we will get quotas for ZFS in the future?

We need it, e.g., for the Berlios fileserver. It will stay with UFS as long as we have not found another quota solution.

Jörg

--
EMail: joerg at schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
       js at cs.tu-berlin.de (uni)
       schilling at fokus.fraunhofer.de (work)
Blog: http://schily.blogspot.com/
URL: http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily
> Mike Dotson <Mike.Dotson at Sun.COM> wrote:
>
>> Create 20k zfs file systems and reboot. Console login waits for all the
>> zfs file systems to be mounted (on a fully loaded 880, you're looking at
>> about 4 hours, so have some coffee ready).
>
> Does this mean we will get quotas for ZFS in the future?
>
> We need it, e.g., for the Berlios fileserver. It will stay with UFS as long
> as we have not found another quota solution.

Would it help if filesystems were much cheaper, so you could use per-user filesystems?

To me, ZFS has not fulfilled the promise of "cheap filesystems": cheap to create, but much more expensive to mount.

Trigger mounts should fix that, and one of the reasons automount can cheaply create 1000s of mounts is that it simply sits and waits on /home and does not actually perform the mounts or even the bookkeeping associated with creating them. As zfs is by and large hierarchical in the same nature, an /export/home zfs filesystem would need only one trigger mount point (and a trigger mount could trigger other trigger mounts, clearly).

But this also requires a further integration of ZFS and NFS; either the NFS clients will need to know how to traverse mount points (why does my NFS client need to know that I am sharing /export/home/<eachuser> when it really ought to know only about /export/home?), or the further integration should minimally consist of NFS knowing how to share filesystems it does not yet know the filehandle of. So when you share /export/home/casper, NFS should really only know that pathname; when someone requests the mount, NFS should then trigger the mount simply by doing the appropriate getattr calls.

Some of the issues with ZFS are not that ZFS is a departure from "the old ways", but that the rest of the system needs to follow suit; ZFS is not yet integrated enough.

What I personally do for ZFS loopback mounts, such as required for /tftpboot/I86PC.Solaris_11 on an install server, is making them into auto_direct mounts.

The existence of "/" in /etc/vfstab has always been an anomaly of sorts: the system knew what it mounted, and it needed to be told again? The filesystem "service" should know what to mount where, and trigger mount points would be the only ones in existence, initially.

Casper
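[For what it's worth, the per-user-filesystem approach Casper mentions already gives a per-user space limit today via the quota property - the pool and user names below are just illustrative:]

    # one filesystem per user, each carrying its own quota
    zfs create pool/export/home/casper
    zfs set quota=10G pool/export/home/casper
    zfs get quota pool/export/home/casper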
Casper.Dik at Sun.COM wrote:

>> Mike Dotson <Mike.Dotson at Sun.COM> wrote:
>>
>>> Create 20k zfs file systems and reboot. Console login waits for all the
>>> zfs file systems to be mounted (on a fully loaded 880, you're looking at
>>> about 4 hours, so have some coffee ready).
>>
>> Does this mean we will get quotas for ZFS in the future?
>>
>> We need it, e.g., for the Berlios fileserver. It will stay with UFS as long
>> as we have not found another quota solution.
>
> Would it help if filesystems were much cheaper, so you could use
> per-user filesystems?

The problem is that Berlios is built on top of projects that are run by persons. Each person owns a home directory and, as a member of a project group, is able to put data into the virtual web server, the ftp project dir, and a web-based download area. Ideally we need group quotas for this, and I am still thinking about group quotas, even after we fixed some ufs quota issues (I need to run quotacheck at least once a month in order to get rid of too-high usage counts that may be a result of background deletes).

If we used many filesystems, we would need to mount them at a neutral place, let them contain all subdirectories for all services, and then create symlinks from the service root directories to the project filesystems in order to let all files from a project be on a single filesystem. But then we would currently need 30000 filesystems. I suspect this will not work with Linux clients, even though we might not need all of them to be mounted all the time.

> To me, ZFS has not fulfilled the promise of "cheap filesystems":
> cheap to create, but much more expensive to mount.
>
> Trigger mounts should fix that, and one of the reasons automount
> can cheaply create 1000s of mounts is that it simply sits and
> waits on /home and does not actually perform the mounts or even
> the bookkeeping associated with creating them.
>
> As zfs is by and large hierarchical in the same nature, an
> /export/home zfs filesystem would need only one trigger mount
> point (and a trigger mount could trigger other trigger mounts,
> clearly).

If I could create a ZFS pool for /export/nfs and export only /export/nfs while the "subdirectories" inside /export/nfs are those 30000 filesystems, it could work.

> So when you share /export/home/casper, NFS should really only
> know that pathname; when someone requests the mount, NFS should
> then trigger the mount simply by doing the appropriate getattr
> calls.

I am not sure whether I understood you correctly. Are you thinking of a similar solution to the one I mentioned above? If yes, this could work if an NFS root/mount filehandle for /export/nfs would allow traversal to /export/nfs/groups/star even though this is a "filesystem" in a pool.

> Some of the issues with ZFS are not that ZFS is a departure
> from "the old ways", but that the rest of the system needs
> to follow suit; ZFS is not yet integrated enough.

ZFS is only a few years old now ;-) UFS started in 1981. We need a solution that works not only for Solaris clients but for NFS clients of other platforms too. Some of our servers that are NFS clients of the global file server run Linux.

Jörg

--
EMail: joerg at schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
       js at cs.tu-berlin.de (uni)
       schilling at fokus.fraunhofer.de (work)
Blog: http://schily.blogspot.com/
URL: http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily
On Sat, 26 May 2007, Casper.Dik at Sun.COM wrote:

>> Mike Dotson <Mike.Dotson at Sun.COM> wrote:
>>
>>> Create 20k zfs file systems and reboot. Console login waits for all the
>>> zfs file systems to be mounted (on a fully loaded 880, you're looking at
>>> about 4 hours, so have some coffee ready).
>>
>> Does this mean we will get quotas for ZFS in the future?
>>
>> We need it, e.g., for the Berlios fileserver. It will stay with UFS as long
>> as we have not found another quota solution.
>
> Would it help if filesystems were much cheaper, so you could use
> per-user filesystems?
>
> To me, ZFS has not fulfilled the promise of "cheap filesystems":
> cheap to create, but much more expensive to mount.
>
> Trigger mounts should fix that, and one of the reasons automount
> can cheaply create 1000s of mounts is that it simply sits and
> waits on /home and does not actually perform the mounts or even
> the bookkeeping associated with creating them.

Another D-BUS service..

> As zfs is by and large hierarchical in the same nature, an
> /export/home zfs filesystem would need only one trigger mount
> point (and a trigger mount could trigger other trigger mounts,
> clearly).
>
> But this also requires a further integration of ZFS and NFS;
> either the NFS clients will need to know how to traverse mount
> points (why does my NFS client need to know that I am sharing
> /export/home/<eachuser> when it really ought to know only about
> /export/home?), or the further integration should minimally
> consist of NFS knowing how to share filesystems it does not yet
> know the filehandle of.
>
> So when you share /export/home/casper, NFS should really only
> know that pathname; when someone requests the mount, NFS should
> then trigger the mount simply by doing the appropriate getattr
> calls.
>
> Some of the issues with ZFS are not that ZFS is a departure
> from "the old ways", but that the rest of the system needs
> to follow suit; ZFS is not yet integrated enough.

Agreed. The pressure is on the developers to follow ZFS's lead.

> What I personally do for ZFS loopback mounts, such as required
> for /tftpboot/I86PC.Solaris_11 on an install server, is making
> them into auto_direct mounts.

OK - I know this is entirely obvious to you (Casper) - but can you provide more detail for those who are not lucky enough to work on OpenSolaris full-time! :)

> The existence of "/" in /etc/vfstab has always been an
> anomaly of sorts: the system knew what it mounted, and
> it needed to be told again?

:)

> The filesystem "service" should know what to mount where,
> and trigger mount points would be the only ones in existence,
> initially.

An enthusiastic +1. As usual, Casper sees the big picture!

Regards,

Al Hopper Logical Approach Inc, Plano, TX. al at logical-approach.com
Voice: 972.379.2133 Fax: 972.379.2134 Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
>> What I personally do for ZFS loopback mounts, such as required
>> for /tftpboot/I86PC.Solaris_11 on an install server, is making
>> them into auto_direct mounts.
>
> OK - I know this is entirely obvious to you (Casper) - but can you
> provide more detail for those who are not lucky enough to work on
> OpenSolaris full-time! :)

Just add a line for each loopback mount to /etc/auto_direct instead:

/tftpboot/I86PC.Solaris_11 -ro localhost:/export/install/combined.nvx_wos/latest/boot

Casper
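[For completeness, a sketch of the standard autofs plumbing around Casper's map entry - the map entry above is his, the rest is the usual direct-map setup:]

    # /etc/auto_master must reference the direct map:
    /-    auto_direct

    # then have automountd pick up the new map entries:
    automount -v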