Hi,

[Sorry for cross-posting, but I think either list can provide the
solution I'm looking for.]

I have been up all night researching zones and ZFS for a particular
project we are going to build soon. It's going to feature the latest
and greatest of OpenSolaris, and will of course use a ZFS pool to manage
the available disk without exposing disk device files in the zones.

What I want to do (sorry if this is a really stupid question, but I'm
a bit of a newbie with ZFS and zones, although I like the concept a
lot!) is to create a filesystem, say pool/home, and then have it
mounted read-write in a couple of zones, running probably OpenSolaris
or Nexenta, depending on the breaks. So my question is: is it possible
to mount the pool/home filesystem in several different zones? I've been
trying just about every conceivable name combination in Google, but
haven't found a definitive answer. The thing is, one zone is going to
run sendmail/postfix/whatever, which stores the mail in
/home/$user/Maildir. Then there'll be another zone that runs some IMAP
server application, like Dovecot for example, and it reads the mail
from /home/$user/Maildir. Can this be done with ZFS?

Reading documentation on the net, I came to wonder about this kind of
solution:

host# zfs create pool/home
host# zonecfg -z myzone
> add dataset
dataset> set name=pool/home
dataset> end
> ^D
host# zoneadm -z myzone boot
host# zlogin myzone
myzone# zfs set mountpoint=/home pool/home

Will this work? I'm sorry, but I don't have an OpenSolaris machine on
which to test this theory. Any pointers to documentation that explains
things would be greatly appreciated!

Warm regards, and have a nice 2008,
Bo Granlund
Bo Granlund wrote:
> [...]
> Reading documentation on the net, I came to wonder about this kind of
> solution:
> host# zfs create pool/home
> host# zonecfg -z myzone
>> add dataset
> dataset> set name=pool/home
> dataset> end
>> ^D
> host# zoneadm -z myzone boot
> host# zlogin myzone
>
> myzone# zfs set mountpoint=/home pool/home

I think you're on the right track. Here's what I have in my
global zone:

$ zfs list sink/home
NAME        USED  AVAIL  REFER  MOUNTPOINT
sink/home  6.68G  59.9G  6.68G  /export/home

$ zfs get mountpoint sink/home
NAME       PROPERTY    VALUE         SOURCE
sink/home  mountpoint  /export/home  local

set with "zfs set mountpoint=/export/home sink/home"

Then in each of my zones, I have this in the zone config file:

<filesystem special="/export/home/jmcp" directory="/export/home/jmcp"
            type="lofs">
  <fsoption name="rw"/>
</filesystem>

So then in each zone's /etc/auto_home I have

jmcp localhost:/export/home/jmcp

James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp       http://www.jmcp.homeunix.com/blog
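For reference, a rough sketch of the global-zone side of this setup as
plain commands. The pool already exists in the example above, so the
zpool create line, the disk name, and the per-user directory are
assumptions for illustration only:

# Global zone: one ZFS filesystem for all home directories, mounted at
# /export/home, with a per-user directory underneath it.
zpool create sink c0t1d0                  # only if the pool doesn't exist yet
zfs create sink/home
zfs set mountpoint=/export/home sink/home
mkdir -p /export/home/jmcp

# Each zone then lofs-mounts /export/home/jmcp read-write (zone config
# as shown above), and that zone's /etc/auto_home maps the user onto it:
#   jmcp    localhost:/export/home/jmcp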
Louis F. Springer
2008-Jan-02 05:41 UTC
[zfs-discuss] [zones-discuss] ZFS shared /home between zones
You probably want to share pool/home as an NFS share, then mount it in
the zones. The ZFS filesystem itself can't actually be mounted at
multiple mountpoints; it's not a shared filesystem like NFS or QFS.

zfs set sharenfs=on pool/home

then in the zones:

mount globalzonehost:/home /home

where "globalzonehost" is the host name of the server sharing the ZFS
filesystem over NFS.

Note that a better approach might be to create a separate ZFS
filesystem for each user rather than just mounting the whole home
tree. See:

http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#ZFS_NFS_Server_Practices

Bo Granlund wrote:
> [...]
> So my question is, is it possible
> to mount the pool/home pool to several different zones? [...]
> The thing is, one zone is going to run sendmail/
> postfix/whatever, that stores the mail in the /home/$user/Maildir. Then
> there'll be another zone that runs some imap server application, like
> Dovecot for example, and it reads the mail from /home/$user/Maildir.
> Can this be done with ZFS?
> [...]
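For what it's worth, a rough sketch of the per-user-filesystem variant
mentioned above. The pool, user, and host names are examples only and
this is untested:

# Global zone: one ZFS filesystem per user, all shared over NFS.
zfs create pool/home
zfs set mountpoint=/export/home pool/home
zfs set sharenfs=rw pool/home            # child filesystems inherit sharenfs
zfs create pool/home/alice
zfs create pool/home/bob

# In each zone (or via the automounter), mount the user's filesystem:
mount -F nfs globalzonehost:/export/home/alice /home/alice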
Louis F. Springer
2008-Jan-02 05:51 UTC
[zfs-discuss] [zones-discuss] ZFS shared /home between zones
Well, ignore my post, a kernel engineer would know. I had no idea you
could loopback mount the same filesystem into multiple zones, or am I
missing something? This would certainly be more efficient than using NFS.

Lou

James C. McPherson wrote:
> I think you're on the right track. Here's what I have in my
> global zone:
> [...]
> Then in each of my zones, I have this in the zone config file:
>
> <filesystem special="/export/home/jmcp" directory="/export/home/jmcp"
>             type="lofs">
>   <fsoption name="rw"/>
> </filesystem>
>
> So then in each zone's /etc/auto_home I have
>
> jmcp localhost:/export/home/jmcp
> [...]
James C. McPherson
2008-Jan-02 06:03 UTC
[zfs-discuss] [zones-discuss] ZFS shared /home between zones
Lou Springer wrote:
> Well, ignore my post, a kernel engineer would know. I had no idea you
> could loopback mount the same filesystem into multiple zones, or am I
> missing something? This would certainly be more efficient than using NFS.

Hi Lou,
no need to disparage yourself (at least in public! :>)

You can definitely loopback mount the same fs into multiple
zones, and as far as I can see you don't have the multiple-writer
issues that otherwise require QFS to solve - since you're operating
within just one kernel instance. It works for me, might not for
anybody else....

The only real gotcha that I've come across concerns some of the
teamware tools, which don't cope well when they try to figure out
where to go when the loopback mount appears to point to itself.
In the zones, I've got a /scratch which is lofs mounted from the
global zone, and that shows up in the zone's /etc/mnttab as

/scratch on /scratch read/write/setuid/devices/dev=2d9000d on Tue Dec 25 09:44:10 2007

The ws command hates it - "hmm, the underlying device for
/scratch is /scratch.... maybe if I loop around stat()ing
it it'll turn into a pumpkin"

:-)

I think that if I used the underlying zfs dataset rather than a
loopback fs mount that might fix things up. However, that would
require me to re-jiggle my filesystems a lot and I just can't be
bothered since I do have a workaround.

cheers,
James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp       http://www.jmcp.homeunix.com/blog
Bob Scheifler
2008-Jan-02 14:52 UTC
[zfs-discuss] [zones-discuss] ZFS shared /home between zones
James C. McPherson wrote:
> You can definitely loopback mount the same fs into multiple
> zones, and as far as I can see you don't have the multiple-writer
> issues that otherwise require QFS to solve - since you're operating
> within just one kernel instance.

Is there any significant performance impact with loopback mounts?

- Bob
James C. McPherson
2008-Jan-02 21:19 UTC
[zfs-discuss] [zones-discuss] ZFS shared /home between zones
Bob Scheifler wrote:
> Is there any significant performance impact with loopback mounts?

Not that I have come across. I've got three zones (global + 2 others)
running permanently. My home directory is constantly mounted on two of
them, and periodically on the third. Both the non-global zones have
several loopback-mounted filesystems in very heavy use, and at least
from my point of view the performance has been quite good.

James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp       http://www.jmcp.homeunix.com/blog
Steve McKinty
2008-Jan-03 13:51 UTC
[zfs-discuss] [zones-discuss] ZFS shared /home between zones
In general you should not allow a Solaris system to be both an NFS
server and NFS client for the same filesystem, irrespective of whether
zones are involved. Among other problems, you can run into kernel
deadlocks in some (rare) circumstances. This is documented in the NFS
administration docs.

A loopback mount is definitely the recommended approach.

This message posted from opensolaris.org
Ian Collins
2008-Jan-04 00:21 UTC
[zfs-discuss] [zones-discuss] ZFS shared /home between zones
James C. McPherson wrote:
>
> The ws command hates it - "hmm, the underlying device for
> /scratch is /scratch.... maybe if I loop around stat()ing
> it it'll turn into a pumpkin"
>
> :-)

As does dmake, which is a real PITA for a developer!

Ian
On Wed, 2 Jan 2008, James C. McPherson wrote:
> [...]
> Then in each of my zones, I have this in the zone config file:
>
> <filesystem special="/export/home/jmcp" directory="/export/home/jmcp"
>             type="lofs">
>   <fsoption name="rw"/>
> </filesystem>
>
> So then in each zone's /etc/auto_home I have
>
> jmcp localhost:/export/home/jmcp

It's not recommended practice to modify the zone config files directly
(bad boy James!). While configuring the zone you can do this:

add fs
set dir=/tanku/home
set special=/tanku/home
set type=lofs
set options=nodevices
end
commit

My preference is to add the user in the zone using useradd with no
"-m" or "-d" switch, and then follow up by setting the /etc/passwd
entry to /tanku/home/username, avoiding the /export/home and /home
conventions. This leaves open the possibility of some NFS mounts
later.

Also, if you upgrade a box which has the UFS home filesystem mounted
as /export/home to use ZFS filesystems, you may elect to leave the old
drive(s) in place, keep the legacy data mounted as /export/home, and
give the new home directories ZFS-style names - like pool and tank etc.

Regards,

Al Hopper  Logical Approach Inc, Plano, TX.  al at logical-approach.com
           Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
Graduate from "sugar-coating school"?  Sorry - I never attended! :)
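A rough sketch of that user-creation step, with a hypothetical
username, uid, and paths; usermod -d is used here instead of
hand-editing /etc/passwd, which should amount to the same thing:

# Inside the non-global zone: create the account without a home
# directory, then point its home at the shared, lofs-mounted filesystem.
useradd -u 1001 -g staff -s /usr/bin/bash bo
usermod -d /tanku/home/bo bo
mkdir /tanku/home/bo
chown bo:staff /tanku/home/bo

# Repeat the useradd in the other zones that share /tanku/home, keeping
# the same uid/gid so file ownership matches everywhere.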
Al Hopper wrote:
...
> It's not recommended practice to modify the zone config files directly
> (bad boy James!).

Bad boy Al for making an unwarranted assumption about what
I have or have not done!

> While configuring the zone you can do this:
>
> add fs
> set dir=/tanku/home
> set special=/tanku/home
> set type=lofs
> set options=nodevices
> end
> commit

which is exactly what I've been doing.

> My preference is to add the user in the zone using useradd with no
> "-m" or "-d" switch, and then follow up by setting the /etc/passwd
> entry to /tanku/home/username and avoid using the /export/home and
> /home conventions. This leaves open the possibility of some NFS
> mounts later.

All of which is a workaround for the essential problem that we don't
have an easy and obvious way of adding users to zones from the command
line unless you use the webconsole thingy.

James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp       http://www.jmcp.homeunix.com/blog
On Sun, 6 Jan 2008, James C. McPherson wrote:
> Al Hopper wrote:
> ...
>> It's not recommended practice to modify the zone config files
>> directly (bad boy James!).
>
> Bad boy Al for making an unwarranted assumption about what
> I have or have not done!

Whoops!

>> While configuring the zone you can do this:
>> [...]
>
> which is exactly what I've been doing.

Sorry James. :(

Regards,

Al Hopper  Logical Approach Inc, Plano, TX.  al at logical-approach.com
           Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
Graduate from "sugar-coating school"?  Sorry - I never attended! :)
?? But the ZFS man page specifically says, "A ZFS file system that is
added to a non-global zone must have its mountpoint property set to
legacy." Is this obsolete??

-- Scott

This message posted from opensolaris.org
Torsten "Paul" Eichstädt
2008-Jan-20 19:36 UTC
[zfs-discuss] ZFS shared /home between zones
A loopback mount, not a dataset, does what you want. In zonecfg, do:

> add fs
> set special=/export/home
> set dir=/home
> set type=lofs
> add options rw,nodevices,noexec,nosetuid
> end
> verify

# man zonecfg

Make sure the local zones have the same userids as the global zone;
best would be to use LDAP or Kerberos.

Switch off the automounter in the local zones, it uses /home.

$ svcs -a | grep auto; su -
# svcadm disable autofs

Hint: prepare a zone, configure it to do everything you want, test it
for a while, then clone it.

This message posted from opensolaris.org
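A rough outline of that clone step, with hypothetical zone names and
zonepath - check zonecfg(1M) and zoneadm(1M) on your build for the
exact syntax:

# Halt the source zone, copy its configuration, adjust the zonepath,
# then clone and boot the new zone.
zoneadm -z template halt
zonecfg -z mailzone "create -t template"
zonecfg -z mailzone "set zonepath=/zones/mailzone"
zoneadm -z mailzone clone template
zoneadm -z mailzone boot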
Torsten Paul Eichstädt wrote:
> Make sure the local zones have the same userids as the global zone;
> best would be to use LDAP or Kerberos.

Kerberos isn't a nameservice and thus does not provide a uid/name
mapping service. Kerberos also has no function for lofs mounts - only
NFSv3 & NFSv4.

--
Darren J Moffat