Steffen Weiberle
2006-Sep-08 12:18 UTC
[zfs-discuss] ?: ZFS and jumpstart export race condition
I have a jumpstart server where the install images are on a ZFS pool. For PXE boot, several lofs mounts are created and configured in /etc/vfstab. My system no longer boots properly because the lofs mounts that refer to jumpstart files are attempted before the underlying ZFS filesystems have been mounted.

What is the best way of working around this? Can I just create the necessary mounts of pool1/jumpstart in /etc/vfstab, or is ZFS simply not running yet when those mounts are attempted?

A lot of network services, including ssh, are not running because fs-local did not come up clean.

Is this a known problem that is being addressed? This is S10 6/06.

Thanks
Steffen

# cat /etc/vfstab
...
/export/jumpstart/s10/x86/boot          - /tftpboot/I86PC.Solaris_10-1 lofs - yes ro
/export/jumpstart/nv/x86/latest/boot    - /tftpboot/I86PC.Solaris_11-1 lofs - yes ro
/export/jumpstart/s10u3/x86/latest/boot - /tftpboot/I86PC.Solaris_10-2 lofs - yes ro

# zfs get all pool1/jumpstart
NAME             PROPERTY       VALUE                  SOURCE
pool1/jumpstart  type           filesystem             -
pool1/jumpstart  creation       Mon Jun 12  8:26 2006  -
pool1/jumpstart  used           39.9G                  -
pool1/jumpstart  available      17.7G                  -
pool1/jumpstart  referenced     39.9G                  -
pool1/jumpstart  compressratio  1.00x                  -
pool1/jumpstart  mounted        yes                    -
pool1/jumpstart  quota          none                   default
pool1/jumpstart  reservation    none                   default
pool1/jumpstart  recordsize     128K                   default
pool1/jumpstart  mountpoint     /export/jumpstart      local
pool1/jumpstart  sharenfs       ro,anon=0              local
pool1/jumpstart  checksum       on                     default
pool1/jumpstart  compression    off                    default
pool1/jumpstart  atime          on                     default
pool1/jumpstart  devices        on                     default
pool1/jumpstart  exec           on                     default
pool1/jumpstart  setuid         on                     default
pool1/jumpstart  readonly       off                    default
pool1/jumpstart  zoned          off                    default
pool1/jumpstart  snapdir        hidden                 default
pool1/jumpstart  aclmode        groupmask              default
pool1/jumpstart  aclinherit     secure                 default
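(For anyone hitting the same symptom: the fs-local failure and its knocked-down dependents can be confirmed and cleared with SMF once the filesystems are mounted by hand. A minimal sketch, assuming the stock S10 service names:

# svcs -xv                     # explain which services are down and why
# mountall                     # retry the remaining vfstab mounts by hand
# svcadm clear svc:/system/filesystem/local:default
                               # clear maintenance so dependents such as ssh start
)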
Thomas Wagner
2006-Sep-08 12:55 UTC
[zfs-discuss] ?: ZFS and jumpstart export race condition
Steffen,

I have the same problem with my home install server. As a dirty workaround I set mount-at-boot to "no" for the lofs filesystems, to get the system up. But with every new OS added by JET, the mount-at-boot entry reappears.

The underlying question seems to be: when should a lofs filesystem be mounted at boot, and when does a ZFS filesystem get mounted? Probably a ZFS legacy mount together with a lower-priority lofs mount would do it.

Regards,
Thomas

On Fri, Sep 08, 2006 at 08:18:06AM -0400, Steffen Weiberle wrote:
> [full quote of the original message trimmed]

--
Thomas Wagner                       Tel:    +49-(0)-711-720 98-131
Strategic Support Engineer          Fax:    +49-(0)-711-720 98-443
Global Customer Services            Cell:   +49-(0)-175-292 60 64
Sun Microsystems GmbH               E-Mail: Thomas.Wagner at Sun.com
Zettachring 10A, D-70567 Stuttgart  http://www.sun.de
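(A sketch of the legacy-mount idea, assuming the pool1/jumpstart filesystem from the original post. With mountpoint=legacy the filesystem is mounted via /etc/vfstab like any other, and placing its entry above the lofs entries makes mountall process them in a working order:

# zfs set mountpoint=legacy pool1/jumpstart

# /etc/vfstab -- the zfs line must come before the lofs lines,
# since vfstab entries are mounted top to bottom at boot
pool1/jumpstart                - /export/jumpstart            zfs  - yes -
/export/jumpstart/s10/x86/boot - /tftpboot/I86PC.Solaris_10-1 lofs - yes ro
)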
Casper.Dik at Sun.COM
2006-Sep-08 13:06 UTC
[zfs-discuss] ?: ZFS and jumpstart export race condition
> I have the same problem with my home install server. As a dirty workaround
> I set mount-at-boot to "no" for the lofs filesystems, to get the system up.
> But with every new OS added by JET, the mount-at-boot entry reappears.
>
> The underlying question seems to be: when should a lofs filesystem be
> mounted at boot, and when does a ZFS filesystem get mounted? Probably a
> ZFS legacy mount together with a lower-priority lofs mount would do it.

JET needs to be taught about ZFS; there does not seem to be any other way.

(JET/setup_install_server creates the loopback mounts; without making the ZFS mounts into legacy mounts, or creating the loopback mounts differently, it will not work. Personally I use auto_direct for the /tftpboot sub-mounts; that works for anything.)

Casper
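(A minimal sketch of the auto_direct approach, assuming the /tftpboot paths from the original post. The map name auto_direct is conventional rather than fixed; the lofs entries are removed from /etc/vfstab, and autofs then resolves the loopback mounts on first access, by which time ZFS is up:

# /etc/auto_master -- add a direct map
/-  auto_direct

# /etc/auto_direct -- lofs loopback mounts, resolved on demand
/tftpboot/I86PC.Solaris_10-1  -fstype=lofs  /export/jumpstart/s10/x86/boot
/tftpboot/I86PC.Solaris_11-1  -fstype=lofs  /export/jumpstart/nv/x86/latest/boot
/tftpboot/I86PC.Solaris_10-2  -fstype=lofs  /export/jumpstart/s10u3/x86/latest/boot

# automount -v    # re-read the maps
)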
Steffen Weiberle
2006-Sep-08 13:57 UTC
[zfs-discuss] ?: ZFS and jumpstart export race condition
Casper.Dik at Sun.COM wrote On 09/08/06 09:06:

> JET needs to be taught about ZFS; there does not seem to be any other
> way.

Maybe. However, I did not use JET. I set up ZFS using default (AFAIK at this point) parameters.

> (JET/setup_install_server creates the loopback mounts; without making the
> ZFS mounts into legacy mounts, or creating the loopback mounts differently,
> it will not work. Personally I use auto_direct for the /tftpboot
> sub-mounts; that works for anything.)

I believe that add_install_client [with a -b option?] is what is creating my vfstab entries. I didn't have reboot issues until overnight (a system move), and I have been doing PXE boot of some x64 systems only recently, i.e. since the most recent power failure.

Install images are being put down via getimage, so it is possible that setup_install_server would do the same. (Not sure whether getimage does a setup_install_server at its completion.)

Steffen
Victor Latushkin
2006-Sep-08 14:00 UTC
[zfs-discuss] ?: ZFS and jumpstart export race condition
I've just run into the same issue. It is tracked as Bug 6418732.

Regards,
Victor

Thomas Wagner wrote:
> [full quote of the earlier messages trimmed]
Casper.Dik at Sun.COM
2006-Sep-08 14:08 UTC
[zfs-discuss] ?: ZFS and jumpstart export race condition
> I believe that add_install_client [with a -b option?] is what is creating
> my vfstab entries. I didn't have reboot issues until overnight (a system
> move), and I have been doing PXE boot of some x64 systems only recently,
> i.e. since the most recent power failure.
>
> Install images are being put down via getimage, so it is possible that
> setup_install_server would do the same. (Not sure whether getimage does a
> setup_install_server at its completion.)

Either setup_install_server or add_install_client does this (or perhaps both).

Casper
Scott Dickson - Systems Engineer
2006-Sep-15 16:07 UTC
[zfs-discuss] ?: ZFS and jumpstart export race condition
How did you get these images onto ZFS? Did you put them there yourself, or did you run setup_install_server? When I try to use add_install_client with an image on ZFS, it refuses. How do you get around that?

--Scott

Steffen Weiberle wrote:
> [full quote of the original message trimmed]