Hello there,

I know this is an OpenSolaris forum, and likely if I used OpenSolaris I could make this procedure work ...

http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/

... but I was trying to make it work with Solaris 10 (2007-08), and I got as far as Step #5 before I encountered a problem. More specifically, this line:

# zpool set bootfs=rootpool/rootfs rootpool

As far as I can see, Solaris 10 (2007-08) does not include the "ZFS Boot bits", which means it doesn't support the "bootfs" pool property.

At this point I am a little stuck, because frankly I don't have a 100% handle on exactly what I am doing, but I am thinking I can use this ...

http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/mntroot-transition/

... to boot ZFS the old "mountroot" way? Frankly I am guessing, and a better explanation would be much appreciated.

Again, I know I am using Solaris 10 and not OpenSolaris. I was thinking OpenSolaris wouldn't be good for a production system just yet, but I still wanted to use ZFS if it's stable. Also, I figured if I posted here I would get the best, quickest response!

Any help or explanation is much appreciated.

Thanks,

Bill
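For context, on a build that does have the ZFS boot bits, the step Bill is stuck on is just setting a pool property. A minimal sketch of that part of the zfsboot-manual procedure (the pool, dataset, and slice names follow the manual's example and are illustrative only):

# zpool create rootpool c0t0d0s0
# zfs create rootpool/rootfs
# zpool set bootfs=rootpool/rootfs rootpool
# zpool get bootfs rootpool

On Solaris 10 8/07 the "zpool set bootfs=..." command is rejected as an invalid property, because that release's zpool does not know about bootfs.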
William Papolis wrote:
> Hello there,
>
> I know this is an OpenSolaris forum, and likely if I used OpenSolaris I could make this procedure work ...
>
> http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/
>
> ... but I was trying to make it work with Solaris 10 (2007-08), and I got as far as Step #5 before I encountered a problem. More specifically, this line:
>
> # zpool set bootfs=rootpool/rootfs rootpool
>
> As far as I can see, Solaris 10 (2007-08) does not include the "ZFS Boot bits", which means it doesn't support the "bootfs" pool property.
>
> At this point I am a little stuck, because frankly I don't have a 100% handle on exactly what I am doing, but I am thinking I can use this ...
>
> http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/mntroot-transition/
>
> ... to boot ZFS the old "mountroot" way? Frankly I am guessing, and a better explanation would be much appreciated.
>
> Again, I know I am using Solaris 10 and not OpenSolaris. I was thinking OpenSolaris wouldn't be good for a production system just yet, but I still wanted to use ZFS if it's stable. Also, I figured if I posted here I would get the best, quickest response! Any help or explanation is much appreciated.

You can certainly use ZFS on S10, but not as a root file system. None of that support has been backported, so ZFS root is not yet ready for production use.

Lori
OK, thanks for the tip. I spent a day trying it out, and this is what I learned:

1. Solaris 10 (2007-08) doesn't have the "ZFS Boot bits" installed.
2. Only OpenSolaris builds snv_62 or later have the "ZFS Boot bits".
3. Even if I got OpenSolaris booting from ZFS on a 2-disk SATA mirror, I would still have trouble: if one drive fails, ZFS won't boot.
4. Further, there seem to be some issues with the SATA framework and ZFS. Currently, it appears, it's best to use SAS or SCSI.

It's too bad; I had high hopes that I could use ZFS to boot with the latest Solaris 10 build. With the "live updating with ZFS running" problem solved, I was ready! On the other hand, splitting the system across two file systems on two drives is more complicated than I wanted to get.

A couple of questions:

1. It appears that OpenSolaris has no way to get updates from Sun (meaning Patch Manager doesn't work with it on the CLI: # /usr/sbin/smpatch get). So ... how do people "patch" OpenSolaris?

2. Back to Solaris Volume Manager (SVM), I guess. It's too bad, because I don't like it with 2 SATA disks either. There aren't enough drives to place the State Database Replicas so that if either drive failed, the system would reboot unattended. Unless there is a trick?

Thanks for the help,

Bill
> 2. Back to Solaris Volume Manager (SVM), I guess. It's too bad, because I
> don't like it with 2 SATA disks either. There aren't enough drives to place the
> State Database Replicas so that if either drive failed, the system would
> reboot unattended. Unless there is a trick?

There is a trick for this; not sure how long it's been around.

Add to /etc/system:

* Allow the system to boot if one of two root disks is missing
set md:mirrored_root_flag=1

Good luck.

--Kris

--
Thomas Kris Kasner
Qualcomm Inc.
5775 Morehouse Drive
San Diego, CA 92121
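If you do go down this road, the change itself is just the /etc/system line above plus a reboot. A minimal sketch, assuming root is already an SVM mirror (no particular metadevice layout assumed):

# echo "set md:mirrored_root_flag=1" >> /etc/system
# metadb -i
# init 6

metadb -i shows where your replicas actually live and their status flags; the /etc/system tunable only takes effect at the next boot.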
Sweet!

Thank you! :)

Bill
Kris Kasner wrote:
>> 2. Back to Solaris Volume Manager (SVM), I guess. It's too bad, because I
>> don't like it with 2 SATA disks either. There aren't enough drives to place the
>> State Database Replicas so that if either drive failed, the system would
>> reboot unattended. Unless there is a trick?
>
> There is a trick for this; not sure how long it's been around.
> Add to /etc/system:
> * Allow the system to boot if one of two root disks is missing
> set md:mirrored_root_flag=1

Before you do this, please read the fine manual:

http://docs.sun.com/app/docs/doc/819-2724/chapter2-161?l=en&a=view&q=mirrored_root_flag

This can cause corruption and is "not supported."

-- richard
> 1. It appears that OpenSolaris has no way to get updates from Sun.
> So ... how do people "patch" OpenSolaris?

Easy: by upgrading to the next OpenSolaris build. I guess this is a kind of FAQ.

There are no patches for OpenSolaris, by definition. All fixes and new features are always integrated first into the current development version of Solaris. When that is done, the fix is backported into older releases and tested there. When that is satisfactory, the fix gets rolled into an official patch for that specific release.

Even so, sometimes updates for specific modules are released for testing before they are integrated into the next build. But you would need to install the files by hand, or even build them from source.

Cheers,
Henk Langeveld
OK, I guess using this ...

set md:mirrored_root_flag=1

... for Solaris Volume Manager (SVM) is not supported and could cause problems.

I guess it's back to my first idea. With 2 disks, set up three SDRs (State Database Replicas):

Drive 0 = 1 SDR  -> if this drive fails, the system auto-magically boots from drive 1
Drive 1 = 2 SDRs -> if this drive fails, sysadmin intervention is required

Well, that's OK; at least 50% of the time the system won't kack.

Thanks for the help. I am pleasantly surprised with the level of Sun staff involvement to help things along. Much appreciated, Richard and other Sun staff!!

Bill
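A sketch of that replica layout with metadb; the slice names below assume the common convention of a small slice 7 on each disk reserved for the state databases, so adjust for your partitioning:

# metadb -a -f c0t0d0s7
# metadb -a -c 2 c0t1d0s7
# metadb -i

That gives one replica on drive 0 and two on drive 1, which is the 1+2 split described above.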
Henk,

By upgrading, do you mean rebooting and installing OpenSolaris from DVD or the network?

Like, no Patch Manager to install some quick patches and updates with a quick reboot, right?

Bill
OpenSolaris builds are like "development snapshots" ... they're not a release, and thus there are no patches. SXCE is just a binary build from those snapshots; it's there as a convenience only, and "patches" are applied like in every other development project: by updating from the source repository, compiling, and installing (aka BFU).
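For completeness, a BFU update is roughly: build (or download) the ON cpio archives for the build you want, then point bfu at the archive directory. The path below is purely illustrative and the exact invocation depends on how your archives were produced:

# bfu /export/archives/nightly/archives/i386

followed by resolving any conflicts in the /etc files that bfu reports before rebooting.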
On 30/09/2007, William Papolis <wpapolis at gmail.com> wrote:
> Henk,
>
> By upgrading, do you mean rebooting and installing OpenSolaris from DVD or the network?
>
> Like, no Patch Manager to install some quick patches and updates with a quick reboot, right?

You can live upgrade and then do a quick reboot:

http://number9.hellooperator.net/articles/2007/08/08/solaris-laptop-live-upgrade

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
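A minimal Live Upgrade round-trip looks something like the sketch below; the boot environment names, the target slice, and the path to the install image are assumptions for illustration:

# lucreate -c s10be -n nvbe -m /:/dev/dsk/c0t1d0s0:ufs
# luupgrade -u -n nvbe -s /cdrom/cdrom0
# luactivate nvbe
# init 6

Rebooting with init 6 (rather than reboot) is what lets the newly activated boot environment take over.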
Hello Richard,

Friday, September 28, 2007, 7:45:47 PM, you wrote:

RE> Kris Kasner wrote:
>>> 2. Back to Solaris Volume Manager (SVM), I guess. It's too bad, because I
>>> don't like it with 2 SATA disks either. There aren't enough drives to place the
>>> State Database Replicas so that if either drive failed, the system would
>>> reboot unattended. Unless there is a trick?
>>
>> There is a trick for this; not sure how long it's been around.
>> Add to /etc/system:
>> * Allow the system to boot if one of two root disks is missing
>> set md:mirrored_root_flag=1

RE> Before you do this, please read the fine manual:
RE> http://docs.sun.com/app/docs/doc/819-2724/chapter2-161?l=en&a=view&q=mirrored_root_flag

The description on docs.sun.com is somewhat misleading.

http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/io/lvm/md/md_mddb.c#5659

5659         if (mirrored_root_flag == 1 && setno == 0 &&
5660             svm_bootpath[0] != 0) {
5661                 md_clr_setstatus(setno, MD_SET_STALE);

Looks like the set has to be the local set (setno == 0), the boot path has to reside on an SVM device, and mirrored_root_flag has to be set to 1.

So if you have other disks (more than 2) in a system, just put them in a separate disk set.

--
Best regards,
Robert Milkowski                          mailto:rmilkowski at task.gda.pl
                                          http://milek.blogspot.com
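In other words, the relaxed quorum check only kicks in for the local set (set 0), when the boot path is an SVM metadevice, and when the flag is set. A quick sanity check of those conditions on a running box (no particular metadevice names assumed):

# df -k /
# grep md/dsk /etc/vfstab
# grep mirrored_root_flag /etc/system
# metastat -p

Root should show up on a /dev/md/dsk/ device in the first two commands if the boot path really lives on SVM.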
Robert Milkowski wrote:
> Hello Richard,
>
> Friday, September 28, 2007, 7:45:47 PM, you wrote:
>
> RE> Kris Kasner wrote:
>
>>>> 2. Back to Solaris Volume Manager (SVM), I guess. It's too bad, because I
>>>> don't like it with 2 SATA disks either. There aren't enough drives to place the
>>>> State Database Replicas so that if either drive failed, the system would
>>>> reboot unattended. Unless there is a trick?
>>>>
>>> There is a trick for this; not sure how long it's been around.
>>> Add to /etc/system:
>>> * Allow the system to boot if one of two root disks is missing
>>> set md:mirrored_root_flag=1
>
> RE> Before you do this, please read the fine manual:
> RE> http://docs.sun.com/app/docs/doc/819-2724/chapter2-161?l=en&a=view&q=mirrored_root_flag
>
> The description on docs.sun.com is somewhat misleading.
>
> http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/io/lvm/md/md_mddb.c#5659
> 5659         if (mirrored_root_flag == 1 && setno == 0 &&
> 5660             svm_bootpath[0] != 0) {
> 5661                 md_clr_setstatus(setno, MD_SET_STALE);
>
> Looks like the set has to be the local set (setno == 0), the boot path has to reside on an SVM device, and mirrored_root_flag has to be set to 1.
>
> So if you have other disks (more than 2) in a system, just put them in a separate disk set.

If we have more than 2 disks, then we have space for a 3rd metadb copy.

-- richard
On 30/09/2007, William Papolis <wpapolis at gmail.com> wrote:
> OK, I guess using this ...
>
> set md:mirrored_root_flag=1
>
> ... for Solaris Volume Manager (SVM) is not supported and could cause problems.
>
> I guess it's back to my first idea. With 2 disks, set up three SDRs (State Database Replicas):
>
> Drive 0 = 1 SDR  -> if this drive fails, the system auto-magically boots from drive 1
> Drive 1 = 2 SDRs -> if this drive fails, sysadmin intervention is required
>
> Well, that's OK; at least 50% of the time the system won't kack.

What you gain on the swings, you lose on the roundabouts. But if you lose drive 1 while the system is running, it'll now panic (whereas with 50% of the quorum, it will continue to run).

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
Hello Richard,

Friday, October 5, 2007, 6:41:10 PM, you wrote:

RE> Robert Milkowski wrote:
>> Hello Richard,
>>
>> Friday, September 28, 2007, 7:45:47 PM, you wrote:
>>
>> RE> Kris Kasner wrote:
>>
>>>>> 2. Back to Solaris Volume Manager (SVM), I guess. It's too bad, because I
>>>>> don't like it with 2 SATA disks either. There aren't enough drives to place the
>>>>> State Database Replicas so that if either drive failed, the system would
>>>>> reboot unattended. Unless there is a trick?
>>>>>
>>>> There is a trick for this; not sure how long it's been around.
>>>> Add to /etc/system:
>>>> * Allow the system to boot if one of two root disks is missing
>>>> set md:mirrored_root_flag=1
>>
>> RE> Before you do this, please read the fine manual:
>> RE> http://docs.sun.com/app/docs/doc/819-2724/chapter2-161?l=en&a=view&q=mirrored_root_flag
>>
>> The description on docs.sun.com is somewhat misleading.
>>
>> http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/io/lvm/md/md_mddb.c#5659
>> 5659         if (mirrored_root_flag == 1 && setno == 0 &&
>> 5660             svm_bootpath[0] != 0) {
>> 5661                 md_clr_setstatus(setno, MD_SET_STALE);
>>
>> Looks like the set has to be the local set (setno == 0), the boot path has to reside on an SVM device, and mirrored_root_flag has to be set to 1.
>>
>> So if you have other disks (more than 2) in a system, just put them in a separate disk set.

RE> If we have more than 2 disks, then we have space for a 3rd metadb copy.

RE> -- richard

Well, it depends. If it's an external JBOD, I prefer to put all the disks from that JBOD into a separate disk set; that way it's easier to move the JBOD or re-install the host.

--
Best regards,
Robert                                    mailto:rmilkowski at task.gda.pl
                                          http://milek.blogspot.com
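A hedged sketch of that separate-disk-set approach; the set name, host name, and disk names are examples only:

# metaset -s jbodset -a -h thishost
# metaset -s jbodset -a c2t0d0 c2t1d0 c2t2d0 c2t3d0
# metaset -s jbodset

The disks in the named set carry their own state database replicas, independent of the local set's metadbs, which is what makes the JBOD easy to move to another host later.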