Hi,
I have been trying to set up a boot ZFS filesystem since b63 and found out about bug 6553537, which has been preventing boot from ZFS filesystems starting from b63. First question: has b65 solved the problem, as was planned on the bug page? Second question: as I cannot boot successfully from a ZFS filesystem after following the ZFS Boot Manual Setup instructions (http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/) due to a panic down the call chain of vfs_mountroot, what else (other than the bug, that is) could be wrong?

-- Douglas

This message posted from opensolaris.org
Hi Doug,

On Mon, 2007-06-04 at 12:25 -0700, Douglas Atique wrote:
> I have been trying to setup a boot ZFS filesystem since b63 and found
> out about bug 6553537 that was preventing boot from ZFS filesystems
> starting from b63. First question is whether b65 has solved the
> problem as was planned on the bug page.

I'll verify this today (note that this was only for netinstall/pfinstall ZFS boot - manually set up ZFS boot should work fine.)

> Second question is: as I cannot boot successfully from a ZFS
> filesystem after following the ZFS Boot Manual Setup instructions
> (http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/) due
> to a panic down the call chain of vfs_mountroot, what else (other than
> the bug, that is) could be wrong?

There's a number of things you could check:

1. Is your root pool one of the supported types (mirror or single-disk)?
2. There's a bug with compression at the moment - the root pool and the top-level pool need to have compression set to off. ( 6538017 )
3. Check that you've got an SMI label on the pool you're trying to boot from ( more at http://blogs.sun.com/timf/entry/zfs_bootable_datasets_happily_rumbling )
4. Can you make sure your BIOS is booting from the correct device?
5. (a bit more drastic) Can you run the script pointed to at the top of that page and set up ZFS boot that way, which could account for pilot error in following the manual steps?

After that, could you verify that by changing the grub menu entry in /<pool>/boot/grub/menu.lst ( e.g. change the "title" line in the ZFS boot entry, adding some random text) you see those changes reflected in the menu that grub actually displays?

Let me know if any of these suggestions help?

cheers,
tim
--
Tim Foster, Sun Microsystems Inc, Solaris Engineering Ops
http://blogs.sun.com/timf
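As a sketch of how the first three checks might be run non-destructively (the pool name "snv", dataset "snv/b65", and device c0d0s2 are placeholders taken from this thread, not part of Tim's mail):

```shell
# 1. Pool layout: should show a single disk/slice or a mirror vdev
zpool status -v snv

# 2. Compression must be off on the pool's top-level dataset and on
#    the root filesystem itself -- see bug 6538017
zfs get compression snv
zfs get compression snv/b65

# 3. Label check: prtvtoc succeeds and prints a slice table for an
#    SMI (VTOC) label; an EFI-labelled disk is reported differently
prtvtoc /dev/rdsk/c0d0s2
```

These are read-only queries, so they are safe to run on a live system.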
Hi Doug,

On Tue, 2007-06-05 at 06:45 -0700, Douglas Atique wrote:
> Hi, Tim. Thanks for your hints.

No problem.

> Comments on each one follow (marked with "Doug:" and in blue).

html mail :-/

> Tim Foster <Tim.Foster at Sun.COM> wrote:
> There's a number of things you could check:
>
> 1. Is your root pool one of the supported types (mirror or
> single-disk)
>
> Doug: I don't know. I have created a pool in a slice of my main
> disk. This is the layout:

I assume your pool has just one slice in it. "zpool status -v <pool>" tells you the pool layout. I've a similar (but less complicated) layout on my machine here (with nv_64, admittedly). Did you installgrub the new zfs-capable version of grub onto c0d0s0? (I'm assuming you did, otherwise the "bootfs" keyword in the ZFS entry would fail.) I haven't tried booting a ZFS dataset from grub installed on a UFS disk.

> 2. There's a bug with compression at the moment - the root pool,
> and the top level pool need to have compression set to off.
> ( 6538017 )
> Doug: I don't set compression on deliberately. Could it be on
> by default?

Nope, I don't think so - check with "zfs get compression <dataset>".

> 3. Check that you've got an SMI label on the pool you're trying to
> boot from ( more at
> http://blogs.sun.com/timf/entry/zfs_bootable_datasets_happily_rumbling )
> Doug: I guess it is, because of the many slices. But how could
> I check (read-only, non-destructively)?

Sounds like you've already got an SMI label if you can boot a UFS-rooted system from that disk.

> 4. Can you make sure your bios is booting from the correct device
> Doug: I'm sure. That's the only disk I have. S10 and Solaris
> Express from UFS all boot correctly.

Okay.

> 5. (a bit more drastic) Can you run the script pointed to at the top of
> that page and setup ZFS boot that way, which could account for pilot
> error in following the manual steps.
> Doug: Haven't tried that, but I would really like to do it by
> hand to make sure I understand what is going on.

I agree.

> After that, could you verify that by changing the grub menu entry
> in //boot/grub/menu.lst ( eg. change the "title" line in the ZFS
> boot entry, adding some random text) that you see those changes
> reflected in the menu that grub actually displays ?
> Doug: This is my ZFS entry in my menu.lst:
> root (hd0,0,f)
> bootfs snv/b65
> kernel$ /boot/platform/i86pc/kernel/$ISADIR/unkx -B $ZFS-BOOTFS
> module$ /platform/i86pc/$ISADIR/boot_archive

And this entry shows up when you boot in grub? There's a typo in the above btw, "unkx", but I'm sure that was just an error pasting into this mail (otherwise you wouldn't have even got the banner printed).

Does your /etc/vfstab file on the snv/b65 ZFS filesystem contain a single entry for /, which should look like:

snv/b65 - / zfs - no -

and your ufs root entry should have been changed to:

/dev/dsk/c1d0s0 /dev/rdsk/c1d0s0 /ufsroot ufs - yes -

(or removed). Can't think of anything else that might be wrong unfortunately.

cheers,
tim
--
Tim Foster, Sun Microsystems Inc, Solaris Engineering Ops
http://blogs.sun.com/timf
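Putting the two fragments from this exchange side by side, a setup of this shape would be expected to have a grub entry and vfstab roughly as follows. This is a sketch only: the kernel filename is a guess at what the "unkx" typo was meant to be (presumably "unix"), and the pool, slice, and title names are just the ones used in this thread:

```shell
# /<pool>/boot/grub/menu.lst -- ZFS boot entry (sketch)
title Solaris Nevada b65 (ZFS)
  root (hd0,0,f)
  bootfs snv/b65
  kernel$ /boot/platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
  module$ /platform/i86pc/$ISADIR/boot_archive

# /etc/vfstab on snv/b65 -- the single root entry (sketch)
# device to mount  device to fsck  mount point  FS type  pass  at boot  options
snv/b65            -               /            zfs      -     no       -
```

Both are configuration fragments, not commands; the menu.lst entry lives in the pool's /boot/grub, and the vfstab entry replaces the old UFS root line.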
Hi, according to the following link, there is no problem with b65: http://www.opensolaris.org/os/community/zfs/boot/netinstall
I have also been trying to figure out the best strategy regarding ZFS boot... I currently have a single-disk UFS boot and RAID-Z for data. I plan on getting a mirror for boot, but I still don't understand what my options are regarding:

- Should I set up one ZFS slice for the entire drive and mimic Live Upgrade functionality with writable clones? Or use multiple slices, each with a ZFS boot environment?
- Is it reasonable to expect that this scheme will eventually be the way ZFS boot and Live Upgrade will work in the "official" release, so I don't have to reinstall the entire system?
- Are there any other drawbacks to going with ZFS boot at this time?

As a side note, and the reason I am so thankful to the people who created ZFS, I will tell a brief story... I used to have a Windows XP machine with a motherboard with an onboard Sil3112A SATA chipset, and a Seagate 200GB 7200.7 drive that contained much data. I had spent months over time ripping a few hundred CDs that my wife and I had in our collection, and they were stored in .APE format (compressed, lossless, and checksummed). I had at the same time made a rip in mp3 format for iPod/iTunes, so I rarely had reason to access the lossless files - they were there for long-term backup and convenience. Occasionally I would realize that one of them refused to decompress (failed checksum), but I figured it was a bug somewhere, re-ripped it and hoped it wouldn't happen again. Then I realized that too many had this problem, and started to systematically decompress them, only to find out that around 25-30% of the files were damaged - at least a hundred hours of ripping and cataloguing down the drain. While researching this issue, I found out that there were incompatibilities between the controller and the drive, and that people on Linux had to hack the drivers to get around this problem (google Mod15Write).
Windows drivers were also fixed at some point - I don't know when - and if it weren't for the large, checksummed files that disk was full of, I could have gone on for years without realizing that data was getting corrupted (it was only a few bits at a time - a tiny % of the total number of bits, but when you have 500MB files...). This motherboard is still alive and is currently running OpenSolaris (not using the on-board SATA controller), and the drive is happily chugging along on an ICH7-based motherboard in OS X.

The moral of the story being that even very mainstream and well-regarded hardware that seems a perfectly sensible purchase at the time (the very popular ASUS A7N8X-E Deluxe motherboard with Seagate SATA drives of the same period) can turn out to be a disaster, and you won't know until it is too late. Not to sound too sappy, but right now, with a 1-year-old, I have too many precious digital photos and videos, and losing them is not an option. I use a combination of DVD and online backups, but none of it is any good if data is silently rotting at the source. Thank you, ZFS team.

On 6/4/07, Douglas Atique <tellmebout-solaris at yahoo.com> wrote:
> Hi,
> I have been trying to setup a boot ZFS filesystem since b63 and found out
> about bug 6553537 that was preventing boot from ZFS filesystems starting
> from b63. First question is whether b65 has solved the problem as was
> planned on the bug page. Second question is: as I cannot boot successfully
> from a ZFS filesystem after following the ZFS Boot Manual Setup instructions
> (http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/) due to
> a panic down the call chain of vfs_mountroot, what else (other than the bug,
> that is) could be wrong?
>
> -- Douglas
Some additional information: I noticed that I was overlooking steps 6 and 7 in the instructions (http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/). I already have slice s0 on my disk dedicated to GRUB, and it features a /boot of its own, so I was thinking that it wouldn't make a difference to have GRUB in one slice or another. But reading the instructions more carefully, I noticed that they say clearly that GRUB has to be installed in the ZFS slice, even if there is another UFS slice, even if they are on different disks.

So I tried installgrub into c0d0s5, my ZFS root pool slice. I also mounted that pool as a filesystem and copied my /boot to it. And then something strange happens. When GRUB is loaded from c0d0s0 it works fine. Booting GRUB from c0d0s5, the menu is not displayed and only a blank screen is seen until the default option is loaded by timeout. Could I have done something wrong in the GRUB installation?

-- Doug
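For reference, the sequence Doug describes could look something like the following sketch. The /zfsroot mountpoint is an arbitrary choice for illustration, and the stage1/stage2 paths assume the new ZFS-capable grub bits are already under /boot/grub:

```shell
# Make the root filesystem visible somewhere convenient
zfs set mountpoint=/zfsroot snv/b65

# Copy the existing /boot into the pool
cp -r /boot /zfsroot

# Install the ZFS-capable grub stages into the ZFS slice
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0d0s5

# Set the mountpoint back before trying to boot from it
zfs set mountpoint=legacy snv/b65
```

All of these require root privileges and a real Solaris system; they are shown here only to make the manual steps concrete.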
It seems as if I cannot post from my e-mail. Could this thread be merged into the original one?
Hi Doug, from the information I read so far, I assume you have

c0d0s0 - ufs root
c0d0s5 - zfs root pool 'snv' and root filesystem 'b65'

installgrub on c0d0s0 puts grub on the same disk as c0d0s5, but <c0d0sn> indicates which slice is the default boot slice. So, once you default boot from c0d0s5, you should not need 'root (hd0,0,f)' in your menu.lst entry, which could confuse the mapping. (I'll try to make the doc a little bit clearer.)

That said, your original menu.lst does look fine (assuming you did copy the new grub into c0d0s0).

Play around a little bit more and send us more detailed information, e.g. menu.lst entries from the root pool or ufs root, where grub is installed, have you copied the devices dir to the zfs root filesystem (step 4), is snv/b65 good... etc.

Lin

Douglas Atique wrote:
> An additional information:
> I noticed that I was overlooking steps 6 and 7 in the instructions (http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/). I already have slice s0 in my disk dedicated to GRUB and it features a /boot of its own, so I was thinking that it wouldn't make a difference to have GRUB in one or another slice. But reading the instructions more carefully, I noticed that it says clearly that GRUB has to be installed in the ZFS slice, even if there is another UFS slice, even if they are on different disks.
> So I tried installgrub into c0d0s5, my ZFS root pool slice. I also mounted that pool as a filesystem and copied my /boot to it. And then something strange happens. When GRUB is loaded from c0d0s0 it works fine. Booting GRUB from c0d0s5 the menu is not displayed and only a blank screen is seen until the default option is loaded by timeout.
> Could I have done something wrong in the GRUB installation?
>
> -- Doug

_______________________________________________
zfs-discuss mailing list
zfs-discuss at opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> Hi Doug, from the information I read so far, I assume you have
>
> c0d0s0 - ufs root
> c0d0s5 - zfs root pool 'snv' and root filesystem 'b65'

Hi Lin,
My complete layout follows:

c0d0s0: boot slice (holds a manually maintained /boot) -- UFS
c0d0s1: the usual swap slice
c0d0s3: S10U3 root -- UFS
c0d0s4: SXCE root -- UFS
c0d0s5: "snv" pool -- ZFS latest version
c0d0s6: an experimental "boot from sources the hard way" slice -- UFS
c0d0s7: "common" pool, mounted on /export -- ZFS older version

My menu.lst entries all use "root" to direct grub to one of the slices:
c0d0s3 --> (hd0,0,d),
c0d0s4 --> (hd0,0,e),
c0d0s5 --> (hd0,0,f),
c0d0s6 --> (hd0,0,g).

> installgrub on c0d0s0 puts grub on the same disk as c0d0s5, but <c0d0sn>
> indicates which slice is the default boot slice.
> So, once you default boot from c0d0s5, you should not need
> 'root (hd0,0,f)' in your menu.lst entry which could confuse the mapping.
> (I'll try to make the doc a little bit more clear.)

Yes, that was a mistake of mine. I just copied the menu.lst from c0d0s0 to c0d0s5 as I tried to make that the grub partition. However, changing that didn't work, just as it didn't work to reformat c0d0s0 as a ZFS pool. I don't quite understand what is going on when I boot, but I observe that whenever the grub slice is ZFS the menu is not shown, as if somehow the video drivers (or BIOS routines, whatever) couldn't be accessed by GRUB on ZFS, but could on UFS. Could there be any correlation between video access and the filesystem type?

> That said, your original menu.lst does look fine (assuming you did copy
> the new grub into c0d0s0).
>
> Play around a little bit more and send us more detail information.
> e.g. menu.lst entries from the rootpool or ufs root, where is grub
> installed, have you copied the devices dir to the zfs root
> filesystem (step 4), is snv/b65 good...etc.

This is another issue.
I have followed the /devices and /dev creation steps in the ZFS boot manual, but it doesn't work. Couldn't I create a clean /devices and /dev instead? Does a reconfiguration boot work when /devices and /dev are empty?

I will post the complete menu.lst grub entries, but not right now, as I don't have the notebook with me.

One additional comment: I have also tried installgrub -m, i.e. onto the master boot record. That also didn't work, but I am wondering whether it would make any difference to wipe the hard disk completely and start again from scratch. Do you think there could be some older GRUB stuff hidden somewhere on the disk that I didn't upgrade? Consider that the first installation on the disk (when I partitioned it the way it is today) was of S10U3, which used an older grub.

-- Doug
Hi Doug,

I need more information. You need /devices and /dev on the zfs root to boot. Not sure what you mean by 'it doesn't work'? What OS version is running on your boot slice (s0)? Is this where your zfs root pool (s5) was built from?

'installgrub new-stage1 new-stage2 /dev/rdsk/c0d0s0' puts the new grub on c0d0 (with the boot device pointing to s0), and that should be sufficient for your case. '-m' overrides the MBR; you don't need to do it. Not sure what the real impact is, but it might be ok in your case.

Lin

Douglas Atique wrote:
>> Hi Doug, from the information I read so far, I assume you have
>>
>> c0d0s0 - ufs root
>> c0d0s5 - zfs root pool 'snv' and root filesystem 'b65'
>
> Hi Lin,
> My complete layout follows:
> c0d0s0: boot slice (holds a manually maintained /boot) -- UFS
> c0d0s1: the usual swap slice
> c0d0s3: S10U3 root -- UFS
> c0d0s4: SXCE root -- UFS
> c0d0s5: "snv" pool -- ZFS latest version
> c0d0s6: an experimental "boot from sources the hard way" slice -- UFS
> c0d0s7: "common" pool, mounted on /export -- ZFS older version
>
> My menu.lst entries all use "root" to direct grub to one of the slices:
> c0d0s3 --> (hd0,0,d),
> c0d0s4 --> (hd0,0,e),
> c0d0s5 --> (hd0,0,f),
> c0d0s6 --> (hd0,0,g).
>
>> installgrub on c0d0s0 puts grub on the same disk as c0d0s5, but <c0d0sn>
>> indicates which slice is the default boot slice.
>> So, once you default boot from c0d0s5, you should not need
>> 'root (hd0,0,f)' in your menu.lst entry which could confuse the mapping.
>> (I'll try to make the doc a little bit more clear.)
>
> Yes, that was a mistake of mine. I just copied the menu.lst from c0d0s0 to c0d0s5 as I tried to make that the grub partition. However, changing that didn't work, just as it didn't work to reformat c0d0s0 as a ZFS pool.
> I don't quite understand what is going on when I boot, but I observe that whenever the grub slice is ZFS the menu is not shown, as if somehow the video drivers (or BIOS routines, whatever) couldn't be accessed by GRUB on ZFS, but could on UFS. Could there be any correlation between video access and the filesystem type?
>
>> That said, your original menu.lst does look fine (assuming you did copy
>> the new grub into c0d0s0).
>>
>> Play around a little bit more and send us more detail information.
>> e.g. menu.lst entries from the rootpool or ufs root, where is grub
>> installed, have you copied the devices dir to the zfs root
>> filesystem (step 4), is snv/b65 good...etc.
>
> This is another issue. I have followed the /devices and /dev creation steps in the ZFS boot manual, but it doesn't work. Couldn't I create a clean /devices and /dev instead? Does a reconfiguration boot work when /devices and /dev are empty?
>
> I will post the complete menu.lst grub entries, but not right now, as I don't have the notebook with me.
>
> One additional comment: I have also tried installgrub -m, i.e. onto the master boot record. That also didn't work, but I am wondering whether it would make any difference to wipe the hard disk completely and start again from scratch. Do you think there could be some older GRUB stuff hidden somewhere on the disk that I didn't upgrade? Consider that the first installation on the disk (when I partitioned it the way it is today) was of S10U3, which used an older grub.
>
> -- Doug
Douglas Atique
2007-Jun-12 13:10 UTC
[zfs-discuss] Re: Re: Re: ZFS Boot manual setup in b65
> Hi Doug,
>
> I need more information:
> You need /devices and /dev on zfs root to boot.

Right. But can I generate them automatically somehow on the next boot? I have followed the instructions that loop-mount / and tar the contents of /devices and /dev and untar them to the root pool. I just want to know if there is an alternative way to do it. For example, what if I add some new hardware after I have switched to ZFS for my root fs? How will it be added to /devices and /dev? Couldn't the same principle be applied to generate all of these directories on the first boot off the ZFS root pool?

> Not sure what you mean by 'it doesn't work'?

Sorry, I didn't make myself clear. When grub is installed on a ZFS pool (no matter if I do installgrub to my c0d0s5 or if I create a ZFS pool on my c0d0s0 and do installgrub to it) the GRUB menu is not shown. Ever. Just installing GRUB back onto a UFS slice makes it work again. This "doesn't work" refers to the difficulties with installing GRUB on a ZFS pool and having it work successfully. Apparently the only trouble is with the video, as the menu is not shown, but the options remain functional (if I remember the order by heart, that is). This is a different problem than my panic on boot off the ZFS pool in c0d0s5, though.

> What OS version is running on your boot slice (s0)?
> Is this where your zfs root pool (s5) was built from?

None. s0 is a 256MB slice that contains only /boot and to which I did installgrub. I did that to be able to freely modify the other slices to experiment with several versions of Solaris without impacting the other installed versions. I wanted to avoid deleting one of the root slices and having grub go down with it, leaving me without a bootable disk. With every new update to SXCE, I overwrite the fs in c0d0s0 with a completely new /boot from the new installation, and redo installgrub with the new stage1 and stage2 I just copied.
Boot goes like this: GRUB uses the menu.lst in c0d0s0 to present the options, and menu.lst has one root entry for each Solaris version installed, pointing to its respective slice. It works for S10U3 in c0d0s3 and for SXCE b65 in c0d0s4 (UFS), but SXCE b65 in c0d0s5 (ZFS) doesn't.

> 'installgrub new-stage1 new-stage2 /dev/rdsk/c0d0s0'
> puts the new grub on c0d0
> (with the boot device pointing to s0) and that should be sufficient for your case.
> '-m' overrides the MBR; you don't need to do it. Not sure what the real
> impact is, but it might be ok in your case.

It apparently is, as now my MBR has a GRUB stage1 on it and it boots ok, but only because I moved c0d0s0 back to UFS. I was just testing to see if it would make any difference to the GRUB menu display problem.

> Lin
Douglas Atique wrote:
> Right. But can I generate them automatically somehow on the next boot? I have followed the instructions that loop-mount / and tar the contents of /devices and /dev and untar them to the root pool. I just want to know if there is an alternative way to do it. For example, what if I add some new hardware after I have switched to ZFS for my root fs? How will it be added to /devices and /dev? Couldn't the same principle be applied to generate all of these directories on the first boot off the ZFS root pool?

Once you switch over to zfs root, adding new hardware should just behave as you expect on ufs root. Copying /devices and /dev is just a one-time thing (as part of 'installation') to set up the initial zfs root.

> Sorry, I didn't make myself clear. When grub is installed on a ZFS pool (no matter if I do installgrub to my c0d0s5 or if I create a ZFS pool on my c0d0s0 and do installgrub to it) the GRUB menu is not shown. Ever. Just installing GRUB back onto a UFS slice makes it work again. This "doesn't work" refers to the difficulties with installing GRUB on a ZFS pool and having it work successfully. Apparently the only trouble is with the video, as the menu is not shown, but the options remain functional (if I remember the order by heart, that is). This is a different problem than my panic on boot off the ZFS pool in c0d0s5, though.

'installgrub' will always put grub at the same disk location (the first 3 cylinders), not on a ZFS pool. You only need installgrub when you want new grub bits on the disk. Since you are using s0 as the default menu.lst location, you should always installgrub for s0:

# installgrub new-stage1 new-stage2 /dev/rdsk/c0d0s0

At this point, my suggestion would be:

1. boot up s4 (SXCE b65)
2. destroy the ZFS rootpool/rootfs on s5
3. use Tim's script to set up the ZFS rootpool/rootfs on s5 from SXCE b65 (s4)

If this works, switch to using your s0 for menu control:

1. boot up ZFS root
2.
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0d0s0

3. on the s0 menu.lst, add the ZFS entry

If this works, then you can start to play around.

Lin
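The one-time /devices and /dev copy Lin refers to (step 4 of the manual) might look roughly like this, assuming the ZFS root filesystem is mounted at /zfsroot (an illustrative path, not one from the manual) and using cpio in pass mode, which preserves device nodes when run as root:

```shell
# One-time copy of the live device trees into the ZFS root at setup time
cd /devices && find . -print | cpio -pdm /zfsroot/devices
cd /dev     && find . -print | cpio -pdm /zfsroot/dev
```

After this, the device trees on the ZFS root are maintained by the running system as usual; the copy is never repeated.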
Douglas Atique
2007-Jun-13 11:18 UTC
[zfs-discuss] Re: Re: Re: Re: ZFS Boot manual setup in b65
> Once you switch over to zfs root, adding new hardware should just behave
> as you expect on ufs root.
> Copying /devices and /dev is just a one-time thing (as part of
> 'installation') to set up the initial zfs root.

Ok, but what about the first boot? Why can't /devices and /dev be generated automatically by devfsadm? I am trying to understand what happens on this first boot, as I have the sensation that something special takes place then and might be the subtle reason why my manual procedure doesn't work while Tim's script does.

> 'installgrub' will always put grub at the same disk location (the first 3
> cylinders), not on a ZFS pool.
> You only need installgrub when you want new grub bits on the disk.
> Since you are using s0 as the default menu.lst location, you should
> always installgrub for s0:
>
> # installgrub new-stage1 new-stage2 /dev/rdsk/c0d0s0

So, there is some data hidden somewhere that tells grub to go read menu.lst in s0. How can I learn more about this?

> At this point, my suggestion would be:
>
> 1. boot up s4 (SXCE b65)
> 2. destroy the ZFS rootpool/rootfs on s5
> 3. use Tim's script to set up the ZFS rootpool/rootfs on s5 from SXCE b65 (s4)

It works, except for 2 warnings on every boot saying that /etc/dfs/sharetab was not found and that /dev/random is not set up.

> If this works, switch to using your s0 for menu control:
> 1. boot up ZFS root
> 2. # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0d0s0
> 3. on the s0 menu.lst, add the ZFS entry
>
> If this works, then you can start to play around.

It works. But I am not happy, because I couldn't understand what the difference is between me typing the commands and the script running them. Could there be some side effect of the bootadm update-archive depending on when I run it? On the other hand, for all practical purposes, it is working, so thanks.

-- Doug
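On Doug's question about how grub knows to read menu.lst from s0: the location of the boot slice is embedded in the stage2 image that installgrub writes, so the stage2 installed for s0 looks in s0's filesystem. On a running Solaris x86 system, the active menu can be inspected with bootadm (shown here as a sketch; the exact output varies by release):

```shell
# Show the menu.lst entries that the installed grub is using,
# including which entry is the default
bootadm list-menu
```

This is read-only and safe to run while investigating which menu.lst actually controls the boot.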
Douglas Atique
2007-Jun-13 14:33 UTC
[zfs-discuss] Re: Re: Re: Re: ZFS Boot manual setup in b65
Hi Lin,
A few moments after replying to your post, I had an idea. I had tweaked almost every part of the script, but I couldn't figure out what the difference was between the script and the manual execution. The difference is (as I found later) that when I created the ZFS root fs by hand, I was always zealous enough to "zpool export" my pool before rebooting. And this is the cause of the panic I am getting! I tried booting to the UFS root fs and did a zpool import, followed by another reboot, and it worked...

Now that I know *what*, could you perhaps explain to me *why*? I understood zpool import and export operations much like mount and unmount: maybe some checks on the integrity of the pool and updates to some structure in the OS to maintain the imported/exported state of that pool. But now I suspect this state information is in fact maintained in the pool itself. Does this make sense?

In that case, may I suggest that you add a note to the manual (http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/) stating that the pool should not be exported prior to booting off it?

Thanks for your help!

-- Doug
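In other words, the recovery Doug describes amounts to re-importing the root pool from a still-bootable UFS environment before attempting the ZFS boot again (pool name as used throughout this thread):

```shell
# From the UFS-rooted environment: bring the exported pool back online,
# which marks it as in-use on this system, then try the ZFS boot entry
zpool import snv
reboot
```

The key point is that export/import state lives in the pool's own on-disk labels, so an exported root pool stays invisible across reboots until something imports it again.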
Douglas Atique wrote:
> Now that I know *what*, could you perhaps explain to me *why*? I understood zpool import and export operations much like mount and unmount: maybe some checks on the integrity of the pool and updates to some structure in the OS to maintain the imported/exported state of that pool. But now I suspect this state information is in fact maintained in the pool itself. Does this make sense?

In short, once a pool is exported, it's not available/visible for live usage, even after reboot.

> In that case, may I suggest that you add a note to the manual (http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/) stating that the pool should not be exported prior to booting off it?

Done.

> Thanks for your help!

You are welcome.

Lin
>> Now that I know *what*, could you perhaps explain to me *why*? I understood zpool import and export operations much like mount and unmount: maybe some checks on the integrity of the pool and updates to some structure in the OS to maintain the imported/exported state of that pool. But now I suspect this state information is in fact maintained in the pool itself. Does this make sense?
>
> In short, once a pool is exported, it's not available/visible for live
> usage, even after reboot.

Do you think this panic when the root pool is not visible is a bug? Should I file one?

>> In that case, may I suggest that you add a note to the manual
>> (http://www.opensolaris.org/os/community/zfs/boot/zfsboot-manual/) stating that the pool should not be
>> exported prior to booting off it?
>
> Done.

Perhaps you could include a link to this discussion thread, too (for those who want more information).

-- Doug
On Thu, Jun 14, 2007 at 05:17:36AM -0700, Douglas Atique wrote:
>
> Do you think this panic when the root pool is not visible is a bug?
> Should I file one?

No. There is nothing else the OS can do when it cannot mount the root filesystem. That being said, it should have a nicer message (using FMA-style knowledge articles) that tells you what's actually going wrong. There is already a bug filed against this failure mode.

- Eric

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
> No. There is nothing else the OS can do when it cannot mount the root
> filesystem.

I have the impression (didn't check, though) that the pool is made available by just setting some information in its main superblock or something alike (sorry for the imprecision in ZFS jargon). I understand the OS knows which pool/fs it wants to mount onto /. It also knows that the root filesystem is ZFS, so it could in theory be able to import the pool at boot, I suppose. So I wonder if the OS could prompt the user on the console to import the pool, or even use some (additional) boot options to instruct it to either import the pool without asking or reboot without panicking (e.g. -B auto-import-exported-root-pool=true/false). I guess this would be an RFE rather than a bug. Any thoughts on it?

> That being said, it should have a nicer message (using
> FMA-style knowledge articles) that tells you what's actually going
> wrong. There is already a bug filed against this failure mode.

Off-topic question, but I cannot resist. What are "FMA-style knowledge articles"?

-- Douglas
On Fri, Jun 15, 2007 at 04:37:06AM -0700, Douglas Atique wrote:
>
> I have the impression (didn't check, though) that the pool is made
> available by just setting some information in its main superblock or
> something alike (sorry for the imprecision in ZFS jargon). I
> understand the OS knows which pool/fs it wants to mount onto /. It
> also knows that the root filesystem is ZFS, so it could in theory be
> able to import the pool at boot, I suppose. So I wonder if the OS
> could prompt the user on the console to import the pool or even use
> some (additional) boot options to instruct it to either import the
> pool without asking or reboot without panicking (e.g. -B
> auto-import-exported-root-pool=true/false). I guess this would be an
> RFE rather than a bug. Any thoughts on it?

Sure, that would seem possible. Keep in mind that the boot environment is extremely limited when dealing with devices. For example, I don't know if it's possible for a grub plugin to search all attached devices, which would be necessary for pool import.

> Off-topic question, but I cannot resist. What are "FMA-style knowledge
> articles"?

See the Fault Management community:

http://www.opensolaris.org/os/community/fm/

as well as the event registry:

http://www.opensolaris.org/os/project/events-registry/

- Eric

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock