Hi,

I have a triple-boot amd64 Linux/FreeBSD/OpenSolaris box used for Q/A. It is in a data center where I don't have easy physical access to the machine. It was working fine for months; now I see this at boot time on the serial console:

SunOS Release 5.11 Version snv_86 64-bit
Copyright 1983-2008 Sun Microsystems, Inc.  All rights reserved.
Use is subject to license terms.
NOTICE: mount: not a UFS magic number (0x0)

panic[cpu0]/thread=fffffffffbc245a0: cannot mount root path /ramdisk:a

fffffffffbc446d0 genunix:rootconf+113 ()
fffffffffbc44720 genunix:vfs_mountroot+65 ()
fffffffffbc44750 genunix:main+d8 ()
fffffffffbc44760 unix:_locore_start+92 ()

I suspect the problem was caused when, under Linux, somebody foolishly exported and then imported the Solaris root pool using the Linux FUSE ZFS tools, so they could pull data off the Solaris side without a reboot. I guess that must have done "something" to the pool so that Solaris no longer likes it. The Linux ZFS tools list the history of the zpool as:

History for 'rpool':
2008-05-06.08:39:33 zpool create -f rpool_tmp c5t0d0s0
2008-05-06.08:39:33 zfs create rpool_tmp/ROOT
2008-05-06.08:39:33 zfs set compression=off rpool_tmp/ROOT
2008-05-06.08:39:35 zfs set mountpoint=/a/export rpool_tmp/export
2008-05-06.08:39:35 zfs set mountpoint=/a/export/home rpool_tmp/export/home
2008-05-06.08:51:28 zpool set bootfs=rpool_tmp/ROOT/opensolaris rpool_tmp
2008-05-06.08:51:29 zfs set mountpoint=/export/home rpool_tmp/export/home
2008-05-06.08:51:29 zfs set mountpoint=/export rpool_tmp/export
2008-05-06.08:51:31 zpool export -f rpool_tmp
2008-05-06.08:51:38 zpool import -f 2344082471458403555 rpool
2008-05-06.08:51:59 zpool set bootfs=rpool/ROOT/opensolaris rpool
2008-05-06.08:52:20 zfs snapshot -r rpool@install
2008-09-07.12:22:55 zpool import -ocachefile=/etc/zfs-cachefile -d /tmp/dev/ -f rpool
2008-09-07.12:26:00 zpool export rpool
2008-09-07.12:34:58 zpool import -d /tmp/dev rpool
2008-09-07.09:59:40 zpool import -f rpool
2008-09-07.17:20:56 zpool import -d /var/tmp/dev -f rpool
2008-09-07.17:21:43 zpool export rpool
2008-09-07.17:27:35 zpool import -d /var/tmp/dev/ rpool
2008-09-07.17:32:10 zpool export rpool
2008-09-07.17:32:23 zpool import -d /var/tmp/dev/ rpool
2008-09-07.17:32:40 zpool export rpool
2008-09-07.10:41:13 zpool import rpool
2008-09-07.11:42:09 zpool export rpool
2008-09-07.11:42:24 zpool import rpool
2008-09-07.11:45:26 zpool export rpool
2008-09-07.18:52:35 zpool import -d /var/tmp/dev rpool

The entries from 2008-09-07 were operations using the Linux tools; the prior ones are from the Solaris installation.

Is there any possible way to rescue the Solaris installation remotely, using the Linux install or via grub or kmdb from the serial console? How?

Alternatively, would it be possible to rescue the installation by moving the disk to an OpenSolaris machine (b95) and doing something (what?), or by booting via the Indiana installation CD (what?)?

Thanks,

Drew
--
This message posted from opensolaris.org
Reboot to the grub menu.
Move to the failsafe kernel entry.
Tap "e" to edit the entry.
Go to the kernel line and tap "e" again.
Append -kv to the end of the line.
Accept, and tap "b" to boot the line.

After some output you will be prompted to mount the root pool on /a - enter y to accept. You will then get a shell prompt. Reboot, and all should be fine.

I actually need to ask a question: did the person who imported the pool under Linux use the old (circa Feb 2008) zfs-fuse, or the new one (Sept 2008)? If the latter, then it is possible that they also did a zpool upgrade, and now your Solaris no longer understands the ZFS on-disk format. If this is the case, upgrade Solaris (boot from new media, go to the text-mode installer, and select "upgrade" when prompted for the install type). This will update Solaris to understand the ZFS version. I think the older zfs-fuse supported ZFS version 8 or 9; the new one supports version 12 or 13.

On Wed, Oct 22, 2008 at 5:15 PM, Andrew Gallatin <gallatin@myri.com> wrote:
> Hi,
>
> I have a triple-boot amd64 Linux/FreeBSD/OpenSolaris box used for Q/A. It
> is in a data center where I don't have easy physical access to the machine.
> It was working fine for months, now I see this at boot time on the serial
> console:
>
> SunOS Release 5.11 Version snv_86 64-bit
> [...]
> NOTICE: mount: not a UFS magic number (0x0)
>
> panic[cpu0]/thread=fffffffffbc245a0: cannot mount root path /ramdisk:a
> [...]

--
Any sufficiently advanced technology is indistinguishable from magic.
   Arthur C. Clarke

Afrikaanse Stap Website: http://www.bloukous.co.za
My blog: http://initialprogramload.blogspot.com
ICQ = 193944626, YahooIM = johan_hartzenberg, GoogleTalk jhartzen@gmail.com, AIM = JohanHartzenberg
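Johan's version concern can be checked with commands that exist in both the Solaris and zfs-fuse tools. A sketch, shown as a dry run (the echoed commands must be run against the real pool to mean anything); `rpool` is the pool name from this thread:

```shell
# Dry-run sketch: compare what the installed tools support against the
# pool's current on-disk version.  Remove the echoes to run for real.
pool=rpool   # pool name from this thread

# "zpool upgrade" with no arguments prints the version the tools support
# and lists any pools still at an older version; "zpool get version"
# shows the pool's own on-disk version.
echo "zpool upgrade"
echo "zpool get version $pool"
```

If the pool's version is higher than what the Solaris side's `zpool upgrade` reports as supported, the pool has been upgraded past that Solaris release, as Johan suspects.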
Johan Hartzenberg wrote:
> Reboot to the grub menu
> Move to the failsafe kernel entry

Ugh. This is OpenSolaris (Indiana), and there *is* no failsafe as far as I can tell. There is one grub entry for Solaris:

#---------- ADDED BY BOOTADM - DO NOT EDIT ----------
title OpenSolaris 2008.05 snv_86_rc2a X86
bootfs rpool/ROOT/opensolaris
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive
#---------------------END BOOTADM--------------------

I'm sitting in grub now, trying to figure out what to do.

From an S10u5 box I have, it looks like there should be a file /boot/x86.miniroot-safe, but that file does not exist on the OpenSolaris box.

> Did the person who imported the pool under Linux use the old (circa Feb
> 2008) zfs-fuse, or the new one (Sept 2008)?

I think I'm safe on this front. zpool upgrade says it's running version 10, which is what the identical (working) machine also says.

I think I'm just going to give up and re-install. Sigh.

Drew
--
This message posted from opensolaris.org
On 10/22/08 09:02 AM, Andrew Gallatin wrote:
> Johan Hartzenberg wrote:
>> Reboot to the grub menu
>> Move to the failsafe kernel entry
>
> Ugh. This is OpenSolaris (Indiana), and there *is* no failsafe
> as far as I can tell. There is one grub entry for Solaris:
> [...]
> I'm sitting in grub now, trying to figure out what to do.

Simple, the equivalent of failsafe for OpenSolaris is to boot the live CD, then manually mount your disk drive. Sort of like using Knoppix to repair a Linux install, or WinPE to repair the mistake of installing Windows...
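A hedged sketch of what the live-CD repair Neal describes might look like, assuming the pool is `rpool` and the boot environment is `rpool/ROOT/opensolaris` (both taken from the grub entry quoted earlier in the thread). Shown as a dry run; remove the echoes and run as root from the live CD:

```shell
# Dry-run sketch of a live-CD rescue.  The key step for a pool that was
# last touched by another OS is a clean import/export from Solaris, so
# the pool's "last system" labels are consistent again.
pool=rpool
be=rpool/ROOT/opensolaris   # boot environment, from the grub entry

echo "zpool import -f -R /a $pool"     # import with altroot /a
echo "zfs mount $be"                   # mount the root dataset under /a
echo "bootadm update-archive -R /a"    # rebuild the boot archive
echo "zpool export $pool"              # clean export before rebooting
echo "reboot"
```

Whether rebuilding the boot archive is actually needed depends on what broke; the clean export alone fixes the common "pool was in use by another system" complaint.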
Neal Pollack wrote:
> Simple, the equiv of failsafe for OpenSolaris is to boot the live-cd,
> then manually mount your disk drive.

Yuck. The lack of a failsafe is a *huge* step backwards, considering how fragile the ZFS root seems to be. The idea of having to have somebody on-site at a datacenter with a CD to rescue a machine, simply because another OS mounted the root directory, is insane. I hope S10 doesn't go this route.

> Sort of like using Knoppix to repair a linux install,
> or WinPE to repair the mistake of installing windows...

I haven't needed to rescue a Linux installation with a CD since most distros moved to grub, five or so years ago.

Drew
--
This message posted from opensolaris.org
For what it is worth, I ended up using Linux to dd the Solaris partition from an identical machine.

I realize that ZFS is a huge step forward on a huge number of fronts, but the boot process has got to improve, or else it should not be offered as a root filesystem. Even in the "bad old days" of SunOS 4, 20 years ago, you could boot single-user and fsck your way out of most problems. In my case, I think the kernel really should have been able to mount the ZFS filesystem read-only, as grub (and Linux) were both able to access it.

Drew
--
This message posted from opensolaris.org
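The dd-from-a-twin approach Drew describes could look something like the following sketch. The hostname and device names are hypothetical (the thread never gives them); both partitions must be the same size and the same slice layout, and the target must not be mounted. Shown as a dry run:

```shell
# Dry-run sketch of cloning the Solaris slice from an identical box.
# All names below are hypothetical placeholders, not from the thread.
src_host=good-box        # hypothetical hostname of the working twin
src_dev=/dev/sda3        # hypothetical Solaris partition on the source
dst_dev=/dev/sda3        # hypothetical Solaris partition on this box

# Stream the raw partition over ssh; bs=1M just keeps dd from crawling.
echo "ssh $src_host 'dd if=$src_dev bs=1M' | dd of=$dst_dev bs=1M"
```

Note that this clones the pool byte-for-byte, including its GUID and hostname, so the two disks should not later be imported on the same machine at the same time.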