Technically "bootfs ID" is a string which names the root dataset, typically "rpool/ROOT/solarisReleaseNameCode". This string can be passed to Solaris kernel as a parameter manually or by bootloader, otherwise a default current "bootfs" is read from the root pool''s attributes (not dataset attributes! - see "zpool get/set bootfs"). In your case it seems that the attribute points to an invalid name, and your root dataset may be named somehing else - just set the pool attribute. I don''t know of "bootfs ID numbers", but maybe that''s a concept in your company''s scripting and patching environment. It is also possible that device names changed (i.e. on x86 - when SATA HDD access mode in BIOS changed from IDE to AHCI) and the boot device name saved in eeprom or its GRUB emulator is no longer valid. But this has different error strings ;) Good luck, //Jim -- This message posted from opensolaris.org
So why is my system not coming up? I jumpstarted the system again, but it panics like before. How should I recover it and get it up?

The system was booted from the network into single-user mode, rpool was imported, and the following is the listing:

  # zpool list
  NAME    SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
  rpool    68G  4.08G  63.9G   5%  ONLINE  -
  # zfs list
  NAME                       USED  AVAIL  REFER  MOUNTPOINT
  rpool                     9.15G  57.8G    98K  /rpool
  rpool/ROOT                4.08G  57.8G    21K  /rpool/ROOT
  rpool/ROOT/zfsBE_patched  4.08G  57.8G  4.08G  /
  rpool/dump                3.01G  60.8G    16K  -
  rpool/swap                2.06G  59.9G    16K  -
  #

  Dataset mos [META], ID 0, cr_txg 4, 137K, 62 objects
  Dataset rpool/ROOT/zfsBE_patched [ZPL], ID 47, cr_txg 40, 4.08G, 110376 objects
  Dataset rpool/ROOT [ZPL], ID 39, cr_txg 32, 21.0K, 4 objects
  Dataset rpool/dump [ZVOL], ID 71, cr_txg 74, 16K, 2 objects
  Dataset rpool/swap [ZVOL], ID 65, cr_txg 71, 16K, 2 objects
  Dataset rpool [ZPL], ID 16, cr_txg 1, 98.0K, 10 objects

But when the system is rebooted it panics again. Is there any way to recover it? I have tried all the things I know.

  SunOS Release 5.10 Version Generic_142900-13 64-bit
  Copyright 1983-2010 Sun Microsystems, Inc.  All rights reserved.
  Use is subject to license terms.
  NOTICE: zfs_parse_bootfs: error 48
  Cannot mount root on rpool/47 fstype zfs

  panic[cpu0]/thread=180e000: vfs_mountroot: cannot mount root
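(An aside: the "Dataset ..., ID ..., cr_txg ..." listing above looks like zdb output; a listing in that format can be reproduced against an imported pool with something like the command below. Note that dataset ID 47, the number from the panic message, corresponds to rpool/ROOT/zfsBE_patched, i.e. the root filesystem itself.)

  # zdb -d rpool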
Hi Ketan,

What steps led up to this problem?

I believe the boot failure messages below are related to a mismatch between the pool version and the installed OS version.

If you're using the JumpStart installation method, then the root pool is re-created each time, I believe. Does it also install a patch that upgrades the pool version?

Thanks,

Cindy

On 05/11/11 13:27, Ketan wrote:
> So why is my system not coming up? I jumpstarted the system again, but it
> panics like before. How should I recover it and get it up? [...]
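One quick way to test this theory is to compare the pool's on-disk version with the highest version the installed kernel understands; run from an environment matching the installed OS, a sketch would be:

  # zpool get version rpool   (version on disk; 22 in this case)
  # zpool upgrade -v          (lists the versions this OS release supports)

If "zpool get version" reports a number higher than anything "zpool upgrade -v" prints, the installed kernel cannot use the pool. (Incidentally, errno 48 on Solaris is ENOTSUP, which would be consistent with such a mismatch.)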
Hello again. As I kind of explained earlier, and as your listings show, your actual root filesystem dataset is named "rpool/ROOT/zfsBE_patched". However, either the boot loader (eeprom parameters on SPARC) or, much more likely, the "rpool" ZFS-pool-level attribute "bootfs" contains a different string: "rpool/47". I see that 47 is your reported ZFS dataset ID, but the bootfs attribute should contain the dataset name, not a dataset ID.

While booted into single-user mode, did you try to set this pool attribute as I suggested? (I did not know your FS names from the first post.)

  # zpool set bootfs=rpool/ROOT/zfsBE_patched rpool

I have high hopes that this alone may fix your problem; otherwise we're into deep digging in your eeprom settings and other parts of the boot procedure ;)

HTH,
//Jim
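Put together, the whole single-user detour on SPARC might look something like the sketch below (the export before the reboot mirrors what worked in a similar recovery later in this thread; it is an assumption, not a verified requirement):

  ok boot net -s
  # zpool import -f rpool
  # zpool set bootfs=rpool/ROOT/zfsBE_patched rpool
  # zpool export rpool     (assumed safe here, since the miniroot is the running root)
  # reboot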
Hello Jim,

Thanks for the reply. The following is my output before setting the bootfs parameter:

  # zpool get all rpool
  NAME   PROPERTY       VALUE                     SOURCE
  rpool  size           68G                       -
  rpool  capacity       5%                        -
  rpool  altroot        -                         default
  rpool  health         ONLINE                    -
  rpool  guid           8812174757237060985       default
  rpool  version        22                        default
  rpool  bootfs         rpool/ROOT/zfsBE_patched  local
  rpool  delegation     on                        default
  rpool  autoreplace    off                       default
  rpool  cachefile      -                         default
  rpool  failmode       continue                  local
  rpool  listsnapshots  on                        default
  rpool  autoexpand     off                       default
  rpool  free           63.9G                     -
  rpool  allocated      4.08G                     -

I still ran the command, but it didn't help me and the system still panics:

  # zpool set bootfs=rpool/ROOT/zfsBE_patched rpool
  # zpool get all rpool
  [same output as above, bootfs unchanged]
  # init 6
  # The system is being restarted.
  syncing file systems... done
  rebooting...
  Resetting...

  POST Sequence 01 CPU Check
  POST Sequence 02 Banner
  LSB#00 (XSB#00-0): POST 2.14.0 (2010/05/13 13:27)
  POST Sequence 03 Fatal Check
  POST Sequence 04 CPU Register
  POST Sequence 05 STICK
  POST Sequence 06 MMU
  POST Sequence 07 Memory Initialize
  POST Sequence 08 Memory
  POST Sequence 09 Raw UE In Cache
  POST Sequence 0A Floating Point Unit
  POST Sequence 0B SC
  POST Sequence 0C Cacheable Instruction
  POST Sequence 0D Softint
  POST Sequence 0E CPU Cross Call
  POST Sequence 0F CMU-CH
  POST Sequence 10 PCI-CH
  POST Sequence 11 Master Device
  POST Sequence 12 DSCP
  POST Sequence 13 SC Check Before STICK Diag
  POST Sequence 14 STICK Stop
  POST Sequence 15 STICK Start
  POST Sequence 16 Error CPU Check
  POST Sequence 17 System Configuration
  POST Sequence 18 System Status Check
  POST Sequence 19 System Status Check After Sync
  POST Sequence 1A OpenBoot Start...
  POST Sequence Complete.

  ChassisSerialNumber BCF080207K

  Sun SPARC Enterprise M5000 Server, using Domain console
  Copyright 2010 Sun Microsystems, Inc.  All rights reserved.
  Copyright 2010 Sun Microsystems, Inc. and Fujitsu Limited.  All rights reserved.
  OpenBoot 4.24.14, 65536 MB memory installed, Serial #75515882.
  Ethernet address 0:14:4f:80:47:ea, Host ID: 848047ea.

  Rebooting with command: boot
  Boot device: rootmirror  File and args:

  SunOS Release 5.10 Version Generic_142900-13 64-bit
  Copyright 1983-2010 Sun Microsystems, Inc.  All rights reserved.
  Use is subject to license terms.
  NOTICE: zfs_parse_bootfs: error 48
  Cannot mount root on rpool/47 fstype zfs

  panic[cpu0]/thread=180e000: vfs_mountroot: cannot mount root

  000000000180b950 genunix:vfs_mountroot+358 (800, 200, 0, 18ef800, 1918000, 194dc00)
    %l0-3: 00000000010c2400 00000000010c22ec 00000000018f5178 00000000011f5400
    %l4-7: 00000000011f5400 0000000001950400 0000000000000600 0000000000000200
  000000000180ba10 genunix:main+9c (0, 180c000, 1892260, 1833358, 1839738, 1940800)
    %l0-3: 000000000180c000 000000000180c000 0000000070002000 0000000000000000
    %l4-7: 000000000189c800 0000000000000000 000000000180c000 0000000000000001

And the bootfs ID thing: I read it in the following opensolaris thread, where a user was getting the same error as mine:
http://opensolaris.org/jive/thread.jspa?messageID=315743

Can you please tell me some other way to get it right?
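One more detail from the console log above that may be worth checking: the machine boots from a device alias named "rootmirror". It could be confirmed that this alias still points at a disk belonging to rpool; a sketch of the inspection, at the OBP prompt and from a booted system respectively:

  ok printenv boot-device
  ok devalias rootmirror

  # eeprom boot-device   (same setting, read from a running or single-user system)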
Sorry, I guess I'm running out of reasonable ideas then.

One thing you can try (or perhaps already did) is installing Solaris not by JumpStart or WANBoot but from the original media (DVD or network install) to see if the problem persists. Maybe your flash image lacks some controller drivers, etc.? (I am not sure how that would be possible if you made the archive on another domain in the same/similar box, but perhaps some paths were marked for exclusion?)

Another idea would be to start the system with the mdb debugger (but then you'd need to know what to type or kick) and/or with higher verbosity ("boot -m verbose" from eeprom, or "reboot -- '-m verbose'" from single-user mode), just to maybe get some more insight into what fails.

//Jim
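At the OBP prompt, the two diagnostic boots mentioned above would look roughly like this (a sketch; what to actually inspect inside the debugger is a longer story):

  ok boot -m verbose   (verbose SMF milestone output during startup)
  ok boot kmdb         (load the kernel debugger at boot)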
Small world... I had never seen this problem before your post, and now I have hit it myself ;)

We had an outage on an SXCE snv_117 server today, with a data pool taking an unknown amount of time to import, so we decided to "zpool import -F" it. But that feature is lacking in build 117, so we imported the pool into an OpenSolaris 2010.03 dev (build 134) preview image, and all went smoothly (albeit slower than usual). We also imported the root pool to move the zpool.cache file out of the way, just in case, updated the boot archive, and exported the rpool before rebooting back into the installed OS.

Upon reboot we got an error very much like yours (unfortunately I can't copy-paste it; it scrolled out of the buffer too quickly, with the BIOS screen replacing it). I was afraid the rpool had somehow been upgraded to a version unknown to snv_117.

However, booting into the "failsafe mode" of the installed OS image succeeded on the first try. It found the rpool and the current bootfs and imported them with no problems. Then I just did "init 6" to leave failsafe mode, and after a reboot the system came back up with no hiccups.

HTH,
//Jim
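For the archives, the root-pool shuffle described above, performed from the newer live image, would look approximately like the sketch below (the /mnt altroot and the .bak name are illustrative, not a transcript of the actual session):

  # zpool import -f -R /mnt rpool                              (import under an alternate root)
  # mv /mnt/etc/zfs/zpool.cache /mnt/etc/zfs/zpool.cache.bak   (move the stale cache file aside)
  # bootadm update-archive -R /mnt                             (rebuild the boot archive in the BE)
  # zpool export rpool
  # reboot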