I am not sure what I did wrong, but I did follow all the steps to get my system moved from ufs to zfs and now I am unable to boot it... can anyone suggest what I could do to fix it?

Here are all my steps:

[00:26:38] @adas: /root > zpool create rootpool c1t1d0s0
[00:26:57] @adas: /root > lucreate -c ufsBE -n zfsBE -p rootpool
Analyzing system configuration.
Comparing source boot environment <ufsBE> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1t1d0s0> is not a root device for any boot environment;
cannot get BE ID.
Creating configuration for boot environment <zfsBE>.
Source boot environment is <ufsBE>.
Creating boot environment <zfsBE>.
Creating file systems on boot environment <zfsBE>.
Creating <zfs> file system for </> in zone <global> on <rootpool/ROOT/zfsBE>.
Populating file systems on boot environment <zfsBE>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Bus Error - core dumped
Creating shared file system mount points.
Creating compare databases for boot environment <zfsBE>.
Creating compare database for file system </var>.
Creating compare database for file system </usr>.
Creating compare database for file system </rootpool/ROOT>.
Creating compare database for file system </>.
Updating compare databases on boot environment <zfsBE>.
Making boot environment <zfsBE> bootable.
Population of boot environment <zfsBE> successful.
Creation of boot environment <zfsBE> successful.
[01:19:36] @adas: /root >
Nov 5 02:44:16 adas root: => com.sun.patchpro.util.CachingDownloader at c05d3b <=com.sun.cc.platform.clientsignature.CNSSignException: Error reading private key Nov 5 02:44:16 adas 7470:error:0906D06C:PEM routines:PEM_read_bio:no start line:/on10/build-nd/F10U6B7A/usr/src/common/openssl/crypto/pem/pem_lib.c:637:Expecting: ANY PRIVATE KEY Nov 5 02:44:16 adas at com.sun.cc.platform.clientsignature.CNSClientSignature.throwError(Unknown Source) Nov 5 02:44:16 adas at com.sun.cc.platform.clientsignature.CNSClientSignature.<init>(Unknown Source) Nov 5 02:44:16 adas at com.sun.cc.platform.clientsignature.CNSClientSignature.genSigString(Unknown Source) Nov 5 02:44:16 adas root: => com.sun.patchpro.util.CachingDownloader at c05d3b <= at com.sun.patchpro.util.Downloader.connectToURL(Downloader.java:430) Nov 5 02:44:16 adas at com.sun.patchpro.util.CachingDownloader.establishConnection(CachingDownloader.java:618) Nov 5 02:44:16 adas at com.sun.patchpro.util.CachingDownloader.setSourceURL(CachingDownloader.java:282) Nov 5 02:44:16 adas at com.sun.patchpro.util.CachingDownloader.setupCache(CachingDownloader.java:208) Nov 5 02:44:16 adas at com.sun.patchpro.util.CachingDownloader.<init>(CachingDownloader.java:187) Nov 5 02:44:16 adas root: => com.sun.patchpro.util.CachingDownloader at c05d3b <= at com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadFile(UnifiedServerPatchServiceProvider.java:1242) Nov 5 02:44:16 adas at com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadDatabaseFile(UnifiedServerPatchServiceProvider.java:928) Nov 5 02:44:16 adas at com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadPatchDB(UnifiedServerPatchServiceProvider.java:468) Nov 5 02:44:16 adas at com.sun.patchpro.server.PatchServerProxy.downloadPatchDB(PatchServerProxy.java:156) Nov 5 02:44:16 adas at
com.sun.patchpro.database.MemoryPatchDBBuilder.downloadPatchDBWithPOST(MemoryPatchDBBuilder.java:163) Nov 5 02:44:16 adas root: => com.sun.patchpro.util.CachingDownloader at c05d3b <= at com.sun.patchpro.database.MemoryPatchDBBuilder.downloadPatchDB(MemoryPatchDBBuilder.java:752) Nov 5 02:44:16 adas at com.sun.patchpro.database.MemoryPatchDBBuilder.buildDB(MemoryPatchDBBuilder.java:108) Nov 5 02:44:16 adas at com.sun.patchpro.database.MemoryPatchDBBuilder.buildDB(MemoryPatchDBBuilder.java:181) Nov 5 02:44:16 adas at com.sun.patchpro.database.GroupPatchDBBuilder.buildDB(GroupPatchDBBuilder.java:108) Nov 5 02:44:16 adas at com.sun.patchpro.model.PatchProModel.downloadPatchDB(PatchProModel.java:1849) Nov 5 02:44:16 adas root: => com.sun.patchpro.util.CachingDownloader at c05d3b <=null at com.sun.patchpro.model.PatchProStateMachine$5.run(PatchProStateMachine.java:277) Nov 5 02:44:16 adas at com.sun.patchpro.util.State.run(State.java:266) Nov 5 02:44:16 adas at java.lang.Thread.run(Thread.java:595) Nov 5 02:44:17 adas root: => com.sun.patchpro.util.CachingDownloader at 1901437 <=com.sun.cc.platform.clientsignature.CNSSignException: Error reading private key Nov 5 02:44:17 adas 7470:error:0906D06C:PEM routines:PEM_read_bio:no start line:/on10/build-nd/F10U6B7A/usr/src/common/openssl/crypto/pem/pem_lib.c:637:Expecting: ANY PRIVATE KEY Nov 5 02:44:17 adas at com.sun.cc.platform.clientsignature.CNSClientSignature.throwError(Unknown Source) Nov 5 02:44:17 adas at com.sun.cc.platform.clientsignature.CNSClientSignature.<init>(Unknown Source) Nov 5 02:44:17 adas at com.sun.cc.platform.clientsignature.CNSClientSignature.genSigString(Unknown Source) Nov 5 02:44:17 adas root: => com.sun.patchpro.util.CachingDownloader at 1901437 <= at com.sun.patchpro.util.Downloader.connectToURL(Downloader.java:430) Nov 5 02:44:17 adas at com.sun.patchpro.util.CachingDownloader.establishConnection(CachingDownloader.java:618) Nov 5 02:44:17 adas at com.sun.patchpro.util.CachingDownloader.setSourceURL(CachingDownloader.java:282) Nov 5 02:44:17 adas at com.sun.patchpro.util.CachingDownloader.setupCache(CachingDownloader.java:208) Nov 5 02:44:17 adas at com.sun.patchpro.util.CachingDownloader.<init>(CachingDownloader.java:187) Nov 5 02:44:17 adas root: => com.sun.patchpro.util.CachingDownloader at 1901437 <= at com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadFile(UnifiedServerPatchServiceProvider.java:1242) Nov 5 02:44:17 adas at com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadRealizationDetectors(UnifiedServerPatchServiceProvider.java:1029) Nov 5 02:44:17 adas at com.sun.patchpro.server.PatchServerProxy.downloadRealizationDetectors(PatchServerProxy.java:174) Nov 5 02:44:17 adas at com.sun.patchpro.analysis.HostAnalyzer$RealizationSetAuto.downloadDetectorsWithPOST(HostAnalyzer.java:1212) Nov 5 02:44:17 adas at com.sun.patchpro.analysis.HostAnalyzer$RealizationSetAuto.downloadDetectors(HostAnalyzer.java:1156) Nov 5 02:44:17 adas root: => com.sun.patchpro.util.CachingDownloader at 1901437 <= at com.sun.patchpro.analysis.HostAnalyzer$RealizationSetAuto.prepare(HostAnalyzer.java:875) Nov 5 02:44:17 adas at com.sun.patchpro.analysis.HostAnalyzer.downloadDetectors(HostAnalyzer.java:299) Nov 5 02:44:17 adas at com.sun.patchpro.model.PatchProModel.downloadDetectors(PatchProModel.java:1776) Nov 5 02:44:17 adas at com.sun.patchpro.model.PatchProStateMachine$4.run(PatchProStateMachine.java:245) Nov 5 02:44:17 adas at com.sun.patchpro.util.State.run(State.java:266) Nov 5 02:44:17 adas root: => 
com.sun.patchpro.util.CachingDownloader at 1901437 <=null at java.lang.Thread.run(Thread.java:595) Nov 5 02:44:27 adas root: => com.sun.patchpro.server.UnifiedServerPatchServiceProvider at c79809 <=UnifiedServerPatchServiceProvider.downloadFile: Unable to create cache downloader. Nov 5 02:44:27 adas root: => com.sun.patchpro.server.UnifiedServerPatchServiceProvider at c79809 <=java.io.IOException: Response code was 403 Nov 5 02:44:27 adas at com.sun.patchpro.util.CachingDownloader.setSourceURL(CachingDownloader.java:302) Nov 5 02:44:27 adas at com.sun.patchpro.util.CachingDownloader.setupCache(CachingDownloader.java:208) Nov 5 02:44:27 adas at com.sun.patchpro.util.CachingDownloader.<init>(CachingDownloader.java:187) Nov 5 02:44:27 adas at com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadFile(UnifiedServerPatchServiceProvider.java:1242) Nov 5 02:44:27 adas root: => com.sun.patchpro.server.UnifiedServerPatchServiceProvider at c79809 <= at com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadDatabaseFile(UnifiedServerPatchServiceProvider.java:928) Nov 5 02:44:27 adas at com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadPatchDB(UnifiedServerPatchServiceProvider.java:468) Nov 5 02:44:27 adas at com.sun.patchpro.server.PatchServerProxy.downloadPatchDB(PatchServerProxy.java:156) Nov 5 02:44:27 adas at com.sun.patchpro.database.MemoryPatchDBBuilder.downloadPatchDBWithPOST(MemoryPatchDBBuilder.java:163) Nov 5 02:44:27 adas at com.sun.patchpro.database.MemoryPatchDBBuilder.downloadPatchDB(MemoryPatchDBBuilder.java:752) Nov 5 02:44:27 adas root: => com.sun.patchpro.server.UnifiedServerPatchServiceProvider at c79809 <= at com.sun.patchpro.database.MemoryPatchDBBuilder.buildDB(MemoryPatchDBBuilder.java:108) Nov 5 02:44:27 adas at com.sun.patchpro.database.MemoryPatchDBBuilder.buildDB(MemoryPatchDBBuilder.java:181) Nov 5 02:44:27 adas at com.sun.patchpro.database.GroupPatchDBBuilder.buildDB(GroupPatchDBBuilder.java:108) Nov 5 02:44:27 adas at com.sun.patchpro.model.PatchProModel.downloadPatchDB(PatchProModel.java:1849) Nov 5 02:44:27 adas at com.sun.patchpro.model.PatchProStateMachine$5.run(PatchProStateMachine.java:277) Nov 5 02:44:27 adas root: => com.sun.patchpro.server.UnifiedServerPatchServiceProvider at c79809 <=null at com.sun.patchpro.util.State.run(State.java:266) Nov 5 02:44:27 adas at java.lang.Thread.run(Thread.java:595) Nov 5 02:44:27 adas root: => com.sun.patchpro.server.UnifiedServerPatchServiceProvider at 3bc473 <=UnifiedServerPatchServiceProvider.downloadFile: Unable to create cache downloader. 
Nov 5 02:44:28 adas root: => com.sun.patchpro.server.UnifiedServerPatchServiceProvider at 3bc473 <=java.io.IOException: Response code was 403 Nov 5 02:44:28 adas at com.sun.patchpro.util.CachingDownloader.setSourceURL(CachingDownloader.java:302) Nov 5 02:44:28 adas at com.sun.patchpro.util.CachingDownloader.setupCache(CachingDownloader.java:208) Nov 5 02:44:28 adas at com.sun.patchpro.util.CachingDownloader.<init>(CachingDownloader.java:187) Nov 5 02:44:28 adas at com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadFile(UnifiedServerPatchServiceProvider.java:1242) Nov 5 02:44:28 adas root: => com.sun.patchpro.server.UnifiedServerPatchServiceProvider at 3bc473 <= at com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadRealizationDetectors(UnifiedServerPatchServiceProvider.java:1029) Nov 5 02:44:28 adas at com.sun.patchpro.server.PatchServerProxy.downloadRealizationDetectors(PatchServerProxy.java:174) Nov 5 02:44:28 adas at com.sun.patchpro.analysis.HostAnalyzer$RealizationSetAuto.downloadDetectorsWithPOST(HostAnalyzer.java:1212) Nov 5 02:44:28 adas at com.sun.patchpro.analysis.HostAnalyzer$RealizationSetAuto.downloadDetectors(HostAnalyzer.java:1156) Nov 5 02:44:28 adas at com.sun.patchpro.analysis.HostAnalyzer$RealizationSetAuto.prepare(HostAnalyzer.java:875) Nov 5 02:44:28 adas root: => com.sun.patchpro.server.UnifiedServerPatchServiceProvider at 3bc473 <= at com.sun.patchpro.analysis.HostAnalyzer.downloadDetectors(HostAnalyzer.java:299) Nov 5 02:44:28 adas at com.sun.patchpro.model.PatchProModel.downloadDetectors(PatchProModel.java:1776) Nov 5 02:44:28 adas at com.sun.patchpro.model.PatchProStateMachine$4.run(PatchProStateMachine.java:245) Nov 5 02:44:28 adas at com.sun.patchpro.util.State.run(State.java:266) Nov 5 02:44:28 adas at java.lang.Thread.run(Thread.java:595) Nov 5 02:44:28 adas root: => com.sun.patchpro.cli.PatchServices at 1bcdbf6 <=com.sun.patchpro.model.PatchProException: Response code was 403 Nov 5 02:44:28 adas at com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadFile(UnifiedServerPatchServiceProvider.java:1267) Nov 5 02:44:28 adas at com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadDatabaseFile(UnifiedServerPatchServiceProvider.java:928) Nov 5 02:44:28 adas at com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadPatchDB(UnifiedServerPatchServiceProvider.java:468) Nov 5 02:44:28 adas at com.sun.patchpro.server.PatchServerProxy.downloadPatchDB(PatchServerProxy.java:156) Nov 5 02:44:28 adas root: => com.sun.patchpro.cli.PatchServices at 1bcdbf6 <= at com.sun.patchpro.database.MemoryPatchDBBuilder.downloadPatchDBWithPOST(MemoryPatchDBBuilder.java:163) Nov 5 02:44:28 adas at com.sun.patchpro.database.MemoryPatchDBBuilder.downloadPatchDB(MemoryPatchDBBuilder.java:752) Nov 5 02:44:28 adas at com.sun.patchpro.database.MemoryPatchDBBuilder.buildDB(MemoryPatchDBBuilder.java:108) Nov 5 02:44:28 adas at com.sun.patchpro.database.MemoryPatchDBBuilder.buildDB(MemoryPatchDBBuilder.java:181) Nov 5 02:44:28 adas at com.sun.patchpro.database.GroupPatchDBBuilder.buildDB(GroupPatchDBBuilder.java:108) Nov 5 02:44:28 adas root: => com.sun.patchpro.cli.PatchServices at 1bcdbf6 <= at com.sun.patchpro.model.PatchProModel.downloadPatchDB(PatchProModel.java:1849) Nov 5 02:44:28 adas at com.sun.patchpro.model.PatchProStateMachine$5.run(PatchProStateMachine.java:277) Nov 5 02:44:28 adas at com.sun.patchpro.util.State.run(State.java:266) Nov 5 02:44:28 adas at java.lang.Thread.run(Thread.java:595) Nov 5 02:44:28 adas Caused by: 
Nov 5 02:44:28 adas root: => com.sun.patchpro.cli.PatchServices at 1bcdbf6 <=java.io.IOException: Response code was 403 Nov 5 02:44:28 adas at com.sun.patchpro.util.CachingDownloader.setSourceURL(CachingDownloader.java:302) Nov 5 02:44:28 adas at com.sun.patchpro.util.CachingDownloader.setupCache(CachingDownloader.java:208) Nov 5 02:44:28 adas at com.sun.patchpro.util.CachingDownloader.<init>(CachingDownloader.java:187) Nov 5 02:44:28 adas at com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadFile(UnifiedServerPatchServiceProvider.java:1242) Nov 5 02:44:28 adas root: => com.sun.patchpro.cli.PatchServices at 1bcdbf6 <= at com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadDatabaseFile(UnifiedServerPatchServiceProvider.java:928) Nov 5 02:44:28 adas at com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadPatchDB(UnifiedServerPatchServiceProvider.java:468) Nov 5 02:44:28 adas at com.sun.patchpro.server.PatchServerProxy.downloadPatchDB(PatchServerProxy.java:156) Nov 5 02:44:28 adas at com.sun.patchpro.database.MemoryPatchDBBuilder.downloadPatchDBWithPOST(MemoryPatchDBBuilder.java:163) Nov 5 02:44:28 adas at com.sun.patchpro.database.MemoryPatchDBBuilder.downloadPatchDB(MemoryPatchDBBuilder.java:752) Nov 5 02:44:28 adas root: => com.sun.patchpro.cli.PatchServices at 1bcdbf6 <= at com.sun.patchpro.database.MemoryPatchDBBuilder.buildDB(MemoryPatchDBBuilder.java:108) Nov 5 02:44:28 adas at com.sun.patchpro.database.MemoryPatchDBBuilder.buildDB(MemoryPatchDBBuilder.java:181) Nov 5 02:44:28 adas at com.sun.patchpro.database.GroupPatchDBBuilder.buildDB(GroupPatchDBBuilder.java:108) Nov 5 02:44:28 adas at com.sun.patchpro.model.PatchProModel.downloadPatchDB(PatchProModel.java:1849) Nov 5 02:44:28 adas at com.sun.patchpro.model.PatchProStateMachine$5.run(PatchProStateMachine.java:277) Nov 5 02:44:28 adas root: => com.sun.patchpro.cli.PatchServices at 1bcdbf6 <=null at com.sun.patchpro.util.State.run(State.java:266) Nov 5 02:44:28 adas at java.lang.Thread.run(Thread.java:595)

[07:36:43] @adas: /root > lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
ufsBE                      yes      yes    yes       no     -
zfsBE                      yes      no     no        yes    -
[07:36:52] @adas: /root > luactivate zfsBE
A Live Upgrade Sync operation will be performed on startup of boot environment <zfsBE>.

**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Enter the PROM monitor (ok prompt).

2. Change the boot device back to the original boot environment by typing:

     setenv boot-device /pci@1c,600000/scsi@2/disk@0,0:a

3. Boot to the original boot environment by typing:

     boot

**********************************************************************

Activation of boot environment <zfsBE> successful.
[07:37:52] @adas: /root > init 0
[07:38:44] @adas: /root > stopping NetWorker daemons:
nsr_shutdown -q
svc.startd: The system is coming down. Please wait.
svc.startd: 89 system services are now being stopped.
Nov 5 07:39:39 adas syslogd: going down on signal 15
svc.startd: The system is down.
syncing file systems... done
Program terminated
{0} ok boot

SC Alert: Host System has Reset
Probing system devices
Probing memory
Probing I/O buses

Sun Fire V210, No Keyboard
Copyright 2007 Sun Microsystems, Inc. All rights reserved.
OpenBoot 4.22.33, 4096 MB memory installed, Serial #64938415.
Ethernet address 0:3:ba:de:e1:af, Host ID: 83dee1af.

Rebooting with command: boot
Boot device: /pci@1c,600000/scsi@2/disk@1,0:a  File and args:

Can't open boot_archive

Evaluating:
The file just loaded does not appear to be executable.
{1} ok boot disk2
Boot device: /pci@1c,600000/scsi@2/disk@2,0  File and args:
ERROR: /pci@1c,600000: Last Trap: Fast Data Access MMU Miss

{1} ok boot disk1
Boot device: /pci@1c,600000/scsi@2/disk@1,0  File and args:
ERROR: /pci@1c,600000: Last Trap: Fast Data Access MMU Miss

{1} ok setenv boot-device /pci@1c,600000/scsi@2/disk@0,0:a
boot-device = /pci@1c,600000/scsi@2/disk@0,0:a
{1} ok boot
Boot device: /pci@1c,600000/scsi@2/disk@0,0:a  File and args:
ERROR: /pci@1c,600000: Last Trap: Fast Data Access MMU Miss

{1} ok boot disk
Boot device: /pci@1c,600000/scsi@2/disk@0,0  File and args:
ERROR: /pci@1c,600000: Last Trap: Fast Data Access MMU Miss

{1} ok setenv boot-device /pci@1c,600000/scsi@2/disk@0,0:a
boot-device = /pci@1c,600000/scsi@2/disk@0,0:a
{1} ok boot
Boot device: /pci@1c,600000/scsi@2/disk@0,0:a  File and args:
ERROR: /pci@1c,600000: Last Trap: Fast Data Access MMU Miss

{1} ok
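For anyone who lands here with the same symptom: "Can't open boot_archive" usually means the new BE never got a usable boot archive, which would fit a copy stage that dumped core. Before (or after) activating a freshly built zfsBE it is worth a quick sanity check along these lines. This is only a sketch, assuming Solaris 10 10/08 on SPARC and the pool/BE names used above; adjust paths and device names to your own layout:

  # confirm the pool is healthy and that Live Upgrade set the boot filesystem
  zpool status rootpool
  zpool get bootfs rootpool          # should report rootpool/ROOT/zfsBE

  # mount the new BE and check that a boot archive actually exists
  lumount zfsBE /mnt
  ls -l /mnt/platform/sun4u/boot_archive
  bootadm update-archive -R /mnt     # rebuild it if it is missing or stale
  luumount zfsBE

If the new BE still refuses to boot, the fallback is the one luactivate printed above, from the ok prompt:

  setenv boot-device /pci@1c,600000/scsi@2/disk@0,0:a
  boot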
On 05 November, 2008 - Krzys sent me these 18K bytes:

> I am not sure what I did wrong, but I did follow all the steps to get my
> system moved from ufs to zfs and now I am unable to boot it... can anyone
> suggest what I could do to fix it?
>
> Here are all my steps:
>
> [00:26:38] @adas: /root > zpool create rootpool c1t1d0s0
> [00:26:57] @adas: /root > lucreate -c ufsBE -n zfsBE -p rootpool
> Analyzing system configuration.
> Comparing source boot environment <ufsBE> file systems with the file
> system(s) you specified for the new boot environment. Determining which
> file systems should be in the new boot environment.
> Updating boot environment description database on all BEs.
> Updating system configuration files.
> The device </dev/dsk/c1t1d0s0> is not a root device for any boot environment;
> cannot get BE ID.
> Creating configuration for boot environment <zfsBE>.
> Source boot environment is <ufsBE>.
> Creating boot environment <zfsBE>.
> Creating file systems on boot environment <zfsBE>.
> Creating <zfs> file system for </> in zone <global> on <rootpool/ROOT/zfsBE>.
> Populating file systems on boot environment <zfsBE>.
> Checking selection integrity.
> Integrity check OK.
> Populating contents of mount point </>.
> Copying.
> Bus Error - core dumped

This should have caught both your attention and lucreate's attention..

If the copying process core dumps, then I guess most bets are off..

> Creating shared file system mount points.
> Creating compare databases for boot environment <zfsBE>.
> Creating compare database for file system </var>.
> Creating compare database for file system </usr>.
> Creating compare database for file system </rootpool/ROOT>.
> Creating compare database for file system </>.
> Updating compare databases on boot environment <zfsBE>.
> Making boot environment <zfsBE> bootable.
> Population of boot environment <zfsBE> successful.
> Creation of boot environment <zfsBE> successful.

/Tomas
--
Tomas Ögren, stric@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
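Seconding this: once the Copying phase dumps core, the safest move is probably to find out what crashed, throw the half-populated BE away, and run the migration again. A rough sketch, using the pool/BE names from the original post (core file locations depend on your coreadm(1M) settings, so the paths below are just guesses):

  # identify the process that dumped core during the copy
  ls -l /core /var/core/* 2>/dev/null
  file /core                         # names the crashing binary
  pstack /core                       # stack trace of the faulting process

  # discard the partial BE and the pool, then retry the migration
  ludelete zfsBE
  zpool destroy rootpool
  zpool create rootpool c1t1d0s0
  lucreate -n zfsBE -p rootpool      # the current BE should already be named ufsBE at this point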
Yes, I did notice that error too, but when I did lustatus it showed the BE as OK, so I assumed it was safe to start from it. But even booting from the original disk caused problems and I was unable to boot my system... Anyway, I powered off the system for a few minutes, then started it up and it booted to the original disk without any problems; I just had to do a hard reset on the box for some reason.

On Wed, 5 Nov 2008, Tomas Ögren wrote:

> On 05 November, 2008 - Krzys sent me these 18K bytes:
>
>> [...]
>> Populating contents of mount point </>.
>> Copying.
>> Bus Error - core dumped
>
> This should have caught both your attention and lucreate's attention..
>
> If the copying process core dumps, then I guess most bets are off..
>
>> [...]
>
> /Tomas
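For what it's worth, lustatus only says the copy job ran to completion; it does not tell you whether the copied data is actually usable, so a BE whose population step dumped core deserves extra suspicion. Something like the following gives a better picture before activating (again just a sketch with the names from this thread; lucompare can take a long time on a large root):

  lufslist zfsBE                     # file systems that make up the new BE
  zfs list -r rootpool               # does rootpool/ROOT/zfsBE hold a plausible amount of data?
  lucompare zfsBE                    # compare its contents against the running BE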
Enda O'Connor
2008-Nov-05 13:35 UTC
[zfs-discuss] migrating ufs to zfs - cant boot system
On 11/05/08 13:02, Krzys wrote:
> I am not sure what I did wrong, but I did follow all the steps to get my
> system moved from ufs to zfs and now I am unable to boot it... can anyone
> suggest what I could do to fix it?
>
> Here are all my steps:
>
> [00:26:38] @adas: /root > zpool create rootpool c1t1d0s0
> [00:26:57] @adas: /root > lucreate -c ufsBE -n zfsBE -p rootpool
> [...]
> Populating contents of mount point </>.
> Copying.
> Bus Error - core dumped

hmm above might be relevant I'd guess.

What release are you on, ie is this Solaris 10, or is this Nevada build?

Enda

> Creating shared file system mount points.
> [...]
> Population of boot environment <zfsBE> successful.
> Creation of boot environment <zfsBE> successful.
> [...]

--
Enda O'Connor x19781 Software Product Engineering
Patch System Test : Ireland : x19781/353-1-8199718
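In case it helps whoever picks this up: the details usually needed to answer Enda's question are the release string and the Live Upgrade patch level. A quick way to gather them; treating 121430 as the SPARC Live Upgrade patch ID is my assumption here, so double-check it:

  cat /etc/release                   # identifies the update, e.g. Solaris 10 10/08 for U6
  uname -a
  showrev -p | grep 121430           # assumed SPARC Live Upgrade patch; verify the ID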
Sorry, it's Solaris 10 U6, not Nevada. I just upgraded to U6 and was hoping I could take advantage of the ZFS boot mirroring.

On Wed, 5 Nov 2008, Enda O'Connor wrote:

> On 11/05/08 13:02, Krzys wrote:
>> I am not sure what I did wrong, but I did follow all the steps to get my
>> system moved from ufs to zfs and now I am unable to boot it... can anyone
>> suggest what I could do to fix it?
>>
>> [...]
>>
>> Populating contents of mount point </>.
>> Copying.
>> Bus Error - core dumped
> hmm above might be relevant I'd guess.
>
> What release are you on, ie is this Solaris 10, or is this Nevada build?
>
> Enda
>> [...]
>> [01:19:36] @adas: /root > >> Nov 5 02:44:16 adas root: => >> com.sun.patchpro.util.CachingDownloader at c05d3b >> <=com.sun.cc.platform.clientsignature.CNSSignException: Error reading >> private key >> Nov 5 02:44:16 adas 7470:error:0906D06C:PEM routines:PEM_read_bio:no start >> line:/on10/build-nd/F10U6B7A/usr/src/common/openssl/crypto/pem/pem_lib.c:637:Expecting: >> ANY PRIVATE KEY >> Nov 5 02:44:16 adas at >> com.sun.cc.platform.clientsignature.CNSClientSignature.throwError(Unknown >> Source) >> Nov 5 02:44:16 adas at >> com.sun.cc.platform.clientsignature.CNSClientSignature.<init>(Unknown >> Source) >> Nov 5 02:44:16 adas at >> com.sun.cc.platform.clientsignature.CNSClientSignature.genSigString(Unknown >> Source) >> Nov 5 02:44:16 adas root: => >> com.sun.patchpro.util.CachingDownloader at c05d3b <= at >> com.sun.patchpro.util.Downloader.connectToURL(Downloader.java:430) >> Nov 5 02:44:16 adas at >> com.sun.patchpro.util.CachingDownloader.establishConnection(CachingDownloader.java:618) >> Nov 5 02:44:16 adas at >> com.sun.patchpro.util.CachingDownloader.setSourceURL(CachingDownloader.java:282) >> Nov 5 02:44:16 adas at >> com.sun.patchpro.util.CachingDownloader.setupCache(CachingDownloader.java:208) >> Nov 5 02:44:16 adas at >> com.sun.patchpro.util.CachingDownloader.<init>(CachingDownloader.java:187) >> Nov 5 02:44:16 adas root: => >> com.sun.patchpro.util.CachingDownloader at c05d3b <= at >> com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadFile(UnifiedServerPatchServiceProvider.java:1242) >> Nov 5 02:44:16 adas at >> com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadDatabaseFile(UnifiedServerPatchServiceProvider.java:928) >> Nov 5 02:44:16 adas at >> com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadPatchDB(UnifiedServerPatchServiceProvider.java:468) >> Nov 5 02:44:16 adas at >> com.sun.patchpro.server.PatchServerProxy.downloadPatchDB(PatchServerProxy.java:156) >> Nov 5 02:44:16 adas at >> com.sun.patchpro.database.MemoryPatchDBBuilder.downloadPatchDBWithPOST(MemoryPatchDBBuilder.java:163) >> Nov 5 02:44:16 adas root: => >> com.sun.patchpro.util.CachingDownloader at c05d3b <= at >> com.sun.patchpro.database.MemoryPatchDBBuilder.downloadPatchDB(MemoryPatchDBBuilder.java:752) >> Nov 5 02:44:16 adas at >> com.sun.patchpro.database.MemoryPatchDBBuilder.buildDB(MemoryPatchDBBuilder.java:108) >> Nov 5 02:44:16 adas at >> com.sun.patchpro.database.MemoryPatchDBBuilder.buildDB(MemoryPatchDBBuilder.java:181) >> Nov 5 02:44:16 adas at >> com.sun.patchpro.database.GroupPatchDBBuilder.buildDB(GroupPatchDBBuilder.java:108) >> Nov 5 02:44:16 adas at >> com.sun.patchpro.model.PatchProModel.downloadPatchDB(PatchProModel.java:1849) >> Nov 5 02:44:16 adas root: => >> com.sun.patchpro.util.CachingDownloader at c05d3b <=null at >> com.sun.patchpro.model.PatchProStateMachine$5.run(PatchProStateMachine.java:277) >> Nov 5 02:44:16 adas at com.sun.patchpro.util.State.run(State.java:266) >> Nov 5 02:44:16 adas at java.lang.Thread.run(Thread.java:595) >> Nov 5 02:44:17 adas root: => >> com.sun.patchpro.util.CachingDownloader at 1901437 >> <=com.sun.cc.platform.clientsignature.CNSSignException: Error reading >> private key >> Nov 5 02:44:17 adas 7470:error:0906D06C:PEM routines:PEM_read_bio:no start >> line:/on10/build-nd/F10U6B7A/usr/src/common/openssl/crypto/pem/pem_lib.c:637:Expecting: >> ANY PRIVATE KEY >> Nov 5 02:44:17 adas at >> com.sun.cc.platform.clientsignature.CNSClientSignature.throwError(Unknown >> Source) >> Nov 5 02:44:17 adas at >> 
com.sun.cc.platform.clientsignature.CNSClientSignature.<init>(Unknown >> Source) >> Nov 5 02:44:17 adas at >> com.sun.cc.platform.clientsignature.CNSClientSignature.genSigString(Unknown >> Source) >> Nov 5 02:44:17 adas root: => >> com.sun.patchpro.util.CachingDownloader at 1901437 <= at >> com.sun.patchpro.util.Downloader.connectToURL(Downloader.java:430) >> Nov 5 02:44:17 adas at >> com.sun.patchpro.util.CachingDownloader.establishConnection(CachingDownloader.java:618) >> Nov 5 02:44:17 adas at >> com.sun.patchpro.util.CachingDownloader.setSourceURL(CachingDownloader.java:282) >> Nov 5 02:44:17 adas at >> com.sun.patchpro.util.CachingDownloader.setupCache(CachingDownloader.java:208) >> Nov 5 02:44:17 adas at >> com.sun.patchpro.util.CachingDownloader.<init>(CachingDownloader.java:187) >> Nov 5 02:44:17 adas root: => >> com.sun.patchpro.util.CachingDownloader at 1901437 <= at >> com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadFile(UnifiedServerPatchServiceProvider.java:1242) >> Nov 5 02:44:17 adas at >> com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadRealizationDetectors(UnifiedServerPatchServiceProvider.java:1029) >> Nov 5 02:44:17 adas at >> com.sun.patchpro.server.PatchServerProxy.downloadRealizationDetectors(PatchServerProxy.java:174) >> Nov 5 02:44:17 adas at >> com.sun.patchpro.analysis.HostAnalyzer$RealizationSetAuto.downloadDetectorsWithPOST(HostAnalyzer.java:1212) >> Nov 5 02:44:17 adas at >> com.sun.patchpro.analysis.HostAnalyzer$RealizationSetAuto.downloadDetectors(HostAnalyzer.java:1156) >> Nov 5 02:44:17 adas root: => >> com.sun.patchpro.util.CachingDownloader at 1901437 <= at >> com.sun.patchpro.analysis.HostAnalyzer$RealizationSetAuto.prepare(HostAnalyzer.java:875) >> Nov 5 02:44:17 adas at >> com.sun.patchpro.analysis.HostAnalyzer.downloadDetectors(HostAnalyzer.java:299) >> Nov 5 02:44:17 adas at >> com.sun.patchpro.model.PatchProModel.downloadDetectors(PatchProModel.java:1776) >> Nov 5 02:44:17 adas at >> com.sun.patchpro.model.PatchProStateMachine$4.run(PatchProStateMachine.java:245) >> Nov 5 02:44:17 adas at com.sun.patchpro.util.State.run(State.java:266) >> Nov 5 02:44:17 adas root: => >> com.sun.patchpro.util.CachingDownloader at 1901437 <=null at >> java.lang.Thread.run(Thread.java:595) >> Nov 5 02:44:27 adas root: => >> com.sun.patchpro.server.UnifiedServerPatchServiceProvider at c79809 >> <=UnifiedServerPatchServiceProvider.downloadFile: Unable to create cache >> downloader. 
>> Nov 5 02:44:27 adas root: => >> com.sun.patchpro.server.UnifiedServerPatchServiceProvider at c79809 >> <=java.io.IOException: Response code was 403 >> Nov 5 02:44:27 adas at >> com.sun.patchpro.util.CachingDownloader.setSourceURL(CachingDownloader.java:302) >> Nov 5 02:44:27 adas at >> com.sun.patchpro.util.CachingDownloader.setupCache(CachingDownloader.java:208) >> Nov 5 02:44:27 adas at >> com.sun.patchpro.util.CachingDownloader.<init>(CachingDownloader.java:187) >> Nov 5 02:44:27 adas at >> com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadFile(UnifiedServerPatchServiceProvider.java:1242) >> Nov 5 02:44:27 adas root: => >> com.sun.patchpro.server.UnifiedServerPatchServiceProvider at c79809 <= at >> com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadDatabaseFile(UnifiedServerPatchServiceProvider.java:928) >> Nov 5 02:44:27 adas at >> com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadPatchDB(UnifiedServerPatchServiceProvider.java:468) >> Nov 5 02:44:27 adas at >> com.sun.patchpro.server.PatchServerProxy.downloadPatchDB(PatchServerProxy.java:156) >> Nov 5 02:44:27 adas at >> com.sun.patchpro.database.MemoryPatchDBBuilder.downloadPatchDBWithPOST(MemoryPatchDBBuilder.java:163) >> Nov 5 02:44:27 adas at >> com.sun.patchpro.database.MemoryPatchDBBuilder.downloadPatchDB(MemoryPatchDBBuilder.java:752) >> Nov 5 02:44:27 adas root: => >> com.sun.patchpro.server.UnifiedServerPatchServiceProvider at c79809 <= at >> com.sun.patchpro.database.MemoryPatchDBBuilder.buildDB(MemoryPatchDBBuilder.java:108) >> Nov 5 02:44:27 adas at >> com.sun.patchpro.database.MemoryPatchDBBuilder.buildDB(MemoryPatchDBBuilder.java:181) >> Nov 5 02:44:27 adas at >> com.sun.patchpro.database.GroupPatchDBBuilder.buildDB(GroupPatchDBBuilder.java:108) >> Nov 5 02:44:27 adas at >> com.sun.patchpro.model.PatchProModel.downloadPatchDB(PatchProModel.java:1849) >> Nov 5 02:44:27 adas at >> com.sun.patchpro.model.PatchProStateMachine$5.run(PatchProStateMachine.java:277) >> Nov 5 02:44:27 adas root: => >> com.sun.patchpro.server.UnifiedServerPatchServiceProvider at c79809 <=null at >> com.sun.patchpro.util.State.run(State.java:266) >> Nov 5 02:44:27 adas at java.lang.Thread.run(Thread.java:595) >> Nov 5 02:44:27 adas root: => >> com.sun.patchpro.server.UnifiedServerPatchServiceProvider at 3bc473 >> <=UnifiedServerPatchServiceProvider.downloadFile: Unable to create cache >> downloader. 
>> Nov 5 02:44:28 adas root: => >> com.sun.patchpro.server.UnifiedServerPatchServiceProvider at 3bc473 >> <=java.io.IOException: Response code was 403 >> Nov 5 02:44:28 adas at >> com.sun.patchpro.util.CachingDownloader.setSourceURL(CachingDownloader.java:302) >> Nov 5 02:44:28 adas at >> com.sun.patchpro.util.CachingDownloader.setupCache(CachingDownloader.java:208) >> Nov 5 02:44:28 adas at >> com.sun.patchpro.util.CachingDownloader.<init>(CachingDownloader.java:187) >> Nov 5 02:44:28 adas at >> com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadFile(UnifiedServerPatchServiceProvider.java:1242) >> Nov 5 02:44:28 adas root: => >> com.sun.patchpro.server.UnifiedServerPatchServiceProvider at 3bc473 <= at >> com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadRealizationDetectors(UnifiedServerPatchServiceProvider.java:1029) >> Nov 5 02:44:28 adas at >> com.sun.patchpro.server.PatchServerProxy.downloadRealizationDetectors(PatchServerProxy.java:174) >> Nov 5 02:44:28 adas at >> com.sun.patchpro.analysis.HostAnalyzer$RealizationSetAuto.downloadDetectorsWithPOST(HostAnalyzer.java:1212) >> Nov 5 02:44:28 adas at >> com.sun.patchpro.analysis.HostAnalyzer$RealizationSetAuto.downloadDetectors(HostAnalyzer.java:1156) >> Nov 5 02:44:28 adas at >> com.sun.patchpro.analysis.HostAnalyzer$RealizationSetAuto.prepare(HostAnalyzer.java:875) >> Nov 5 02:44:28 adas root: => >> com.sun.patchpro.server.UnifiedServerPatchServiceProvider at 3bc473 <= at >> com.sun.patchpro.analysis.HostAnalyzer.downloadDetectors(HostAnalyzer.java:299) >> Nov 5 02:44:28 adas at >> com.sun.patchpro.model.PatchProModel.downloadDetectors(PatchProModel.java:1776) >> Nov 5 02:44:28 adas at >> com.sun.patchpro.model.PatchProStateMachine$4.run(PatchProStateMachine.java:245) >> Nov 5 02:44:28 adas at com.sun.patchpro.util.State.run(State.java:266) >> Nov 5 02:44:28 adas at java.lang.Thread.run(Thread.java:595) >> Nov 5 02:44:28 adas root: => com.sun.patchpro.cli.PatchServices at 1bcdbf6 >> <=com.sun.patchpro.model.PatchProException: Response code was 403 >> Nov 5 02:44:28 adas at >> com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadFile(UnifiedServerPatchServiceProvider.java:1267) >> Nov 5 02:44:28 adas at >> com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadDatabaseFile(UnifiedServerPatchServiceProvider.java:928) >> Nov 5 02:44:28 adas at >> com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadPatchDB(UnifiedServerPatchServiceProvider.java:468) >> Nov 5 02:44:28 adas at >> com.sun.patchpro.server.PatchServerProxy.downloadPatchDB(PatchServerProxy.java:156) >> Nov 5 02:44:28 adas root: => com.sun.patchpro.cli.PatchServices at 1bcdbf6 >> <= at >> com.sun.patchpro.database.MemoryPatchDBBuilder.downloadPatchDBWithPOST(MemoryPatchDBBuilder.java:163) >> Nov 5 02:44:28 adas at >> com.sun.patchpro.database.MemoryPatchDBBuilder.downloadPatchDB(MemoryPatchDBBuilder.java:752) >> Nov 5 02:44:28 adas at >> com.sun.patchpro.database.MemoryPatchDBBuilder.buildDB(MemoryPatchDBBuilder.java:108) >> Nov 5 02:44:28 adas at >> com.sun.patchpro.database.MemoryPatchDBBuilder.buildDB(MemoryPatchDBBuilder.java:181) >> Nov 5 02:44:28 adas at >> com.sun.patchpro.database.GroupPatchDBBuilder.buildDB(GroupPatchDBBuilder.java:108) >> Nov 5 02:44:28 adas root: => com.sun.patchpro.cli.PatchServices at 1bcdbf6 >> <= at >> com.sun.patchpro.model.PatchProModel.downloadPatchDB(PatchProModel.java:1849) >> Nov 5 02:44:28 adas at >> 
com.sun.patchpro.model.PatchProStateMachine$5.run(PatchProStateMachine.java:277) >> Nov 5 02:44:28 adas at com.sun.patchpro.util.State.run(State.java:266) >> Nov 5 02:44:28 adas at java.lang.Thread.run(Thread.java:595) >> Nov 5 02:44:28 adas Caused by: >> Nov 5 02:44:28 adas root: => com.sun.patchpro.cli.PatchServices at 1bcdbf6 >> <=java.io.IOException: Response code was 403 >> Nov 5 02:44:28 adas at >> com.sun.patchpro.util.CachingDownloader.setSourceURL(CachingDownloader.java:302) >> Nov 5 02:44:28 adas at >> com.sun.patchpro.util.CachingDownloader.setupCache(CachingDownloader.java:208) >> Nov 5 02:44:28 adas at >> com.sun.patchpro.util.CachingDownloader.<init>(CachingDownloader.java:187) >> Nov 5 02:44:28 adas at >> com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadFile(UnifiedServerPatchServiceProvider.java:1242) >> Nov 5 02:44:28 adas root: => com.sun.patchpro.cli.PatchServices at 1bcdbf6 >> <= at >> com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadDatabaseFile(UnifiedServerPatchServiceProvider.java:928) >> Nov 5 02:44:28 adas at >> com.sun.patchpro.server.UnifiedServerPatchServiceProvider.downloadPatchDB(UnifiedServerPatchServiceProvider.java:468) >> Nov 5 02:44:28 adas at >> com.sun.patchpro.server.PatchServerProxy.downloadPatchDB(PatchServerProxy.java:156) >> Nov 5 02:44:28 adas at >> com.sun.patchpro.database.MemoryPatchDBBuilder.downloadPatchDBWithPOST(MemoryPatchDBBuilder.java:163) >> Nov 5 02:44:28 adas at >> com.sun.patchpro.database.MemoryPatchDBBuilder.downloadPatchDB(MemoryPatchDBBuilder.java:752) >> Nov 5 02:44:28 adas root: => com.sun.patchpro.cli.PatchServices at 1bcdbf6 >> <= at >> com.sun.patchpro.database.MemoryPatchDBBuilder.buildDB(MemoryPatchDBBuilder.java:108) >> Nov 5 02:44:28 adas at >> com.sun.patchpro.database.MemoryPatchDBBuilder.buildDB(MemoryPatchDBBuilder.java:181) >> Nov 5 02:44:28 adas at >> com.sun.patchpro.database.GroupPatchDBBuilder.buildDB(GroupPatchDBBuilder.java:108) >> Nov 5 02:44:28 adas at >> com.sun.patchpro.model.PatchProModel.downloadPatchDB(PatchProModel.java:1849) >> Nov 5 02:44:28 adas at >> com.sun.patchpro.model.PatchProStateMachine$5.run(PatchProStateMachine.java:277) >> Nov 5 02:44:28 adas root: => com.sun.patchpro.cli.PatchServices at 1bcdbf6 >> <=null at com.sun.patchpro.util.State.run(State.java:266) >> Nov 5 02:44:28 adas at java.lang.Thread.run(Thread.java:595) >> >> >> [07:36:43] @adas: /root > lustatus >> Boot Environment Is Active Active Can Copy >> Name Complete Now On Reboot Delete Status >> -------------------------- -------- ------ --------- ------ ---------- >> ufsBE yes yes yes no - >> zfsBE yes no no yes - >> [07:36:52] @adas: /root > luactivate zfsBE >> A Live Upgrade Sync operation will be performed on startup of boot >> environment <zfsBE>. >> >> >> ********************************************************************** >> >> The target boot environment has been activated. It will be used when you >> reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You >> MUST USE either the init or the shutdown command when you reboot. If you >> do not use either init or shutdown, the system will not boot using the >> target BE. >> >> ********************************************************************** >> >> In case of a failure while booting to the target BE, the following process >> needs to be followed to fallback to the currently working boot environment: >> >> 1. Enter the PROM monitor (ok prompt). >> >> 2. 
Change the boot device back to the original boot environment by typing: >> >> setenv boot-device /pci@1c,600000/scsi@2/disk@0,0:a >> >> 3. Boot to the original boot environment by typing: >> >> boot >> >> ********************************************************************** >> >> Activation of boot environment <zfsBE> successful. >> [07:37:52] @adas: /root > init 0 >> [07:38:44] @adas: /root > stopping NetWorker daemons: >> nsr_shutdown -q >> svc.startd: The system is coming down. Please wait. >> svc.startd: 89 system services are now being stopped. >> Nov 5 07:39:39 adas syslogd: going down on signal 15 >> svc.startd: The system is down. >> syncing file systems... done >> Program terminated >> {0} ok boot >> >> SC Alert: Host System has Reset >> Probing system devices >> Probing memory >> Probing I/O buses >> >> Sun Fire V210, No Keyboard >> Copyright 2007 Sun Microsystems, Inc. All rights reserved. >> OpenBoot 4.22.33, 4096 MB memory installed, Serial #64938415. >> Ethernet address 0:3:ba:de:e1:af, Host ID: 83dee1af. >> >> >> >> Rebooting with command: boot >> Boot device: /pci@1c,600000/scsi@2/disk@1,0:a File and args: >> >> Can't open boot_archive >> >> Evaluating: >> The file just loaded does not appear to be executable. >> {1} ok boot disk2 >> Boot device: /pci@1c,600000/scsi@2/disk@2,0 File and args: >> ERROR: /pci@1c,600000: Last Trap: Fast Data Access MMU Miss >> >> {1} ok boot disk1 >> Boot device: /pci@1c,600000/scsi@2/disk@1,0 File and args: >> ERROR: /pci@1c,600000: Last Trap: Fast Data Access MMU Miss >> >> {1} ok setenv boot-device /pci@1c,600000/scsi@2/disk@0,0:a >> boot-device = /pci@1c,600000/scsi@2/disk@0,0:a >> {1} ok boot >> Boot device: /pci@1c,600000/scsi@2/disk@0,0:a File and args: >> ERROR: /pci@1c,600000: Last Trap: Fast Data Access MMU Miss >> >> {1} ok boot disk >> Boot device: /pci@1c,600000/scsi@2/disk@0,0 File and args: >> ERROR: /pci@1c,600000: Last Trap: Fast Data Access MMU Miss >> >> {1} ok setenv boot-device /pci@1c,600000/scsi@2/disk@0,0:a >> boot-device = /pci@1c,600000/scsi@2/disk@0,0:a >> {1} ok boot >> Boot device: /pci@1c,600000/scsi@2/disk@0,0:a File and args: >> ERROR: /pci@1c,600000: Last Trap: Fast Data Access MMU Miss >> >> {1} ok >> _______________________________________________ >> zfs-discuss mailing list >> zfs-discuss@opensolaris.org >> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss > > > -- > Enda O'Connor x19781 Software Product Engineering > Patch System Test : Ireland : x19781/353-1-8199718 >
On Wed, 5 Nov 2008, Enda O''Connor wrote:> On 11/05/08 13:02, Krzys wrote: >> I am not sure what I did wrong but I did follow up all the steps to get my >> system moved from ufs to zfs and not I am unable to boot it... can anyone >> suggest what I could do to fix it? >> >> here are all my steps: >> >> [00:26:38] @adas: /root > zpool create rootpool c1t1d0s0 >> [00:26:57] @adas: /root > lucreate -c ufsBE -n zfsBE -p rootpool >> Analyzing system configuration. >> Comparing source boot environment <ufsBE> file systems with the file >> system(s) you specified for the new boot environment. Determining which >> file systems should be in the new boot environment. >> Updating boot environment description database on all BEs. >> Updating system configuration files. >> The device </dev/dsk/c1t1d0s0> is not a root device for any boot >> environment; cannot get BE ID. >> Creating configuration for boot environment <zfsBE>. >> Source boot environment is <ufsBE>. >> Creating boot environment <zfsBE>. >> Creating file systems on boot environment <zfsBE>. >> Creating <zfs> file system for </> in zone <global> on >> <rootpool/ROOT/zfsBE>. >> Populating file systems on boot environment <zfsBE>. >> Checking selection integrity. >> Integrity check OK. >> Populating contents of mount point </>. >> Copying. >> Bus Error - core dumped > hmm above might be relevant I''d guess. > > What release are you on , ie is this Solaris 10, or is this Nevada build? > > Enda >> Creating shared file system mount points. >> Creating compare databases for boot environment <zfsBE>. >> Creating compare database for file system </var>. >> Creating compare database for file system </usr>. >> Creating compare database for file system </rootpool/ROOT>. >> Creating compare database for file system </>. >> Updating compare databases on boot environment <zfsBE>. >> Making boot environment <zfsBE> bootable.Anyway I did restart the whole process again, and I got again that Bus Error [07:59:01] root at adas: /root > zpool create rootpool c1t1d0s0 [07:59:22] root at adas: /root > zfs set compression=on rootpool/ROOT cannot open ''rootpool/ROOT'': dataset does not exist [07:59:27] root at adas: /root > zfs set compression=on rootpool [07:59:31] root at adas: /root > lucreate -c ufsBE -n zfsBE -p rootpool Analyzing system configuration. Comparing source boot environment <ufsBE> file systems with the file system(s) you specified for the new boot environment. Determining which file systems should be in the new boot environment. Updating boot environment description database on all BEs. Updating system configuration files. The device </dev/dsk/c1t1d0s0> is not a root device for any boot environment; cannot get BE ID. Creating configuration for boot environment <zfsBE>. Source boot environment is <ufsBE>. Creating boot environment <zfsBE>. Creating file systems on boot environment <zfsBE>. Creating <zfs> file system for </> in zone <global> on <rootpool/ROOT/zfsBE>. Populating file systems on boot environment <zfsBE>. Checking selection integrity. Integrity check OK. Populating contents of mount point </>. Copying. Bus Error - core dumped Creating shared file system mount points. Creating compare databases for boot environment <zfsBE>. Creating compare database for file system </var>. Creating compare database for file system </usr>.
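A side note on the compression error in the retry above: rootpool/ROOT does not exist until lucreate creates it, so setting the property on the pool's top-level dataset (as was done on the second attempt) is sufficient, because child datasets inherit it. A minimal sketch, assuming the same pool and slice names used in the thread:

zpool create rootpool c1t1d0s0
zfs set compression=on rootpool      # rootpool/ROOT/zfsBE will inherit compression once lucreate creates it
lucreate -c ufsBE -n zfsBE -p rootpool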
Enda O'Connor
2008-Nov-05 14:07 UTC
[zfs-discuss] migrating ufs to zfs - cant boot system
Hi did you get a core dump? would be nice to see the core file to get an idea of what dumped core, might configure coreadm if not already done run coreadm first, if the output looks like # coreadm global core file pattern: /var/crash/core.%f.%p global core file content: default init core file pattern: core init core file content: default global core dumps: enabled per-process core dumps: enabled global setid core dumps: enabled per-process setid core dumps: disabled global core dump logging: enabled then all should be good, and cores should appear in /var/crash otherwise the following should configure coreadm: coreadm -g /var/crash/core.%f.%p coreadm -G all coreadm -e global coreadm -e per-process coreadm -u to load the new settings without rebooting. also might need to set the size of the core dump via ulimit -c unlimited check ulimit -a first. then rerun test and check /var/crash for core dump. If that fails a truss via say truss -fae -o /tmp/truss.out lucreate -c ufsBE -n zfsBE -p rootpool might give an indication, look for SIGBUS in the truss log NOTE, that you might want to reset the coreadm and ulimit for coredumps after this, in order to not risk filling the system with coredumps in the case of some utility coredumping in a loop say. Enda On 11/05/08 13:46, Krzys wrote:> > On Wed, 5 Nov 2008, Enda O''Connor wrote: > >> On 11/05/08 13:02, Krzys wrote: >>> I am not sure what I did wrong but I did follow up all the steps to get my >>> system moved from ufs to zfs and not I am unable to boot it... can anyone >>> suggest what I could do to fix it? >>> >>> here are all my steps: >>> >>> [00:26:38] @adas: /root > zpool create rootpool c1t1d0s0 >>> [00:26:57] @adas: /root > lucreate -c ufsBE -n zfsBE -p rootpool >>> Analyzing system configuration. >>> Comparing source boot environment <ufsBE> file systems with the file >>> system(s) you specified for the new boot environment. Determining which >>> file systems should be in the new boot environment. >>> Updating boot environment description database on all BEs. >>> Updating system configuration files. >>> The device </dev/dsk/c1t1d0s0> is not a root device for any boot >>> environment; cannot get BE ID. >>> Creating configuration for boot environment <zfsBE>. >>> Source boot environment is <ufsBE>. >>> Creating boot environment <zfsBE>. >>> Creating file systems on boot environment <zfsBE>. >>> Creating <zfs> file system for </> in zone <global> on >>> <rootpool/ROOT/zfsBE>. >>> Populating file systems on boot environment <zfsBE>. >>> Checking selection integrity. >>> Integrity check OK. >>> Populating contents of mount point </>. >>> Copying. >>> Bus Error - core dumped >> hmm above might be relevant I''d guess. >> >> What release are you on , ie is this Solaris 10, or is this Nevada build? >> >> Enda >>> Creating shared file system mount points. >>> Creating compare databases for boot environment <zfsBE>. >>> Creating compare database for file system </var>. >>> Creating compare database for file system </usr>. >>> Creating compare database for file system </rootpool/ROOT>. >>> Creating compare database for file system </>. >>> Updating compare databases on boot environment <zfsBE>. >>> Making boot environment <zfsBE> bootable. 
> > Anyway I did restart the whole process again, and I got again that Bus Error > > [07:59:01] root at adas: /root > zpool create rootpool c1t1d0s0 > [07:59:22] root at adas: /root > zfs set compression=on rootpool/ROOT > cannot open ''rootpool/ROOT'': dataset does not exist > [07:59:27] root at adas: /root > zfs set compression=on rootpool > [07:59:31] root at adas: /root > lucreate -c ufsBE -n zfsBE -p rootpool > Analyzing system configuration. > Comparing source boot environment <ufsBE> file systems with the file > system(s) you specified for the new boot environment. Determining which > file systems should be in the new boot environment. > Updating boot environment description database on all BEs. > Updating system configuration files. > The device </dev/dsk/c1t1d0s0> is not a root device for any boot environment; > cannot get BE ID. > Creating configuration for boot environment <zfsBE>. > Source boot environment is <ufsBE>. > Creating boot environment <zfsBE>. > Creating file systems on boot environment <zfsBE>. > Creating <zfs> file system for </> in zone <global> on <rootpool/ROOT/zfsBE>. > Populating file systems on boot environment <zfsBE>. > Checking selection integrity. > Integrity check OK. > Populating contents of mount point </>. > Copying. > Bus Error - core dumped > Creating shared file system mount points. > Creating compare databases for boot environment <zfsBE>. > Creating compare database for file system </var>. > Creating compare database for file system </usr>. > > > > _______________________________________________ > zfs-discuss mailing list > zfs-discuss at opensolaris.org > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss-- Enda O''Connor x19781 Software Product Engineering Patch System Test : Ireland : x19781/353-1-8199718
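For anyone following the coreadm and truss suggestions above, the whole sequence can be run as one short block. This is only a sketch: the /var/crash pattern and the lucreate arguments are the ones from the thread, the two files under /tmp are just example names for saving the current settings so they can be restored afterwards, and the -e keyword for per-process dumps is spelled "process" in coreadm(1M).

coreadm > /tmp/coreadm.before        # note the current settings so they can be restored later
ulimit -a > /tmp/ulimit.before

coreadm -g /var/crash/core.%f.%p     # global core file pattern
coreadm -G all                       # full core file content
coreadm -e global
coreadm -e process                   # per-process core dumps
coreadm -u                           # apply the new settings without a reboot

ulimit -c unlimited                  # allow full-size core files in this shell

truss -fae -o /tmp/truss.out lucreate -c ufsBE -n zfsBE -p rootpool
grep -n SIGBUS /tmp/truss.out        # locate the faulting call in the truss log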
Enda O'Connor
2008-Nov-05 14:24 UTC
[zfs-discuss] migrating ufs to zfs - cant boot system
Hi Krzys Also some info on the actual system ie what was it upgraded to u6 from and how. and an idea of how the filesystems are laid out, ie is usr seperate from / and so on ( maybe a df -k ). Don''t appear to have any zones installed, just to confirm. Enda On 11/05/08 14:07, Enda O''Connor wrote:> Hi > did you get a core dump? > would be nice to see the core file to get an idea of what dumped core, > might configure coreadm if not already done > run coreadm first, if the output looks like > > # coreadm > global core file pattern: /var/crash/core.%f.%p > global core file content: default > init core file pattern: core > init core file content: default > global core dumps: enabled > per-process core dumps: enabled > global setid core dumps: enabled > per-process setid core dumps: disabled > global core dump logging: enabled > > then all should be good, and cores should appear in /var/crash > > otherwise the following should configure coreadm: > coreadm -g /var/crash/core.%f.%p > coreadm -G all > coreadm -e global > coreadm -e per-process > > > coreadm -u to load the new settings without rebooting. > > also might need to set the size of the core dump via > ulimit -c unlimited > check ulimit -a first. > > then rerun test and check /var/crash for core dump. > > If that fails a truss via say truss -fae -o /tmp/truss.out lucreate -c > ufsBE -n zfsBE -p rootpool > > might give an indication, look for SIGBUS in the truss log > > NOTE, that you might want to reset the coreadm and ulimit for coredumps > after this, in order to not risk filling the system with coredumps in > the case of some utility coredumping in a loop say. > > > Enda > On 11/05/08 13:46, Krzys wrote: >> >> On Wed, 5 Nov 2008, Enda O''Connor wrote: >> >>> On 11/05/08 13:02, Krzys wrote: >>>> I am not sure what I did wrong but I did follow up all the steps to >>>> get my system moved from ufs to zfs and not I am unable to boot >>>> it... can anyone suggest what I could do to fix it? >>>> >>>> here are all my steps: >>>> >>>> [00:26:38] @adas: /root > zpool create rootpool c1t1d0s0 >>>> [00:26:57] @adas: /root > lucreate -c ufsBE -n zfsBE -p rootpool >>>> Analyzing system configuration. >>>> Comparing source boot environment <ufsBE> file systems with the file >>>> system(s) you specified for the new boot environment. Determining which >>>> file systems should be in the new boot environment. >>>> Updating boot environment description database on all BEs. >>>> Updating system configuration files. >>>> The device </dev/dsk/c1t1d0s0> is not a root device for any boot >>>> environment; cannot get BE ID. >>>> Creating configuration for boot environment <zfsBE>. >>>> Source boot environment is <ufsBE>. >>>> Creating boot environment <zfsBE>. >>>> Creating file systems on boot environment <zfsBE>. >>>> Creating <zfs> file system for </> in zone <global> on >>>> <rootpool/ROOT/zfsBE>. >>>> Populating file systems on boot environment <zfsBE>. >>>> Checking selection integrity. >>>> Integrity check OK. >>>> Populating contents of mount point </>. >>>> Copying. >>>> Bus Error - core dumped >>> hmm above might be relevant I''d guess. >>> >>> What release are you on , ie is this Solaris 10, or is this Nevada >>> build? >>> >>> Enda >>>> Creating shared file system mount points. >>>> Creating compare databases for boot environment <zfsBE>. >>>> Creating compare database for file system </var>. >>>> Creating compare database for file system </usr>. >>>> Creating compare database for file system </rootpool/ROOT>. 
>>>> Creating compare database for file system </>. >>>> Updating compare databases on boot environment <zfsBE>. >>>> Making boot environment <zfsBE> bootable. >> >> Anyway I did restart the whole process again, and I got again that Bus >> Error >> >> [07:59:01] root at adas: /root > zpool create rootpool c1t1d0s0 >> [07:59:22] root at adas: /root > zfs set compression=on rootpool/ROOT >> cannot open ''rootpool/ROOT'': dataset does not exist >> [07:59:27] root at adas: /root > zfs set compression=on rootpool >> [07:59:31] root at adas: /root > lucreate -c ufsBE -n zfsBE -p rootpool >> Analyzing system configuration. >> Comparing source boot environment <ufsBE> file systems with the file >> system(s) you specified for the new boot environment. Determining which >> file systems should be in the new boot environment. >> Updating boot environment description database on all BEs. >> Updating system configuration files. >> The device </dev/dsk/c1t1d0s0> is not a root device for any boot >> environment; cannot get BE ID. >> Creating configuration for boot environment <zfsBE>. >> Source boot environment is <ufsBE>. >> Creating boot environment <zfsBE>. >> Creating file systems on boot environment <zfsBE>. >> Creating <zfs> file system for </> in zone <global> on >> <rootpool/ROOT/zfsBE>. >> Populating file systems on boot environment <zfsBE>. >> Checking selection integrity. >> Integrity check OK. >> Populating contents of mount point </>. >> Copying. >> Bus Error - core dumped >> Creating shared file system mount points. >> Creating compare databases for boot environment <zfsBE>. >> Creating compare database for file system </var>. >> Creating compare database for file system </usr>. >> >> >> >> _______________________________________________ >> zfs-discuss mailing list >> zfs-discuss at opensolaris.org >> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss > >-- Enda O''Connor x19781 Software Product Engineering Patch System Test : Ireland : x19781/353-1-8199718
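The details asked for here can be gathered with a few standard commands; a sketch only:

cat /etc/release        # which Solaris release/update the box was upgraded to, and how
df -k                   # filesystem layout: /, /usr, /var, /export/home, ...
zoneadm list -cv        # confirm whether any non-global zones are installed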
Great, I will follow this, but I was wondering maybe I did not setup my disc correctly? from what I do understand zpool cannot be setup on whole disk as other pools are so I did partition my disk so all the space is in s0 slice. Maybe I thats not correct? [10:03:45] root at adas: /root > format Searching for disks...done AVAILABLE DISK SELECTIONS: 0. c1t0d0 <SEAGATE-ST3146807LC-0007 cyl 49780 alt 2 hd 8 sec 720> /pci at 1c,600000/scsi at 2/sd at 0,0 1. c1t1d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107> /pci at 1c,600000/scsi at 2/sd at 1,0 Specify disk (enter its number): 1 selecting c1t1d0 [disk formatted] /dev/dsk/c1t1d0s0 is part of active ZFS pool rootpool. Please see zpool(1M). /dev/dsk/c1t1d0s2 is part of active ZFS pool rootpool. Please see zpool(1M). FORMAT MENU: disk - select a disk type - select (define) a disk type partition - select (define) a partition table current - describe the current disk format - format and analyze the disk repair - repair a defective sector label - write label to the disk analyze - surface analysis defect - defect list management backup - search for backup labels verify - read and display labels save - save new disk/partition definitions inquiry - show vendor, product and revision volname - set 8-character volume name !<cmd> - execute <cmd>, then return quit format> verify Primary label contents: Volume name = < > ascii name = <SUN36G cyl 24620 alt 2 hd 27 sec 107> pcyl = 24622 ncyl = 24620 acyl = 2 nhead = 27 nsect = 107 Part Tag Flag Cylinders Size Blocks 0 root wm 0 - 24619 33.92GB (24620/0/0) 71127180 1 unassigned wu 0 0 (0/0/0) 0 2 backup wm 0 - 24619 33.92GB (24620/0/0) 71127180 3 unassigned wu 0 0 (0/0/0) 0 4 unassigned wu 0 0 (0/0/0) 0 5 unassigned wu 0 0 (0/0/0) 0 6 unassigned wu 0 0 (0/0/0) 0 7 unassigned wu 0 0 (0/0/0) 0 format> On Wed, 5 Nov 2008, Enda O''Connor wrote:> Hi > did you get a core dump? > would be nice to see the core file to get an idea of what dumped core, > might configure coreadm if not already done > run coreadm first, if the output looks like > > # coreadm > global core file pattern: /var/crash/core.%f.%p > global core file content: default > init core file pattern: core > init core file content: default > global core dumps: enabled > per-process core dumps: enabled > global setid core dumps: enabled > per-process setid core dumps: disabled > global core dump logging: enabled > > then all should be good, and cores should appear in /var/crash > > otherwise the following should configure coreadm: > coreadm -g /var/crash/core.%f.%p > coreadm -G all > coreadm -e global > coreadm -e per-process > > > coreadm -u to load the new settings without rebooting. > > also might need to set the size of the core dump via > ulimit -c unlimited > check ulimit -a first. > > then rerun test and check /var/crash for core dump. > > If that fails a truss via say truss -fae -o /tmp/truss.out lucreate -c ufsBE > -n zfsBE -p rootpool > > might give an indication, look for SIGBUS in the truss log > > NOTE, that you might want to reset the coreadm and ulimit for coredumps after > this, in order to not risk filling the system with coredumps in the case of > some utility coredumping in a loop say. > > > Enda > On 11/05/08 13:46, Krzys wrote: >> >> On Wed, 5 Nov 2008, Enda O''Connor wrote: >> >>> On 11/05/08 13:02, Krzys wrote: >>>> I am not sure what I did wrong but I did follow up all the steps to get >>>> my system moved from ufs to zfs and not I am unable to boot it... can >>>> anyone suggest what I could do to fix it? 
>>>> >>>> here are all my steps: >>>> >>>> [00:26:38] @adas: /root > zpool create rootpool c1t1d0s0 >>>> [00:26:57] @adas: /root > lucreate -c ufsBE -n zfsBE -p rootpool >>>> Analyzing system configuration. >>>> Comparing source boot environment <ufsBE> file systems with the file >>>> system(s) you specified for the new boot environment. Determining which >>>> file systems should be in the new boot environment. >>>> Updating boot environment description database on all BEs. >>>> Updating system configuration files. >>>> The device </dev/dsk/c1t1d0s0> is not a root device for any boot >>>> environment; cannot get BE ID. >>>> Creating configuration for boot environment <zfsBE>. >>>> Source boot environment is <ufsBE>. >>>> Creating boot environment <zfsBE>. >>>> Creating file systems on boot environment <zfsBE>. >>>> Creating <zfs> file system for </> in zone <global> on >>>> <rootpool/ROOT/zfsBE>. >>>> Populating file systems on boot environment <zfsBE>. >>>> Checking selection integrity. >>>> Integrity check OK. >>>> Populating contents of mount point </>. >>>> Copying. >>>> Bus Error - core dumped >>> hmm above might be relevant I''d guess. >>> >>> What release are you on , ie is this Solaris 10, or is this Nevada build? >>> >>> Enda >>>> Creating shared file system mount points. >>>> Creating compare databases for boot environment <zfsBE>. >>>> Creating compare database for file system </var>. >>>> Creating compare database for file system </usr>. >>>> Creating compare database for file system </rootpool/ROOT>. >>>> Creating compare database for file system </>. >>>> Updating compare databases on boot environment <zfsBE>. >>>> Making boot environment <zfsBE> bootable. >> >> Anyway I did restart the whole process again, and I got again that Bus >> Error >> >> [07:59:01] root at adas: /root > zpool create rootpool c1t1d0s0 >> [07:59:22] root at adas: /root > zfs set compression=on rootpool/ROOT >> cannot open ''rootpool/ROOT'': dataset does not exist >> [07:59:27] root at adas: /root > zfs set compression=on rootpool >> [07:59:31] root at adas: /root > lucreate -c ufsBE -n zfsBE -p rootpool >> Analyzing system configuration. >> Comparing source boot environment <ufsBE> file systems with the file >> system(s) you specified for the new boot environment. Determining which >> file systems should be in the new boot environment. >> Updating boot environment description database on all BEs. >> Updating system configuration files. >> The device </dev/dsk/c1t1d0s0> is not a root device for any boot >> environment; cannot get BE ID. >> Creating configuration for boot environment <zfsBE>. >> Source boot environment is <ufsBE>. >> Creating boot environment <zfsBE>. >> Creating file systems on boot environment <zfsBE>. >> Creating <zfs> file system for </> in zone <global> on >> <rootpool/ROOT/zfsBE>. >> Populating file systems on boot environment <zfsBE>. >> Checking selection integrity. >> Integrity check OK. >> Populating contents of mount point </>. >> Copying. >> Bus Error - core dumped >> Creating shared file system mount points. >> Creating compare databases for boot environment <zfsBE>. >> Creating compare database for file system </var>. >> Creating compare database for file system </usr>. 
>> >> >> >> _______________________________________________ >> zfs-discuss mailing list >> zfs-discuss at opensolaris.org >> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss > > > -- > Enda O''Connor x19781 Software Product Engineering > Patch System Test : Ireland : x19781/353-1-8199718 > > > !DSPAM:122,4911a8521572681622464! >
Enda O'Connor
2008-Nov-05 15:21 UTC
[zfs-discuss] migrating ufs to zfs - cant boot system
Hi No that should be fine, as long as disk is SMI labelled then that''s fine, and lU would have failed much earlier if it found an EFI labelled disk. core dump is not due to this, something else is causing that. Enda On 11/05/08 15:14, Krzys wrote:> Great, I will follow this, but I was wondering maybe I did not setup my > disc correctly? from what I do understand zpool cannot be setup on whole > disk as other pools are so I did partition my disk so all the space is > in s0 slice. Maybe I thats not correct? > > [10:03:45] root at adas: /root > format > Searching for disks...done > > > AVAILABLE DISK SELECTIONS: > 0. c1t0d0 <SEAGATE-ST3146807LC-0007 cyl 49780 alt 2 hd 8 sec 720> > /pci at 1c,600000/scsi at 2/sd at 0,0 > 1. c1t1d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107> > /pci at 1c,600000/scsi at 2/sd at 1,0 > Specify disk (enter its number): 1 > selecting c1t1d0 > [disk formatted] > /dev/dsk/c1t1d0s0 is part of active ZFS pool rootpool. Please see > zpool(1M). > /dev/dsk/c1t1d0s2 is part of active ZFS pool rootpool. Please see > zpool(1M). > > > FORMAT MENU: > disk - select a disk > type - select (define) a disk type > partition - select (define) a partition table > current - describe the current disk > format - format and analyze the disk > repair - repair a defective sector > label - write label to the disk > analyze - surface analysis > defect - defect list management > backup - search for backup labels > verify - read and display labels > save - save new disk/partition definitions > inquiry - show vendor, product and revision > volname - set 8-character volume name > !<cmd> - execute <cmd>, then return > quit > format> verify > > Primary label contents: > > Volume name = < > > ascii name = <SUN36G cyl 24620 alt 2 hd 27 sec 107> > pcyl = 24622 > ncyl = 24620 > acyl = 2 > nhead = 27 > nsect = 107 > Part Tag Flag Cylinders Size Blocks > 0 root wm 0 - 24619 33.92GB (24620/0/0) 71127180 > 1 unassigned wu 0 0 (0/0/0) 0 > 2 backup wm 0 - 24619 33.92GB (24620/0/0) 71127180 > 3 unassigned wu 0 0 (0/0/0) 0 > 4 unassigned wu 0 0 (0/0/0) 0 > 5 unassigned wu 0 0 (0/0/0) 0 > 6 unassigned wu 0 0 (0/0/0) 0 > 7 unassigned wu 0 0 (0/0/0) 0 > > format> > > > On Wed, 5 Nov 2008, Enda O''Connor wrote: > >> Hi >> did you get a core dump? >> would be nice to see the core file to get an idea of what dumped core, >> might configure coreadm if not already done >> run coreadm first, if the output looks like >> >> # coreadm >> global core file pattern: /var/crash/core.%f.%p >> global core file content: default >> init core file pattern: core >> init core file content: default >> global core dumps: enabled >> per-process core dumps: enabled >> global setid core dumps: enabled >> per-process setid core dumps: disabled >> global core dump logging: enabled >> >> then all should be good, and cores should appear in /var/crash >> >> otherwise the following should configure coreadm: >> coreadm -g /var/crash/core.%f.%p >> coreadm -G all >> coreadm -e global >> coreadm -e per-process >> >> >> coreadm -u to load the new settings without rebooting. >> >> also might need to set the size of the core dump via >> ulimit -c unlimited >> check ulimit -a first. >> >> then rerun test and check /var/crash for core dump. 
>> >> If that fails a truss via say truss -fae -o /tmp/truss.out lucreate -c >> ufsBE -n zfsBE -p rootpool >> >> might give an indication, look for SIGBUS in the truss log >> >> NOTE, that you might want to reset the coreadm and ulimit for >> coredumps after this, in order to not risk filling the system with >> coredumps in the case of some utility coredumping in a loop say. >> >> >> Enda >> On 11/05/08 13:46, Krzys wrote: >>> >>> On Wed, 5 Nov 2008, Enda O''Connor wrote: >>> >>>> On 11/05/08 13:02, Krzys wrote: >>>>> I am not sure what I did wrong but I did follow up all the steps to >>>>> get my system moved from ufs to zfs and not I am unable to boot >>>>> it... can anyone suggest what I could do to fix it? >>>>> >>>>> here are all my steps: >>>>> >>>>> [00:26:38] @adas: /root > zpool create rootpool c1t1d0s0 >>>>> [00:26:57] @adas: /root > lucreate -c ufsBE -n zfsBE -p rootpool >>>>> Analyzing system configuration. >>>>> Comparing source boot environment <ufsBE> file systems with the file >>>>> system(s) you specified for the new boot environment. Determining >>>>> which >>>>> file systems should be in the new boot environment. >>>>> Updating boot environment description database on all BEs. >>>>> Updating system configuration files. >>>>> The device </dev/dsk/c1t1d0s0> is not a root device for any boot >>>>> environment; cannot get BE ID. >>>>> Creating configuration for boot environment <zfsBE>. >>>>> Source boot environment is <ufsBE>. >>>>> Creating boot environment <zfsBE>. >>>>> Creating file systems on boot environment <zfsBE>. >>>>> Creating <zfs> file system for </> in zone <global> on >>>>> <rootpool/ROOT/zfsBE>. >>>>> Populating file systems on boot environment <zfsBE>. >>>>> Checking selection integrity. >>>>> Integrity check OK. >>>>> Populating contents of mount point </>. >>>>> Copying. >>>>> Bus Error - core dumped >>>> hmm above might be relevant I''d guess. >>>> >>>> What release are you on , ie is this Solaris 10, or is this Nevada >>>> build? >>>> >>>> Enda >>>>> Creating shared file system mount points. >>>>> Creating compare databases for boot environment <zfsBE>. >>>>> Creating compare database for file system </var>. >>>>> Creating compare database for file system </usr>. >>>>> Creating compare database for file system </rootpool/ROOT>. >>>>> Creating compare database for file system </>. >>>>> Updating compare databases on boot environment <zfsBE>. >>>>> Making boot environment <zfsBE> bootable. >>> >>> Anyway I did restart the whole process again, and I got again that >>> Bus Error >>> >>> [07:59:01] root at adas: /root > zpool create rootpool c1t1d0s0 >>> [07:59:22] root at adas: /root > zfs set compression=on rootpool/ROOT >>> cannot open ''rootpool/ROOT'': dataset does not exist >>> [07:59:27] root at adas: /root > zfs set compression=on rootpool >>> [07:59:31] root at adas: /root > lucreate -c ufsBE -n zfsBE -p rootpool >>> Analyzing system configuration. >>> Comparing source boot environment <ufsBE> file systems with the file >>> system(s) you specified for the new boot environment. Determining which >>> file systems should be in the new boot environment. >>> Updating boot environment description database on all BEs. >>> Updating system configuration files. >>> The device </dev/dsk/c1t1d0s0> is not a root device for any boot >>> environment; cannot get BE ID. >>> Creating configuration for boot environment <zfsBE>. >>> Source boot environment is <ufsBE>. >>> Creating boot environment <zfsBE>. >>> Creating file systems on boot environment <zfsBE>. 
>>> Creating <zfs> file system for </> in zone <global> on >>> <rootpool/ROOT/zfsBE>. >>> Populating file systems on boot environment <zfsBE>. >>> Checking selection integrity. >>> Integrity check OK. >>> Populating contents of mount point </>. >>> Copying. >>> Bus Error - core dumped >>> Creating shared file system mount points. >>> Creating compare databases for boot environment <zfsBE>. >>> Creating compare database for file system </var>. >>> Creating compare database for file system </usr>. >>> >>> >>> >>> _______________________________________________ >>> zfs-discuss mailing list >>> zfs-discuss at opensolaris.org >>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss >> >> >> -- >> Enda O''Connor x19781 Software Product Engineering >> Patch System Test : Ireland : x19781/353-1-8199718 >> >> >> !DSPAM:122,4911a8521572681622464! >>-- Enda O''Connor x19781 Software Product Engineering Patch System Test : Ireland : x19781/353-1-8199718
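For readers who want to double-check the SMI-versus-EFI point themselves, a quick inspection is possible; a sketch, using the same disk as above (inspect only, do not relabel a disk that is already part of a pool):

prtvtoc /dev/rdsk/c1t1d0s2      # an SMI (VTOC) label shows the geometry plus slices 0-7, as in the format> verify output above
format -e c1t1d0                # format -e's label menu distinguishes an SMI label from an EFI label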
I did upgrade my U5 to U6 from DVD, went trough the upgrade process. my file system is setup as follow: [10:11:54] root at adas: /root > df -h | egrep -v "platform|sharefs|objfs|mnttab|proc|ctfs|devices|fd|nsr" Filesystem size used avail capacity Mounted on /dev/dsk/c1t0d0s0 16G 7.2G 8.4G 47% / swap 8.3G 1.5M 8.3G 1% /etc/svc/volatile /dev/dsk/c1t0d0s6 16G 8.7G 6.9G 56% /usr /dev/dsk/c1t0d0s1 16G 2.5G 13G 17% /var swap 8.5G 229M 8.3G 3% /tmp swap 8.3G 40K 8.3G 1% /var/run /dev/dsk/c1t0d0s7 78G 1.2G 76G 2% /export/home rootpool 33G 19K 21G 1% /rootpool rootpool/ROOT 33G 18K 21G 1% /rootpool/ROOT rootpool/ROOT/zfsBE 33G 31M 21G 1% /.alt.tmp.b-UUb.mnt /export/home 78G 1.2G 76G 2% /.alt.tmp.b-UUb.mnt/export/home /rootpool 21G 19K 21G 1% /.alt.tmp.b-UUb.mnt/rootpool /rootpool/ROOT 21G 18K 21G 1% /.alt.tmp.b-UUb.mnt/rootpool/ROOT swap 8.3G 0K 8.3G 0% /.alt.tmp.b-UUb.mnt/var/run swap 8.3G 0K 8.3G 0% /.alt.tmp.b-UUb.mnt/tmp [10:12:00] root at adas: /root > so I have /, /usr, /var and /export/home on that primary disk. Original disk is 140gb, this new one is only 36gb, but disk utilization on that primary disk is much less utilized so easily should fit on it. / 7.2GB /usr 8.7GB /var 2.5GB /export/home 1.2GB total space 19.6GB I did notice that lucreate did alocate 8GB to SWAP and 4GB to DUMP total space needed 31.6GB seems like total available disk space on my disk should be 33.92GB so its quite close as both numbers do approach. So to make sure I will change disk for 72gb and will try again. I do not beleive that I need to match my main disk size as 146gb as I am not using that much disk space on it. But let me try this and it might be why I am getting this problem... On Wed, 5 Nov 2008, Enda O''Connor wrote:> Hi Krzys > Also some info on the actual system > ie what was it upgraded to u6 from and how. > and an idea of how the filesystems are laid out, ie is usr seperate from / > and so on ( maybe a df -k ). Don''t appear to have any zones installed, just > to confirm. > Enda > > On 11/05/08 14:07, Enda O''Connor wrote: >> Hi >> did you get a core dump? >> would be nice to see the core file to get an idea of what dumped core, >> might configure coreadm if not already done >> run coreadm first, if the output looks like >> >> # coreadm >> global core file pattern: /var/crash/core.%f.%p >> global core file content: default >> init core file pattern: core >> init core file content: default >> global core dumps: enabled >> per-process core dumps: enabled >> global setid core dumps: enabled >> per-process setid core dumps: disabled >> global core dump logging: enabled >> >> then all should be good, and cores should appear in /var/crash >> >> otherwise the following should configure coreadm: >> coreadm -g /var/crash/core.%f.%p >> coreadm -G all >> coreadm -e global >> coreadm -e per-process >> >> >> coreadm -u to load the new settings without rebooting. >> >> also might need to set the size of the core dump via >> ulimit -c unlimited >> check ulimit -a first. >> >> then rerun test and check /var/crash for core dump. >> >> If that fails a truss via say truss -fae -o /tmp/truss.out lucreate -c >> ufsBE -n zfsBE -p rootpool >> >> might give an indication, look for SIGBUS in the truss log >> >> NOTE, that you might want to reset the coreadm and ulimit for coredumps >> after this, in order to not risk filling the system with coredumps in the >> case of some utility coredumping in a loop say. 
>> >> >> Enda >> On 11/05/08 13:46, Krzys wrote: >>> >>> On Wed, 5 Nov 2008, Enda O''Connor wrote: >>> >>>> On 11/05/08 13:02, Krzys wrote: >>>>> I am not sure what I did wrong but I did follow up all the steps to get >>>>> my system moved from ufs to zfs and not I am unable to boot it... can >>>>> anyone suggest what I could do to fix it? >>>>> >>>>> here are all my steps: >>>>> >>>>> [00:26:38] @adas: /root > zpool create rootpool c1t1d0s0 >>>>> [00:26:57] @adas: /root > lucreate -c ufsBE -n zfsBE -p rootpool >>>>> Analyzing system configuration. >>>>> Comparing source boot environment <ufsBE> file systems with the file >>>>> system(s) you specified for the new boot environment. Determining which >>>>> file systems should be in the new boot environment. >>>>> Updating boot environment description database on all BEs. >>>>> Updating system configuration files. >>>>> The device </dev/dsk/c1t1d0s0> is not a root device for any boot >>>>> environment; cannot get BE ID. >>>>> Creating configuration for boot environment <zfsBE>. >>>>> Source boot environment is <ufsBE>. >>>>> Creating boot environment <zfsBE>. >>>>> Creating file systems on boot environment <zfsBE>. >>>>> Creating <zfs> file system for </> in zone <global> on >>>>> <rootpool/ROOT/zfsBE>. >>>>> Populating file systems on boot environment <zfsBE>. >>>>> Checking selection integrity. >>>>> Integrity check OK. >>>>> Populating contents of mount point </>. >>>>> Copying. >>>>> Bus Error - core dumped >>>> hmm above might be relevant I''d guess. >>>> >>>> What release are you on , ie is this Solaris 10, or is this Nevada build? >>>> >>>> Enda >>>>> Creating shared file system mount points. >>>>> Creating compare databases for boot environment <zfsBE>. >>>>> Creating compare database for file system </var>. >>>>> Creating compare database for file system </usr>. >>>>> Creating compare database for file system </rootpool/ROOT>. >>>>> Creating compare database for file system </>. >>>>> Updating compare databases on boot environment <zfsBE>. >>>>> Making boot environment <zfsBE> bootable. >>> >>> Anyway I did restart the whole process again, and I got again that Bus >>> Error >>> >>> [07:59:01] root at adas: /root > zpool create rootpool c1t1d0s0 >>> [07:59:22] root at adas: /root > zfs set compression=on rootpool/ROOT >>> cannot open ''rootpool/ROOT'': dataset does not exist >>> [07:59:27] root at adas: /root > zfs set compression=on rootpool >>> [07:59:31] root at adas: /root > lucreate -c ufsBE -n zfsBE -p rootpool >>> Analyzing system configuration. >>> Comparing source boot environment <ufsBE> file systems with the file >>> system(s) you specified for the new boot environment. Determining which >>> file systems should be in the new boot environment. >>> Updating boot environment description database on all BEs. >>> Updating system configuration files. >>> The device </dev/dsk/c1t1d0s0> is not a root device for any boot >>> environment; cannot get BE ID. >>> Creating configuration for boot environment <zfsBE>. >>> Source boot environment is <ufsBE>. >>> Creating boot environment <zfsBE>. >>> Creating file systems on boot environment <zfsBE>. >>> Creating <zfs> file system for </> in zone <global> on >>> <rootpool/ROOT/zfsBE>. >>> Populating file systems on boot environment <zfsBE>. >>> Checking selection integrity. >>> Integrity check OK. >>> Populating contents of mount point </>. >>> Copying. >>> Bus Error - core dumped >>> Creating shared file system mount points. >>> Creating compare databases for boot environment <zfsBE>. 
>>> Creating compare database for file system </var>. >>> Creating compare database for file system </usr>. >>> >>> >>> >>> _______________________________________________ >>> zfs-discuss mailing list >>> zfs-discuss at opensolaris.org >>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss >> >> > > > -- > Enda O''Connor x19781 Software Product Engineering > Patch System Test : Ireland : x19781/353-1-8199718 > > > !DSPAM:122,4911ac7c27292151120594! >
Enda O'Connor
2008-Nov-05 16:47 UTC
[zfs-discuss] migrating ufs to zfs - cant boot system
Hi Looks ok, some mounts left over from pervious fail. In regards to swap and dump on zpool you can set them zfs set volsize=1G rootpool/dump zfs set volsize=1G rootpool/swap for instance, of course above are only an example of how to do it. or make the zvol doe rootpool/dump etc before lucreate, in which case it will take the swap and dump size you have preset. But I think we need to see the coredump/truss at this point to get an idea of where things went wrong. Enda On 11/05/08 15:38, Krzys wrote:> I did upgrade my U5 to U6 from DVD, went trough the upgrade process. > my file system is setup as follow: > [10:11:54] root at adas: /root > df -h | egrep -v > "platform|sharefs|objfs|mnttab|proc|ctfs|devices|fd|nsr" > Filesystem size used avail capacity Mounted on > /dev/dsk/c1t0d0s0 16G 7.2G 8.4G 47% / > swap 8.3G 1.5M 8.3G 1% /etc/svc/volatile > /dev/dsk/c1t0d0s6 16G 8.7G 6.9G 56% /usr > /dev/dsk/c1t0d0s1 16G 2.5G 13G 17% /var > swap 8.5G 229M 8.3G 3% /tmp > swap 8.3G 40K 8.3G 1% /var/run > /dev/dsk/c1t0d0s7 78G 1.2G 76G 2% /export/home > rootpool 33G 19K 21G 1% /rootpool > rootpool/ROOT 33G 18K 21G 1% /rootpool/ROOT > rootpool/ROOT/zfsBE 33G 31M 21G 1% /.alt.tmp.b-UUb.mnt > /export/home 78G 1.2G 76G 2% > /.alt.tmp.b-UUb.mnt/export/home > /rootpool 21G 19K 21G 1% > /.alt.tmp.b-UUb.mnt/rootpool > /rootpool/ROOT 21G 18K 21G 1% > /.alt.tmp.b-UUb.mnt/rootpool/ROOT > swap 8.3G 0K 8.3G 0% > /.alt.tmp.b-UUb.mnt/var/run > swap 8.3G 0K 8.3G 0% /.alt.tmp.b-UUb.mnt/tmp > [10:12:00] root at adas: /root > > > > so I have /, /usr, /var and /export/home on that primary disk. Original > disk is 140gb, this new one is only 36gb, but disk utilization on that > primary disk is much less utilized so easily should fit on it. > > / 7.2GB > /usr 8.7GB > /var 2.5GB > /export/home 1.2GB > total space 19.6GB > I did notice that lucreate did alocate 8GB to SWAP and 4GB to DUMP > total space needed 31.6GB > seems like total available disk space on my disk should be 33.92GB > so its quite close as both numbers do approach. So to make sure I will > change disk for 72gb and will try again. I do not beleive that I need to > match my main disk size as 146gb as I am not using that much disk space > on it. But let me try this and it might be why I am getting this problem... > > > > On Wed, 5 Nov 2008, Enda O''Connor wrote: > >> Hi Krzys >> Also some info on the actual system >> ie what was it upgraded to u6 from and how. >> and an idea of how the filesystems are laid out, ie is usr seperate >> from / and so on ( maybe a df -k ). Don''t appear to have any zones >> installed, just to confirm. >> Enda >> >> On 11/05/08 14:07, Enda O''Connor wrote: >>> Hi >>> did you get a core dump? >>> would be nice to see the core file to get an idea of what dumped core, >>> might configure coreadm if not already done >>> run coreadm first, if the output looks like >>> >>> # coreadm >>> global core file pattern: /var/crash/core.%f.%p >>> global core file content: default >>> init core file pattern: core >>> init core file content: default >>> global core dumps: enabled >>> per-process core dumps: enabled >>> global setid core dumps: enabled >>> per-process setid core dumps: disabled >>> global core dump logging: enabled >>> >>> then all should be good, and cores should appear in /var/crash >>> >>> otherwise the following should configure coreadm: >>> coreadm -g /var/crash/core.%f.%p >>> coreadm -G all >>> coreadm -e global >>> coreadm -e per-process >>> >>> >>> coreadm -u to load the new settings without rebooting. 
>>> >>> also might need to set the size of the core dump via >>> ulimit -c unlimited >>> check ulimit -a first. >>> >>> then rerun test and check /var/crash for core dump. >>> >>> If that fails a truss via say truss -fae -o /tmp/truss.out lucreate >>> -c ufsBE -n zfsBE -p rootpool >>> >>> might give an indication, look for SIGBUS in the truss log >>> >>> NOTE, that you might want to reset the coreadm and ulimit for >>> coredumps after this, in order to not risk filling the system with >>> coredumps in the case of some utility coredumping in a loop say. >>> >>> >>> Enda >>> On 11/05/08 13:46, Krzys wrote: >>>> >>>> On Wed, 5 Nov 2008, Enda O''Connor wrote: >>>> >>>>> On 11/05/08 13:02, Krzys wrote: >>>>>> I am not sure what I did wrong but I did follow up all the steps >>>>>> to get my system moved from ufs to zfs and not I am unable to boot >>>>>> it... can anyone suggest what I could do to fix it? >>>>>> >>>>>> here are all my steps: >>>>>> >>>>>> [00:26:38] @adas: /root > zpool create rootpool c1t1d0s0 >>>>>> [00:26:57] @adas: /root > lucreate -c ufsBE -n zfsBE -p rootpool >>>>>> Analyzing system configuration. >>>>>> Comparing source boot environment <ufsBE> file systems with the file >>>>>> system(s) you specified for the new boot environment. Determining >>>>>> which >>>>>> file systems should be in the new boot environment. >>>>>> Updating boot environment description database on all BEs. >>>>>> Updating system configuration files. >>>>>> The device </dev/dsk/c1t1d0s0> is not a root device for any boot >>>>>> environment; cannot get BE ID. >>>>>> Creating configuration for boot environment <zfsBE>. >>>>>> Source boot environment is <ufsBE>. >>>>>> Creating boot environment <zfsBE>. >>>>>> Creating file systems on boot environment <zfsBE>. >>>>>> Creating <zfs> file system for </> in zone <global> on >>>>>> <rootpool/ROOT/zfsBE>. >>>>>> Populating file systems on boot environment <zfsBE>. >>>>>> Checking selection integrity. >>>>>> Integrity check OK. >>>>>> Populating contents of mount point </>. >>>>>> Copying. >>>>>> Bus Error - core dumped >>>>> hmm above might be relevant I''d guess. >>>>> >>>>> What release are you on , ie is this Solaris 10, or is this Nevada >>>>> build? >>>>> >>>>> Enda >>>>>> Creating shared file system mount points. >>>>>> Creating compare databases for boot environment <zfsBE>. >>>>>> Creating compare database for file system </var>. >>>>>> Creating compare database for file system </usr>. >>>>>> Creating compare database for file system </rootpool/ROOT>. >>>>>> Creating compare database for file system </>. >>>>>> Updating compare databases on boot environment <zfsBE>. >>>>>> Making boot environment <zfsBE> bootable. >>>> >>>> Anyway I did restart the whole process again, and I got again that >>>> Bus Error >>>> >>>> [07:59:01] root at adas: /root > zpool create rootpool c1t1d0s0 >>>> [07:59:22] root at adas: /root > zfs set compression=on rootpool/ROOT >>>> cannot open ''rootpool/ROOT'': dataset does not exist >>>> [07:59:27] root at adas: /root > zfs set compression=on rootpool >>>> [07:59:31] root at adas: /root > lucreate -c ufsBE -n zfsBE -p rootpool >>>> Analyzing system configuration. >>>> Comparing source boot environment <ufsBE> file systems with the file >>>> system(s) you specified for the new boot environment. Determining which >>>> file systems should be in the new boot environment. >>>> Updating boot environment description database on all BEs. >>>> Updating system configuration files. 
>>>> The device </dev/dsk/c1t1d0s0> is not a root device for any boot >>>> environment; cannot get BE ID. >>>> Creating configuration for boot environment <zfsBE>. >>>> Source boot environment is <ufsBE>. >>>> Creating boot environment <zfsBE>. >>>> Creating file systems on boot environment <zfsBE>. >>>> Creating <zfs> file system for </> in zone <global> on >>>> <rootpool/ROOT/zfsBE>. >>>> Populating file systems on boot environment <zfsBE>. >>>> Checking selection integrity. >>>> Integrity check OK. >>>> Populating contents of mount point </>. >>>> Copying. >>>> Bus Error - core dumped >>>> Creating shared file system mount points. >>>> Creating compare databases for boot environment <zfsBE>. >>>> Creating compare database for file system </var>. >>>> Creating compare database for file system </usr>. >>>> >>>> >>>> >>>> _______________________________________________ >>>> zfs-discuss mailing list >>>> zfs-discuss at opensolaris.org >>>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss >>> >>> >> >> >> -- >> Enda O''Connor x19781 Software Product Engineering >> Patch System Test : Ireland : x19781/353-1-8199718 >> >> >> !DSPAM:122,4911ac7c27292151120594! >>-- Enda O''Connor x19781 Software Product Engineering Patch System Test : Ireland : x19781/353-1-8199718
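Spelling out the pre-created zvol route mentioned above as a sketch; the 2G and 1G sizes are arbitrary examples, not values recommended in the thread:

zpool create rootpool c1t1d0s0
zfs create -V 2g rootpool/swap      # per the note above, lucreate takes the preset swap/dump sizes
zfs create -V 1g rootpool/dump
lucreate -c ufsBE -n zfsBE -p rootpool

# or resize after the fact, as shown above:
# zfs set volsize=2G rootpool/swap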
THis is so bizare, I am unable to pass this problem. I though I had not enough space on my hard drive (new one) so I replaced it with 72gb drive, but still getting that bus error. Originally when I restarted my server it did not want to boot, do I had to power it off and then back on and it then booted up. But constantly I am getting this "Bus Error - core dumped" anyway in my /var/crash I see hundreds of core.void files and 3 core.cpio files. I would imagine core.cpio are the ones that are direct result of what I am probably eperiencing. -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24854 -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24867 -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24880 -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24893 -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24906 -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24919 -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24932 -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24950 -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24978 drwxr-xr-x 3 root root 81408 Nov 5 20:06 . -rw------- 1 root root 31351099 Nov 5 20:06 core.cpio.6208 On Wed, 5 Nov 2008, Enda O''Connor wrote:> Hi > Looks ok, some mounts left over from pervious fail. > In regards to swap and dump on zpool you can set them > zfs set volsize=1G rootpool/dump > zfs set volsize=1G rootpool/swap > > for instance, of course above are only an example of how to do it. > or make the zvol doe rootpool/dump etc before lucreate, in which case it will > take the swap and dump size you have preset. > > But I think we need to see the coredump/truss at this point to get an idea of > where things went wrong. > Enda > > On 11/05/08 15:38, Krzys wrote: >> I did upgrade my U5 to U6 from DVD, went trough the upgrade process. >> my file system is setup as follow: >> [10:11:54] root at adas: /root > df -h | egrep -v >> "platform|sharefs|objfs|mnttab|proc|ctfs|devices|fd|nsr" >> Filesystem size used avail capacity Mounted on >> /dev/dsk/c1t0d0s0 16G 7.2G 8.4G 47% / >> swap 8.3G 1.5M 8.3G 1% /etc/svc/volatile >> /dev/dsk/c1t0d0s6 16G 8.7G 6.9G 56% /usr >> /dev/dsk/c1t0d0s1 16G 2.5G 13G 17% /var >> swap 8.5G 229M 8.3G 3% /tmp >> swap 8.3G 40K 8.3G 1% /var/run >> /dev/dsk/c1t0d0s7 78G 1.2G 76G 2% /export/home >> rootpool 33G 19K 21G 1% /rootpool >> rootpool/ROOT 33G 18K 21G 1% /rootpool/ROOT >> rootpool/ROOT/zfsBE 33G 31M 21G 1% /.alt.tmp.b-UUb.mnt >> /export/home 78G 1.2G 76G 2% >> /.alt.tmp.b-UUb.mnt/export/home >> /rootpool 21G 19K 21G 1% >> /.alt.tmp.b-UUb.mnt/rootpool >> /rootpool/ROOT 21G 18K 21G 1% >> /.alt.tmp.b-UUb.mnt/rootpool/ROOT >> swap 8.3G 0K 8.3G 0% >> /.alt.tmp.b-UUb.mnt/var/run >> swap 8.3G 0K 8.3G 0% /.alt.tmp.b-UUb.mnt/tmp >> [10:12:00] root at adas: /root > >> >> >> so I have /, /usr, /var and /export/home on that primary disk. Original >> disk is 140gb, this new one is only 36gb, but disk utilization on that >> primary disk is much less utilized so easily should fit on it. >> >> / 7.2GB >> /usr 8.7GB >> /var 2.5GB >> /export/home 1.2GB >> total space 19.6GB >> I did notice that lucreate did alocate 8GB to SWAP and 4GB to DUMP >> total space needed 31.6GB >> seems like total available disk space on my disk should be 33.92GB >> so its quite close as both numbers do approach. So to make sure I will >> change disk for 72gb and will try again. I do not beleive that I need to >> match my main disk size as 146gb as I am not using that much disk space on >> it. 
But let me try this and it might be why I am getting this problem... >> >> >> >> On Wed, 5 Nov 2008, Enda O''Connor wrote: >> >>> Hi Krzys >>> Also some info on the actual system >>> ie what was it upgraded to u6 from and how. >>> and an idea of how the filesystems are laid out, ie is usr seperate from / >>> and so on ( maybe a df -k ). Don''t appear to have any zones installed, >>> just to confirm. >>> Enda >>> >>> On 11/05/08 14:07, Enda O''Connor wrote: >>>> Hi >>>> did you get a core dump? >>>> would be nice to see the core file to get an idea of what dumped core, >>>> might configure coreadm if not already done >>>> run coreadm first, if the output looks like >>>> >>>> # coreadm >>>> global core file pattern: /var/crash/core.%f.%p >>>> global core file content: default >>>> init core file pattern: core >>>> init core file content: default >>>> global core dumps: enabled >>>> per-process core dumps: enabled >>>> global setid core dumps: enabled >>>> per-process setid core dumps: disabled >>>> global core dump logging: enabled >>>> >>>> then all should be good, and cores should appear in /var/crash >>>> >>>> otherwise the following should configure coreadm: >>>> coreadm -g /var/crash/core.%f.%p >>>> coreadm -G all >>>> coreadm -e global >>>> coreadm -e per-process >>>> >>>> >>>> coreadm -u to load the new settings without rebooting. >>>> >>>> also might need to set the size of the core dump via >>>> ulimit -c unlimited >>>> check ulimit -a first. >>>> >>>> then rerun test and check /var/crash for core dump. >>>> >>>> If that fails a truss via say truss -fae -o /tmp/truss.out lucreate -c >>>> ufsBE -n zfsBE -p rootpool >>>> >>>> might give an indication, look for SIGBUS in the truss log >>>> >>>> NOTE, that you might want to reset the coreadm and ulimit for coredumps >>>> after this, in order to not risk filling the system with coredumps in the >>>> case of some utility coredumping in a loop say. >>>> >>>> >>>> Enda
what makes me wonder is why I am not even able to see anything under boot -L ? and it is just not seeing this disk as a boot device? so strange. On Wed, 5 Nov 2008, Krzys wrote:> THis is so bizare, I am unable to pass this problem. I though I had not enough > space on my hard drive (new one) so I replaced it with 72gb drive, but still > getting that bus error. Originally when I restarted my server it did not want to > boot, do I had to power it off and then back on and it then booted up. But > constantly I am getting this "Bus Error - core dumped" > > anyway in my /var/crash I see hundreds of core.void files and 3 core.cpio files. > I would imagine core.cpio are the ones that are direct result of what I am > probably eperiencing. > > -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24854 > -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24867 > -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24880 > -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24893 > -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24906 > -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24919 > -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24932 > -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24950 > -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24978 > drwxr-xr-x 3 root root 81408 Nov 5 20:06 . > -rw------- 1 root root 31351099 Nov 5 20:06 core.cpio.6208 > > > > On Wed, 5 Nov 2008, Enda O''Connor wrote: > >> Hi >> Looks ok, some mounts left over from pervious fail. >> In regards to swap and dump on zpool you can set them >> zfs set volsize=1G rootpool/dump >> zfs set volsize=1G rootpool/swap >> >> for instance, of course above are only an example of how to do it. >> or make the zvol doe rootpool/dump etc before lucreate, in which case it will >> take the swap and dump size you have preset. >> >> But I think we need to see the coredump/truss at this point to get an idea of >> where things went wrong. >> Enda >> >> On 11/05/08 15:38, Krzys wrote: >>> I did upgrade my U5 to U6 from DVD, went trough the upgrade process. >>> my file system is setup as follow: >>> [10:11:54] root at adas: /root > df -h | egrep -v >>> "platform|sharefs|objfs|mnttab|proc|ctfs|devices|fd|nsr" >>> Filesystem size used avail capacity Mounted on >>> /dev/dsk/c1t0d0s0 16G 7.2G 8.4G 47% / >>> swap 8.3G 1.5M 8.3G 1% /etc/svc/volatile >>> /dev/dsk/c1t0d0s6 16G 8.7G 6.9G 56% /usr >>> /dev/dsk/c1t0d0s1 16G 2.5G 13G 17% /var >>> swap 8.5G 229M 8.3G 3% /tmp >>> swap 8.3G 40K 8.3G 1% /var/run >>> /dev/dsk/c1t0d0s7 78G 1.2G 76G 2% /export/home >>> rootpool 33G 19K 21G 1% /rootpool >>> rootpool/ROOT 33G 18K 21G 1% /rootpool/ROOT >>> rootpool/ROOT/zfsBE 33G 31M 21G 1% /.alt.tmp.b-UUb.mnt >>> /export/home 78G 1.2G 76G 2% >>> /.alt.tmp.b-UUb.mnt/export/home >>> /rootpool 21G 19K 21G 1% >>> /.alt.tmp.b-UUb.mnt/rootpool >>> /rootpool/ROOT 21G 18K 21G 1% >>> /.alt.tmp.b-UUb.mnt/rootpool/ROOT >>> swap 8.3G 0K 8.3G 0% >>> /.alt.tmp.b-UUb.mnt/var/run >>> swap 8.3G 0K 8.3G 0% /.alt.tmp.b-UUb.mnt/tmp >>> [10:12:00] root at adas: /root > >>> >>> >>> so I have /, /usr, /var and /export/home on that primary disk. Original >>> disk is 140gb, this new one is only 36gb, but disk utilization on that >>> primary disk is much less utilized so easily should fit on it. 
>>> >>> / 7.2GB >>> /usr 8.7GB >>> /var 2.5GB >>> /export/home 1.2GB >>> total space 19.6GB >>> I did notice that lucreate did alocate 8GB to SWAP and 4GB to DUMP >>> total space needed 31.6GB >>> seems like total available disk space on my disk should be 33.92GB >>> so its quite close as both numbers do approach. So to make sure I will >>> change disk for 72gb and will try again. I do not beleive that I need to >>> match my main disk size as 146gb as I am not using that much disk space on >>> it. But let me try this and it might be why I am getting this problem... >>> >>> >>> >>> On Wed, 5 Nov 2008, Enda O''Connor wrote: >>> >>>> Hi Krzys >>>> Also some info on the actual system >>>> ie what was it upgraded to u6 from and how. >>>> and an idea of how the filesystems are laid out, ie is usr seperate from / >>>> and so on ( maybe a df -k ). Don''t appear to have any zones installed, >>>> just to confirm. >>>> Enda >>>> >>>> On 11/05/08 14:07, Enda O''Connor wrote: >>>>> Hi >>>>> did you get a core dump? >>>>> would be nice to see the core file to get an idea of what dumped core, >>>>> might configure coreadm if not already done >>>>> run coreadm first, if the output looks like >>>>> >>>>> # coreadm >>>>> global core file pattern: /var/crash/core.%f.%p >>>>> global core file content: default >>>>> init core file pattern: core >>>>> init core file content: default >>>>> global core dumps: enabled >>>>> per-process core dumps: enabled >>>>> global setid core dumps: enabled >>>>> per-process setid core dumps: disabled >>>>> global core dump logging: enabled >>>>> >>>>> then all should be good, and cores should appear in /var/crash >>>>> >>>>> otherwise the following should configure coreadm: >>>>> coreadm -g /var/crash/core.%f.%p >>>>> coreadm -G all >>>>> coreadm -e global >>>>> coreadm -e per-process >>>>> >>>>> >>>>> coreadm -u to load the new settings without rebooting. >>>>> >>>>> also might need to set the size of the core dump via >>>>> ulimit -c unlimited >>>>> check ulimit -a first. >>>>> >>>>> then rerun test and check /var/crash for core dump. >>>>> >>>>> If that fails a truss via say truss -fae -o /tmp/truss.out lucreate -c >>>>> ufsBE -n zfsBE -p rootpool >>>>> >>>>> might give an indication, look for SIGBUS in the truss log >>>>> >>>>> NOTE, that you might want to reset the coreadm and ulimit for coredumps >>>>> after this, in order to not risk filling the system with coredumps in the >>>>> case of some utility coredumping in a loop say. >>>>> >>>>> >>>>> Enda > _______________________________________________ > zfs-discuss mailing list > zfs-discuss at opensolaris.org > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss > > > !DSPAM:122,4912484314258371292! >
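On the "boot -L shows nothing" question: on SPARC that usually means the ZFS boot block never made it onto the new disk, or the pool's bootfs property is not set. luactivate is supposed to take care of both, so the following is a cross-check rather than the fix (a sketch, assuming the new root disk is still c1t1d0s0 as in the original lucreate):

  # does the pool know which dataset to boot?
  zpool get bootfs rootpool
  zpool set bootfs=rootpool/ROOT/zfsBE rootpool    # only if it came back unset

  # put the ZFS boot block on slice 0 of the new disk (SPARC)
  installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0

After that, boot -L from the ok prompt against that disk's full device path should list zfsBE.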
Enda O'Connor
2008-Nov-06 11:12 UTC
[zfs-discuss] migrating ufs to zfs - cant boot system
Hi try and get the stack trace from the core ie mdb core.vold.24978 ::status $C $r also run the same 3 mdb commands on the cpio core dump. also if you could extract some data from the truss log, ie a few hundred lines before the first SIGBUS Enda On 11/06/08 01:25, Krzys wrote:> THis is so bizare, I am unable to pass this problem. I though I had not > enough space on my hard drive (new one) so I replaced it with 72gb > drive, but still getting that bus error. Originally when I restarted my > server it did not want to boot, do I had to power it off and then back > on and it then booted up. But constantly I am getting this "Bus Error - > core dumped" > > anyway in my /var/crash I see hundreds of core.void files and 3 > core.cpio files. I would imagine core.cpio are the ones that are direct > result of what I am probably eperiencing. > > -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24854 > -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24867 > -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24880 > -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24893 > -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24906 > -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24919 > -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24932 > -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24950 > -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24978 > drwxr-xr-x 3 root root 81408 Nov 5 20:06 . > -rw------- 1 root root 31351099 Nov 5 20:06 core.cpio.6208 > > > > On Wed, 5 Nov 2008, Enda O''Connor wrote: > >> Hi >> Looks ok, some mounts left over from pervious fail. >> In regards to swap and dump on zpool you can set them >> zfs set volsize=1G rootpool/dump >> zfs set volsize=1G rootpool/swap >> >> for instance, of course above are only an example of how to do it. >> or make the zvol doe rootpool/dump etc before lucreate, in which case >> it will take the swap and dump size you have preset. >> >> But I think we need to see the coredump/truss at this point to get an >> idea of where things went wrong. >> Enda >> >> On 11/05/08 15:38, Krzys wrote: >>> I did upgrade my U5 to U6 from DVD, went trough the upgrade process. >>> my file system is setup as follow: >>> [10:11:54] root at adas: /root > df -h | egrep -v >>> "platform|sharefs|objfs|mnttab|proc|ctfs|devices|fd|nsr" >>> Filesystem size used avail capacity Mounted on >>> /dev/dsk/c1t0d0s0 16G 7.2G 8.4G 47% / >>> swap 8.3G 1.5M 8.3G 1% /etc/svc/volatile >>> /dev/dsk/c1t0d0s6 16G 8.7G 6.9G 56% /usr >>> /dev/dsk/c1t0d0s1 16G 2.5G 13G 17% /var >>> swap 8.5G 229M 8.3G 3% /tmp >>> swap 8.3G 40K 8.3G 1% /var/run >>> /dev/dsk/c1t0d0s7 78G 1.2G 76G 2% /export/home >>> rootpool 33G 19K 21G 1% /rootpool >>> rootpool/ROOT 33G 18K 21G 1% /rootpool/ROOT >>> rootpool/ROOT/zfsBE 33G 31M 21G 1% /.alt.tmp.b-UUb.mnt >>> /export/home 78G 1.2G 76G 2% >>> /.alt.tmp.b-UUb.mnt/export/home >>> /rootpool 21G 19K 21G 1% >>> /.alt.tmp.b-UUb.mnt/rootpool >>> /rootpool/ROOT 21G 18K 21G 1% >>> /.alt.tmp.b-UUb.mnt/rootpool/ROOT >>> swap 8.3G 0K 8.3G 0% >>> /.alt.tmp.b-UUb.mnt/var/run >>> swap 8.3G 0K 8.3G 0% >>> /.alt.tmp.b-UUb.mnt/tmp >>> [10:12:00] root at adas: /root > >>> >>> >>> so I have /, /usr, /var and /export/home on that primary disk. >>> Original disk is 140gb, this new one is only 36gb, but disk >>> utilization on that primary disk is much less utilized so easily >>> should fit on it. 
>>> >>> / 7.2GB >>> /usr 8.7GB >>> /var 2.5GB >>> /export/home 1.2GB >>> total space 19.6GB >>> I did notice that lucreate did alocate 8GB to SWAP and 4GB to DUMP >>> total space needed 31.6GB >>> seems like total available disk space on my disk should be 33.92GB >>> so its quite close as both numbers do approach. So to make sure I >>> will change disk for 72gb and will try again. I do not beleive that I >>> need to match my main disk size as 146gb as I am not using that much >>> disk space on it. But let me try this and it might be why I am >>> getting this problem... >>> >>> >>> >>> On Wed, 5 Nov 2008, Enda O''Connor wrote: >>> >>>> Hi Krzys >>>> Also some info on the actual system >>>> ie what was it upgraded to u6 from and how. >>>> and an idea of how the filesystems are laid out, ie is usr seperate >>>> from / and so on ( maybe a df -k ). Don''t appear to have any zones >>>> installed, just to confirm. >>>> Enda >>>> >>>> On 11/05/08 14:07, Enda O''Connor wrote: >>>>> Hi >>>>> did you get a core dump? >>>>> would be nice to see the core file to get an idea of what dumped core, >>>>> might configure coreadm if not already done >>>>> run coreadm first, if the output looks like >>>>> >>>>> # coreadm >>>>> global core file pattern: /var/crash/core.%f.%p >>>>> global core file content: default >>>>> init core file pattern: core >>>>> init core file content: default >>>>> global core dumps: enabled >>>>> per-process core dumps: enabled >>>>> global setid core dumps: enabled >>>>> per-process setid core dumps: disabled >>>>> global core dump logging: enabled >>>>> >>>>> then all should be good, and cores should appear in /var/crash >>>>> >>>>> otherwise the following should configure coreadm: >>>>> coreadm -g /var/crash/core.%f.%p >>>>> coreadm -G all >>>>> coreadm -e global >>>>> coreadm -e per-process >>>>> >>>>> >>>>> coreadm -u to load the new settings without rebooting. >>>>> >>>>> also might need to set the size of the core dump via >>>>> ulimit -c unlimited >>>>> check ulimit -a first. >>>>> >>>>> then rerun test and check /var/crash for core dump. >>>>> >>>>> If that fails a truss via say truss -fae -o /tmp/truss.out lucreate >>>>> -c ufsBE -n zfsBE -p rootpool >>>>> >>>>> might give an indication, look for SIGBUS in the truss log >>>>> >>>>> NOTE, that you might want to reset the coreadm and ulimit for >>>>> coredumps after this, in order to not risk filling the system with >>>>> coredumps in the case of some utility coredumping in a loop say. >>>>> >>>>> >>>>> Enda-- Enda O''Connor x19781 Software Product Engineering Patch System Test : Ireland : x19781/353-1-8199718
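The three mdb commands and the truss extract Enda asked for can be captured non-interactively, roughly like this (a sketch; the core file names are the ones listed earlier and /tmp/truss.out is the log from the earlier truss run):

  # stack traces from the vold and cpio cores
  for c in core.vold.24978 core.cpio.6208
  do
          echo "==== $c ===="
          printf '::status\n$C\n$r\n' | mdb $c
  done

  # a few hundred lines of context leading up to the first SIGBUS
  line=`grep -n SIGBUS /tmp/truss.out | head -1 | cut -d: -f1`
  if [ -n "$line" ]; then
          start=`expr $line - 300`
          [ "$start" -lt 1 ] && start=1
          sed -n "${start},${line}p" /tmp/truss.out > /tmp/truss.sigbus.txt
  fi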
Seems like core.vold.* are not being created until I try to boot from zfsBE, just creating zfsBE gets onlu core.cpio created. [10:29:48] @adas: /var/crash > mdb core.cpio.5545 Loading modules: [ libc.so.1 libavl.so.1 ld.so.1 ]> ::statusdebugging core file of cpio (32-bit) from adas file: /usr/bin/cpio initial argv: /usr/bin/cpio -pPcdum /.alt.tmp.b-Prb.mnt threading model: multi-threaded status: process terminated by SIGBUS (Bus Error)> $Cffbfe5b0 libc.so.1`_malloc_unlocked+0x164(30, 0, 39c28, ff, 2e2f2e2f, 0) ffbfe610 libc.so.1`malloc+0x4c(30, 1, e8070, 0, ff33e3c0, ff3485b8) ffbfe670 libsec.so.1`cacl_get+0x138(ffbfe7c4, 2, 0, 35bc0, 0, 35f98) ffbfe768 libsec.so.1`acl_get+0x14(37fe2, 2, 35bc0, 354c0, 1000, 1) ffbfe7d0 0x183b4(1, 35800, 359e8, 346b0, 34874, 34870) ffbfec30 main+0x28c(34708, 1, 35bc0, 166fc, 35800, 34400) ffbfec90 _start+0x108(0, 0, 0, 0, 0, 0)> $r%g0 = 0x00000000 %l0 = 0x00000000 %g1 = 0xff25638c libc.so.1`malloc+0x44 %l1 = 0x00039c28 %g2 = 0x00037fe0 %l2 = 0x2e2f2e2f %g3 = 0x00008000 %l3 = 0x000003c8 %g4 = 0x00000000 %l4 = 0x2e2f2e2f %g5 = 0x00000000 %l5 = 0x00000000 %g6 = 0x00000000 %l6 = 0xffffdc00 %g7 = 0xff382a00 %l7 = 0xff347344 libc.so.1`Lfree %o0 = 0x00000000 %i0 = 0x00000030 %o1 = 0x00000000 %i1 = 0x00000000 %o2 = 0x000e70c4 %i2 = 0x00039c28 %o3 = 0x00000000 %i3 = 0x000000ff %o4 = 0xff33e3c0 %i4 = 0x2e2f2e2f %o5 = 0xff347344 libc.so.1`Lfree %i5 = 0x00000000 %o6 = 0xffbfe5b0 %i6 = 0xffbfe610 %o7 = 0xff2564a4 libc.so.1`_malloc_unlocked+0xf4 %i7 = 0xff256394 libc.so.1`malloc+0x4c %psr = 0xfe001002 impl=0xf ver=0xe icc=nzvc ec=0 ef=4096 pil=0 s=0 ps=0 et=0 cwp=0x2 %y = 0x00000000 %pc = 0xff256514 libc.so.1`_malloc_unlocked+0x164 %npc = 0xff2564d8 libc.so.1`_malloc_unlocked+0x128 %sp = 0xffbfe5b0 %fp = 0xffbfe610 %wim = 0x00000000 %tbr = 0x00000000>On Thu, 6 Nov 2008, Enda O''Connor wrote:> Hi > try and get the stack trace from the core > ie mdb core.vold.24978 > ::status > $C > $r > > also run the same 3 mdb commands on the cpio core dump. > > also if you could extract some data from the truss log, ie a few hundred > lines before the first SIGBUS > > > Enda > > On 11/06/08 01:25, Krzys wrote: >> THis is so bizare, I am unable to pass this problem. I though I had not >> enough space on my hard drive (new one) so I replaced it with 72gb drive, >> but still getting that bus error. Originally when I restarted my server it >> did not want to boot, do I had to power it off and then back on and it then >> booted up. But constantly I am getting this "Bus Error - core dumped" >> >> anyway in my /var/crash I see hundreds of core.void files and 3 core.cpio >> files. I would imagine core.cpio are the ones that are direct result of >> what I am probably eperiencing. >> >> -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24854 >> -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24867 >> -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24880 >> -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24893 >> -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24906 >> -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24919 >> -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24932 >> -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24950 >> -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24978 >> drwxr-xr-x 3 root root 81408 Nov 5 20:06 . >> -rw------- 1 root root 31351099 Nov 5 20:06 core.cpio.6208 >> >> >> >> On Wed, 5 Nov 2008, Enda O''Connor wrote: >> >>> Hi >>> Looks ok, some mounts left over from pervious fail. 
>>> In regards to swap and dump on zpool you can set them >>> zfs set volsize=1G rootpool/dump >>> zfs set volsize=1G rootpool/swap >>> >>> for instance, of course above are only an example of how to do it. >>> or make the zvol doe rootpool/dump etc before lucreate, in which case it >>> will take the swap and dump size you have preset. >>> >>> But I think we need to see the coredump/truss at this point to get an idea >>> of where things went wrong. >>> Enda >>> >>> On 11/05/08 15:38, Krzys wrote: >>>> I did upgrade my U5 to U6 from DVD, went trough the upgrade process. >>>> my file system is setup as follow: >>>> [10:11:54] root at adas: /root > df -h | egrep -v >>>> "platform|sharefs|objfs|mnttab|proc|ctfs|devices|fd|nsr" >>>> Filesystem size used avail capacity Mounted on >>>> /dev/dsk/c1t0d0s0 16G 7.2G 8.4G 47% / >>>> swap 8.3G 1.5M 8.3G 1% /etc/svc/volatile >>>> /dev/dsk/c1t0d0s6 16G 8.7G 6.9G 56% /usr >>>> /dev/dsk/c1t0d0s1 16G 2.5G 13G 17% /var >>>> swap 8.5G 229M 8.3G 3% /tmp >>>> swap 8.3G 40K 8.3G 1% /var/run >>>> /dev/dsk/c1t0d0s7 78G 1.2G 76G 2% /export/home >>>> rootpool 33G 19K 21G 1% /rootpool >>>> rootpool/ROOT 33G 18K 21G 1% /rootpool/ROOT >>>> rootpool/ROOT/zfsBE 33G 31M 21G 1% /.alt.tmp.b-UUb.mnt >>>> /export/home 78G 1.2G 76G 2% >>>> /.alt.tmp.b-UUb.mnt/export/home >>>> /rootpool 21G 19K 21G 1% >>>> /.alt.tmp.b-UUb.mnt/rootpool >>>> /rootpool/ROOT 21G 18K 21G 1% >>>> /.alt.tmp.b-UUb.mnt/rootpool/ROOT >>>> swap 8.3G 0K 8.3G 0% >>>> /.alt.tmp.b-UUb.mnt/var/run >>>> swap 8.3G 0K 8.3G 0% >>>> /.alt.tmp.b-UUb.mnt/tmp >>>> [10:12:00] root at adas: /root > >>>> >>>> >>>> so I have /, /usr, /var and /export/home on that primary disk. Original >>>> disk is 140gb, this new one is only 36gb, but disk utilization on that >>>> primary disk is much less utilized so easily should fit on it. >>>> >>>> / 7.2GB >>>> /usr 8.7GB >>>> /var 2.5GB >>>> /export/home 1.2GB >>>> total space 19.6GB >>>> I did notice that lucreate did alocate 8GB to SWAP and 4GB to DUMP >>>> total space needed 31.6GB >>>> seems like total available disk space on my disk should be 33.92GB >>>> so its quite close as both numbers do approach. So to make sure I will >>>> change disk for 72gb and will try again. I do not beleive that I need to >>>> match my main disk size as 146gb as I am not using that much disk space >>>> on it. But let me try this and it might be why I am getting this >>>> problem... >>>> >>>> >>>> >>>> On Wed, 5 Nov 2008, Enda O''Connor wrote: >>>> >>>>> Hi Krzys >>>>> Also some info on the actual system >>>>> ie what was it upgraded to u6 from and how. >>>>> and an idea of how the filesystems are laid out, ie is usr seperate from >>>>> / and so on ( maybe a df -k ). Don''t appear to have any zones installed, >>>>> just to confirm. >>>>> Enda >>>>> >>>>> On 11/05/08 14:07, Enda O''Connor wrote: >>>>>> Hi >>>>>> did you get a core dump? 
>>>>>> would be nice to see the core file to get an idea of what dumped core, >>>>>> might configure coreadm if not already done >>>>>> run coreadm first, if the output looks like >>>>>> >>>>>> # coreadm >>>>>> global core file pattern: /var/crash/core.%f.%p >>>>>> global core file content: default >>>>>> init core file pattern: core >>>>>> init core file content: default >>>>>> global core dumps: enabled >>>>>> per-process core dumps: enabled >>>>>> global setid core dumps: enabled >>>>>> per-process setid core dumps: disabled >>>>>> global core dump logging: enabled >>>>>> >>>>>> then all should be good, and cores should appear in /var/crash >>>>>> >>>>>> otherwise the following should configure coreadm: >>>>>> coreadm -g /var/crash/core.%f.%p >>>>>> coreadm -G all >>>>>> coreadm -e global >>>>>> coreadm -e per-process >>>>>> >>>>>> >>>>>> coreadm -u to load the new settings without rebooting. >>>>>> >>>>>> also might need to set the size of the core dump via >>>>>> ulimit -c unlimited >>>>>> check ulimit -a first. >>>>>> >>>>>> then rerun test and check /var/crash for core dump. >>>>>> >>>>>> If that fails a truss via say truss -fae -o /tmp/truss.out lucreate -c >>>>>> ufsBE -n zfsBE -p rootpool >>>>>> >>>>>> might give an indication, look for SIGBUS in the truss log >>>>>> >>>>>> NOTE, that you might want to reset the coreadm and ulimit for coredumps >>>>>> after this, in order to not risk filling the system with coredumps in >>>>>> the case of some utility coredumping in a loop say. >>>>>> >>>>>> >>>>>> Enda > > > -- > Enda O''Connor x19781 Software Product Engineering > Patch System Test : Ireland : x19781/353-1-8199718 > > > !DSPAM:122,4912d10015286266247132! >
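That $C trace has malloc falling over underneath libsec`cacl_get/acl_get while cpio is copying ACLs, so the finger points at cpio or libsec rather than ZFS itself. A quick sanity check (just a guess at this point) is to verify both binaries against the package database and see what cpio links against:

  # owning package plus checksum/attribute check
  pkgchk -l -p /usr/bin/cpio
  pkgchk -p /usr/bin/cpio
  pkgchk -p /usr/lib/libsec.so.1

  # libraries cpio pulls in at run time
  ldd /usr/bin/cpio

pkgchk staying quiet means the installed files still match what the packages delivered; any ERROR output here would be interesting.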
Enda O'Connor
2008-Nov-06 16:50 UTC
[zfs-discuss] migrating ufs to zfs - cant boot system
Hi Wierd, almost like some kind of memory corruption. Could I see the upgrade logs, that got you to u6 ie /var/sadm/system/logs/upgrade_log for the u6 env. What kind of upgrade did you do, liveupgrade, text based etc? Enda On 11/06/08 15:41, Krzys wrote:> Seems like core.vold.* are not being created until I try to boot from zfsBE, > just creating zfsBE gets onlu core.cpio created. > > > > [10:29:48] @adas: /var/crash > mdb core.cpio.5545 > Loading modules: [ libc.so.1 libavl.so.1 ld.so.1 ] >> ::status > debugging core file of cpio (32-bit) from adas > file: /usr/bin/cpio > initial argv: /usr/bin/cpio -pPcdum /.alt.tmp.b-Prb.mnt > threading model: multi-threaded > status: process terminated by SIGBUS (Bus Error) >> $C > ffbfe5b0 libc.so.1`_malloc_unlocked+0x164(30, 0, 39c28, ff, 2e2f2e2f, 0) > ffbfe610 libc.so.1`malloc+0x4c(30, 1, e8070, 0, ff33e3c0, ff3485b8) > ffbfe670 libsec.so.1`cacl_get+0x138(ffbfe7c4, 2, 0, 35bc0, 0, 35f98) > ffbfe768 libsec.so.1`acl_get+0x14(37fe2, 2, 35bc0, 354c0, 1000, 1) > ffbfe7d0 0x183b4(1, 35800, 359e8, 346b0, 34874, 34870) > ffbfec30 main+0x28c(34708, 1, 35bc0, 166fc, 35800, 34400) > ffbfec90 _start+0x108(0, 0, 0, 0, 0, 0) >> $r > %g0 = 0x00000000 %l0 = 0x00000000 > %g1 = 0xff25638c libc.so.1`malloc+0x44 %l1 = 0x00039c28 > %g2 = 0x00037fe0 %l2 = 0x2e2f2e2f > %g3 = 0x00008000 %l3 = 0x000003c8 > %g4 = 0x00000000 %l4 = 0x2e2f2e2f > %g5 = 0x00000000 %l5 = 0x00000000 > %g6 = 0x00000000 %l6 = 0xffffdc00 > %g7 = 0xff382a00 %l7 = 0xff347344 libc.so.1`Lfree > %o0 = 0x00000000 %i0 = 0x00000030 > %o1 = 0x00000000 %i1 = 0x00000000 > %o2 = 0x000e70c4 %i2 = 0x00039c28 > %o3 = 0x00000000 %i3 = 0x000000ff > %o4 = 0xff33e3c0 %i4 = 0x2e2f2e2f > %o5 = 0xff347344 libc.so.1`Lfree %i5 = 0x00000000 > %o6 = 0xffbfe5b0 %i6 = 0xffbfe610 > %o7 = 0xff2564a4 libc.so.1`_malloc_unlocked+0xf4 %i7 = 0xff256394 > libc.so.1`malloc+0x4c > > %psr = 0xfe001002 impl=0xf ver=0xe icc=nzvc > ec=0 ef=4096 pil=0 s=0 ps=0 et=0 cwp=0x2 > %y = 0x00000000 > %pc = 0xff256514 libc.so.1`_malloc_unlocked+0x164 > %npc = 0xff2564d8 libc.so.1`_malloc_unlocked+0x128 > %sp = 0xffbfe5b0 > %fp = 0xffbfe610 > > %wim = 0x00000000 > %tbr = 0x00000000 > > > > > > > > On Thu, 6 Nov 2008, Enda O''Connor wrote: > >> Hi >> try and get the stack trace from the core >> ie mdb core.vold.24978 >> ::status >> $C >> $r >> >> also run the same 3 mdb commands on the cpio core dump. >> >> also if you could extract some data from the truss log, ie a few hundred >> lines before the first SIGBUS >> >> >> Enda >> >> On 11/06/08 01:25, Krzys wrote: >>> THis is so bizare, I am unable to pass this problem. I though I had not >>> enough space on my hard drive (new one) so I replaced it with 72gb drive, >>> but still getting that bus error. Originally when I restarted my server it >>> did not want to boot, do I had to power it off and then back on and it then >>> booted up. But constantly I am getting this "Bus Error - core dumped" >>> >>> anyway in my /var/crash I see hundreds of core.void files and 3 core.cpio >>> files. I would imagine core.cpio are the ones that are direct result of >>> what I am probably eperiencing. 
>>> >>> -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24854 >>> -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24867 >>> -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24880 >>> -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24893 >>> -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24906 >>> -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24919 >>> -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24932 >>> -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24950 >>> -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24978 >>> drwxr-xr-x 3 root root 81408 Nov 5 20:06 . >>> -rw------- 1 root root 31351099 Nov 5 20:06 core.cpio.6208 >>> >>> >>> >>> On Wed, 5 Nov 2008, Enda O''Connor wrote: >>> >>>> Hi >>>> Looks ok, some mounts left over from pervious fail. >>>> In regards to swap and dump on zpool you can set them >>>> zfs set volsize=1G rootpool/dump >>>> zfs set volsize=1G rootpool/swap >>>> >>>> for instance, of course above are only an example of how to do it. >>>> or make the zvol doe rootpool/dump etc before lucreate, in which case it >>>> will take the swap and dump size you have preset. >>>> >>>> But I think we need to see the coredump/truss at this point to get an idea >>>> of where things went wrong. >>>> Enda >>>> >>>> On 11/05/08 15:38, Krzys wrote: >>>>> I did upgrade my U5 to U6 from DVD, went trough the upgrade process. >>>>> my file system is setup as follow: >>>>> [10:11:54] root at adas: /root > df -h | egrep -v >>>>> "platform|sharefs|objfs|mnttab|proc|ctfs|devices|fd|nsr" >>>>> Filesystem size used avail capacity Mounted on >>>>> /dev/dsk/c1t0d0s0 16G 7.2G 8.4G 47% / >>>>> swap 8.3G 1.5M 8.3G 1% /etc/svc/volatile >>>>> /dev/dsk/c1t0d0s6 16G 8.7G 6.9G 56% /usr >>>>> /dev/dsk/c1t0d0s1 16G 2.5G 13G 17% /var >>>>> swap 8.5G 229M 8.3G 3% /tmp >>>>> swap 8.3G 40K 8.3G 1% /var/run >>>>> /dev/dsk/c1t0d0s7 78G 1.2G 76G 2% /export/home >>>>> rootpool 33G 19K 21G 1% /rootpool >>>>> rootpool/ROOT 33G 18K 21G 1% /rootpool/ROOT >>>>> rootpool/ROOT/zfsBE 33G 31M 21G 1% /.alt.tmp.b-UUb.mnt >>>>> /export/home 78G 1.2G 76G 2% >>>>> /.alt.tmp.b-UUb.mnt/export/home >>>>> /rootpool 21G 19K 21G 1% >>>>> /.alt.tmp.b-UUb.mnt/rootpool >>>>> /rootpool/ROOT 21G 18K 21G 1% >>>>> /.alt.tmp.b-UUb.mnt/rootpool/ROOT >>>>> swap 8.3G 0K 8.3G 0% >>>>> /.alt.tmp.b-UUb.mnt/var/run >>>>> swap 8.3G 0K 8.3G 0% >>>>> /.alt.tmp.b-UUb.mnt/tmp >>>>> [10:12:00] root at adas: /root > >>>>> >>>>> >>>>> so I have /, /usr, /var and /export/home on that primary disk. Original >>>>> disk is 140gb, this new one is only 36gb, but disk utilization on that >>>>> primary disk is much less utilized so easily should fit on it. >>>>> >>>>> / 7.2GB >>>>> /usr 8.7GB >>>>> /var 2.5GB >>>>> /export/home 1.2GB >>>>> total space 19.6GB >>>>> I did notice that lucreate did alocate 8GB to SWAP and 4GB to DUMP >>>>> total space needed 31.6GB >>>>> seems like total available disk space on my disk should be 33.92GB >>>>> so its quite close as both numbers do approach. So to make sure I will >>>>> change disk for 72gb and will try again. I do not beleive that I need to >>>>> match my main disk size as 146gb as I am not using that much disk space >>>>> on it. But let me try this and it might be why I am getting this >>>>> problem... >>>>> >>>>> >>>>> >>>>> On Wed, 5 Nov 2008, Enda O''Connor wrote: >>>>> >>>>>> Hi Krzys >>>>>> Also some info on the actual system >>>>>> ie what was it upgraded to u6 from and how. 
>>>>>> and an idea of how the filesystems are laid out, ie is usr seperate from >>>>>> / and so on ( maybe a df -k ). Don''t appear to have any zones installed, >>>>>> just to confirm. >>>>>> Enda >>>>>> >>>>>> On 11/05/08 14:07, Enda O''Connor wrote: >>>>>>> Hi >>>>>>> did you get a core dump? >>>>>>> would be nice to see the core file to get an idea of what dumped core, >>>>>>> might configure coreadm if not already done >>>>>>> run coreadm first, if the output looks like >>>>>>> >>>>>>> # coreadm >>>>>>> global core file pattern: /var/crash/core.%f.%p >>>>>>> global core file content: default >>>>>>> init core file pattern: core >>>>>>> init core file content: default >>>>>>> global core dumps: enabled >>>>>>> per-process core dumps: enabled >>>>>>> global setid core dumps: enabled >>>>>>> per-process setid core dumps: disabled >>>>>>> global core dump logging: enabled >>>>>>> >>>>>>> then all should be good, and cores should appear in /var/crash >>>>>>> >>>>>>> otherwise the following should configure coreadm: >>>>>>> coreadm -g /var/crash/core.%f.%p >>>>>>> coreadm -G all >>>>>>> coreadm -e global >>>>>>> coreadm -e per-process >>>>>>> >>>>>>> >>>>>>> coreadm -u to load the new settings without rebooting. >>>>>>> >>>>>>> also might need to set the size of the core dump via >>>>>>> ulimit -c unlimited >>>>>>> check ulimit -a first. >>>>>>> >>>>>>> then rerun test and check /var/crash for core dump. >>>>>>> >>>>>>> If that fails a truss via say truss -fae -o /tmp/truss.out lucreate -c >>>>>>> ufsBE -n zfsBE -p rootpool >>>>>>> >>>>>>> might give an indication, look for SIGBUS in the truss log >>>>>>> >>>>>>> NOTE, that you might want to reset the coreadm and ulimit for coredumps >>>>>>> after this, in order to not risk filling the system with coredumps in >>>>>>> the case of some utility coredumping in a loop say. >>>>>>> >>>>>>> >>>>>>> Enda >> >> -- >> Enda O''Connor x19781 Software Product Engineering >> Patch System Test : Ireland : x19781/353-1-8199718 >> >> >> !DSPAM:122,4912d10015286266247132! >> > _______________________________________________ > zfs-discuss mailing list > zfs-discuss at opensolaris.org > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss-- Enda O''Connor x19781 Software Product Engineering Patch System Test : Ireland : x19781/353-1-8199718
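For the upgrade history Enda is after, something like this pulls the relevant bits (a sketch; the log path is the one Enda named, and the package list is the standard trio of Live Upgrade packages, which are supposed to be refreshed from the 10/08 media before running lucreate):

  # anything the U6 upgrade complained about
  egrep -i 'error|warn|fail' /var/sadm/system/logs/upgrade_log | more

  # are the Live Upgrade bits actually the U6 versions?
  pkginfo -l SUNWlucfg SUNWlur SUNWluu | egrep 'PKGINST|VERSION'

Stale SUNWlur/SUNWluu bits left over from an earlier release are a fairly common way for lucreate to misbehave, although that would not normally explain a SIGBUS in cpio.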
I think I did figure it out. It is the issue with cpio that is in my system... I am not sure but I did copy cpio from my solaris sparc 9 server and it seems like lucreate completed without bus error, and system booted up using root zpool. original cpio that I have on all of my solaris 10 U6 boxes are: [11:04:16] @adas: /usr/bin > ls -la cpi* -r-xr-xr-x 1 root bin 85856 May 21 18:48 cpio then I did copy solaris 9 cpio to my system: -r-xr-xr-x 1 root root 76956 May 14 15:46 cpio.3_sol9 so that old CPIO seems to work, new cpio on Soalris 10 U6 does not work. :( [11:03:49] root at adas: /root > zfs list NAME USED AVAIL REFER MOUNTPOINT rootpool 12.0G 54.9G 19K /rootpool rootpool/ROOT 18K 54.9G 18K /rootpool/ROOT rootpool/dump 4G 58.9G 16K - rootpool/swap 8.00G 62.9G 16K - [11:04:06] root at adas: /root > lucreate -c ufsBE -n zfsBE -p rootpool Analyzing system configuration. Comparing source boot environment <ufsBE> file systems with the file system(s) you specified for the new boot environment. Determining which file systems should be in the new boot environment. Updating boot environment description database on all BEs. Updating system configuration files. The device </dev/dsk/c1t1d0s0> is not a root device for any boot environment; cannot get BE ID. Creating configuration for boot environment <zfsBE>. Source boot environment is <ufsBE>. Creating boot environment <zfsBE>. Creating file systems on boot environment <zfsBE>. Creating <zfs> file system for </> in zone <global> on <rootpool/ROOT/zfsBE>. Populating file systems on boot environment <zfsBE>. Checking selection integrity. Integrity check OK. Populating contents of mount point </>. Copying. Creating shared file system mount points. Creating compare databases for boot environment <zfsBE>. Creating compare database for file system </var>. Creating compare database for file system </usr>. Creating compare database for file system </>. Updating compare databases on boot environment <zfsBE>. Making boot environment <zfsBE> bootable. Creating boot_archive for /.alt.tmp.b-tvg.mnt updating /.alt.tmp.b-tvg.mnt/platform/sun4u/boot_archive Population of boot environment <zfsBE> successful. Creation of boot environment <zfsBE> successful. [12:45:04] root at adas: /root > lustatus Boot Environment Is Active Active Can Copy Name Complete Now On Reboot Delete Status -------------------------- -------- ------ --------- ------ ---------- ufsBE yes yes yes no - zfsBE yes no no yes - [13:14:57] root at adas: /root > [13:14:59] root at adas: /root > zfs list NAME USED AVAIL REFER MOUNTPOINT rootpool 24.3G 42.6G 19K /rootpool rootpool/ROOT 12.3G 42.6G 18K /rootpool/ROOT rootpool/ROOT/zfsBE 12.3G 42.6G 12.3G / rootpool/dump 4G 46.6G 16K - rootpool/swap 8.00G 50.6G 16K - [13:15:25] root at adas: /root > luactivate zfsBE A Live Upgrade Sync operation will be performed on startup of boot environment <zfsBE>. ********************************************************************** The target boot environment has been activated. It will be used when you reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You MUST USE either the init or the shutdown command when you reboot. If you do not use either init or shutdown, the system will not boot using the target BE. ********************************************************************** In case of a failure while booting to the target BE, the following process needs to be followed to fallback to the currently working boot environment: 1. Enter the PROM monitor (ok prompt). 2. 
Change the boot device back to the original boot environment by typing: setenv boot-device /pci at 1c,600000/scsi at 2/disk at 0,0:a 3. Boot to the original boot environment by typing: boot ********************************************************************** Modifying boot archive service Activation of boot environment <zfsBE> successful. [13:16:57] root at adas: /root > init 6 stopping NetWorker daemons: nsr_shutdown -q svc.startd: The system is coming down. Please wait. svc.startd: 90 system services are now being stopped. Nov 6 13:18:09 adas syslogd: going down on signal 15 umount: /appl busy svc.startd: The system is down. syncing file systems... done rebooting... SC Alert: Host System has Reset Probing system devices Probing memory Probing I/O buses Sun Fire V210, No Keyboard Copyright 2007 Sun Microsystems, Inc. All rights reserved. OpenBoot 4.22.33, 4096 MB memory installed, Serial #64938415. Ethernet address 0:3:ba:de:e1:af, Host ID: 83dee1af. Rebooting with command: boot Boot device: /pci at 1c,600000/scsi at 2/disk at 1,0:a File and args: SunOS Release 5.10 Version Generic_137137-09 64-bit Copyright 1983-2008 Sun Microsystems, Inc. All rights reserved. Use is subject to license terms. Hardware watchdog enabled Hostname: adas Configuring devices. /dev/rdsk/c1t0d0s7 is clean Reading ZFS config: done. Mounting ZFS filesystems: (3/3) Nov 6 13:22:23 squid[380]: Squid Parent: child process 383 started adas console login: root Password: Nov 6 13:22:38 adas login: ROOT LOGIN /dev/console Last login: Thu Nov 6 10:44:17 from kasiczynka.ny.p Sun Microsystems Inc. SunOS 5.10 Generic January 2005 You have mail. # bash [13:22:40] @adas: /root > df -h Filesystem size used avail capacity Mounted on rootpool/ROOT/zfsBE 67G 12G 43G 23% / /devices 0K 0K 0K 0% /devices ctfs 0K 0K 0K 0% /system/contract proc 0K 0K 0K 0% /proc mnttab 0K 0K 0K 0% /etc/mnttab swap 7.8G 360K 7.8G 1% /etc/svc/volatile objfs 0K 0K 0K 0% /system/object sharefs 0K 0K 0K 0% /etc/dfs/sharetab /platform/sun4u-us3/lib/libc_psr/libc_psr_hwcap1.so.1 55G 12G 43G 23% /platform/sun4u-us3/lib/libc_psr.so.1 /platform/sun4u-us3/lib/sparcv9/libc_psr/libc_psr_hwcap1.so.1 55G 12G 43G 23% /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1 fd 0K 0K 0K 0% /dev/fd swap 7.8G 72K 7.8G 1% /tmp swap 7.8G 56K 7.8G 1% /var/run /dev/dsk/c1t0d0s7 78G 1.2G 76G 2% /export/home rootpool 67G 21K 43G 1% /rootpool rootpool/ROOT 67G 18K 43G 1% /rootpool/ROOT [13:22:42] @adas: /root > starting NetWorker daemons: nsrexecd On Thu, 6 Nov 2008, Enda O''Connor wrote:> Hi > Wierd, almost like some kind of memory corruption. > > Could I see the upgrade logs, that got you to u6 > ie > /var/sadm/system/logs/upgrade_log > for the u6 env. > What kind of upgrade did you do, liveupgrade, text based etc? > > Enda > > On 11/06/08 15:41, Krzys wrote: >> Seems like core.vold.* are not being created until I try to boot from >> zfsBE, just creating zfsBE gets onlu core.cpio created. 
>> >> >> >> [10:29:48] @adas: /var/crash > mdb core.cpio.5545 >> Loading modules: [ libc.so.1 libavl.so.1 ld.so.1 ] >>> ::status >> debugging core file of cpio (32-bit) from adas >> file: /usr/bin/cpio >> initial argv: /usr/bin/cpio -pPcdum /.alt.tmp.b-Prb.mnt >> threading model: multi-threaded >> status: process terminated by SIGBUS (Bus Error) >>> $C >> ffbfe5b0 libc.so.1`_malloc_unlocked+0x164(30, 0, 39c28, ff, 2e2f2e2f, 0) >> ffbfe610 libc.so.1`malloc+0x4c(30, 1, e8070, 0, ff33e3c0, ff3485b8) >> ffbfe670 libsec.so.1`cacl_get+0x138(ffbfe7c4, 2, 0, 35bc0, 0, 35f98) >> ffbfe768 libsec.so.1`acl_get+0x14(37fe2, 2, 35bc0, 354c0, 1000, 1) >> ffbfe7d0 0x183b4(1, 35800, 359e8, 346b0, 34874, 34870) >> ffbfec30 main+0x28c(34708, 1, 35bc0, 166fc, 35800, 34400) >> ffbfec90 _start+0x108(0, 0, 0, 0, 0, 0) >>> $r >> %g0 = 0x00000000 %l0 = 0x00000000 >> %g1 = 0xff25638c libc.so.1`malloc+0x44 %l1 = 0x00039c28 >> %g2 = 0x00037fe0 %l2 = 0x2e2f2e2f >> %g3 = 0x00008000 %l3 = 0x000003c8 >> %g4 = 0x00000000 %l4 = 0x2e2f2e2f >> %g5 = 0x00000000 %l5 = 0x00000000 >> %g6 = 0x00000000 %l6 = 0xffffdc00 >> %g7 = 0xff382a00 %l7 = 0xff347344 libc.so.1`Lfree >> %o0 = 0x00000000 %i0 = 0x00000030 >> %o1 = 0x00000000 %i1 = 0x00000000 >> %o2 = 0x000e70c4 %i2 = 0x00039c28 >> %o3 = 0x00000000 %i3 = 0x000000ff >> %o4 = 0xff33e3c0 %i4 = 0x2e2f2e2f >> %o5 = 0xff347344 libc.so.1`Lfree %i5 = 0x00000000 >> %o6 = 0xffbfe5b0 %i6 = 0xffbfe610 >> %o7 = 0xff2564a4 libc.so.1`_malloc_unlocked+0xf4 %i7 = 0xff256394 >> libc.so.1`malloc+0x4c >> >> %psr = 0xfe001002 impl=0xf ver=0xe icc=nzvc >> ec=0 ef=4096 pil=0 s=0 ps=0 et=0 cwp=0x2 >> %y = 0x00000000 >> %pc = 0xff256514 libc.so.1`_malloc_unlocked+0x164 >> %npc = 0xff2564d8 libc.so.1`_malloc_unlocked+0x128 >> %sp = 0xffbfe5b0 >> %fp = 0xffbfe610 >> >> %wim = 0x00000000 >> %tbr = 0x00000000 >> >> >> >> >> >> >> >> On Thu, 6 Nov 2008, Enda O''Connor wrote: >> >>> Hi >>> try and get the stack trace from the core >>> ie mdb core.vold.24978 >>> ::status >>> $C >>> $r >>> >>> also run the same 3 mdb commands on the cpio core dump. >>> >>> also if you could extract some data from the truss log, ie a few hundred >>> lines before the first SIGBUS >>> >>> >>> Enda >>> >>> On 11/06/08 01:25, Krzys wrote: >>>> THis is so bizare, I am unable to pass this problem. I though I had not >>>> enough space on my hard drive (new one) so I replaced it with 72gb drive, >>>> but still getting that bus error. Originally when I restarted my server >>>> it did not want to boot, do I had to power it off and then back on and it >>>> then booted up. But constantly I am getting this "Bus Error - core >>>> dumped" >>>> >>>> anyway in my /var/crash I see hundreds of core.void files and 3 core.cpio >>>> files. I would imagine core.cpio are the ones that are direct result of >>>> what I am probably eperiencing. >>>> >>>> -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24854 >>>> -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24867 >>>> -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24880 >>>> -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24893 >>>> -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24906 >>>> -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24919 >>>> -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24932 >>>> -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24950 >>>> -rw------- 1 root root 4126301 Nov 5 19:22 core.vold.24978 >>>> drwxr-xr-x 3 root root 81408 Nov 5 20:06 . 
>>>> -rw------- 1 root root 31351099 Nov 5 20:06 core.cpio.6208 >>>> >>>> >>>> >>>> On Wed, 5 Nov 2008, Enda O''Connor wrote: >>>> >>>>> Hi >>>>> Looks ok, some mounts left over from pervious fail. >>>>> In regards to swap and dump on zpool you can set them >>>>> zfs set volsize=1G rootpool/dump >>>>> zfs set volsize=1G rootpool/swap >>>>> >>>>> for instance, of course above are only an example of how to do it. >>>>> or make the zvol doe rootpool/dump etc before lucreate, in which case it >>>>> will take the swap and dump size you have preset. >>>>> >>>>> But I think we need to see the coredump/truss at this point to get an >>>>> idea of where things went wrong. >>>>> Enda >>>>> >>>>> On 11/05/08 15:38, Krzys wrote: >>>>>> I did upgrade my U5 to U6 from DVD, went trough the upgrade process. >>>>>> my file system is setup as follow: >>>>>> [10:11:54] root at adas: /root > df -h | egrep -v >>>>>> "platform|sharefs|objfs|mnttab|proc|ctfs|devices|fd|nsr" >>>>>> Filesystem size used avail capacity Mounted on >>>>>> /dev/dsk/c1t0d0s0 16G 7.2G 8.4G 47% / >>>>>> swap 8.3G 1.5M 8.3G 1% /etc/svc/volatile >>>>>> /dev/dsk/c1t0d0s6 16G 8.7G 6.9G 56% /usr >>>>>> /dev/dsk/c1t0d0s1 16G 2.5G 13G 17% /var >>>>>> swap 8.5G 229M 8.3G 3% /tmp >>>>>> swap 8.3G 40K 8.3G 1% /var/run >>>>>> /dev/dsk/c1t0d0s7 78G 1.2G 76G 2% /export/home >>>>>> rootpool 33G 19K 21G 1% /rootpool >>>>>> rootpool/ROOT 33G 18K 21G 1% /rootpool/ROOT >>>>>> rootpool/ROOT/zfsBE 33G 31M 21G 1% /.alt.tmp.b-UUb.mnt >>>>>> /export/home 78G 1.2G 76G 2% >>>>>> /.alt.tmp.b-UUb.mnt/export/home >>>>>> /rootpool 21G 19K 21G 1% >>>>>> /.alt.tmp.b-UUb.mnt/rootpool >>>>>> /rootpool/ROOT 21G 18K 21G 1% >>>>>> /.alt.tmp.b-UUb.mnt/rootpool/ROOT >>>>>> swap 8.3G 0K 8.3G 0% >>>>>> /.alt.tmp.b-UUb.mnt/var/run >>>>>> swap 8.3G 0K 8.3G 0% >>>>>> /.alt.tmp.b-UUb.mnt/tmp >>>>>> [10:12:00] root at adas: /root > >>>>>> >>>>>> >>>>>> so I have /, /usr, /var and /export/home on that primary disk. Original >>>>>> disk is 140gb, this new one is only 36gb, but disk utilization on that >>>>>> primary disk is much less utilized so easily should fit on it. >>>>>> >>>>>> / 7.2GB >>>>>> /usr 8.7GB >>>>>> /var 2.5GB >>>>>> /export/home 1.2GB >>>>>> total space 19.6GB >>>>>> I did notice that lucreate did alocate 8GB to SWAP and 4GB to DUMP >>>>>> total space needed 31.6GB >>>>>> seems like total available disk space on my disk should be 33.92GB >>>>>> so its quite close as both numbers do approach. So to make sure I will >>>>>> change disk for 72gb and will try again. I do not beleive that I need >>>>>> to match my main disk size as 146gb as I am not using that much disk >>>>>> space on it. But let me try this and it might be why I am getting this >>>>>> problem... >>>>>> >>>>>> >>>>>> >>>>>> On Wed, 5 Nov 2008, Enda O''Connor wrote: >>>>>> >>>>>>> Hi Krzys >>>>>>> Also some info on the actual system >>>>>>> ie what was it upgraded to u6 from and how. >>>>>>> and an idea of how the filesystems are laid out, ie is usr seperate >>>>>>> from / and so on ( maybe a df -k ). Don''t appear to have any zones >>>>>>> installed, just to confirm. >>>>>>> Enda >>>>>>> >>>>>>> On 11/05/08 14:07, Enda O''Connor wrote: >>>>>>>> Hi >>>>>>>> did you get a core dump? 
>>>>>>>> would be nice to see the core file to get an idea of what dumped >>>>>>>> core, >>>>>>>> might configure coreadm if not already done >>>>>>>> run coreadm first, if the output looks like >>>>>>>> >>>>>>>> # coreadm >>>>>>>> global core file pattern: /var/crash/core.%f.%p >>>>>>>> global core file content: default >>>>>>>> init core file pattern: core >>>>>>>> init core file content: default >>>>>>>> global core dumps: enabled >>>>>>>> per-process core dumps: enabled >>>>>>>> global setid core dumps: enabled >>>>>>>> per-process setid core dumps: disabled >>>>>>>> global core dump logging: enabled >>>>>>>> >>>>>>>> then all should be good, and cores should appear in /var/crash >>>>>>>> >>>>>>>> otherwise the following should configure coreadm: >>>>>>>> coreadm -g /var/crash/core.%f.%p >>>>>>>> coreadm -G all >>>>>>>> coreadm -e global >>>>>>>> coreadm -e per-process >>>>>>>> >>>>>>>> >>>>>>>> coreadm -u to load the new settings without rebooting. >>>>>>>> >>>>>>>> also might need to set the size of the core dump via >>>>>>>> ulimit -c unlimited >>>>>>>> check ulimit -a first. >>>>>>>> >>>>>>>> then rerun test and check /var/crash for core dump. >>>>>>>> >>>>>>>> If that fails a truss via say truss -fae -o /tmp/truss.out lucreate >>>>>>>> -c ufsBE -n zfsBE -p rootpool >>>>>>>> >>>>>>>> might give an indication, look for SIGBUS in the truss log >>>>>>>> >>>>>>>> NOTE, that you might want to reset the coreadm and ulimit for >>>>>>>> coredumps after this, in order to not risk filling the system with >>>>>>>> coredumps in the case of some utility coredumping in a loop say. >>>>>>>> >>>>>>>> >>>>>>>> Enda >>> >>> -- >>> Enda O''Connor x19781 Software Product Engineering >>> Patch System Test : Ireland : x19781/353-1-8199718 >>> >>> >>> >>> >> _______________________________________________ >> zfs-discuss mailing list >> zfs-discuss at opensolaris.org >> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss > > > -- > Enda O''Connor x19781 Software Product Engineering > Patch System Test : Ireland : x19781/353-1-8199718 > > > !DSPAM:122,4913201c5081163845084! >
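For the record, the workaround described above boils down to parking the U6 cpio, dropping in the binary that works, rerunning the migration, and putting things back afterwards -- roughly as follows (file names per the earlier post; cpio.u6.orig is just a name picked here for the backup copy):

  # keep the shipped binary before overwriting anything
  cp -p /usr/bin/cpio /usr/bin/cpio.u6.orig
  cp -p /usr/bin/cpio.3_sol9 /usr/bin/cpio

  # retry the migration
  lucreate -c ufsBE -n zfsBE -p rootpool
  luactivate zfsBE
  init 6

  # restore the shipped binary (the backup gets carried into the new BE by the copy)
  cp -p /usr/bin/cpio.u6.orig /usr/bin/cpio

That gets the BE built, but it is only a workaround: a Solaris 9 cpio should not be left on a Solaris 10 U6 box, and whatever is wrong with the shipped /usr/bin/cpio (or a library it uses) still deserves a proper fix via pkgchk and the relevant patch.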