Hi,

Given two disks, c1t0d0 (DISK A) and c1t1d0 (DISK B)...

1/ Standard install on DISK A.
2/ ZFS boot install on DISK B.
3/ I change the boot order and my ZFS boot works fine.

4/ I install grub on the MBR of DISK B (roughly the command shown below).
5/ I disconnect DISK A and put DISK B in its place.

6/ Reboot, get the grub menu, select Solaris ZFS, and it panics that it cannot mount root path @ device XXX...

This is not a ZFS-specific issue, since even the UFS install will fail to boot if I don't put the disks back in the exact order they were in during the initial install.

What am I missing?
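For reference, step 4/ was just the stock installgrub(1M) invocation; the -m flag is what writes stage1 to the master boot record (the slice name below is from my setup, yours may differ):

    # install grub stage1 into the MBR (-m) and stage2 into the boot slice
    installgrub -m /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0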
Did some googling; I guess the culprit is the bootpath in /boot/solaris/bootenv.rc:

setprop bootpath '/pci@0,0/pci1000,30@10/sd@0,0:a'

I have to touch /reconfigure and reboot. I hope it works.
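In case it helps anyone searching later: on x86, eeprom(1M) reads and writes the properties in bootenv.rc, so you can inspect and change bootpath without editing the file by hand (the path below is from my box; get yours from ls -l /dev/dsk/...):

    # show the bootpath the kernel will use at next boot
    eeprom bootpath

    # point it at the new physical path of the boot disk
    eeprom bootpath='/pci@0,0/pci1000,30@10/sd@0,0:a'

On 08/10/2007, at 7:26 AM, Kugutsumen wrote:
> [original message quoted above]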
What driver are you using? The SATA framework has a bug that prevents ldi_open_by_devid() from working early in boot. ZFS is trying to do the right thing, but has to fall back on the physical device path, which in this case is the wrong value.
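One way to see what ZFS actually recorded for the disk is to dump the vdev label; the device path and devid ZFS knows about are stored there (device name is just an example):

    # print the vdev labels, including the path and devid entries
    zdb -l /dev/dsk/c1t0d0s0

- Eric

On Mon, Oct 08, 2007 at 07:26:39AM +0700, Kugutsumen wrote:
> [original message quoted above]

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock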
I just tried...

mount -o rw,remount /
zpool import -f tank
mount -F zfs tank/rootfs /a
zpool status
ls -l /dev/dsk/c1t0d0s0
# /pci@0,0/pci1000,30@10/sd@0,0:a
csh
setenv TERM vt100
vi /a/boot/solaris/bootenv.rc
# the bootpath was actually set to the proper device.

cp /etc/path_to_inst /a/etc/path_to_inst
touch /a/reconfigure
rm /a/etc/devices/*
bootadm update-archive -R /a

zpool export tank
reboot

After all this, it still fails... I think I am really missing something.

panic[cpu0]/thread=fffffffffbc257a0: cannot mount root path /pci@0,0/pci1000,30@10/sd@0,0:a

fffffffffbc467b0 genunix:rootconf+11f ()
fffffffffbc46800 genunix:vfs_mountroot+65 ()
fffffffffbc46830 genunix:main+ce ()
fffffffffbc46840 unix:_locore_start+92 ()

panic: entering debugger (no dump device, continue to reboot)

Welcome to kmdb
kmdb: unable to determine terminal type: assuming `vt100'
Loaded modules: [ scsi_vhci uppc unix zfs krtld genunix specfs pcplusmp random cpu.AuthenticAMD.15 ]

Again, if I put the disk back in the order it was in before, it doesn't fail. Why does it have to be so complicated?
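The sanity check I keep coming back to is comparing the path in bootenv.rc against what the device actually reports after the swap (slice name from my setup):

    # physical path the boot disk has now, after the swap
    ls -l /dev/dsk/c1t0d0s0

    # path the kernel will try to mount as root
    grep bootpath /a/boot/solaris/bootenv.rc

These match on my system, and it still panics.

On 08/10/2007, at 7:48 AM, Kugutsumen wrote:
> [earlier message quoted above]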
It is the VMware LSI SCSI controller... I managed to fix the UFS disk (DISK A) using the procedure described here (https://www.opensolaris.org/jive/thread.jspa?threadID=7615), but I am still struggling with the ZFS boot disk (DISK B).

On 08/10/2007, at 7:53 AM, Eric Schrock wrote:
> [message quoted above]
On Mon, Oct 08, 2007 at 09:09:39AM +0700, Kugutsumen wrote:
> It is the VMware LSI SCSI controller...

Does this use the mpt driver bundled with Solaris, or a third-party driver? A recent regression in the mpt driver broke devid lookup early in boot, though it does manage to work if you have a proper devid cache.

An interesting RFE would be to have ZFS attempt to use the boot device path if all else fails, though that will only work for a single device. It would be better to get device IDs working the way they're supposed to.
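To check which driver is bound to the controller, and whether a devid cache is present at all, something along these lines should do (output will vary by machine):

    # device tree with bound driver names; look at the SCSI HBA node
    prtconf -D | grep -i scsi

    # the persistent devid cache that early-boot lookup falls back on
    ls -l /etc/devices/devid_cache

- Eric

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock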
On Mon, 8 Oct 2007, Kugutsumen wrote:
> I just tried...
> [commands quoted above]
> zpool export tank
> reboot

(Shouldn't need to export.)

Check to see that your zpool.cache file under /a/etc is the same as the one under /etc. Also, the filelist.ramdisk (under /a/boot/solaris) should include etc/zfs/zpool.cache. If it doesn't, add it and re-do bootadm.
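Concretely, that check might look like this (a sketch, using the standard paths):

    # the caches should match; zpool.cache lives under etc/zfs
    diff /etc/zfs/zpool.cache /a/etc/zfs/zpool.cache

    # make sure the boot archive picks up the cache file
    grep etc/zfs/zpool.cache /a/boot/solaris/filelist.ramdisk || \
        echo etc/zfs/zpool.cache >> /a/boot/solaris/filelist.ramdisk

    # then rebuild the archive on the alternate root
    bootadm update-archive -R /a

Regards,
markm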