Hi,
I've a somewhat strange configuration here:
[root@sol9 Mon Feb 09 21:40:26 ~]
$ uname -a
SunOS sol9 5.11 snv_107 sun4u sparc SUNW,Sun-Blade-1000
[root@sol9 Mon Feb 09 21:30:50 ~]
$ zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
rootpool                  14.0G   116G    63K  /rpool
rootpool/ROOT             7.32G   116G    18K  legacy
rootpool/ROOT/snv_107     7.32G   116G  7.15G  /
rootpool/dump             2.00G   116G  2.00G  -
rootpool/export            133K   116G    21K  /export
rootpool/export/home        78K   116G    42K  /export/home
rootpool/swap                4G   120G  5.34M  -
rootpool/zones             639M   116G    20K  /zones
rootpool/zones/dnsserver   638M   116G   638M  /zones/dnsserver
rpool                     14.0G   116G    63K  /rpool
rpool/ROOT                7.32G   116G    18K  legacy
rpool/ROOT/snv_107        7.32G   116G  7.15G  /
rpool/dump                2.00G   116G  2.00G  -
rpool/export               133K   116G    21K  /export
rpool/export/home           78K   116G    42K  /export/home
rpool/swap                   4G   120G  5.34M  -
rpool/zones                639M   116G    20K  /zones
rpool/zones/dnsserver      638M   116G   638M  /zones/dnsserver
....
and more in other pools on other disks.
The problem here is that Solaris thinks both pools are on the same disk:
[root@sol9 Mon Feb 09 21:31:06 ~]
$ zpool list
NAME        SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
rootpool    132G  9.96G   122G   7%  ONLINE  -
rpool       132G  9.96G   122G   7%  ONLINE  -
usbbox001  5.44T   413G  5.03T   7%  ONLINE  -
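Both entries report exactly the same SIZE, USED and CAP, so I suspect Solaris is really showing one physical pool under two names, perhaps through a stale entry in /etc/zfs/zpool.cache left behind by one of the boot environments. If I remember correctly, running zdb without arguments dumps the cached pool configurations, which should confirm the duplicate (I have not tried it yet):

$ zdb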
[root@sol9 Mon Feb 09 21:37:11 ~]
$ zpool status rpool
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c3t2d0s0  ONLINE       0     0     0

errors: No known data errors
[root@sol9 Mon Feb 09 21:37:17 ~]
$ zpool status rootpool
  pool: rootpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rootpool    ONLINE       0     0     0
          c3t2d0s0  ONLINE       0     0     0
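Since both pools claim the same slice, I guess I could dump the on-disk vdev labels to see which pool name c3t2d0s0 really carries; as far as I know zdb -l does that (just a sketch of what I plan to try):

$ zdb -l /dev/dsk/c3t2d0s0

The name: field in the four labels should then tell me which of the two pools is the phantom.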
How can I fix this situation?
Solaris now boots only to maintenance mode because the mounting of the local
filesystems obviously fails:
$ svcadm clear svc:/system/filesystem/local:default
[root@sol9 Mon Feb 09 21:30:35 ~]
$ Reading ZFS config: done.
Mounting ZFS filesystems: (1/50)cannot mount '/export': directory is not empty
cannot mount '/export/home': directory is not empty
cannot mount '/rpool': directory is not empty
cannot mount '/zones': directory is not empty
cannot mount '/zones/dnsserver': directory is not empty
(50/50)
svc:/system/filesystem/local:default: WARNING: /usr/sbin/zfs mount -a failed:
exit status 1
Feb 9 21:30:36 svc.startd[7]: svc:/system/filesystem/local:default: Method
"/lib/svc/method/fs-local" failed with exit status 95.
Feb 9 21:30:36 svc.startd[7]: system/filesystem/local:default failed fatally:
transitioned to maintenance (see 'svcs -xv' for details)
Argghh... I really do not want to lose my customizations for this installation.
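As a stop-gap I could probably get the service past the mount failures with overlay mounts (zfs mount -O), although that only hides whatever is already sitting in those directories instead of fixing the duplicate pool, and I have not tested it here:

$ zfs mount -O rpool
$ zfs mount -O rpool/export
$ zfs mount -O rpool/export/home
$ zfs mount -O rpool/zones
$ zfs mount -O rpool/zones/dnsserver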
$ format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
       0. c1t1d0 <SAMSUNG-SP1614N-TM10 cyl 65533 alt 2 hd 75 sec 63>
          /pci@8,700000/scsi@2,1/sd@1,0
       1. c2t0d0 <ST315003-41AS- -1.36TB>
          /pci@8,700000/pci@3/usb@8,2/storage@4/disk@0,0
       2. c2t0d1 <ST315003-41AS- -1.36TB>
          /pci@8,700000/pci@3/usb@8,2/storage@4/disk@0,1
       3. c2t0d2 <ST315003-41AS- -1.36TB>
          /pci@8,700000/pci@3/usb@8,2/storage@4/disk@0,2
       4. c2t0d3 <ST315003-41AS- -1.36TB>
          /pci@8,700000/pci@3/usb@8,2/storage@4/disk@0,3
       5. c3t1d0 <SUN36G cyl 24620 alt 2 hd 27 sec 107>
          /pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w21000004cf9ff1fa,0
       6. c3t2d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
          /pci@8,600000/SUNW,qlc@4/fp@0,0/ssd@w21000014c3cf7ae3,0
What I've done until now:
Before snv_107 I had snv_89 running on the 36 GB internal disk (using SVM).
Then I tried several times to upgrade to snv_107 using Live Upgrade, but it did
not work. So I decided to do a fresh installation on the 146 GB disk, and that
worked (via Live Upgrade into an empty boot environment). I could boot
snv_107 and use it (including reboots). Today I booted back into snv_89 to copy
some files from an SVM metadevice, and after that, booting back into snv_107
fails.
Any hints are welcome...
regards
Bernd