Hi,
I know this would be more a message for the LU list, but I'm not
subscribed there (is there such a list?), and the problem is
at least ZFS-related:
I have a Solaris 10 installation from last year; it's a jumpstarted u6
with a ZFS root pool and some (ZFS-rooted) zones. Last week I did a
luupgrade which worked like a charm.
Now I intended to install some more patches which I overlooked last week
and decided to do that with Live Upgrade again.
But although I'm able to create a new BE (with some ERRORs, see below),
the luupgrade fails and leaves behind an orphaned /a/var/run and a broken
/a/zones/myzonename.
The errors lucreate delivers:
----8<----
Creating snapshot for <rootpool/ROOT/s10u8-01/zones> on
<rootpool/ROOT/s10u8-01/zones@s10u8-20100106>.
Creating clone for <rootpool/ROOT/s10u8-01/zones@s10u8-20100106> on
<rootpool/ROOT/s10u8-20100106/zones>.
Setting canmount=noauto for </zones> in zone <global> on
<rootpool/ROOT/s10u8-20100106/zones>.
Creating snapshot for <rootpool/ROOT/s10u8-01/zones/myzonename> on
<rootpool/ROOT/s10u8-01/zones/myzonename@s10u8-20100106>.
Creating clone for <rootpool/ROOT/s10u8-01/zones/myzonename@s10u8-20100106> on
<rootpool/ROOT/s10u8-20100106/zones/myzonename-s10u8-20100106>.
cannot mount
'rootpool/ROOT/s10u8-20100106/zones/myzonename-s10u8-20100106':
legacy mountpoint
use mount(1M) to mount this filesystem
ERROR: Failed to mount dataset
<rootpool/ROOT/s10u8-20100106/zones/myzonename-s10u8-20100106>
legacy is not an absolute path.
Population of boot environment <s10u8-20100106> successful.
Creation of boot environment <s10u8-20100106> successful.
---8<---
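In case it helps with diagnosing: here is how I would inspect the
mountpoint settings on the cloned zone datasets (plain zfs commands;
the dataset names are the ones from the log above, and I am not sure
this is even the right place to look):
----8<----
# list mountpoint/canmount for all datasets of the new BE
zfs list -r -o name,mountpoint,canmount rootpool/ROOT/s10u8-20100106
# the clone that failed to mount; lucreate complains about its
# "legacy" mountpoint, so check what the property is actually set to
zfs get mountpoint,canmount \
  rootpool/ROOT/s10u8-20100106/zones/myzonename-s10u8-20100106
---8<---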
and here is what luupgrade does:
---8<---
[0]root@global[~]>> luupgrade -t -n s10u8-20100106 -s /root/patches
Validating the contents of the media </root/patches>.
The media contains 106 software patches that can be added.
All 106 patches will be added because you did not specify any specific
patches to add.
Mounting the BE <s10u8-20100106>.
ERROR: unable to mount zones:
zoneadm: zone 'myzonename': zone root /zones/myzonename/root already in
use by zone myzonename
zoneadm: zone 'myzonename': call to zoneadmd failed
ERROR: unable to mount zone <myzonename> in </a>
ERROR: unmounting partially mounted boot environment file systems
ERROR: cannot mount boot environment by icf file
</tmp/.luupgrade.beicf.22173>
cat: cannot open /tmp/.luupgrade.tmp.22173
ERROR: Unable to mount ABE disk slices: < >.
ERROR: Unable to mount the BE <s10u8-20100106>.
[0]root@global[~]>> df -hl
Filesystem size used avail capacity Mounted on
rootpool/ROOT/s10u8-01
134G 8.4G 91G 9% /
/devices 0K 0K 0K 0% /devices
ctfs 0K 0K 0K 0% /system/contract
proc 0K 0K 0K 0% /proc
mnttab 0K 0K 0K 0% /etc/mnttab
swap 10G 416K 10G 1% /etc/svc/volatile
objfs 0K 0K 0K 0% /system/object
sharefs 0K 0K 0K 0% /etc/dfs/sharetab
/platform/SUNW,SPARC-Enterprise-T5220/lib/libc_psr/libc_psr_hwcap2.so.1
99G 8.4G 91G 9%
/platform/sun4v/lib/libc_psr.so.1
/platform/SUNW,SPARC-Enterprise-T5220/lib/sparcv9/libc_psr/libc_psr_hwcap2.so.1
99G 8.4G 91G 9%
/platform/sun4v/lib/sparcv9/libc_psr.so.1
fd 0K 0K 0K 0% /dev/fd
swap 10G 56K 10G 1% /tmp
swap 10G 80K 10G 1% /var/run
rootpool/ROOT/s10u8-01/zones
134G 27K 91G 1% /zones
rootpool/ROOT/s10u8-01/zones/myzonename
134G 1.3G 91G 2% /zones/myzonename
rootpool/export 134G 20K 91G 1% /export
rootpool/export/home 134G 18K 91G 1% /export/home
rootpool 134G 96K 91G 1% /rootpool
rootpool/ROOT/s10u8-20100106
134G 13G 91G 13% /a
swap 10G 0K 10G 0% /a/var/run
df: cannot statvfs /a/zones/myzonename: No such file or directory
---8<---
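My guess for recovering would be to unmount the half-mounted BE, delete
it, and start over, roughly like this (lustatus/luumount/ludelete are the
standard LU commands; I am not sure this is the sanctioned way, so
corrections are welcome):
----8<----
# check the state of all boot environments first
lustatus
# unmount the partially mounted BE, then remove it
luumount s10u8-20100106
ludelete s10u8-20100106
# if luumount complains, the stray swap mount on /a/var/run
# may have to be unmounted by hand first
umount /a/var/run
---8<---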
Is there anyone here who can help me figure out what I did wrong?
Greetings
Jan Dreyer