After a successful upgrade from snv_95 to snv_98 (ufs boot -> zfs boot), and after luactivating the new BE with zfs, I am not able to ludelete the old BE with ufs. The problem, I think, is that the zfs boot is /rpool/boot/grub.

# ludelete snv_b95
System has findroot enabled GRUB
Checking if last BE on any disk...
BE <snv_b95> is not the last BE on any disk.
Updating GRUB menu default setting
Changing GRUB menu default setting to <0>
ERROR: Failed to copy file </boot/grub/menu.lst> to top level dataset for BE <snv_b95>
ERROR: Unable to delete GRUB menu entry for deleted boot environment <snv_b95>.
Unable to delete boot environment.

Please help.

Thank you,
wozik
--
This message posted from opensolaris.org
I had exactly the same problem and have not been able to find a resolution yet.

Marcin Woźniak wrote:
> After successful upgrade from snv_95 to snv_98 (ufs boot -> zfs boot).
> After luactive new BE with zfs. I am not able to ludelete old BE with ufs.
> problem is, I think that zfs boot is /rpool/boot/grub.
>
> ludelete snv_b95
> System has findroot enabled GRUB
> Checking if last BE on any disk...
> BE <snv_b95> is not the last BE on any disk.
> Updating GRUB menu default setting
> Changing GRUB menu default setting to <0>
> ERROR: Failed to copy file </boot/grub/menu.lst> to top level dataset for BE <snv_b95>
> ERROR: Unable to delete GRUB menu entry for deleted boot environment <snv_b95>.
> Unable to delete boot environment.
On Sat, 27 Sep 2008, Marcin Woźniak wrote:
> After successful upgrade from snv_95 to snv_98 (ufs boot -> zfs boot).
> After luactive new BE with zfs. I am not able to ludelete old BE with
> ufs. problem is, I think that zfs boot is /rpool/boot/grub.

This is due to a bug in the /usr/lib/lu/lulib script. I just submitted CR 6753735 to cover this.

Regards,
markm
Mark J Musante wrote:
> On Sat, 27 Sep 2008, Marcin Woźniak wrote:
>
>> After successful upgrade from snv_95 to snv_98 (ufs boot -> zfs
>> boot). After luactive new BE with zfs. I am not able to ludelete old
>> BE with ufs. problem is, I think that zfs boot is /rpool/boot/grub.
>
> This is due to a bug in the /usr/lib/lu/lulib script. I just
> submitted CR 6753735 to cover this.

Please make sure this gets into Solaris 10 Update 6!

Ian
On Tue, 30 Sep 2008, Ian Collins wrote:
> Mark J Musante wrote:
>> On Sat, 27 Sep 2008, Marcin Woźniak wrote:
>>
>>> After successful upgrade from snv_95 to snv_98 (ufs boot -> zfs
>>> boot). After luactive new BE with zfs. I am not able to ludelete old
>>> BE with ufs. problem is, I think that zfs boot is /rpool/boot/grub.
>>
>> This is due to a bug in the /usr/lib/lu/lulib script. I just
>> submitted CR 6753735 to cover this.
>
> Please make sure this gets into Solaris 10 Update 6!

Luckily, this bug isn't present in update 6 - it's in nevada only.

Regards,
markm
I beg to differ.

# cat /etc/release
                       Solaris 10 10/08 s10s_u6wos_07b SPARC
           Copyright 2008 Sun Microsystems, Inc.  All Rights Reserved.
                        Use is subject to license terms.
                            Assembled 27 October 2008

# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
beA                        yes      no     no        yes    -            <- old ufs boot
beB                        yes      no     no        yes    -            <- old ufs boot
beC                        yes      yes    yes       no     -            <- new zfs root

# ludelete beA
ERROR: cannot open 'pool00/zones/global/home': dataset does not exist
ERROR: cannot mount mount point </.alt.tmp.b-QY.mnt/home> device <pool00/zones/global/home>
ERROR: failed to mount file system <pool00/zones/global/home> on </.alt.tmp.b-QY.mnt/home>
ERROR: unmounting partially mounted boot environment file systems
ERROR: cannot mount boot environment by icf file </etc/lu/ICF.1>
ERROR: Cannot mount BE <beA>.
Unable to delete boot environment.

# ludelete beB
Determining the devices to be marked free.
Updating boot environment configuration database.
Updating compare databases on boot environment <beA>.
INFORMATION: Skipping update of boot environment <beA>: not configured properly.
Updating boot environment description database on all BEs.
Updating all boot environment configuration databases.
Boot environment <beB> deleted.

# ludelete beA
ERROR: cannot open 'pool00/zones/global/home': dataset does not exist
ERROR: cannot mount mount point </.alt.tmp.b-QY.mnt/home> device <pool00/zones/global/home>
ERROR: failed to mount file system <pool00/zones/global/home> on </.alt.tmp.b-QY.mnt/home>
ERROR: unmounting partially mounted boot environment file systems
ERROR: cannot mount boot environment by icf file </etc/lu/ICF.1>
ERROR: Cannot mount BE <beA>.
Unable to delete boot environment.

On this dev/test lab machine I've been bouncing between two UFS BEs (beA & beB) located in different slices on c1t0d0. My other three disks were in one zpool (pool00). Big mistake...
For ZFS boot I needed space for a separate zfs root pool. So whilst booted under beB I backed up my pool00 data, destroyed pool00, re-created pool00 (a little differently, thus the error it would seem), but held out one of the drives and used it to create a rpool00 root pool. Then I ran:

# lucreate -n beC -p rpool01
# luactivate beC
# init 6

and rebooted to the zfs boot/root. Voila! But now I cannot delete beA.

Anybody have any ideas on how I might ludelete beA?
mah042 at gmail.com said:
> # ludelete beA
> ERROR: cannot open 'pool00/zones/global/home': dataset does not exist
> ERROR: cannot mount mount point </.alt.tmp.b-QY.mnt/home> device <pool00/zones/global/home>
> ERROR: failed to mount file system <pool00/zones/global/home> on </.alt.tmp.b-QY.mnt/home>
> ERROR: unmounting partially mounted boot environment file systems
> ERROR: cannot mount boot environment by icf file </etc/lu/ICF.1>
> ERROR: Cannot mount BE <beA>.
> Unable to delete boot environment.
> . . .
> Big Mistake... For ZFS boot I need space for a seperate zfs root pool. So
> whilst booted under beB I backup my pool00 data, destroy pool00, re-create
> pool00 (a little differently, thus the error it would seem) but hold out one
> of the drives and use it to create a rpool00 root pool.
> . . .

I made this same mistake. If you "grep pool00 /etc/lu/ICF.1" you'll see the filesystems that beA expects to be mounted; some of those it may expect to share between the current BE and beA.

The way to fix things is to create a temporary pool "pool00"; this need not be on an actual disk - it could be hosted in a file or a slice, etc. Then create those datasets in the temporary pool, and try the "ludelete beA" again.

Note that if the problem datasets are supposed to be shared between the current BE and beA, you'll need them mounted on their original paths in the current BE, because "ludelete" will use loopback mounts to attach them into beA during the deletion process.

I guess the moral of the story is that you should ludelete any old BEs before you alter the filesystems/datasets that they mount.

Regards,
Marion
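In case it helps the next person who hits this, here is a rough sketch of the workaround described above, using a file-backed scratch pool. The file path, size, and mountpoint are illustrative only - substitute whatever datasets "grep pool00 /etc/lu/ICF.1" actually lists on your system:

```shell
# Back a temporary pool with a plain file, so no spare disk is needed
mkfile 128m /var/tmp/pool00.img
zpool create pool00 /var/tmp/pool00.img

# Re-create the dataset(s) the old BE expects, with -p to create parents,
# and mount them at their original paths so ludelete's loopback mounts work
zfs create -p pool00/zones/global/home
zfs set mountpoint=/home pool00/zones/global/home

# Now the ICF-driven mounts should succeed and the old BE can go away
ludelete beA

# Clean up the scratch pool afterwards
zpool destroy pool00
rm /var/tmp/pool00.img
```

These are Solaris-specific commands run as root, so treat this as a recipe to adapt rather than something to paste in blindly.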
Ya, that's what I ended up doing: re-creating my UFS soft-partition boot metadevices and all the zfs filesystems (even though they were all empty). Then I was able to ludelete beA.

There ought to be an option to '-f' or '--force' the ludelete so you can ignore the errors and just delete the darn thing.

What I don't understand is this: I had another old UFS BE, beB, that was identical to beA, except that its boot/root was another, identically sized (and identically metaclear'd) UFS soft-partition, and I was able to ludelete that one (see above). Very odd. Something was messed up.

Thanks for the quick responses, eh?

Mark