amy.rich at tufts.edu
2009-Jan-16 19:08 UTC
[zfs-discuss] s10u6 ludelete issues with zones on zfs root
I've installed an s10u6 machine with no UFS partitions at all. I've created a dataset for zones and one for a zone named "default." I then do an lucreate and luactivate and a subsequent boot off the new BE. All of that appears to go just fine (though I've found that I MUST call the zone dataset zoneds for some reason, or it will rename it to that for me). When I try to delete the old BE, it fails with the following message:

# ludelete s10-RC
ERROR: cannot mount '/zoneds': directory is not empty
ERROR: cannot mount mount point </.alt.tmp.b-VK.mnt/zoneds> device <rpool/ROOT/s10-RC/zoneds>
ERROR: failed to mount file system <rpool/ROOT/s10-RC/zoneds> on </.alt.tmp.b-VK.mnt/zoneds>
ERROR: unmounting partially mounted boot environment file systems
ERROR: cannot mount boot environment by icf file </etc/lu/ICF.1>
ERROR: Cannot mount BE <s10-RC>.
Unable to delete boot environment.

It's obvious that luactivate is not correctly resetting the mount points of /zoneds and /zoneds/default (the zone named default) in the old BE so that they're under /.alt like the rest of the ROOT dataset:

# zfs list | grep s10-RC
rpool/ROOT/s10-RC                  14.6M  57.3G  1.29G  /.alt.tmp.b-VK.mnt/
rpool/ROOT/s10-RC/var              2.69M  57.3G  21.1M  /.alt.tmp.b-VK.mnt//var
rpool/ROOT/s10-RC/zoneds           5.56M  57.3G    19K  /zoneds
rpool/ROOT/s10-RC/zoneds/default   5.55M  57.3G  29.9M  /zoneds/default

Obviously I can reset the mount points by hand with "zfs set mountpoint," but this seems like something that luactivate and the subsequent boot should handle. Is this a bug, or am I missing a step/have something misconfigured?

Also, once I run ludelete on a BE, it seems like it should also clean up the old ZFS filesystems for the BE s10-RC (the old BE) instead of me having to do an explicit zfs destroy.

The very weird thing is that, if I run lucreate again (new BE is named bar) and boot off of the new BE, it does the right thing with the old BE (foo):

rpool/ROOT/bar                       1.52G  57.2G  1.29G  /
rpool/ROOT/bar@foo                   89.1M      -  1.29G  -
rpool/ROOT/bar@bar                   84.1M      -  1.29G  -
rpool/ROOT/bar/var                   24.7M  57.2G  21.2M  /var
rpool/ROOT/bar/var@foo               2.64M      -  21.0M  -
rpool/ROOT/bar/var@bar                923K      -  21.2M  -
rpool/ROOT/bar/zoneds                32.7M  57.2G    20K  /zoneds
rpool/ROOT/bar/zoneds@foo              18K      -    19K  -
rpool/ROOT/bar/zoneds@bar              19K      -    20K  -
rpool/ROOT/bar/zoneds/default        32.6M  57.2G  29.9M  /zoneds/default
rpool/ROOT/bar/zoneds/default@foo    2.61M      -  27.0M  -
rpool/ROOT/bar/zoneds/default@bar     162K      -  29.9M  -
rpool/ROOT/foo                       2.93M  57.2G  1.29G  /.alt.foo
rpool/ROOT/foo/var                    818K  57.2G  21.2M  /.alt.foo/var
rpool/ROOT/foo/zoneds                 270K  57.2G    20K  /.alt.foo/zoneds
rpool/ROOT/foo/zoneds/default         253K  57.2G  29.9M  /.alt.foo/zoneds/default

And then it DOES clean up the ZFS filesystems when I run ludelete. Does anyone know where the discrepancy is? The same lucreate command (-n <BE> -p rpool) was used both times.
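The by-hand workaround mentioned above would look roughly like this. This is a sketch only: it reuses the dataset names from the zfs list output, and the /.alt.tmp.b-VK.mnt alt-root path is the temporary one LU reported here, so adjust it to whatever your own failed ludelete printed.

# Sketch: repoint the old BE's zone datasets under its alt root so
# ludelete can mount the boot environment.
zfs set mountpoint=/.alt.tmp.b-VK.mnt/zoneds rpool/ROOT/s10-RC/zoneds
zfs set mountpoint=/.alt.tmp.b-VK.mnt/zoneds/default rpool/ROOT/s10-RC/zoneds/default
ludelete s10-RC
# If ludelete still leaves the BE's datasets behind, remove them explicitly:
zfs destroy -r rpool/ROOT/s10-RC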
Mark J Musante
2009-Jan-16 19:30 UTC
[zfs-discuss] s10u6 ludelete issues with zones on zfs root
Hi Amy,

This is a known problem with ZFS and live upgrade. I believe the docs for s10u6 discourage the config you show here. A patch should be ready some time next month with a fix for this.

Regards,
markm
amy.rich at tufts.edu
2009-Jan-16 19:38 UTC
[zfs-discuss] s10u6 ludelete issues with zones on zfs root
mmusante> This is a known problem with ZFS and live upgrade. I believe the
mmusante> docs for s10u6 discourage the config you show here. A patch should
mmusante> be ready some time next month with a fix for this.

Do you happen to have a bugid handy?

I had done various searches to try and determine what the best way to set up zones without any UFS was, and the closest I came was to setting them up under the pool/ROOT/<BE> area. I tried giving them their own dataset in pool (not under ROOT/<BE>), but that was also unsuccessful.

I guess let me back up and ask... If one only has one mirrored disk set and one pool (the root pool), what's the recommended way to put zones on a zfs root? Most of the documentation I've seen is specific to putting the zone roots on UFS and giving them access to ZFS datasets.
Mark J Musante
2009-Jan-16 19:49 UTC
[zfs-discuss] s10u6 ludelete issues with zones on zfs root
On Fri, 16 Jan 2009, amy.rich at tufts.edu wrote:

> mmusante> This is a known problem with ZFS and live upgrade. I believe the
> mmusante> docs for s10u6 discourage the config you show here. A patch should
> mmusante> be ready some time next month with a fix for this.
>
> Do you happen to have a bugid handy?

The closest one would be 6742586. The description of the bug doesn't exactly match what you saw, but the cause was the same: the zone mounting code in LU was broken and had to be rewritten.

> I had done various searches to try and determine what the best way to
> set up zones without any UFS was, and the closest I came was to setting
> them up under the pool/ROOT/<BE> area. I tried giving them their own
> dataset in pool (not under ROOT/<BE>), but that was also unsuccessful.

There are some docs on s10u6's restrictions on where zones can be put if you are using live upgrade. I'll have to see if I can dig them up.

> I guess let me back up and ask... If one only has one mirrored disk set
> and one pool (the root pool), what's the recommended way to put zones on
> a zfs root? Most of the documentation I've seen is specific to putting
> the zone roots on UFS and giving them access to ZFS datasets.

If you put your zones in a subdirectory of a dataset, they should work. But I'd recommend waiting for the patch to be released.

Regards,
markm
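To make "zones in a subdirectory of a dataset" concrete, a minimal sketch of that layout follows. All names here (rpool/zones, myzone) are made up; the only point is that the container dataset sits outside rpool/ROOT/<BE> and the zone root is a plain directory inside it rather than its own dataset.

# Container dataset for zone roots, outside the boot environment tree.
zfs create -o mountpoint=/zones rpool/zones

# The zone root is an ordinary subdirectory of that dataset.
mkdir -m 700 /zones/myzone
zonecfg -z myzone "create; set zonepath=/zones/myzone; commit"
zoneadm -z myzone install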
Cindy.Swearingen at Sun.COM
2009-Jan-16 19:59 UTC
[zfs-discuss] s10u6 ludelete issues with zones on zfs root
Hi Amy,

You can review the ZFS/LU/zones issues here:

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Live_Upgrade_with_Zones

The entire Solaris 10 10/08 UFS to ZFS with zones migration is described here:

http://docs.sun.com/app/docs/doc/819-5461/zfsboot-1?a=view

Let us know if you can't find something...

Cindy
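For readers following the second link, the UFS-to-ZFS migration it describes boils down to something like the following. This is only a sketch: the BE names, pool name, and slice are examples, and the pool disk needs a suitable (SMI) label for a root pool.

# Create the ZFS root pool (device name is an example).
zpool create rpool c1t0d0s0
# Copy the running UFS BE into the pool as a new ZFS BE, then switch to it.
lucreate -c ufsBE -n zfsBE -p rpool
luactivate zfsBE
init 6    # reboot into the new ZFS boot environment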
amy.rich at tufts.edu
2009-Jan-16 20:28 UTC
[zfs-discuss] s10u6 ludelete issues with zones on zfs root
cindy.swearingen> http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#Live_Upgrade_with_Zones

Thanks, Cindy, that was in fact the page I had been originally referencing when I set up my datasets, and it was very helpful. I found it by reading a comp.unix.solaris post in which someone else was talking about not being able to ludelete an old BE. Unfortunately, it wasn't quite the same issue as you cover in "Recover from BE Removal Failure (ludelete)," and that fix had already been applied to my system.

cindy.swearingen> The entire Solaris 10 10/08 UFS to ZFS with zones migration
cindy.swearingen> is described here:
cindy.swearingen> http://docs.sun.com/app/docs/doc/819-5461/zfsboot-1?a=view

Thanks, I find most of the ZFS stuff to be fairly straightforward. And I'm never doing any migration from UFS (which is what many of the zones and zfs docs seem to be aimed at). It's mixing ZFS, Zones, and liveupgrade that's been... challenging. :}

But now I know that there's definitely a bug involved, and I'll wait for the patch. Thanks to you and Mark for your help.
Peter Pickford
2009-Jan-16 20:58 UTC
[zfs-discuss] s10u6 ludelete issues with zones on zfs root
This is what I discovered:

You can't have subdirectories of the zone root file system that are part of the BE filesystem tree with zfs and lu (no separate /var, etc.). Zone roots must be on the root pool for lu to work. Extra file systems must come from a non-BE zfs file system tree (I use datasets).

[root@buildsun4u ~]# zfs list -r rpool/zones
NAME                                            USED  AVAIL  REFER  MOUNTPOINT
rpool/zones                                     162M  14.8G    21K  /zones
rpool/zones/zone1-restore_080915               73.2M  14.8G  73.2M  /zones/zone1
rpool/zones/zone1-restore_080915@patch_090115      0      -  73.2M  -
rpool/zones/zone1-restore_080915-patch_090115  7.76M  14.8G  76.1M  /.alt.patch_090115/zones/zone1
rpool/zones/zone2-restore_080915               73.4M  14.8G  73.4M  /zones/zone2
rpool/zones/zone2-restore_080915@patch_090115      0      -  73.4M  -
rpool/zones/zone2-restore_080915-patch_090115  7.75M  14.8G  76.3M  /.alt.patch_090115/zones/zone2

You can have datasets (and probably mounts) that are not part of the BE:

[root@buildsun4u ~]# zfs list -r rpool/zonesextra
NAME                                 USED  AVAIL  REFER  MOUNTPOINT
rpool/zonesextra                     284K  14.8G    18K  legacy
rpool/zonesextra/zone1               132K  14.8G    18K  legacy
rpool/zonesextra/zone1/app            18K  8.00G    18K  /opt/app
rpool/zonesextra/zone1/core           18K  8.00G    18K  /var/core
rpool/zonesextra/zone1/export       78.5K  14.8G    20K  /export
rpool/zonesextra/zone1/export/home  58.5K  8.00G  58.5K  /export/home
rpool/zonesextra/zone2               133K  14.8G    18K  legacy
rpool/zonesextra/zone2/app            18K  8.00G    18K  /opt/app
rpool/zonesextra/zone2/core           18K  8.00G    18K  /var/core
rpool/zonesextra/zone2/export         79K  14.8G    20K  /export
rpool/zonesextra/zone2/export/home    59K  8.00G    59K  /export/home
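Building a per-zone "extra" tree like the one above might look roughly like this. The post does not say exactly how the datasets are handed to the zones, so delegating the tree with zonecfg's add dataset is an assumption here (per-filesystem zonecfg fs entries would be another option); the names and 8G quotas simply mirror the listing.

# Non-BE container for zone data; legacy mountpoints keep LU from
# trying to (un)mount the containers during lucreate/ludelete.
zfs create -o mountpoint=legacy rpool/zonesextra
zfs create -o mountpoint=legacy rpool/zonesextra/zone1
zfs create -o quota=8g rpool/zonesextra/zone1/app
zfs create -o quota=8g rpool/zonesextra/zone1/core
zfs create rpool/zonesextra/zone1/export
zfs create -o quota=8g rpool/zonesextra/zone1/export/home

# Assumed mechanism: delegate the per-zone tree to the zone, which then
# manages mountpoints such as /opt/app and /var/core itself.
zonecfg -z zone1 "add dataset; set name=rpool/zonesextra/zone1; end; commit"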
Jens Elkner
2009-Jan-17 21:09 UTC
[zfs-discuss] s10u6 ludelete issues with zones on zfs root
On Fri, Jan 16, 2009 at 02:08:09PM -0500, amy.rich at tufts.edu wrote:

> I've installed an s10u6 machine with no UFS partitions at all. I've created a
> dataset for zones and one for a zone named "default." I then do an lucreate
> and luactivate and a subsequent boot off the new BE. All of that appears to
> go just fine (though I've found that I MUST call the zone dataset zoneds for
> some reason, or it will rename it to that for me). When I try to delete the
> old BE, it fails with the following message:

It's an LU bug. Have a look at

http://iws.cs.uni-magdeburg.de/~elkner/luc/lutrouble.html

The following patch fixes it and provides an opportunity to speed up lucreate/lumount/luactivate and friends dramatically on a machine with lots of LU-unrelated filesystems (e.g. user homes):

http://iws.cs.uni-magdeburg.de/~elkner/luc/lu-5.10.patch
or
http://iws.cs.uni-magdeburg.de/~elkner/luc/lu-5.11.patch

Have fun,
jel.
--
Otto-von-Guericke University    http://www.cs.uni-magdeburg.de/
Department of Computer Science  Geb. 29 R 027, Universitaetsplatz 2
39106 Magdeburg, Germany        Tel: +49 391 67 12768
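If you try one of those patches, applying it would go roughly as follows. This is a sketch only: the target directory (/usr/lib/lu, where the LU shell scripts live) and the -p0 strip level are assumptions, so check the headers of the downloaded patch before running it.

# Assumptions: LU scripts under /usr/lib/lu, paths in the patch relative to it.
cd /usr/lib/lu
/usr/sfw/bin/wget http://iws.cs.uni-magdeburg.de/~elkner/luc/lu-5.10.patch
gpatch -p0 --dry-run < lu-5.10.patch   # verify it applies cleanly first
gpatch -p0 < lu-5.10.patch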