Seymour Krebs
2008-Dec-08 18:03 UTC
[zfs-discuss] zfs is a co-dependent parent and won't let children leave home
Please, if anyone can help with this mess, I'd appreciate it.

~# beadm list
BE      Active Mountpoint Space        Policy Created
--      ------ ---------- -----        ------ -------
b100    -      -          6.88G        static 2008-10-30 13:59
b101a   -      -          960.95M      static 2008-11-09 08:46
b101b   -      -          6.15G        static 2008-11-14 09:17
b103    -      -          13.44M       static 2008-11-25 12:09
b103pre NR     /          10.86G       static 2008-12-07 11:25
b99     -      -          16777216.01T static 2008-10-01 14:35

~# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
rpool                 81.4G   145G    62K  /rpool
rpool/ROOT            33.3G   145G    18K  /rpool/ROOT
rpool/ROOT/b100       6.54G   145G  14.4G  /
rpool/ROOT/b100/opt       0   145G  3.60M  /opt
rpool/ROOT/b101a       622M   145G  14.3G  /
rpool/ROOT/b101a/opt   151M   145G  2.76G  /opt
rpool/ROOT/b101b      3.22G   145G  14.9G  /
rpool/ROOT/b101b/opt  2.92G   145G  2.06G  /opt
rpool/ROOT/b103       13.3M   145G  21.4G  /
rpool/ROOT/b103pre    10.5G   145G  21.6G  /
rpool/ROOT/b99        12.4G   145G  10.2G  /
rpool/ROOT/b99/opt        0   145G  3.60M  /opt
rpool/export          48.1G   145G  3.22G  /export
rpool/export/home     44.8G   145G  38.6G  /export/home

~# zfs destroy rpool/ROOT/b99
cannot destroy 'rpool/ROOT/b99': filesystem has children
use '-r' to destroy the following datasets:
rpool/ROOT/b99@opensolaris-2
rpool/ROOT/b99/opt@snap
rpool/ROOT/b99/opt@zfs-auto-snap:daily-2008-10-31-11:02
rpool/ROOT/b99/opt@zfs-auto-snap:weekly-2008-10-31-11:02
rpool/ROOT/b99/opt@zfs-auto-snap:daily-2008-11-01-12:34
(...abridged...)
rpool/ROOT/b99/opt@zfs-auto-snap:daily-2008-12-07-12:02
rpool/ROOT/b99/opt
rpool/ROOT/b99@b100
rpool/ROOT/b99@snap
rpool/ROOT/b99@zfs-auto-snap:daily-2008-10-31-11:02
rpool/ROOT/b99@zfs-auto-snap:weekly-2008-10-31-11:02
(...abridged...)
rpool/ROOT/b99@zfs-auto-snap:weekly-2008-12-07-12:02
rpool/ROOT/b99@zfs-auto-snap:hourly-2008-12-07-12:02
rpool/ROOT/b99@zfs-auto-snap:daily-2008-12-07-12:02

~# zfs destroy -r rpool/ROOT/b99
cannot destroy 'rpool/ROOT/b99': filesystem has dependent clones
use '-R' to destroy the following datasets:
rpool/ROOT/b100/opt@zfs-auto-snap:daily-2008-11-09-11:13
rpool/ROOT/b100/opt@zfs-auto-snap:daily-2008-11-11-11:10
rpool/ROOT/b100/opt@zfs-auto-snap:daily-2008-11-12-00:00
rpool/ROOT/b100/opt@zfs-auto-snap:weekly-2008-11-15-00:00
(...abridged...)
rpool/ROOT/b100/opt@zfs-auto-snap:hourly-2008-12-07-12:02
rpool/ROOT/b100/opt@zfs-auto-snap:daily-2008-12-07-12:02
rpool/ROOT/b100/opt
rpool/ROOT/b100@snap
rpool/ROOT/b100@zfs-auto-snap:daily-2008-10-31-11:02
rpool/ROOT/b100@zfs-auto-snap:daily-2008-11-01-12:34
(...abridged...)
rpool/ROOT/b101a/opt@zfs-auto-snap:hourly-2008-12-07-12:02
rpool/ROOT/b101a/opt@zfs-auto-snap:daily-2008-12-07-12:02
rpool/ROOT/b101a/opt
rpool/ROOT/b101a@zfs-auto-snap:weekly-2008-11-15-00:00
rpool/ROOT/b101a@zfs-auto-snap:daily-2008-11-18-09:32
rpool/ROOT/b101a@zfs-auto-snap:daily-2008-11-19-00:00
(...abridged...)
rpool/ROOT/b101a@zfs-auto-snap:hourly-2008-12-07-12:02
rpool/ROOT/b101a@zfs-auto-snap:daily-2008-12-07-12:02
rpool/ROOT/b101a
rpool/ROOT/b103pre@b101b
rpool/ROOT/b103pre@zfs-auto-snap:weekly-2008-11-15-00:00
rpool/ROOT/b103pre@zfs-auto-snap:daily-2008-11-18-09:32
rpool/ROOT/b103pre@zfs-auto-snap:daily-2008-11-19-00:00
(...abridged...)
rpool/ROOT/b103pre@zfs-auto-snap:hourly-2008-11-25-07:00
rpool/ROOT/b103pre@zfs-auto-snap:hourly-2008-11-25-08:00
rpool/ROOT/b103pre@zfs-auto-snap:hourly-2008-11-25-09:00
rpool/ROOT/b101b/opt@opensolaris-2
rpool/ROOT/b101b/opt@b100
rpool/ROOT/b101b/opt@snap
rpool/ROOT/b101b/opt@zfs-auto-snap:daily-2008-10-31-11:02
rpool/ROOT/b101b/opt@zfs-auto-snap:daily-2008-11-01-12:34
(...abridged...)
rpool/ROOT/b101b/opt@zfs-auto-snap:daily-2008-11-08-00:00
rpool/ROOT/b101b/opt@zfs-auto-snap:weekly-2008-11-08-00:00
rpool/ROOT/b101b/opt@b101a
rpool/ROOT/b101b/opt@zfs-auto-snap:daily-2008-11-09-11:13
rpool/ROOT/b101b/opt@zfs-auto-snap:daily-2008-11-11-11:10
rpool/ROOT/b101b/opt@zfs-auto-snap:daily-2008-11-12-00:00
rpool/ROOT/b101b/opt@b101b
rpool/ROOT/b101b/opt@zfs-auto-snap:weekly-2008-11-15-00:00
rpool/ROOT/b101b/opt@zfs-auto-snap:daily-2008-11-18-09:32
(...abridged...)
rpool/ROOT/b101b/opt@zfs-auto-snap:hourly-2008-11-25-09:00
rpool/ROOT/b101b/opt@b103
rpool/ROOT/b101b/opt@zfs-auto-snap:weekly-2008-12-07-12:02
rpool/ROOT/b101b/opt@zfs-auto-snap:hourly-2008-12-07-12:02
(...abridged...)
rpool/ROOT/b101b/opt@zfs-auto-snap:frequent-2008-12-07-14:15
rpool/ROOT/b101b/opt@zfs-auto-snap:hourly-2008-12-07-15:00
rpool/ROOT/b101b/opt
rpool/ROOT/b101b@zfs-auto-snap:weekly-2008-12-07-12:02
rpool/ROOT/b101b@zfs-auto-snap:hourly-2008-12-07-12:02
(...abridged...)
rpool/ROOT/b101b@zfs-auto-snap:frequent-2008-12-07-14:15
rpool/ROOT/b101b@zfs-auto-snap:hourly-2008-12-07-15:00
rpool/ROOT/b101b
rpool/ROOT/b103pre@b103
rpool/ROOT/b103@zfs-auto-snap:weekly-2008-12-07-12:02
(...abridged...)
rpool/ROOT/b103@zfs-auto-snap:hourly-2008-12-07-17:00
rpool/ROOT/b103@zfs-auto-snap:daily-2008-12-08-00:00
rpool/ROOT/b103@zfs-auto-snap:weekly-2008-12-08-00:00
rpool/ROOT/b103
rpool/ROOT/b103pre@b103pre
rpool/ROOT/b103pre@zfs-auto-snap:weekly-2008-12-07-12:02
(...abridged...)
rpool/ROOT/b103pre@zfs-auto-snap:daily-2008-12-08-00:00
rpool/ROOT/b103pre@zfs-auto-snap:weekly-2008-12-08-00:00
rpool/ROOT/b103pre
rpool/ROOT/b100@b101a
rpool/ROOT/b100@zfs-auto-snap:daily-2008-11-09-11:13
rpool/ROOT/b100@zfs-auto-snap:daily-2008-11-11-11:10
rpool/ROOT/b100@zfs-auto-snap:daily-2008-11-12-00:00
(...abridged...)
rpool/ROOT/b100@zfs-auto-snap:daily-2008-12-07-12:02
rpool/ROOT/b100

So, it appears that any and all BEs are children of BE b99. Attempting to destroy b99 leaves it mounted on a tmp mountpoint:

sgk@Bastion:~# beadm destroy b99
Are you sure you want to destroy b99? This action cannot be undone(y/[n]): y
Unable to destroy b99.
Mount failed.

~# beadm list -a
(...abridged...)
rpool/ROOT/b99 - /tmp/.be.fFayjf 12.44G static 2008-10-01 14:35

It can be mounted and unmounted, but not destroyed, using beadm. When mounting, beadm gives the message:

Unable to mount b99.
Mount failed.

But it actually is mounted on the specified mountpoint.

At the same time, GRUB will no longer put up the menu.lst, and it now shows some very funky video (vertical lines) at the first OpenSolaris version text. install_grub, and its variant update_grub, have not worked.

I don't have the time to reinstall and am worried about losing any data by mistake. I have some customer data on this machine at the moment, need to not disrupt the software installs for a few days, and am not experienced at trying to zfs send to a file or another disk (none formatted for Solaris at the moment).

Thanks,

sgk.
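For reference, serializing a dataset to a file with zfs send looks roughly like the sketch below. The snapshot name and destination path are hypothetical, and a restore should be tested before relying on such a backup:

    # snapshot the data, then serialize the snapshot stream to a file
    zfs snapshot -r rpool/export/home@backup
    zfs send -R rpool/export/home@backup > /mnt/usbdisk/home-backup.zfs

    # the stream can later be restored with zfs receive
    zfs receive -d rpool < /mnt/usbdisk/home-backup.zfs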
Will Murnane
2008-Dec-08 18:43 UTC
[zfs-discuss] zfs is a co-dependent parent and won't let children leave home
On Mon, Dec 8, 2008 at 13:03, Seymour Krebs <seymour.krebs at gmail.com> wrote:
> ~# zfs destroy -r rpool/ROOT/b99
> cannot destroy 'rpool/ROOT/b99': filesystem has dependent clones

Take a look at the output of "zfs get origin" for the other filesystems in the pool. One of them is a clone of rpool/ROOT/b99; to delete b99 you need to promote it so that it is the parent of b99 rather than the other way around. This is done like so:

zfs promote rpool/ROOT/b100

or whatever filesystem you find is the child of b99. Then you should be able to destroy b99 with no ill effects.

Will
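To see every clone relationship at once, rather than querying datasets one at a time, the origin property can be listed recursively; a minimal sketch using the pool layout from this thread:

    # print name and origin for everything under rpool/ROOT;
    # any dataset whose origin is a b99 snapshot is a clone of b99
    zfs list -H -o name,origin -r rpool/ROOT | grep 'b99@'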
Seymour Krebs
2008-Dec-08 21:19 UTC
[zfs-discuss] zfs is a co-dependent parent and won't let children leave home
Will, thanks for the info on the 'zfs get origin' command. I had previously tried to promote the BEs but saw no effect. I can now see with 'origin' that there is a sequential promotion scheme, and some of the BEs had to be promoted twice to free them from their life of servitude and disgrace.

~# zfs get origin rpool/ROOT/b103
NAME             PROPERTY  VALUE                       SOURCE
rpool/ROOT/b103  origin    rpool/ROOT/b103pre@b103pre  -

~# zfs promote rpool/ROOT/b103
~# zfs get origin rpool/ROOT/b103
NAME             PROPERTY  VALUE                  SOURCE
rpool/ROOT/b103  origin    rpool/ROOT/b101b@b103  -

~# zfs promote rpool/ROOT/b103
~# zfs get origin rpool/ROOT/b103
NAME             PROPERTY  VALUE  SOURCE
rpool/ROOT/b103  origin    -      -

b103 is now free, and beadm destroy functions.
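Because each promote only moves the dataset one step up the clone chain, the double promotion above can be generalized into a loop that runs until the origin reads "-"; a sketch, assuming a Bourne-compatible shell:

    BE=rpool/ROOT/b103
    # promote repeatedly until the dataset no longer has an origin
    while [ "$(zfs get -H -o value origin $BE)" != "-" ]; do
        zfs promote $BE
    done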
Ross
2008-Dec-09 13:47 UTC
[zfs-discuss] zfs is a co-dependent parent and won't let children leave home
While it's good that this is at least possible, that looks horribly complicated to me. Does anybody know if there's any work being done on making it easy to remove obsolete boot environments?
Tim Haley
2008-Dec-09 14:17 UTC
[zfs-discuss] zfs is a co-dependent parent and won't let children leave home
Ross wrote:
> While it's good that this is at least possible, that looks horribly complicated to me.
> Does anybody know if there's any work being done on making it easy to remove obsolete
> boot environments?

If the clones were promoted at the time of their creation, the BEs would stay independent and individually deletable. Promotes can fail, though, if there is not enough space.

I was told a little while back, when I ran into this myself on a Nevada build where ludelete failed, that beadm *did* promote clones. This thread appears to be evidence to the contrary. I think it's a bug; we should either promote immediately on creation, or perhaps beadm destroy could do the promotion behind the covers.

-tim
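For illustration only, "promote at creation" would amount to something like the following sketch (dataset names hypothetical; this is not a claim about what beadm actually runs internally):

    # clone the current BE's root dataset, then immediately promote the clone
    zfs snapshot rpool/ROOT/current@newbe
    zfs clone rpool/ROOT/current@newbe rpool/ROOT/newbe
    zfs promote rpool/ROOT/newbe
    # after the promote, the @newbe snapshot and the clone-parent role
    # belong to rpool/ROOT/newbe; rpool/ROOT/current is now the dependent clone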
Kyle McDonald
2008-Dec-09 14:34 UTC
[zfs-discuss] zfs is a co-dependent parent and won't let children leave home
Tim Haley wrote:
> Ross wrote:
>> While it's good that this is at least possible, that looks horribly complicated to me.
>> Does anybody know if there's any work being done on making it easy to remove obsolete
>> boot environments?
>
> If the clones were promoted at the time of their creation the BEs would
> stay independent and individually deletable. Promotes can fail, though,
> if there is not enough space.
>
> I was told a little while back when I ran into this myself on a Nevada
> build where ludelete failed, that beadm *did* promote clones. This
> thread appears to be evidence to the contrary. I think it's a bug, we
> should either promote immediately on creation, or perhaps beadm destroy
> could do the promotion behind the covers.

If I understand this right, the latter option looks better to me. Why consume the disk space before you have to? What does LU do?

-Kyle
Tim Haley
2008-Dec-09 15:11 UTC
[zfs-discuss] zfs is a co-dependent parent and won't let children leave home
Kyle McDonald wrote:
> Tim Haley wrote:
>> (...abridged...)
>> I was told a little while back when I ran into this myself on a Nevada
>> build where ludelete failed, that beadm *did* promote clones. This
>> thread appears to be evidence to the contrary. I think it's a bug, we
>> should either promote immediately on creation, or perhaps beadm destroy
>> could do the promotion behind the covers.
>
> If I understand this right, the latter option looks better to me. Why
> consume the disk space before you have to?
> What does LU do?

ludelete doesn't handle this any better than beadm destroy does; it fails for the same reasons. lucreate does not promote the clone it creates when a new BE is spawned, either.

-tim
Mark J Musante
2008-Dec-09 15:34 UTC
[zfs-discuss] zfs is a co-dependent parent and won't let children leave home
On Tue, 9 Dec 2008, Tim Haley wrote:
> ludelete doesn't handle this any better than beadm destroy does, it
> fails for the same reasons. lucreate does not promote the clone it
> creates when a new BE is spawned, either.

Live Upgrade's luactivate command is meant to promote the BE during init 6 processing. And ludelete calls lulib_demote_and_destroy_dataset to attempt to put a BE in the right configuration for zfs destroy to work. If it doesn't, then that's a bug in LU.

Regards,
markm
Ross
2008-Dec-09 15:58 UTC
[zfs-discuss] zfs is a co-dependent parent and won't let children leave home
>> we should either promote immediately on creation, or perhaps beadm destroy
>> could do the promotion behind the covers.
>> - Tim

> If I understand this right, the latter option looks better to me. Why
> consume the disk space before you have to?
> What does LU do?
> -Kyle

However, if you want to delete something to free up space, and only then find you haven't got enough free space to do so, you're in a bit of a catch-22. Might be better to at least attempt the promotion early on, and warn the user if it fails for any reason. Would it be sensible to fail the update if the promotion fails?
Evan J. Layton
2008-Dec-11 20:47 UTC
[zfs-discuss] zfs is a co-dependent parent and won't let children leave home
> Kyle McDonald wrote:
>> Tim Haley wrote:
>>> Ross wrote:
>>>> While it's good that this is at least possible, that looks horribly complicated to me.
>>>> Does anybody know if there's any work being done on making it easy to remove obsolete
>>>> boot environments?
>>>
>>> If the clones were promoted at the time of their creation the BEs would
>>> stay independent and individually deletable. Promotes can fail, though,
>>> if there is not enough space.
>>>
>>> I was told a little while back when I ran into this myself on a Nevada
>>> build where ludelete failed, that beadm *did* promote clones. This
>>> thread appears to be evidence to the contrary. I think it's a bug, we
>>> should either promote immediately on creation, or perhaps beadm destroy
>>> could do the promotion behind the covers.

It is beadm activate that does this promotion; it is not done at creation time. Also, beadm destroy does promote, but in "reverse", so that the BE being destroyed becomes a leaf without any dependencies.

>> If I understand this right, the latter option looks better to me. Why
>> consume the disk space before you have to?
>> What does LU do?
>
> ludelete doesn't handle this any better than beadm destroy does, it
> fails for the same reasons. lucreate does not promote the clone it
> creates when a new BE is spawned, either.

Are we saying that the currently active BE should not be the parent, but that a newly spawned BE should be automatically promoted when it's created? As far as beadm is concerned, the activated BE will always be the parent...

-evan
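One way to check whether an activate really performs the promotion described above is to inspect the origin property around the call; a sketch, using a BE name from this thread:

    # before: a cloned BE shows its parent snapshot as origin
    zfs get -H -o value origin rpool/ROOT/b103pre
    beadm activate b103pre
    # after: if activate promoted the BE, the origin should read "-"
    zfs get -H -o value origin rpool/ROOT/b103pre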
Cindy.Swearingen at Sun.COM
2008-Dec-16 22:49 UTC
[zfs-discuss] zfs is a co-dependent parent and won't let children leave home
Hi Seymour,

I didn't get a chance to reproduce this until today, and I noticed that originally you used zfs destroy to remove the unwanted BE (b99).

I tested these steps by using beadm destroy with the auto snapshots running and didn't see the problems listed below. I think your eventual beadm destroy failed because the original zfs destroy failed and/or stuff was left mounted.

With auto snapshots running, do the following:

1. Create a new BE
# beadm create opensolaris2

2. Activate the new BE
# beadm activate opensolaris2

3. Reboot to the new BE

4. Identify the BEs
# beadm list
BE           Active Mountpoint Space Policy Created
--           ------ ---------- ----- ------ -------
opensolaris  -      -          6.40M static 2008-11-13 14:38
opensolaris2 NR     /          2.58G static 2008-12-16 14:51

5. Use beadm destroy
# beadm destroy opensolaris

My test is simpler than your scenario, but it looks to me like beadm destroy deals correctly with any dependency issues. The auto snapshots seem to be oblivious to the change in BE because they are started at the rpool level.

Cindy
Ethan Quach
2008-Dec-17 01:06 UTC
[zfs-discuss] zfs is a co-dependent parent and won't let children leave home
Cindy.Swearingen at sun.com wrote:
> Hi Seymour,
>
> I didn't get a chance to reproduce this until today and I noticed
> that originally you used zfs destroy to remove the unwanted BE (b99).
>
> I tested these steps by using beadm destroy with the auto snapshots
> running and didn't see the problems listed below. I think your
> eventual beadm destroy failed because the original zfs destroy
> failed and/or stuff was left mounted.

I suspect there could be something else causing the mount error during that beadm destroy failure. After the failure, its mountpoint was set to an internally created mountpoint, "/tmp/.be.xxxxxx", which means we tried to mount the BE somewhere to gather data from it, and that mount failed (or a subordinate dataset failed to mount, like /opt or a zone dataset).

Seymour, do you have zones on this system? Can you enable debug and try the destroy again?

# BE_PRINT_ERR=true

I'm also curious as to how the boot environment you're currently booted to (b103pre) shows up as being the 'active on reboot' BE yet is a dependent of b99. If at any point you did a beadm activate of b103pre, it should have promoted b103pre above every other boot environment's datasets.

> With auto snapshots running, do the following:
> (...abridged...)
> My test is simpler than your scenario, but it looks to me like beadm
> destroy deals correctly with any dependency issues.

Yes, it's supposed to. We find and promote the "right" clone of the dataset we need to destroy before attempting to destroy it. I've never had issues with beadm destroy wrt dependents, though I don't enable TS on my boot environment datasets.

-ethan
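For the debug run, the variable has to be visible to beadm; two equivalent ways to set it in a Bourne-compatible shell (BE name hypothetical):

    # export it for the whole session...
    BE_PRINT_ERR=true
    export BE_PRINT_ERR
    beadm destroy b99

    # ...or set it for just the one command
    BE_PRINT_ERR=true beadm destroy b99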
Seymour Krebs
2008-Dec-18 18:07 UTC
[zfs-discuss] zfs is a co-dependent parent and won't let children leave home
1. Sorry for the delay in replying.

2. The reason that I was originally using zfs destroy was that beadm destroy failed.

3. Current state of affairs:

~# beadm list
BE        Active Mountpoint Space        Policy Created
--        ------ ---------- -----        ------ -------
b101b     -      -          6.14G        static 2008-11-14 09:17
b103pre   NR     /          16777216.03T static 2008-12-07 11:25
b103pre-2 -      -          6.65M        static 2008-12-08 13:22

~# beadm create test
~# beadm list
BE        Active Mountpoint Space        Policy Created
--        ------ ---------- -----        ------ -------
b101b     -      -          6.14G        static 2008-11-14 09:17
b103pre   NR     /          16777216.03T static 2008-12-07 11:25
b103pre-2 -      -          6.65M        static 2008-12-08 13:22
test      -      -          112.0K       static 2008-12-18 09:59

~# beadm activate test
Unable to activate test.
Unknown external error.

~# beadm list
BE        Active Mountpoint Space        Policy Created
--        ------ ---------- -----        ------ -------
b101b     -      -          6.14G        static 2008-11-14 09:17
b103pre   NR     /          16777216.03T static 2008-12-07 11:25
b103pre-2 -      -          6.65M        static 2008-12-08 13:22
test      -      -          250.0K       static 2008-12-18 09:59

~# beadm destroy test
Are you sure you want to destroy test? This action cannot be undone(y/[n]): y

~# beadm list
BE        Active Mountpoint Space        Policy Created
--        ------ ---------- -----        ------ -------
b101b     -      -          6.14G        static 2008-11-14 09:17
b103pre   NR     /          16777216.03T static 2008-12-07 11:25
b103pre-2 -      -          6.65M        static 2008-12-08 13:22

Notice:

A. The incorrect size for the current BE follows the active BE around like a faithful dog.

B. If I zfs set mountpoint=, mount, unmount, and re-set mountpoint=, then beadm activate can usually be made to work. This is one reason I wanted to lighten the BE load, because even with a script to do the work, I'm pretty tired of it, every update.
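Spelled out, the workaround in (B) looks roughly like the sketch below. It assumes a BE named test whose root dataset normally carries mountpoint=/, as beadm-created BEs do:

    # give the dataset a temporary mountpoint, cycle a mount, then restore it
    zfs set mountpoint=/mnt rpool/ROOT/test
    zfs mount rpool/ROOT/test
    zfs unmount rpool/ROOT/test
    zfs set mountpoint=/ rpool/ROOT/test
    beadm activate test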
Seymour Krebs
2008-Dec-18 18:21 UTC
[zfs-discuss] zfs is a co-dependent parent and won't let children leave home
Ethan,

1. No zones.

2. With BE_PRINT_ERR=true (sorry, destroy now works):

~# beadm list
BE        Active Mountpoint Space        Policy Created
--        ------ ---------- -----        ------ -------
b101b     -      -          6.14G        static 2008-11-14 09:17
b103pre   NR     /          16777216.03T static 2008-12-07 11:25
b103pre-2 -      -          6.65M        static 2008-12-08 13:22

~# beadm create test
be_get_uuid: failed to get uuid property from BE root dataset user properties.

~# beadm list
BE        Active Mountpoint Space        Policy Created
--        ------ ---------- -----        ------ -------
b101b     -      -          6.14G        static 2008-11-14 09:17
b103pre   NR     /          16777216.03T static 2008-12-07 11:25
b103pre-2 -      -          6.65M        static 2008-12-08 13:22

~# beadm create test
be_get_uuid: failed to get uuid property from BE root dataset user properties.

~# beadm activate test
be_do_installgrub: installgrub failed for device c7d0s2.
Unable to activate test.
Unknown external error.

~# beadm destroy test
Are you sure you want to destroy test? This action cannot be undone(y/[n]): y

So: I can see that a configuration problem with the second disk in the mirror is the cause.

c6d0
Current partition table (original):
Total disk cylinders available: 30398 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0       root    wm     262 - 30397      230.85GB    (30136/0/0) 484134840
  1       swap    wu       1 -   261        2.00GB    (261/0/0)     4192965
  2     backup    wu       0 - 30397      232.86GB    (30398/0/0) 488343870
  3 unassigned    wm       0                0         (0/0/0)             0
  4 unassigned    wm       0                0         (0/0/0)             0
  5 unassigned    wm       0                0         (0/0/0)             0
  6 unassigned    wm       0                0         (0/0/0)             0
  7 unassigned    wm       0                0         (0/0/0)             0
  8       boot    wu       0 -     0        7.84MB    (1/0/0)         16065
  9 unassigned    wm       0                0         (0/0/0)             0

c7d0
Current partition table (original):
Total disk cylinders available: 30398 + 2 (reserved cylinders)

Part      Tag    Flag     Cylinders        Size            Blocks
  0       root    wm       3 -    19      133.35MB    (17/0/0)       273105
  1       swap    wu      20 -    36      133.35MB    (17/0/0)       273105
  2     backup    wu       0 - 30399      232.88GB    (30400/0/0) 488376000
  3 unassigned    wm       0                0         (0/0/0)             0
  4 unassigned    wm       0                0         (0/0/0)             0
  5 unassigned    wm       0                0         (0/0/0)             0
  6        usr    wm      37 - 30397      232.58GB    (30361/0/0) 487749465
  7 unassigned    wm       0                0         (0/0/0)             0
  8       boot    wu       0 -     0        7.84MB    (1/0/0)         16065
  9 alternates    wm       1 -     2       15.69MB    (2/0/0)         32130

The BIOS on this machine boots off of c6, not c7.
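If the only problem is that beadm points installgrub at the wrong device, the GRUB stages can be installed by hand on the disk the BIOS actually boots; a sketch, assuming the bootable root slice is c6d0s0:

    # write stage1/stage2 to the boot disk's root slice
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c6d0s0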
Ethan Quach
2008-Dec-18 18:54 UTC
[zfs-discuss] zfs is a co-dependent parent and won't let children leave home
Seymour Krebs wrote:
> Ethan,
>
> 1. No zones.
>
> 2. With BE_PRINT_ERR=true (sorry, destroy now works):
>
> ~# beadm list
> BE        Active Mountpoint Space        Policy Created
> --        ------ ---------- -----        ------ -------
> b101b     -      -          6.14G        static 2008-11-14 09:17
> b103pre   NR     /          16777216.03T static 2008-12-07 11:25

This is bug 3233, recently fixed after the 2008.11 build, so you likely don't have this fix yet. It should be harmless, though.

> b103pre-2 -      -          6.65M        static 2008-12-08 13:22
>
> ~# beadm create test
> be_get_uuid: failed to get uuid property from BE root dataset user properties.
>
> ~# beadm activate test
> be_do_installgrub: installgrub failed for device c7d0s2.
> Unable to activate test.
> Unknown external error.

Is the second disk an EFI-labeled disk? You could be running into the issue described in bug 4444.

thanks,
-ethan