I had a zpool and was using the implicit zfs filesystem in it:

data                10.2G   124G  8.53G  /var/tellme
data@hotbackup.H23  83.4M      -  8.52G  -
data@hotbackup.H00  25.9M      -  8.52G  -
data@hotbackup.H01  16.2M      -  8.52G  -
...

These are hourly zfs snapshots that I'd prefer to preserve. However, I was also trying to convert this to follow our standard naming, which meant this filesystem should be called data/var_tellme. I ran the following:

# zfs snapshot data@clean
# zfs clone data@clean data/var_tellme
# zfs promote data/var_tellme

This worked as expected, and now I have:

NAME                            USED  AVAIL  REFER  MOUNTPOINT
data                           9.70G   124G  8.53G  legacy
data/var_tellme                9.70G   124G  8.10G  legacy
data/var_tellme@clean           717M      -  8.53G  -
data/var_tellme@hotbackup.H14  24.4M      -  8.09G  -
data/var_tellme@hotbackup.H15  10.0M      -  8.09G  -
data/var_tellme@hotbackup.H16  6.14M      -  8.09G  -
...

However, I now cannot remove the data/var_tellme@clean snapshot because it is labelled as the 'origin' for data itself:

# zfs get origin data
NAME  PROPERTY  VALUE                  SOURCE
data  origin    data/var_tellme@clean  -

I don't care about the 'data' filesystem anymore; I just want to be able to nuke the data/var_tellme@clean snapshot so it doesn't end up filling my zpool with changes.

Any thoughts on how this can be done? I do have other systems I can use to test this procedure. Ideally it would not introduce any downtime, but that can be arranged if necessary.

Thanks,
Todd
On Mon, 15 Jun 2009, Todd Stansell wrote:

> Any thoughts on how this can be done? I do have other systems I can use
> to test this procedure. Ideally it would not introduce any downtime, but
> that can be arranged if necessary.

I think the only work-around is to re-promote 'data', destroy data/var_tellme (or rename it if there are changes you need to keep), wait for the next hourly snapshot, clone data/var_tellme off of *that*, and then do the promote. That way there's no extra @clean snapshot sitting around.

But in general it's not a good idea to have datasets at higher levels be children of snapshots at lower levels. See CR 6622809, or this thread on zfs-discuss:

http://www.opensolaris.org/jive/thread.jspa?messageID=368609

You may experience boot-time mount problems with this kind of inverted parent-child relationship. I see you changed the mountpoint to "legacy", so maybe you already ran into that.

Perhaps a better way to handle this is to snapshot data@clean, and then do a zfs send | zfs recv to make a working copy of the dataset. This will take a while to copy the 10G, but then you can destroy data@clean and data/var_tellme@clean, and delete everything from the 'data' dataset. The downside is that you won't be able to transfer the snapshots to be children of the new dataset.

Regards,
markm
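For reference, the first work-around might look roughly like the following. This is only a sketch against the state shown in the thread: the hotbackup.H17 snapshot name and the .old rename target are illustrative, not names that appear above.

# zfs promote data                  # data becomes the parent again;
                                    # data/var_tellme reverts to a clone of data@clean
# zfs destroy -r data/var_tellme    # -r in case it still has snapshots of its own
                                    # (or: zfs rename data/var_tellme data/var_tellme.old
                                    #  if there are changes worth keeping)
# zfs destroy data@clean            # possible once the old clone is destroyed
                                    # (not if it was merely renamed)
  ...wait for the hourly job to take its next snapshot of data,
     e.g. data@hotbackup.H17...
# zfs clone data@hotbackup.H17 data/var_tellme
# zfs promote data/var_tellme       # the origin of data is now an hourly snapshot
                                    # that gets kept anyway; no extra @clean is pinned

The send/recv alternative is sketched below. markm's wording snapshots 'data'; since data/var_tellme appears to hold the live files here, this version copies that instead, but the mechanism is the same either way. The @copy snapshot and the data/var_tellme_new target are made-up names.

# zfs snapshot data/var_tellme@copy
# zfs send data/var_tellme@copy | zfs recv data/var_tellme_new
# zfs destroy data/var_tellme@copy        # only needed for the transfer
# zfs destroy data/var_tellme_new@copy    # the received counterpart, also disposable

The copy has no origin property, so it is not pinned to any other snapshot; getting rid of the old data/var_tellme afterwards still requires re-promoting 'data' first, as in the first sketch. As markm notes, the existing hotbackup snapshots cannot be carried over to the copy.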