I am getting the following message when I try to remove a snapshot from a clone:

bash-3.00# zfs destroy data/webserver@sys_unconfigd
cannot destroy 'data/webserver@sys_unconfigd': snapshot has dependent clones
use '-R' to destroy the following datasets:

The datasets are being used, so why can't I delete the snapshot?

Thanks
On Fri, Feb 12, 2010 at 10:55 AM, Tony MacDoodle <tpsdoodle at gmail.com> wrote:
> I am getting the following message when I try to remove a snapshot from a clone:
>
> bash-3.00# zfs destroy data/webserver@sys_unconfigd
> cannot destroy 'data/webserver@sys_unconfigd': snapshot has dependent clones
> use '-R' to destroy the following datasets:

Is there something else below that line? Like the names of the clones?

> The datasets are being used, so why can't I delete the snapshot?

Because it's used as the base for clones.

--
Fajar
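One way to list the dependent clones yourself (a sketch, assuming the pool layout from the original post) is to check the origin property of the other datasets in the pool; any dataset whose origin is the snapshot is a clone of it:

bash-3.00# zfs list -r -o name,origin data | grep sys_unconfigd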
On Thu, Feb 11, 2010 at 10:55:20PM -0500, Tony MacDoodle wrote:
> I am getting the following message when I try to remove a snapshot from a clone:
>
> bash-3.00# zfs destroy data/webserver@sys_unconfigd
> cannot destroy 'data/webserver@sys_unconfigd': snapshot has dependent clones
> use '-R' to destroy the following datasets:
>
> The datasets are being used, so why can't I delete the snapshot?

Clones are writable copies of snapshots, and share space with the snapshot that is their basis (initially, all the space). That space belongs to the snapshot, which in turn belongs to another dataset (from which it was originally taken). As a result, for clones you will often see that "referenced" is much more than "usedbydataset".

You can use zfs promote to change around which dataset owns the base snapshot and which is the dependent clone with a parent, so you can delete the other - but if you want both datasets you will need to keep the snapshot they share.

--
Dan.
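A minimal sketch of the promote approach, assuming a clone named data/webserver-clone (the actual clone name was not shown in the thread):

bash-3.00# zfs promote data/webserver-clone
# data/webserver-clone now owns the sys_unconfigd snapshot, and
# data/webserver becomes the dependent clone.
bash-3.00# zfs destroy data/webserver
# Only once the now-dependent dataset is gone can the snapshot itself go:
bash-3.00# zfs destroy data/webserver-clone@sys_unconfigd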
On Fri, 12 Feb 2010, Daniel Carosone wrote:
> You can use zfs promote to change around which dataset owns the base
> snapshot and which is the dependent clone with a parent, so you can
> delete the other - but if you want both datasets you will need to keep
> the snapshot they share.

Right. The other option is to zfs send the snapshot to create a copy instead of a clone. Once the zfs recv completes (and the clone that depended on the snapshot is destroyed), the snapshot can be destroyed. Of course, it takes much longer to do this, as zfs is going to create a full copy of the snapshot.

The appeal of clones is that they, at least initially, take no extra space, and also that they're nearly instantaneous. But they require the snapshot to remain for the lifetime of the clone.

Regards,
markm
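A sketch of that send/recv approach, with the copy and clone names hypothetical (only data/webserver@sys_unconfigd appears in the thread):

bash-3.00# zfs send data/webserver@sys_unconfigd | zfs recv data/webserver-copy
# data/webserver-copy is now an independent, full copy of the snapshot's contents.
# (zfs recv also creates data/webserver-copy@sys_unconfigd, which can be destroyed freely.)
bash-3.00# zfs destroy data/webserver-clone
bash-3.00# zfs destroy data/webserver@sys_unconfigd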
On Fri, Feb 12, 2010 at 09:50:32AM -0500, Mark J Musante wrote:
> The other option is to zfs send the snapshot to create a copy
> instead of a clone.

One day, in the future, I hope there might be a third option, somewhat as an optimisation.

With dedup and bp-rewrite, a new operation could be created that takes the shared data and makes it uniquely-referenced but deduplicated data. This could be a lot more efficient and less disruptive because of the advance knowledge that the data must already be the same.

Whether it's worth the implementation effort is another issue, but in the meantime we have plenty of time to try and come up with a sensible name for it. "unclone" is too boring :)

--
Dan.
On Fri, Feb 12, 2010 at 1:08 PM, Daniel Carosone <dan at geek.com.au> wrote:
> With dedup and bp-rewrite, a new operation could be created that takes
> the shared data and makes it uniquely-referenced but deduplicated data.
> This could be a lot more efficient and less disruptive because of the
> advance knowledge that the data must already be the same.

That's essentially what a send/recv does when dedup is enabled.

-B

--
Brandon High : bhigh at freaks.com
There is absolutely no substitute for a genuine lack of preparation.
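For illustration, a sketch assuming a pool recent enough to have the dedup property; the copy name is hypothetical:

bash-3.00# zfs get dedup data/webserver
# Only blocks written while dedup=on are entered in the dedup table, so the
# received copy only shares on-disk space with data that was originally
# written with dedup enabled.
bash-3.00# zfs send data/webserver@sys_unconfigd | zfs recv data/webserver-copy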