Matthew Ahrens
2006-May-09 20:09 UTC
[zfs-discuss] fwd: ZFS Clone Promotion [PSARC/2006/303 Timeout: 05/12/2006]
FYI folks, I have implemented "clone promotion", also known as "clone swap" or "clone pivot", as described in this bug report:

    6276916 support for "clone swap"

Look for it in an upcoming release... Here is a copy of the PSARC case, which is currently under review.

1. Introduction
   1.1. Project/Component Working Name:
        ZFS Clone Promotion
   1.2. Name of Document Author/Supplier:
        Author: Matt Ahrens
   1.3. Date of This Document:
        06 May, 2006

4. Technical Description

ZFS provides the ability to create read-only snapshots of any filesystem, and to create writeable clones of any snapshot. Suppose that F is a filesystem, S is a snapshot of F, and C is a clone of S. Topologically, F and C are peers: that is, S is a common origin point from which F and C diverge. F and C differ only in how their space is accounted and where they appear in the namespace.

After using a clone to explore some alternate reality (e.g. to test a patch), it's often desirable to 'promote' the clone to 'main' filesystem status -- that is, to swap F and C in the namespace. This is what 'zfs promote' does.

Here are the man page changes:

in the SYNOPSIS section (after 'zfs clone'):

    zfs promote <clone filesystem>

in the DESCRIPTION - Clones section (only the last paragraph is added):

    Clones
      A clone is a writable volume or file system whose initial
      contents are the same as another dataset. As with snapshots,
      creating a clone is nearly instantaneous, and initially
      consumes no additional space.

      Clones can only be created from a snapshot. When a snapshot
      is cloned, it creates an implicit dependency between the
      parent and child. Even though the clone is created somewhere
      else in the dataset hierarchy, the original snapshot cannot
      be destroyed as long as a clone exists. The "origin"
      property exposes this dependency, and the destroy command
      lists any such dependencies, if they exist.

      The clone parent-child dependency relationship can be
      reversed by using the _promote_ subcommand. This causes the
      "origin" filesystem to become a clone of the specified
      filesystem, which makes it possible to destroy the
      filesystem that the clone was created from.

in the SUBCOMMANDS section (after 'zfs clone'):

    zfs promote <clone filesystem>

      Promotes a clone filesystem to no longer be dependent on its
      "origin" snapshot. This makes it possible to destroy the
      filesystem that the clone was created from. The dependency
      relationship is reversed, so that the "origin" filesystem
      becomes a clone of the specified filesystem.

      The snapshot that was cloned, and any snapshots previous to
      this snapshot, will now be owned by the promoted clone. The
      space they use will move from the "origin" filesystem to the
      promoted clone, so it must have enough space available to
      accommodate these snapshots. Note: no new space is consumed
      by this operation, but the space accounting is adjusted.
      Also note that the promoted clone must not have any
      conflicting snapshot names of its own. The _rename_
      subcommand can be used to rename any conflicting snapshots.

in the EXAMPLES section (after 'Example 8: Creating a Clone'):

    Example 9: Promoting a Clone

      The following commands illustrate how to test out changes to
      a filesystem, and then replace the original filesystem with
      the changed one, using clones, clone promotion, and renaming.

      # zfs create pool/project/production
      <populate /pool/project/production with data>
      # zfs snapshot pool/project/production@today
      # zfs clone pool/project/production@today pool/project/beta
      <make changes to /pool/project/beta and test them>
      # zfs promote pool/project/beta
      # zfs rename pool/project/production pool/project/legacy
      # zfs rename pool/project/beta pool/project/production
      <once the legacy version is no longer needed, it can be destroyed>
      # zfs destroy pool/project/legacy

6. Resources and Schedule
   6.4. Steering Committee requested information
        6.4.1. Consolidation C-team Name:
               ON
   6.5. ARC review type: FastTrack

----- End forwarded message -----
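The dependency reversal and snapshot hand-off described in the case can be sketched as a toy Python model. To be clear, this is purely illustrative: the class, field, and function names are invented for this sketch and are not ZFS internals.

```python
# Toy model of 'zfs promote' (illustrative only -- not ZFS code).
# Each dataset owns snapshots (name -> birth txg) and may have an
# origin pointing at a snapshot of another dataset.

class Dataset:
    def __init__(self, name):
        self.name = name
        self.snapshots = {}   # snapshot name -> birth txg
        self.origin = None    # (origin dataset, snapshot name) or None

def snapshot(ds, snapname, txg):
    ds.snapshots[snapname] = txg

def clone(origin_ds, snapname, clone_name):
    c = Dataset(clone_name)
    c.origin = (origin_ds, snapname)
    return c

def promote(clone_ds):
    """Reverse the clone dependency: the origin snapshot and all earlier
    snapshots move to the promoted clone, and the old origin filesystem
    becomes a clone of the promoted one. As the man page text notes,
    conflicting snapshot names must be renamed first."""
    origin_ds, snapname = clone_ds.origin
    pivot_txg = origin_ds.snapshots[snapname]
    moving = {n: t for n, t in origin_ds.snapshots.items() if t <= pivot_txg}
    conflicts = set(moving) & set(clone_ds.snapshots)
    if conflicts:
        raise ValueError("conflicting snapshot names: %s" % sorted(conflicts))
    for n in moving:
        del origin_ds.snapshots[n]
    clone_ds.snapshots.update(moving)
    clone_ds.origin = None
    origin_ds.origin = (clone_ds, snapname)

# Example mirroring the man page:
prod = Dataset("pool/project/production")
snapshot(prod, "today", txg=100)
beta = clone(prod, "today", "pool/project/beta")
promote(beta)
# now beta owns "today", beta.origin is None, and prod is a clone of beta@today
```

Note how no snapshot data is copied anywhere; only ownership (and thus space accounting) changes hands, which matches the "no new space is consumed" note in the case.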
Al Hopper
2006-May-09 21:29 UTC
[zfs-discuss] fwd: ZFS Clone Promotion [PSARC/2006/303 Timeout: 05/12/2006]
On Tue, 9 May 2006, Matthew Ahrens wrote:
> FYI folks, I have implemented "clone promotion", also known as "clone
> swap" or "clone pivot", as described in this bug report:
>
>     6276916 support for "clone swap"
>
> [... full PSARC case text trimmed ...]

Un-Bloody-Believable! Awesome work Matt!

ZFS's achievements would read like a work of pure fiction to someone coming from Solaris 10 Update 1 (which is not that long ago) who then read this post!

Regards,

Al Hopper  Logical Approach Inc, Plano, TX.  al@logical-approach.com
           Voice: 972.379.2133  Fax: 972.379.2134  Timezone: US CDT
OpenSolaris.Org Community Advisory Board (CAB) Member - Apr 2005
Lori Alt
2006-May-10 08:07 UTC
[zfs-discuss] fwd: ZFS Clone Promotion [PSARC/2006/303 Timeout: 05/12/2006]
So let me work through a scenario of how clone promotion might work in conjunction with liveupgrade once we have bootable zfs datasets:

1. We are booted off the dataset pool/root_sol10_u4.

2. We want to upgrade to U5. So we begin by lucreating a new boot environment (BE) as a clone of the current root:

   # lucreate -n root_sol10_u5 -m /:pool/root_sol10_u4:zfs

   By default, liveupgrade will use zfs cloning when creating a new BE from an existing zfs dataset. So behind the scenes, lucreate will execute:

   # zfs snapshot pool/root_sol10_u4@now
   # zfs clone pool/root_sol10_u4@now pool/root_sol10_u5

   (This will take only seconds, and require no pre-allocated space.)

3. Now we do the luupgrade of the newly lucreate'd BE to U5. Note that the only space required for the upgrade is the space needed for packages that are new or modified in U5.

4. The administrator tries out the new BE by luactivate'ing it and booting it. (This is where we need a menuing interface at boot time, so we can choose between the various bootable datasets in the pool. Conveniently, GRUB provides us with one.)

5. The new BE works fine, so the administrator decides to promote the BE's dataset (which is still a clone) to primary dataset status. Here I'm not sure what's best: should liveupgrade promote the dataset as part of its management of boot environments? Or should the administrator have to (or be able to) promote a bootable dataset explicitly? I'll have to give that one a bit of thought, but one way or another, this happens:

   # zfs promote pool/root_sol10_u5

6. We can rename the newly-promoted BE if we want, but let's assume we leave it with its name "root_sol10_u5". Now if we want to get rid of the old U4 root dataset, we should do the following:

   # ludelete root_sol10_u4

   which, in addition to the usual liveupgrade tasks to delete the BE, will do this:

   # zfs destroy pool/root_sol10_u4

So, for the purposes of zfs boot and liveupgrade, I think your new "promote" function works very well.
Am I missing anything?

Lori

Matthew Ahrens wrote:
> FYI folks, I have implemented "clone promotion", also known as "clone
> swap" or "clone pivot", as described in this bug report:
>
>     6276916 support for "clone swap"
>
> [... full PSARC case text trimmed ...]
Matthew Ahrens
2006-May-10 14:01 UTC
[zfs-discuss] fwd: ZFS Clone Promotion [PSARC/2006/303 Timeout: 05/12/2006]
On Wed, May 10, 2006 at 02:07:01AM -0600, Lori Alt wrote:
> So, for the purposes of zfs boot and liveupgrade, I think your new
> "promote" function works very well. Am I missing anything?

Thanks! Your use case with liveupgrade looks great!

--matt
Nicolas Williams
2006-May-10 15:27 UTC
[zfs-discuss] fwd: ZFS Clone Promotion [PSARC/2006/303 Timeout: 05/12/2006]
On Wed, May 10, 2006 at 02:07:01AM -0600, Lori Alt wrote:
> 5. The new BE works fine, so the administrator decides to promote
>    the BE's dataset (which is still a clone) to primary dataset status.
>    Here I'm not sure what's best: should liveupgrade promote the
>    dataset as part of its management of boot environments? Or
>    should the administrator have to (or be able to) promote a
>    bootable dataset explicitly? I'll have to give that one a bit of
>    thought, but one way or another, this happens:
>
>    # zfs promote pool/root_sol10_u5

But does it matter? I mean, unless you want to release space associated with old BEs that you never use anymore, it probably doesn't matter much. More important, from the user's perspective, is re-arranging the GRUB menu so the BEs appear in the user's preferred order, and setting a proper default.

> 6. We can rename the newly-promoted BE if we want, but let's
>    assume we leave it with its name "root_sol10_u5". Now if we
>    want to get rid of the old U4 root dataset, we should do the
>    following:
>
>    # ludelete root_sol10_u4
>
>    which, in addition to the usual liveupgrade tasks to delete the
>    BE, will do this:
>
>    # zfs destroy pool/root_sol10_u4

This would require promotion, yes, and answers the above question about whether the sysadmin should have to manually promote BEs ("no"). One possibility would be to always promote, at boot time, whichever BE is being booted, unless promotion is expensive.

Could there be a tree of BEs? Does this make any difference?

Nico
--
Edward Pilatowicz
2006-May-10 16:10 UTC
[zfs-discuss] fwd: ZFS Clone Promotion [PSARC/2006/303 Timeout: 05/12/2006]
Out of curiosity, how are properties handled?

For example, if you have a fs with compression disabled, you snapshot it, you clone it, you enable compression on the clone, and then you promote the clone. Will compression be enabled on the new parent?

And what about other clones that have properties that are inherited from the parent? Will they all now have compression enabled as well?

ed

On Tue, May 09, 2006 at 01:09:46PM -0700, Matthew Ahrens wrote:
> FYI folks, I have implemented "clone promotion", also known as "clone
> swap" or "clone pivot", as described in this bug report:
>
>     6276916 support for "clone swap"
>
> [... full PSARC case text trimmed ...]
Matthew Ahrens
2006-May-10 18:05 UTC
[zfs-discuss] fwd: ZFS Clone Promotion [PSARC/2006/303 Timeout: 05/12/2006]
On Wed, May 10, 2006 at 09:10:10AM -0700, Edward Pilatowicz wrote:
> out of curiosity, how are properties handled?

I think you're confusing[*] the "clone origin filesystem" and the "parent filesystem". The parent filesystem is the one that is above it in the filesystem namespace, from which it inherits properties. The clone origin is the snapshot from which the clone was created, and its primary influence is that the clone origin can't be destroyed. The "clone origin filesystem" is the filesystem that contains the clone origin, which also can not be destroyed.

Let's take an example:

    # zfs create pool/project/production
    # zfs snapshot pool/project/production@today
    # zfs clone pool/project/production@today pool/project/beta
    # zfs clone pool/project/production@today pool/test/foo/clone

    FS                       PARENT         CLONE ORIGIN FS (snap)
    pool/project/production  pool/project   -none-
    pool/project/beta        pool/project   pool/project/production (@today)
    pool/test/foo/clone      pool/test/foo  pool/project/production (@today)

So, pool/project/production and pool/project/beta inherit their properties from pool/project, and pool/test/foo/clone inherits its properties from pool/test/foo. pool/project/production@today (and thus pool/project/production) can't be destroyed.

    # zfs promote pool/project/beta

    FS                       PARENT         CLONE ORIGIN FS (snap)
    pool/project/production  pool/project   pool/project/beta (@today)
    pool/project/beta        pool/project   -none-
    pool/test/foo/clone      pool/test/foo  pool/project/beta (@today)

The inheritance is still the same: pool/project/production and pool/project/beta inherit their properties from pool/project, and pool/test/foo/clone inherits its properties from pool/test/foo. pool/project/beta@today (and thus pool/project/beta) can't be destroyed, but pool/project/production now can be destroyed.

And to answer your questions directly:

> for example if you have a fs with compression disabled, you snapshot
> it, you clone it, and you enable compression on the clone, and then
> you promote the clone. will compression be enabled on the new parent?

The properties on the clone do not change when it is promoted. (At least, not the editable ones; the space accounting will change, since some of the clone origin's snapshots are moved to the promoted fs.)

> and what about other clones that have properties that are inherited from
> the parent? will they all now have compression enabled as well?

Any other clones will inherit their properties from their *parent*, not their clone origin, so 'zfs promote' will not change that either.

Note that any snapshots that are moved to the promoted filesystem *will* inherit their properties from their new filesystem. However, the only inheritable properties which affect snapshots are 'devices', 'exec', and 'setuid' (see 6420135, "zfs(1m) should display properties of snapshots that affect their behavior").

Also, note that you can change a filesystem's parent, and thus where it inherits properties from, by using the 'zfs rename' subcommand.

--matt

[*] I may be the cause of some of this confusion, since there are really two trees here and sometimes I'll call the clone origin the "parent" or "clone parent". But the documentation at least should be consistent; let me know if this is not the case.
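The parent-vs-clone-origin distinction can be sketched in a few lines of Python. This is a toy model for illustration only: the lookup logic is an assumption standing in for ZFS's property machinery, not actual ZFS code.

```python
# Toy sketch (not ZFS code): properties inherit down the *namespace*
# tree, so promotion -- which only rewires clone-origin links --
# never changes where a dataset gets its properties from.

def parent(name):
    """Namespace parent: pool/project/beta -> pool/project."""
    return name.rsplit("/", 1)[0] if "/" in name else None

def get_prop(local_props, name, prop):
    """Walk up the namespace looking for an explicitly set value."""
    while name is not None:
        if prop in local_props.get(name, {}):
            return local_props[name][prop]
        name = parent(name)
    return "default"

# compression set locally on the clone only:
props = {"pool/project/beta": {"compression": "on"}}

# Promoting beta swaps the clone-origin link between production and
# beta, but their namespace parents (and thus inheritance) are untouched:
print(get_prop(props, "pool/project/beta", "compression"))        # on
print(get_prop(props, "pool/project/production", "compression"))  # default
```

In this model, setting compression on pool/project instead would make both production and beta inherit it, before and after a promote, matching the explanation above.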
Edward Pilatowicz
2006-May-10 18:15 UTC
[zfs-discuss] fwd: ZFS Clone Promotion [PSARC/2006/303 Timeout: 05/12/2006]
Thanks for the detailed explanation; this all makes much more sense to me now. It turns out my confusion was due to a general lack of understanding of how property inheritance works. (I had assumed that it was always based off of the "clone origin filesystem" rather than the "parent filesystem".)

ed

On Wed, May 10, 2006 at 11:05:14AM -0700, Matthew Ahrens wrote:
> I think you're confusing[*] the "clone origin filesystem" and the
> "parent filesystem". The parent filesystem is the one that is above it
> in the filesystem namespace, from which it inherits properties.
>
> [... detailed example trimmed ...]
George Wilson
2006-May-11 12:33 UTC
[zfs-discuss] fwd: ZFS Clone Promotion [PSARC/2006/303 Timeout: 05/12/2006]
Matt,

This is really cool! One thing that I can think of that would be nice to have is the ability to 'promote' and 'sync'. In other words, just prior to promoting the clone, bring any files that are newer on the original parent up-to-date on the clone. I suspect you could utilize zfs diffs (CR# 6370738) to provide this functionality and apply the diffs.

BTW, would there have to be any special handling for the top-level parent filesystem associated with the pool?

Thanks,
George

Matthew Ahrens wrote:
> FYI folks, I have implemented "clone promotion", also known as "clone
> swap" or "clone pivot", as described in this bug report:
>
>     6276916 support for "clone swap"
>
> [... full PSARC case text trimmed ...]
Darren J Moffat
2006-May-11 13:10 UTC
[zfs-discuss] fwd: ZFS Clone Promotion [PSARC/2006/303 Timeout: 05/12/2006]
George Wilson wrote:
> Matt,
>
> This is really cool! One thing that I can think of that would be nice
> to have is the ability to 'promote' and 'sync'. In other words, just
> prior to promoting the clone, bring any files that are newer on the
> original parent up-to-date on the clone. I suspect you could utilize
> zfs diffs (CR# 6370738) to provide this functionality and apply the
> diffs.

I'm really confused. Why would you want to do that?

--
Darren J Moffat
George Wilson
2006-May-11 14:15 UTC
[zfs-discuss] fwd: ZFS Clone Promotion [PSARC/2006/303 Timeout: 05/12/2006]
This would be comparable to what live upgrade does with its sync
option. With lu, certain files get synced to the newly activated BE
just prior to booting it up (see /etc/lu/synclist).

Let's take a filesystem which contains both static application data as
well as constantly changing files such as logs, data, or configuration
files. And let's assume that such a filesystem is cloned with the
intention of upgrading the application version. This new application
version could undergo several weeks of testing, meaning that certain
files may have diverged from the real production data. So just prior
to promoting it into production, you may want to sync up any files
which have been updated in the original parent. Here's where the need
for 'zfs diffs' comes in. You could also use other tools like rsync,
but it would be nice to provide something similar to what live upgrade
does so that it happens as part of the promotion.

A best practice would be to keep the application data and
config/logging data separate. This would avoid the need for this
feature.

Thanks,
George

Darren J Moffat wrote:
> George Wilson wrote:
>> Matt,
>>
>> This is really cool! One thing that I can think of that would be
>> nice to have is the ability to 'promote' and 'sync'. In other words,
>> just prior to promoting the clone, bring any files that are newer on
>> the original parent up-to-date on the clone. I suspect you could
>> utilize zfs diffs (CR# 6370738) to provide this functionality and
>> apply the diffs.
>
> I'm really confused. Why would you want to do that?
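The pre-promotion sync George suggests can be approximated in userland
today. The sketch below is hypothetical: plain directories and cp(1)
stand in for mounted ZFS datasets and for the not-yet-implemented
'zfs diffs' (CR 6370738), and every path and file name is invented for
illustration.

```shell
# Hypothetical pre-promotion sync sketch. /tmp/demo-parent stands in
# for the original parent filesystem and /tmp/demo-clone for the clone;
# on real ZFS these would be mounted datasets.
parent=/tmp/demo-parent
clone=/tmp/demo-clone
mkdir -p "$parent" "$clone"

echo "stale config" > "$clone/app.conf"    # clone's copy, from clone time
sleep 1                                    # ensure a strictly newer mtime
echo "fresh config" > "$parent/app.conf"   # parent updated since cloning

# GNU cp -u copies only files whose source mtime is newer than the
# destination's, approximating "bring newer parent files up to date
# on the clone" just before running 'zfs promote'.
cp -ru "$parent/." "$clone/"

cat "$clone/app.conf"
```

Whether a given file should be copied at all is exactly the policy
question raised later in this thread; this sketch copies everything
newer, which is rarely the right answer for binaries.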
Darren J Moffat
2006-May-11 14:38 UTC
[zfs-discuss] fwd: ZFS Clone Promotion [PSARC/2006/303 Timeout: 05/12/2006]
George Wilson wrote:
> This would be comparable to what live upgrade does with its sync
> option. With lu, certain files get synced to the newly activated BE
> just prior to booting it up. (see /etc/lu/synclist)

Even in that file there are three different policies:

	OVERWRITE, APPEND, PREPEND.

Note also the comment:

	# It is important to fully understand that adding other files not
	# listed here could cause a system to become unbootable.

> Let's take a filesystem which contains both static application data
> as well as constantly changing files such as logs, data, or
> configuration files. And let's assume that such a filesystem is
> cloned with the intention of upgrading the application version. This
> new application version could undergo several weeks of testing,
> meaning that certain files may have diverged from the real production
> data. So just prior to promoting it into production, you may want to
> sync up any files which have been updated in the original parent.
> Here's where the need for 'zfs diffs' comes in. You could also use
> other tools like rsync, but it would be nice to provide something
> similar to what live upgrade does so that it happens as part of the
> promotion.

What would the output of zfs diffs be? I see two possible outputs:

1) the list of files that differ between the snapshot used to create
   the clone and now.
2) the "changes" that ZFS needs to do to make them the same.

How would you apply these diffs? How do you select which files to
apply and which not to? For example, you want the log files to be
"merged" somehow, but you certainly don't want the binaries to be
merged.

For many applications you can't just blindly copy files around while
they are running - they don't like it, and it leads to
application-layer data corruption.

I fully support a zfs diffs concept, but I don't understand why this
is in any way tied to clone promotion as a zfs command.

> A best practice would be to keep the application data and
> config/logging data separately. This would avoid the need for this
> feature.

Agreed completely, and with ZFS anything other than that is IMO poor
planning. ZFS datasets are cheap!

--
Darren J Moffat
Nicolas Williams
2006-May-11 14:58 UTC
[zfs-discuss] fwd: ZFS Clone Promotion [PSARC/2006/303 Timeout: 05/12/2006]
6370738 zfs diffs filesystems
Nicolas Williams
2006-May-11 15:12 UTC
ZFS diffs (Re: [zfs-discuss] fwd: ZFS Clone Promotion [PSARC/2006/303 Timeout: 05/12/2006])
On Thu, May 11, 2006 at 03:38:59PM +0100, Darren J Moffat wrote:
> What would the output of zfs diffs be ?

My original conception was:

 - dnode # + changed blocks
 - + some naming hints so that one could quickly find changed dnodes
   in clones

I talked about this with Bill Moore and he came up with something much
better: a diffs filesystem, where one could traverse diffs between
snapshots as though they were filesystems, with file diffs represented
as holes (or something like that).

	6370738 zfs diffs filesystems

> How would you apply these diffs ?
> How do you select which files to apply and which not to ?

ZFS can't handle conflict resolution -- that's up to whoever uses this
facility.

> I fully support a zfs diffs concept but I don't understand why this
> is in any way tied to clone promotion as a zfs command.

Me either!

>> A best practice would be to keep the application data and
>> config/logging data separately. This would avoid the need for this
>> feature.
>
> Agreed completely, and with ZFS anything other than that is IMO poor
> planning. ZFS datasets are cheap!

Same here.

Nico
--
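Darren's output option (1) -- a list of files that differ -- can be
approximated in userland with diff(1). On real ZFS the "before" side
would be the read-only snapshot visible under .zfs/snapshot/<name>;
the sketch below uses plain directories and invented file names.

```shell
# Approximation of 'zfs diffs' output option (1): list the files that
# differ between a snapshot and the live filesystem. Plain directories
# stand in for the snapshot tree and the live tree.
snap=/tmp/demo-snap
live=/tmp/demo-live
mkdir -p "$snap" "$live"

echo same   > "$snap/unchanged"; echo same  > "$live/unchanged"
echo before > "$snap/edited";    echo after > "$live/edited"
echo new    > "$live/created"    # exists only in the live tree

# -r recurses; -q reports only which files differ or are one-sided,
# without printing the content-level changes themselves
diff -rq "$snap" "$live" | sort
```

This only reports *which* files changed; option (2), emitting the
block-level changes needed to reconcile them, has no such cheap
userland stand-in, which is why the snapshot-aware kernel support of
CR 6370738 is interesting.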
Bill Sommerfeld
2006-May-11 15:15 UTC
[zfs-discuss] fwd: ZFS Clone Promotion [PSARC/2006/303 Timeout: 05/12/2006]
On Thu, 2006-05-11 at 10:38, Darren J Moffat wrote:
> George Wilson wrote:
>> This would be comparable to what live upgrade does with its sync
>> option. With lu, certain files get synced to the newly activated BE
>> just prior to booting it up. (see /etc/lu/synclist)
>
> even in that file there are three different policies:
>
>	OVERWRITE, APPEND, PREPEND.

This situation is analogous to the "merge with common ancestor"
operations performed on source code by most SCM systems; with a named
snapshot as the clone base, the ancestor is preserved and can easily
be retrieved.

There's a real opportunity here to enhance packaging class action
scripts to allow for a file-format-specific three-way merge when
conflicting changes are detected on both "branches".

For editable files, packaging squirrels away an unmodified copy, so
there are actually four or five different versions which might
conceivably provide input to different stages of an upgrade.

The merge ladder looks like:

	0 ---dev-----> 1
	|              |
	v              v
	2 ---upgrade-> 4
	|              |
	v              v
	3 ---sync----> 5

Key:

	0) old release, unmodified (preserved in old BE packaging)
	1) new release, unmodified (preserved in new BE packaging)
	2) running system at time of LU copy (preserved via zfs snapshot)
	3) running system at time of cutover (old BE contents)
	4) upgraded system (new BE contents, pre-sync)
	5) system after luactivate+reboot (new BE contents, post-sync)

	"dev":     solaris development (changes to source packages)
	"upgrade": package upgrade as part of luupgrade
	"sync":    post-LU resync to catch any changes from operation

					- Bill
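The ancestor-based merge Bill describes can be sketched with diff3(1),
using the clone-origin snapshot's copy of a file as the common
ancestor (version 2 in the ladder). File names and contents below are
invented for illustration.

```shell
# Three-way merge sketch: 'base' is the file as of the clone-origin
# snapshot (common ancestor), 'upgraded' is the new-BE copy, and
# 'running' is the copy changed on the live system since the snapshot.
base=/tmp/demo-base.conf
upgraded=/tmp/demo-upgraded.conf
running=/tmp/demo-running.conf

printf 'loglevel=info\nport=80\n'   > "$base"
printf 'loglevel=debug\nport=80\n'  > "$upgraded"  # changed by upgrade
printf 'loglevel=info\nport=8080\n' > "$running"   # changed in production

# diff3 -m folds both sets of non-conflicting changes into one result;
# overlapping edits to the same lines would emit conflict markers,
# which is where the file-format-specific merge logic would take over.
diff3 -m "$upgraded" "$base" "$running" > /tmp/demo-merged.conf
cat /tmp/demo-merged.conf
```

Both changes survive the merge because each branch touched a different
line; that is exactly the "merge with common ancestor" behavior SCM
systems rely on.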
Nicolas Williams
2006-May-11 15:30 UTC
[zfs-discuss] fwd: ZFS Clone Promotion [PSARC/2006/303 Timeout: 05/12/2006]
On Thu, May 11, 2006 at 11:15:12AM -0400, Bill Sommerfeld wrote:
> This situation is analogous to the "merge with common ancestor"
> operations performed on source code by most SCM systems; with a named
> snapshot as the clone base, the ancestor is preserved and can easily
> be retrieved.

Yes, and in general it's hard to automate. For specific files one may
know what to do and how to automate the process (think acr).

> there's a real opportunity here to enhance packaging class action
> scripts to allow for a file-format-specific three-way merge when
> conflicting changes are detected on both "branches".
>
> for editable files, packaging squirrels away an unmodified copy so
> there are actually four or five different versions which might
> conceivably provide input to different stages of an upgrade.

This would be wonderful. Maybe we could enhance packaging's notion of
class action scripts.
George Wilson
2006-May-11 16:33 UTC
[zfs-discuss] fwd: ZFS Clone Promotion [PSARC/2006/303 Timeout: 05/12/2006]
Darren J Moffat wrote:
> How would you apply these diffs ?
> How do you select which files to apply and which not to ?
> For example, you want the log files to be "merged" somehow, but you
> certainly don't want the binaries to be merged.

This would have to be a decision by the user when the sync takes
place. They would have to know the application.

> For many applications you can't just blindly copy files around while
> they are running - they don't like it, and it leads to
> application-layer data corruption.

The assumption is that if you are promoting a filesystem then the
application would no longer be running. I don't believe that the
original proposal accounted for doing a live application promotion,
right?

> I fully support a zfs diffs concept, but I don't understand why this
> is in any way tied to clone promotion as a zfs command.

Yes, having the zfs diffs concept external to promote is a good idea,
but having promote automatically take advantage of it is an
ease-of-use enhancement. That's what the suggestion was all about.

>> A best practice would be to keep the application data and
>> config/logging data separately. This would avoid the need for this
>> feature.
>
> Agreed completely, and with ZFS anything other than that is IMO poor
> planning. ZFS datasets are cheap!

This would be great, but I'm not sure we can dictate this, since
application vendors will often install in whatever hierarchy they
choose.