As I understand it, the snapshot of a dataset is used as a reference by the clone. So the clone is initially a set of pointers to the snapshot. That's why it is so fast to create.

How can I "separate" it from the snapshot? So that df -k or zfs list will display, for a 48G drive:

  pool/fs1    4G  40G
  pool/clone  4G  40G

instead of:

  pool/fs1    4G  44G
  pool/clone  4G  44G

I hope I am clear enough :/

Thanks
Marlanne
Marlanne DeLaSource wrote:
> As I understand it, the snapshot of a dataset is used as a reference by
> the clone. So the clone is initially a set of pointers to the snapshot.
> That's why it is so fast to create.
>
> How can I "separate" it from the snapshot? So that df -k or zfs list
> will display, for a 48G drive:
>
>   pool/fs1    4G  40G
>   pool/clone  4G  40G
>
> instead of:
>
>   pool/fs1    4G  44G
>   pool/clone  4G  44G
>
> I hope I am clear enough :/

There is no way to "separate" a clone from its origin snapshot.

I think the numbers you're posting are:

  FS          REFD  AVAIL
  pool/fs1    4G    40G
  pool/clone  4G    40G

So you want it to say that less space is available than really is? Perhaps what you want is to set a reservation on the clone for its initial size, so that you will be guaranteed to have enough space to overwrite its initial contents with new contents of the same size?

--matt
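For instance, a minimal sketch of Matt's reservation suggestion, reusing the pool/clone name and 4G size from the thread's example (adjust both to your own setup):

  # reserve the clone's initial size so that rewriting its contents
  # can never fail for lack of pool space
  zfs set reservation=4G pool/clone

  # confirm the reservation took effect
  zfs get reservation pool/clone

The reservation counts against the pool's free space, so the available figure shown to other datasets should also drop by roughly the reserved amount, which is close to the accounting Marlanne asked for.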
On Fri, 2006-09-01 at 06:03 -0700, Marlanne DeLaSource wrote:
> As I understand it, the snapshot of a dataset is used as a reference by
> the clone.
>
> So the clone is initially a set of pointers to the snapshot. That's
> why it is so fast to create.
>
> How can I "separate" it from the snapshot? So that df -k or zfs list
> will display, for a 48G drive:
>
>   pool/fs1    4G  40G
>   pool/clone  4G  40G
>
> instead of:
>
>   pool/fs1    4G  44G
>   pool/clone  4G  44G
>
> I hope I am clear enough

You're quite clear about the end state you're looking for, but not about why you might want to do this...

You could conceivably put another full copy of the data into the pool by using zfs send piped to zfs receive instead of zfs clone.

You might also want to take a look at "zfs promote", which allows the clone to take over primary ownership of the snapshot (changing the snapshot's former parent into a clone).

- Bill
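A sketch of the send/receive approach Bill describes; the snapshot and destination names here are illustrative, while pool/fs1 follows the thread's example:

  # snapshot the source filesystem, then stream it into a new,
  # fully independent filesystem in the same pool
  zfs snapshot pool/fs1@copy1
  zfs send pool/fs1@copy1 | zfs receive pool/fs2

Unlike a clone, pool/fs2 shares no blocks with pool/fs1, so the pool's free space drops by a full copy of the data, matching the accounting in Marlanne's first listing.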
James,

I noticed your link to the ZFS Admin Guide is out of date because I appended the date to the pdf filename. This doesn't work because when I update the guide once a month or so, you wouldn't get the latest version. So, I simplified this by renaming it zfsadmin.pdf. The month/year is on the title page.

Cindy

On 8/29/06, Noel Dellofano <Noel dot Dellofano at sun dot com> wrote:
> Hey everybody,
> I'd like to announce the addition of a "ZFS Links" page on the
> OpenSolaris ZFS community page. If you have any links to articles
> that pertain to ZFS that you find useful or should be shared with the
> community as a whole, please let us know and we'll add it to the page.
>
> http://www.opensolaris.org/os/community/zfs/links/

You are welcome to use any or all of the links included in this blog entry:
http://uadmin.blogspot.com/2006/06/interested-in-zfs.html

James Dickens
uadmin.blogspot.com

> thanks,
> Noel
Thanks for all your answers.

The initial idea was to make a dataset/snapshot and clone (fast) and then separate the clone from its snapshot. The clone could then be used as a new independent dataset.

The send/receive subcommands are probably the only way to duplicate a dataset.
Marlanne DeLaSource wrote:
> Thanks for all your answers.
>
> The initial idea was to make a dataset/snapshot and clone (fast) and
> then separate the clone from its snapshot. The clone could then be used
> as a new independent dataset.
>
> The send/receive subcommands are probably the only way to duplicate a
> dataset.

I'm still not sure I understand what about clones makes you not want to use them. What do you mean by "separate the clone from its snapshot"?

Is it that you want to destroy the filesystem that the clone was created from? To do that you can use 'zfs promote'. Is it that you want to guarantee space availability to overwrite it? To do that you can use 'zfs set reservation'.

--matt
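A sketch of the 'zfs promote' route Matt mentions, using the thread's dataset names with an illustrative snapshot name:

  # create the clone from a snapshot of the original filesystem
  zfs snapshot pool/fs1@base
  zfs clone pool/fs1@base pool/clone

  # hand ownership of the snapshot to the clone; pool/fs1 becomes
  # the dependent dataset
  zfs promote pool/clone

  # the original filesystem can now be destroyed, leaving the clone
  # standing on its own
  zfs destroy pool/fs1

After the promote, the snapshot lives at pool/clone@base and its space accounting moves with it.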
> > The initial idea was to make a dataset/snapshot and clone (fast) and
> > then separate the clone from its snapshot. The clone could then be
> > used as a new independent dataset.
> >
> > The send/receive subcommands are probably the only way to duplicate
> > a dataset.
>
> I'm still not sure I understand what about clones makes you not want to
> use them. What do you mean by "separate the clone from its snapshot"?
> Is it that you want to destroy the filesystem that
> the clone was created from? To do that you can use 'zfs promote'. Is it
> that you want to guarantee space availability to overwrite it? To do
> that you can use 'zfs set reservation'.
>
> --matt

I didn't ask the original question, but I have a scenario where I want to use a clone as well and encounter a (designed?) behaviour I am trying to understand.

I create a filesystem A with ZFS and modify it to a point where I create a snapshot A@1. Then I clone that snapshot to create a new filesystem B. I seem to have two filesystem "entities" I can make independent modifications and snapshots with/on/from.

The problem I am running into is that when modifying A and wanting to roll back to the snapshot A@1, I can't do that as long as the clone B is mounted.

Is this a case where I would benefit from the ability to separate the clone? Or is this something not possible with ZFS?

Thanks for any answers
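A minimal reproduction of the behaviour Jan describes (pool, device, and dataset names are hypothetical):

  zpool create testpool c0t0d0
  zfs create testpool/A
  # ... write some files under /testpool/A ...
  zfs snapshot testpool/A@1
  zfs clone testpool/A@1 testpool/B

  # ... modify A further, then try to undo those changes:
  zfs rollback testpool/A@1
  # on the bits discussed here this is refused while clone B is mounted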
On 9/1/06, Matthew Ahrens <Matthew.Ahrens at sun.com> wrote:
> Marlanne DeLaSource wrote:
> > Thanks for all your answers.
> >
> > The initial idea was to make a dataset/snapshot and clone (fast) and
> > then separate the clone from its snapshot. The clone could then be
> > used as a new independent dataset.
> >
> > The send/receive subcommands are probably the only way to duplicate
> > a dataset.
>
> I'm still not sure I understand what about clones makes you not want to
> use them. What do you mean by "separate the clone from its snapshot"?
> Is it that you want to destroy the filesystem that the clone was created
> from? To do that you can use 'zfs promote'. Is it that you want to
> guarantee space availability to overwrite it? To do that you can use
> 'zfs set reservation'.

A couple scenarios from environments that I work in, using "legacy" file systems and volume managers:

1) Various test copies need to be on different spindles to remove any perceived or real performance impact imposed by one or the other. Arguably, by having the IO activity spread across all the spindles there would be fewer bottlenecks. However, if you are trying to simulate the behavior of X production spindles, doing so with 1.3 X or 2 X spindles is not a proper comparison. Hence being wasteful and getting suboptimal performance may be desirable. If you don't understand that logic, you haven't worked in a big enough company or studied Dilbert enough. :)

2) One of the copies of the data needs to be portable to another system while the original stays put. This could be done to refresh non-production instances from production, or to perform backups in such a way that they don't put load on the production spindles, networks, etc.

Mike

--
Mike Gerdts
http://mgerdts.blogspot.com/
Jan Hendrik Mangold wrote:
> I didn't ask the original question, but I have a scenario where I
> want to use a clone as well and encounter a (designed?) behaviour I am
> trying to understand.
>
> I create a filesystem A with ZFS and modify it to a point where I
> create a snapshot A@1. Then I clone that snapshot to create a new
> filesystem B. I seem to have two filesystem "entities" I can make
> independent modifications and snapshots with/on/from.
>
> The problem I am running into is that when modifying A and wanting to
> roll back to the snapshot A@1, I can't do that as long as the clone B
> is mounted.
>
> Is this a case where I would benefit from the ability to separate the
> clone? Or is this something not possible with ZFS?

Hmm, actually this is unexpected; you shouldn't have to unmount the clone to do the rollback on the origin filesystem. I think that our command-line tool is simply being a bit overzealous. I've filed bug 6472202 to track this issue; it should be pretty straightforward to fix.

Thanks for bringing this to our attention!
--matt
Mike Gerdts wrote:
> A couple scenarios from environments that I work in, using "legacy"
> file systems and volume managers:
>
> 1) Various test copies need to be on different spindles to remove any
> perceived or real performance impact imposed by one or the other.
> Arguably, by having the IO activity spread across all the spindles
> there would be fewer bottlenecks. However, if you are trying to
> simulate the behavior of X production spindles, doing so with 1.3 X
> or 2 X spindles is not a proper comparison. Hence being wasteful and
> getting suboptimal performance may be desirable. If you don't
> understand that logic, you haven't worked in a big enough company or
> studied Dilbert enough. :)

Here it makes sense to be using X spindles. However, a clone filesystem will perform the same as a non-clone filesystem. So if you have enough space on those X spindles for the clone, I don't think there's any need for additional "separation".

Of course, this may not eliminate an imagined performance difference (eg, your Dilbert reference :-), in which case you can simply use 'zfs send | zfs recv' to send the snapshot to a suitably-isolated pool/machine.

> 2) One of the copies of the data needs to be portable to another
> system while the original stays put. This could be done to refresh
> non-production instances from production, or to perform backups in
> such a way that they don't put load on the production spindles,
> networks, etc.

This is a case where you should be using multiple pools (possibly on the same host), and using 'zfs send | zfs recv' between them. In some cases, you may be able to attach the storage to the destination machine and use the network to move the data, eg. 'zfs send | ssh dest zfs recv'.

--matt
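A sketch of the network variant Matt mentions; the host name, destination pool, and snapshot name are hypothetical:

  # stream a snapshot from the production host to a pool on another system
  zfs send pool/fs1@refresh | ssh desthost zfs receive tank/fs1

Backups or test refreshes can then run against tank/fs1 on desthost without loading the production spindles.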
Hey Matt,

Were you able to reproduce this? I am using the straight S10U2 bits. I can give you access to the system, if you want. One last piece of information should be that my pools are created of files, due to lack of disks for experimenting.

On Sep 18, 2006, at 10:04 PM, Matthew Ahrens wrote:

> Jan Hendrik Mangold wrote:
>> I didn't ask the original question, but I have a scenario where I
>> want to use a clone as well and encounter a (designed?) behaviour I am
>> trying to understand.
>> I create a filesystem A with ZFS and modify it to a point where I
>> create a snapshot A@1. Then I clone that snapshot to create a new
>> filesystem B. I seem to have two filesystem "entities" I can make
>> independent modifications and snapshots with/on/from.
>> The problem I am running into is that when modifying A and wanting to
>> roll back to the snapshot A@1, I can't do that as long as the clone B
>> is mounted.
>> Is this a case where I would benefit from the ability to separate the
>> clone? Or is this something not possible with ZFS?
>
> Hmm, actually this is unexpected; you shouldn't have to unmount the
> clone to do the rollback on the origin filesystem. I think that
> our command-line tool is simply being a bit overzealous. I've
> filed bug 6472202 to track this issue; it should be pretty
> straightforward to fix.
>
> Thanks for bringing this to our attention!
> --matt

--
Jan Hendrik Mangold
Sun Microsystems
650-585-5484 (x81371)
"idle hands are the developers workshop"
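For reference, a file-backed pool like the one Jan mentions can be built this way on Solaris (paths and sizes are illustrative):

  # create backing files, then assemble a pool from them
  mkfile 512m /var/tmp/zdisk1 /var/tmp/zdisk2
  zpool create testpool /var/tmp/zdisk1 /var/tmp/zdisk2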
Jan Hendrik Mangold wrote:
> Hey Matt,
>
> Were you able to reproduce this? I am using the straight S10U2 bits. I
> can give you access to the system, if you want. One last piece of
> information should be that my pools are created of files, due to lack
> of disks for experimenting.

Yep, this is 100% reproducible.

--matt