I'm defining "zpool split" as the ability to divide a pool into 2 separate pools, each with identical FSes. The typical use case would be to split an N-disk mirrored pool into an (N-1)-disk pool and a 1-disk pool, and then transport the 1-disk pool to another machine.

While contemplating "zpool split" functionality, I wondered whether we really want such a feature because

1) SVM allows it and admins are used to it.

or

2) We can't do what we want using zfs send | zfs recv

Right now, it's looking mightily close to 1. Comments appreciated as always. Thanks. :)

--
Regards,
Jeremy
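A sketch of how such a command might be invoked, for concreteness (the syntax and the pool name "newpool" are hypothetical; no such subcommand exists yet):

    # tank is a 3-way mirror of c0t0d0, c0t1d0 and c0t2d0.
    # Proposed: break c0t2d0 off into its own single-disk pool.
    zpool split tank newpool c0t2d0
    # tank keeps the remaining 2-way mirror; newpool holds c0t2d0
    # with an identical copy of every filesystem.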
Jeremy Teo wrote:
> I'm defining "zpool split" as the ability to divide a pool into 2
> separate pools, each with identical FSes. The typical use case would
> be to split an N-disk mirrored pool into an (N-1)-disk pool and a
> 1-disk pool, and then transport the 1-disk pool to another machine.

Can you pick another name for this please because that name has already been suggested for zfs(1) where the argument is a directory in an existing ZFS file system and the result is that the directory becomes a new ZFS file system while retaining its contents.

> While contemplating "zpool split" functionality, I wondered whether we
> really want such a feature because
>
> 1) SVM allows it and admins are used to it.
>
> or
>
> 2) We can't do what we want using zfs send | zfs recv
>
> Right now, it's looking mightily close to 1.

What problem are you actually trying to solve? Just because you did something this way in the SVM world doesn't mean that there should be a 1:1 in the ZFS world - there might be but it doesn't follow that there should be.

--
Darren J Moffat
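For reference, a sketch of the zfs(1) suggestion Darren describes (hypothetical; no such subcommand exists, and the dataset name is illustrative):

    # tank/export/home is a single filesystem containing a directory user1.
    # Suggested: promote the directory to its own filesystem in place,
    # retaining its contents.
    zfs split tank/export/home/user1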
> While contemplating "zpool split" functionality, I wondered whether we
> really want such a feature because
>
> 1) SVM allows it and admins are used to it.
> or
> 2) We can't do what we want using zfs send | zfs recv

I don't think this is an either/or scenario. There are simply too many times (and too many reasons) we want to split a mirror (and yes, we'll have to look at another term, since "split" is already used) and send the one half elsewhere, data intact, or otherwise work with a clean copy of the data without touching the original (that is, the other mirror(s)).

Rainer
On Tue, Jan 23, 2007 at 04:49:38PM +0000, Darren J Moffat wrote:
> Jeremy Teo wrote:
> > I'm defining "zpool split" as the ability to divide a pool into 2
> > separate pools, each with identical FSes. The typical use case would
> > be to split an N-disk mirrored pool into an (N-1)-disk pool and a
> > 1-disk pool, and then transport the 1-disk pool to another machine.
>
> Can you pick another name for this please because that name has already
> been suggested for zfs(1) where the argument is a directory in an
> existing ZFS file system and the result is that the directory becomes a
> new ZFS file system while retaining its contents.

But zpool(1M) and zfs(1) do such different things that I wouldn't be confused by it. However, an option to detach seems much better to me, as in detach a mirror from a vdev such that the detached mirror turns into an imported or exported pool.

> What problem are you actually trying to solve ? Just because you did
> something this way in the SVM world doesn't mean that there should be a
> 1:1 in the ZFS world - there might be but it doesn't follow that there
> should be.

I think Jeremy must mean detach a mirror and treat the detached device as a new pool.

Nico
--
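For contrast, what zpool detach actually does today (this command does exist; device names are illustrative):

    # Detach one side of a mirror:
    zpool detach tank c0t2d0
    # c0t2d0 becomes a free disk again; ZFS does not leave it
    # importable as a copy of the pool, which is the gap being discussed.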
Nicolas Williams wrote:
> On Tue, Jan 23, 2007 at 04:49:38PM +0000, Darren J Moffat wrote:
>> Jeremy Teo wrote:
>>> I'm defining "zpool split" as the ability to divide a pool into 2
>>> separate pools, each with identical FSes. The typical use case would
>>> be to split an N-disk mirrored pool into an (N-1)-disk pool and a
>>> 1-disk pool, and then transport the 1-disk pool to another machine.
>> Can you pick another name for this please because that name has already
>> been suggested for zfs(1) where the argument is a directory in an
>> existing ZFS file system and the result is that the directory becomes a
>> new ZFS file system while retaining its contents.
>
> But zpool(1M) and zfs(1) do such different things that I wouldn't be
> confused by it. However, an option to detach seems much better to me,
> as in detach a mirror from a vdev such that the detached mirror turns
> into an imported or exported pool.

But there is no overlap in subcommands today, and I don't think that creating an overlap when the functionality is fundamentally different is a good idea.

>> What problem are you actually trying to solve ? Just because you did
>> something this way in the SVM world doesn't mean that there should be a
>> 1:1 in the ZFS world - there might be but it doesn't follow that there
>> should be.
>
> I think Jeremy must mean detach a mirror and treat the detached device
> as a new pool.

I think so too, and to be honest I was actually very surprised to find out that detach didn't do that.

--
Darren J Moffat
On 23/01/07, Darren J Moffat <Darren.Moffat at sun.com> wrote:
> Can you pick another name for this please because that name has already
> been suggested for zfs(1) where the argument is a directory in an
> existing ZFS file system and the result is that the directory becomes a
> new ZFS file system while retaining its contents.

Sorry to jump in on the thread, but - that's an excellent feature addition, look forward to it. Will it be accompanied by a 'zfs join'?

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
...such that a snapshot (cloned if need be) won't do what you want?
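For reference, the snapshot/send route that does exist today (dataset and host names are illustrative):

    # Snapshot a dataset and stream it to another machine:
    zfs snapshot tank/data@copy
    zfs send tank/data@copy | ssh otherhost zfs recv backup/data

As Rainer explains below, this copies the data over the wire, whereas the mirror-split case is about handing over the physical disk itself.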
On 25/01/07, Adam Leventhal <ahl at eng.sun.com> wrote:
> On Wed, Jan 24, 2007 at 08:52:47PM +0000, Dick Davies wrote:
> > that's an excellent feature addition, look forward to it.
> > Will it be accompanied by a 'zfs join'?
>
> Out of curiosity, what will you (or anyone else) use this for? If the idea
> is to copy datasets to a new pool, why not use zfs send/receive?

To clarify, I'm talking about 'zfs split' as in breaking /tank/export/home into /tank/export/home/user1, /tank/export/home/user2, etc.

The 'zfs join' is just an undo to help me out when I've been overzealous: every directory in my system is a filesystem, and I have more automated snapshots than I can stand...

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
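Continuing the earlier hypothetical sketch, the undo direction would presumably be (again, neither subcommand exists):

    zfs split tank/export/home/user1   # directory -> own filesystem
    zfs join  tank/export/home/user1   # filesystem -> back to a plain
                                       # directory of tank/export/home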
> ...such that a snapshot (cloned if need be) won't do
> what you want?

Nope. We're talking about taking a whole disk in a mirror and doing something else with it, without touching the data on the other parts of the mirror.

Rainer
Darren J Moffat wrote:
>
> What problem are you actually trying to solve ? Just because you did
> something this way in the SVM world doesn't mean that there should be
> a 1:1 in the ZFS world - there might be but it doesn't follow that
> there should be.
>

If I had to guess... I want to quickly share an independent set of my data with a second host. In the past we've done this by taking half a mirror, making it an independent entity, and then allowing the new entity to be mounted by a second host.
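Spelled out in ZFS terms, that workflow would presumably be (the split step is the proposed, hypothetical piece; export and import exist today, and all names are illustrative):

    zpool split tank newpool c0t2d0   # proposed: break c0t2d0 off as newpool
    zpool export newpool
    # move the disk, then on the second host:
    zpool import newpool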
Dick Davies wrote:
> On 25/01/07, Adam Leventhal <ahl at eng.sun.com> wrote:
>> On Wed, Jan 24, 2007 at 08:52:47PM +0000, Dick Davies wrote:
>> > that's an excellent feature addition, look forward to it.
>> > Will it be accompanied by a 'zfs join'?
>>
>> Out of curiosity, what will you (or anyone else) use this for? If the
>> idea is to copy datasets to a new pool, why not use zfs send/receive?
>
> To clarify, I'm talking about 'zfs split' as in
> breaking /tank/export/home into /tank/export/home/user1,
> /tank/export/home/user2, etc.
>
> The 'zfs join' is just an undo to help me out when I've been
> overzealous: every directory in my system is a filesystem, and I have
> more automated snapshots than I can stand...

Yep. There was a previous thread on this which resulted in:

6400399 want "zfs split"

There is no plan to implement anything like "zfs join".

FYI, the RFE discussed in this thread is:

5097228 provide 'zpool split' to create new pool by breaking all mirrors

I agree that using the same names would be confusing, but this is a minor issue compared with the effort to implement either of these :-)

--matt