Thomas Walker
2009-Jul-28 11:55 UTC
[zfs-discuss] How to "mirror" an entire zfs pool to another pool
We are upgrading to new storage hardware. We currently have a zfs pool with the old storage volumes. I would like to create a new zfs pool, completely separate, with the new storage volumes; I do not want to just replace the old volumes with new volumes in the pool we are currently using. I don't see a way to create a mirror of a pool. Note, I'm not talking about a mirrored pool, meaning mirrored drives inside the pool; I want to mirror pool1 to pool2. Snapshots and clones do not seem to be what I want, as they only work inside a given pool. I have also looked at Sun Network Data Replicator (SNDR), but that doesn't seem to be what I want either, as the physical volumes in the new pool may be a different size than those in the old pool.

Does anyone know how to do this? My only idea at the moment is to create the new pool, create new filesystems, and then rsync from the old filesystems to the new ones, but it seems like there should be a way to mirror or replicate the pool itself rather than doing it at the filesystem level.

Thomas Walker
michael schuster
2009-Jul-28 12:01 UTC
[zfs-discuss] How to "mirror" an entire zfs pool to another pool
Thomas Walker wrote:
> We are upgrading to new storage hardware. We currently have a zfs pool
> with the old storage volumes. I would like to create a new zfs pool,
> completely separate, with the new storage volumes. I do not want to
> just replace the old volumes with new volumes in the pool we are
> currently using. I don't see a way to create a mirror of a pool. Note,
> I'm not talking about a mirrored pool, meaning mirrored drives inside
> the pool. I want to mirror pool1 to pool2. Snapshots and clones do not
> seem to be what I want as they only work inside a given pool. I have
> looked at Sun Network Data Replicator (SNDR) but that doesn't seem to be
> what I want either as the physical volumes in the new pool may be a
> different size than in the old pool.
>
> Does anyone know how to do this? My only idea at the moment is to
> create the new pool, create new filesystems and then use rsync from the
> old filesystems to the new filesystems, but it seems like there should
> be a way to mirror or replicate the pool itself rather than doing it at
> the filesystem level.

Have you looked at what 'zfs send' can do?

Michael
--
Michael Schuster    http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'
Darren J Moffat
2009-Jul-28 12:03 UTC
[zfs-discuss] How to "mirror" an entire zfs pool to another pool
Thomas Walker wrote:
> We are upgrading to new storage hardware. We currently have a zfs pool
> with the old storage volumes. I would like to create a new zfs pool,
> completely separate, with the new storage volumes. I do not want to
> just replace the old volumes with new volumes in the pool we are
> currently using. I don't see a way to create a mirror of a pool. Note,
> I'm not talking about a mirrored pool, meaning mirrored drives inside
> the pool. I want to mirror pool1 to pool2. Snapshots and clones do not
> seem to be what I want as they only work inside a given pool. I have
> looked at Sun Network Data Replicator (SNDR) but that doesn't seem to be
> what I want either as the physical volumes in the new pool may be a
> different size than in the old pool.
>
> Does anyone know how to do this? My only idea at the moment is to
> create the new pool, create new filesystems and then use rsync from the
> old filesystems to the new filesystems, but it seems like there should
> be a way to mirror or replicate the pool itself rather than doing it at
> the filesystem level.

You can do this by attaching the new disks one by one to the old ones.
This is only going to work if your new storage pool has exactly the same
number of disks (each the same size or larger). For example, if you have
12 500G drives and your new storage is 12 1TB drives, that will work.
For each drive in the old pool do:

  zpool attach <poolname> <olddrive> <newdrive>

When you have done that and the resilver has completed, you can
'zpool detach' all the old drives. If your existing storage is already
mirrored, this still works; you just do the detach twice to get off the
old storage.

On the other hand, if you have 12 500G drives and your new storage is
6 1TB drives, then you can't do that via mirroring; you need to use
zfs send and recv, e.g.:

  zpool create newpool ....
  zfs snapshot -r oldpool@sendit
  zfs send -R oldpool@sendit | zfs recv -vFd newpool

That will work provided the data will fit, and unlike rsync it will
preserve all your snapshots and you don't have to recreate the new
filesystems.

--
Darren J Moffat
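A minimal sketch of the attach-and-resilver route described above, assuming a plain (non-mirrored) old pool and purely hypothetical device names (c1t0d0/c1t1d0 for the old disks, c2t0d0/c2t1d0 for the new ones); take the real names from 'zpool status':

  # Attach each new (equal or larger) disk as a mirror of its old counterpart.
  # Device names are placeholders; use the ones 'zpool status' shows for your pool.
  zpool attach oldpool c1t0d0 c2t0d0
  zpool attach oldpool c1t1d0 c2t1d0
  # ...and so on for every drive in the pool.

  # Watch the resilver; do not detach anything until it reports completion.
  zpool status oldpool

  # Once the resilver has finished, drop the old side of each mirror.
  zpool detach oldpool c1t0d0
  zpool detach oldpool c1t1d0

The constraint is the one noted above: every new disk must be at least as large as the old disk it pairs with, since each attach forms a temporary two-way mirror that is later split.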
Thomas Walker
2009-Jul-28 13:38 UTC
[zfs-discuss] How to "mirror" an entire zfs pool to another pool
> zpool create newpool ....
> zfs snapshot -r oldpool@sendit
> zfs send -R oldpool@sendit | zfs recv -vFd newpool

I think this is probably something like what I want; the problem is I'm not really "getting it" yet. Could you explain just what is happening here with an example? Let's say I have this setup:

oldpool = 10 x 500GB volumes, with two mounted filesystems: fs1 and fs2

I create newpool = 12 x 1TB volumes using the new storage hardware. newpool thus has a lot more capacity than oldpool, but not the same number of physical volumes or the same size of volumes.

I want to replicate oldpool, and thus oldpool/fs1 and oldpool/fs2, onto newpool/fs1 and newpool/fs2. And I want to do this in a way that allows me to "switch over" from oldpool to newpool on a day that is scheduled with the customers and then take oldpool away.

So on Monday I take a snapshot of oldpool, like you say:

  zfs snapshot -r oldpool@sendit

And I send/recv it to newpool:

  zfs send -R oldpool@sendit | zfs recv -vFd newpool

At this point does all of that data, say 3TB or so, start copying over to the newpool? How do I monitor the progress of the transfer? Once that initial copy is done, on say Wednesday, how do I then do a final "sync" from oldpool to newpool to pick up any changes that occurred since the first snapshot on Monday? I assume that for this final snapshot I would unmount the filesystems to prevent any changes by the customer.

Sorry I'm being dense here; I think I sort of get it but I don't have the whole picture.

Thomas Walker
Darren J Moffat
2009-Jul-28 13:54 UTC
[zfs-discuss] How to "mirror" an entire zfs pool to another pool
> I think this is probably something like what I want; the problem is I'm
> not really "getting it" yet. Could you explain just what is happening
> here with an example? Let's say I have this setup:
>
> oldpool = 10 x 500GB volumes, with two mounted filesystems: fs1 and fs2
>
> I create newpool = 12 x 1TB volumes using the new storage hardware.
> newpool thus has a lot more capacity than oldpool, but not the same
> number of physical volumes or the same size of volumes.

That is fine, because the zfs send | zfs recv copies the data across.

> I want to replicate oldpool, and thus oldpool/fs1 and oldpool/fs2, onto
> newpool/fs1 and newpool/fs2. And I want to do this in a way that allows
> me to "switch over" from oldpool to newpool on a day that is scheduled
> with the customers and then take oldpool away.

So depending on the volume of data change, you might need to do the
snapshot and send several times.

> So on Monday I take a snapshot of oldpool, like you say:
>
>   zfs snapshot -r oldpool@sendit
>
> And I send/recv it to newpool:
>
>   zfs send -R oldpool@sendit | zfs recv -vFd newpool
>
> At this point does all of that data, say 3TB or so, start copying over
> to the newpool?

Everything in all the oldpool datasets that was written up to the time
the @sendit snapshot was created will be copied.

> How do I monitor the progress of the transfer?

Unfortunately there is no easy way to do that just now. When the
'zfs recv' finishes, it is done.

> Once that initial copy is done, on say Wednesday, how do I then do a
> final "sync" from oldpool to newpool to pick up any changes that
> occurred since the first snapshot on Monday?

Do almost the same again, e.g.:

  zfs snapshot -r oldpool@wednesday
  zfs send -R -i oldpool@sendit oldpool@wednesday | zfs recv -vFd newpool

> I assume that for this final snapshot I would unmount the filesystems
> to prevent any changes by the customer.

That is a very good idea; note that the filesystems do *not* need to be
mounted for the zfs send to work. Once the last send is finished, do:

  zpool export oldpool

If you want to actually rename newpool back to the oldpool name, do this:

  zpool export newpool
  zpool import newpool oldpool

> Sorry I'm being dense here; I think I sort of get it but I don't have
> the whole picture.

You are very close; there is some more info in the zfs(1M) man page.

--
Darren J Moffat
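Putting the steps above in order, a sketch of the whole switchover might look like the following; the pool, filesystem, and snapshot names are the ones used in this thread, and the 'zpool create' arguments are left elided since the new pool's layout is up to you:

  # Create the new pool on the new storage (layout of your choice).
  zpool create newpool ....

  # Monday: full replication of all oldpool datasets and their snapshots.
  zfs snapshot -r oldpool@sendit
  zfs send -R oldpool@sendit | zfs recv -vFd newpool

  # Wednesday: stop changes, then send only what changed since @sendit.
  zfs unmount oldpool/fs1
  zfs unmount oldpool/fs2
  zfs snapshot -r oldpool@wednesday
  zfs send -R -i oldpool@sendit oldpool@wednesday | zfs recv -vFd newpool

  # Retire the old pool and, if desired, take over its name.
  zpool export oldpool
  zpool export newpool
  zpool import newpool oldpool

If a lot of data changes between the two snapshots, the same incremental step can be repeated with further snapshots before the final cutover.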
Thomas Walker
2009-Jul-28 14:32 UTC
[zfs-discuss] How to "mirror" an entire zfs pool to another pool
I think you've given me enough information to get started on a test of the procedure. Thanks very much.

Thomas Walker
Gaëtan Lehmann
2009-Jul-28 16:19 UTC
[zfs-discuss] How to "mirror" an entire zfs pool to another pool
On 28 July 2009, at 15:54, Darren J Moffat wrote:

>> How do I monitor the progress of the transfer?
>
> Unfortunately there is no easy way to do that just now. When the
> 'zfs recv' finishes, it is done.

I've just found pv (pipe viewer) today (http://www.ivarch.com/programs/pv.shtml), which is packaged in /contrib (http://pkg.opensolaris.org/contrib/p5i/0/pv.p5i). You can do

  zfs send -R oldpool@sendit | pv -s 3T | zfs recv -vFd newpool

and you'll see a message like this:

  8GO 0:00:05 [5,71GO/s] [=>                 ]  7% ETA 0:00:58

A nice and simple way to get a progress report!

Gaëtan

--
Gaëtan Lehmann
Biologie du Développement et de la Reproduction
INRA de Jouy-en-Josas (France)
tel: +33 1 34 65 29 66  fax: 01 34 65 29 09
http://voxel.jouy.inra.fr  http://www.itk.org
http://www.mandriva.org    http://www.bepo.fr
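As a small variation on the pv pipeline above (a sketch, assuming the pool and snapshot names used earlier in the thread and that pv is installed from /contrib), the size given to pv can be taken from the pool's used space instead of being guessed; the percentage shown is still approximate, since the send stream is not exactly the same size as the space used:

  # Approximate the stream size from the space used by oldpool (in bytes).
  SIZE=$(zfs get -Hp -o value used oldpool)

  # Insert pv between send and recv for a progress bar and ETA.
  zfs send -R oldpool@sendit | pv -s "$SIZE" | zfs recv -vFd newpool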