fran
2006-May-03 05:50 UTC
[zfs-discuss] any tool for migration from ufs/svm to ZFS and pools
Hi,

Does any method or tool exist (or is one planned) for migrating UFS/SVM filesystems with soft partitions to ZFS filesystems and pools? Any ideas for migrating an installed base of Solaris 10 with UFS/Solaris Volume Manager to Solaris 10 with ZFS, or is backup-and-restore the only option?

Thanks
James C. McPherson
2006-May-03 06:03 UTC
[zfs-discuss] any tool for migration from ufs/svm to ZFS and pools
fran wrote:
> Does any method or tool exist (or is one planned) for migrating UFS/SVM
> filesystems with soft partitions to ZFS filesystems and pools? Any ideas
> for migrating an installed base of Solaris 10 with UFS/Solaris Volume
> Manager to Solaris 10 with ZFS, or is backup-and-restore the only option?

Hi Fran,
no, there is no such tool right now. A few people (including myself) have talked about how such a tool might be designed, but I don't think there's been any activity to date.

best regards,
James C. McPherson
--
Solaris Datapath Engineering
Data Management Group
Sun Microsystems
grant beattie
2006-May-03 07:27 UTC
[zfs-discuss] any tool for migration from ufs/svm to ZFS and pools
On Wed, May 03, 2006 at 04:03:18PM +1000, James C. McPherson wrote:
> > Does any method or tool exist (or is one planned) for migrating UFS/SVM
> > filesystems with soft partitions to ZFS filesystems and pools? Any ideas
> > for migrating an installed base of Solaris 10 with UFS/Solaris Volume
> > Manager to Solaris 10 with ZFS, or is backup-and-restore the only option?
>
> Hi Fran,
> no, there is no such tool right now. A few people (including myself)
> have talked about how such a tool might be designed, but I don't think
> there's been any activity to date.

on a related note, I noticed that ufsrestore currently does not know how to write ACLs onto ZFS filesystems. this is pretty much essential for any environment that makes use of ACLs in order to migrate to ZFS.

I don't know if this is considered a bug, or if there's an RFE for it, but I guess there should be one :)

grant.
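Until ufsrestore handles ACLs on ZFS, one interim step (a minimal sketch, not an official workaround; the /export/home and /var/tmp paths are examples only) is to inventory which files on the UFS source actually carry ACL entries, so you know what will need to be reapplied by hand after the restore:

    #!/bin/ksh
    # List every file under the UFS source tree that has ACL entries
    # beyond the standard owner/group/other permissions, then dump
    # those entries so they can be reviewed or reapplied later.
    SRC=/export/home                      # example UFS source tree

    find "$SRC" -acl -print > /var/tmp/acl-files.txt

    while read f; do
        getfacl "$f"                      # prints a "# file: ..." header per file
    done < /var/tmp/acl-files.txt > /var/tmp/acl-entries.txt

Reapplying the entries on the ZFS side may take some translation, since ZFS stores NFSv4-style ACLs rather than the POSIX-draft ACLs that UFS (and getfacl) use.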
Bill Sommerfeld
2006-May-03 14:11 UTC
[zfs-discuss] any tool for migration from ufs/svm to ZFS and pools
This doesn't really answer your question directly but could probably help anyone planning a UFS->ZFS migration...

I conducted a UFS/SVM -> ZFS migration over the weekend for a file/build server used by about 30 developers, which had roughly 180GB allocated in a ~400GB UFS/SVM partition. The server is a v40z with 5 internal and 12 external 72GB 10krpm drives.

We had planned in advance for ZFS when we bought and set up this server, so a bunch of the disks were reserved for later ZFS use. I had a pool set up and migrated myself about two months ago (during which time I shook out a few bugs in ZFS..).

After shutting down the system and remounting the SVM device read-only, I wound up running multiple instances of "rsync" in parallel (one per developer subdir) to move the bits. I chose rsync because if the transfer were to have been interrupted, I could restart it from where it left off without wasting most of the work already done. (Preserving/converting ACLs wasn't a consideration.)

However, be aware that rsync does a complete tree walk of the source before it starts moving any bits (but then, so does ufsdump..)

The source partition was on 6 72GB drives configured as an SVM concatenation of two three-disk-wide stripes (why? it started life as a 3-disk stripe and then grew...) The destination was a pool consisting of a single 5-disk raid-z group.

Six instances of rsync seemed to saturate the source -- it appeared from watching iostat that the main limiting factor was the ability of the first stripe group to perform random I/O -- the first three disks of the source saturated at around 160 I/Os per second (which is pretty much what you'd expect for a 10krpm drive). It took around 8-9 hours to move all the files.

After migration, the 180GB as seen by UFS ended up occupying around 120GB in the pool (after compression and a 4:5 raid-z expansion). ZFS cites a compression ratio of around 2.10x (which was in line with what I expected based on the early trials I conducted). Based strictly on the raid-z and compression ratios I would have predicted slightly lower usage in the pool, but I'm not complaining.

After the migration I did a final ufsdump backup of the read-only UFS source file system to a remote file; ufsdump took around the same amount of time as the parallel rsyncs.

After a day spent listening for screams, I then unmounted it, metaclear'ed it, and then added another two 5-disk raid-z groups to the pool based on the free disks available.

Since then I've been collecting hourly data on the allocation load-balancing to see how long it will be before allocation balances out across the three groups..

- Bill
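For anyone following the same route, here is a minimal sketch of the parallel-rsync step described above; the mount points, destination filesystem, and the batch size of six are assumptions for illustration, not taken from the actual setup:

    #!/bin/ksh
    # Copy each top-level (per-developer) subdirectory from the
    # read-only UFS/SVM mount into a ZFS filesystem, running the
    # copies in batches of six. rsync is restartable: if a copy is
    # interrupted, rerunning it picks up from what was already moved.
    SRC=/mnt/ufs-ro                 # example read-only UFS/SVM mount
    DST=/tank/developers            # example ZFS destination

    i=0
    for dir in "$SRC"/*/ ; do
        name=$(basename "$dir")
        rsync -a "$dir" "$DST/$name/" &
        i=$((i + 1))
        if [ $((i % 6)) -eq 0 ]; then
            wait                    # let the current batch finish
        fi
    done
    wait                            # wait for the final batch

As a rough sanity check on the space numbers above: 180GB of UFS data at a 2.10x compression ratio is about 86GB of compressed data, and a 5-disk raid-z group consumes roughly 5 blocks for every 4 blocks of data, so a naive prediction is about 86GB x 5/4, or roughly 107GB of pool usage -- consistent with the remark that slightly lower usage than the observed ~120GB would have been expected.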
Mark Shellenbaum
2006-May-03 14:21 UTC
[zfs-discuss] any tool for migration from ufs/svm to ZFS and pools
grant beattie wrote:
> on a related note, I noticed that ufsrestore currently does not know
> how to write ACLs onto ZFS filesystems. this is pretty much essential
> for any environment that makes use of ACLs in order to migrate to ZFS.
>
> I don't know if this is considered a bug, or if there's an RFE for it,
> but I guess there should be one :)

I just opened bug 6421216

-Mark
Bev Crair
2006-May-03 15:07 UTC
[zfs-discuss] any tool for migration from ufs/svm to ZFS and pools
Bill -- thanks for the write-up!

Note that we're working on a best-practices document about how to migrate file systems/volumes to ZFS. It didn't make it for the launch yesterday, though.

Bev.