Is it possible to convert/upgrade a file system that is currently under the control of Solaris Volume Manager to ZFS?

Thanks

This message posted from opensolaris.org
Not automagically. You'll need to do a dump/restore or copy from one to the other.

----- Original Message ----
From: Dan Christensen <sundmc at sun.com>
To: zfs-discuss at opensolaris.org
Sent: Thursday, November 16, 2006 5:52:51 PM
Subject: [zfs-discuss] SVM - UFS Upgrade

> Is it possible to convert/upgrade a file system that is currently
> under the control of Solaris Volume Manager to ZFS?
> Is it possible to convert/upgrade a file system that is currently
> under the control of Solaris Volume Manager to ZFS?

SVM or not doesn't really matter. There's no method for converting an
existing filesystem to ZFS in place.

You'll have to populate the ZFS pool after allocating storage to it.

--
Darren Dunham                                           ddunham at taos.com
Senior Technical Consultant         TAOS            http://www.taos.com/
Got some Dr Pepper?                           San Francisco, CA bay area
         < This line left intentionally blank to confuse you. >
This is the Migration Problem: given a dataset on a non-ZFS file system,
what is the safest and easiest way to move it to a ZFS pool? There are
two and a half cases:

1.   You need to reuse the existing storage.
1.5  You have some extra storage, but not enough for two copies of all
     your data.
2.   You can create the zpool on new storage.

Clearly, case 1 is the hardest, and we currently have no automated tool
that will do this. Backup / destroy existing file systems / create new
pools and file systems / restore will work, but (a) you are offline for
a period of time, and (b) you have to really trust your backup and
restore software.

Case 2 can often be solved using rdist. You may have to quiesce your
file systems for a while.

Case 1.5 can usually be solved with a lot of hacking around with partial
versions of case 2. A lot depends on how your existing data is organized.

We have a number of clever ideas for how to automate case 2, and some
ideas for case 1.5, but no bandwidth to implement them now.

Note that in all cases, it really pays to give some thought up front to
how you want your ZFS pools and file systems to be organized. ZFS
removes many of the arbitrary constraints that may have governed your
existing structure; free yourself from those constraints.

I'm curious to hear of any migration success stories - or not - that
folks on this alias have experienced. You can send them to me and I'll
summarize to the alias.

Thanks,
Fred

Darren Dunham wrote:
>> Is it possible to convert/upgrade a file system that is currently
>> under the control of Solaris Volume Manager to ZFS?
>
> SVM or not doesn't really matter. There's no method for converting an
> existing filesystem to ZFS in place.
>
> You'll have to populate the ZFS pool after allocating storage to it.
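For what it's worth, case 2 (new storage) can be sketched as below. This is only an illustration, not an endorsed procedure: the device names, pool name, and mount points are made up, and I'm using rsync here rather than rdist since both can do the job.

```shell
# Case 2 sketch: the pool lives on fresh disks, so the old UFS file
# system stays online until the copy is verified. All device names
# (c1t*) and paths are hypothetical.

zpool create tank raidz c1t0d0 c1t1d0 c1t2d0   # pool on the new storage
zfs create tank/home                            # one fs per top-level tree

# First pass while users are still active:
rsync -aHx /export/home/ /tank/home/

# Quiesce the source (lockfs, unshare, or umount), then a final
# catch-up pass to pick up anything that changed during the first one:
rsync -aHx --delete /export/home/ /tank/home/
```

The two-pass rsync keeps the outage window down to roughly the time it takes to copy the delta, rather than the whole dataset.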
On Thu, 2006-11-16 at 16:08 -0800, Fred Zlotnick wrote:
> I'm curious to hear of any migration success stories - or not - that
> folks on this alias have experienced. You can send them to me and
> I'll summarize to the alias.

I sent one to this list some months ago.

To recap, I used a variant of case 2: when I set up the original SVM+UFS
filesystem, I knew a zfs migration was coming, so I held back sufficient
storage to permit me to create the first raidz group.

rsync worked nicely to copy the bits. Once the move was complete, the
SVM+UFS filesystem was taken apart and the underlying disks added to the
pool.

It took a few months before usage levelled out between the first raidz
group and the ones added later.

					- Bill
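A rough sketch of that hold-back-disks variant, with invented device names (the actual layout was of course site-specific):

```shell
# Disks held back from the original SVM+UFS setup become the first
# raidz group. All c*t*d* names here are hypothetical.
zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0

# ... copy the data with rsync, verify it, then dismantle the SVM/UFS
# side (umount, metaclear, etc.) and grow the pool with the freed disks:
zpool add tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0

# New writes are spread across both groups, but existing blocks stay
# where they were written -- which is why usage takes a while to level
# out between the old and new groups. Watch it with:
zpool iostat -v tank
```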
Bill Sommerfeld wrote:
> On Thu, 2006-11-16 at 16:08 -0800, Fred Zlotnick wrote:
>> I'm curious to hear of any migration success stories - or not - that
>> folks on this alias have experienced. You can send them to me and
>> I'll summarize to the alias.
>
> I sent one to this list some months ago.
>
> To recap, I used a variant of case 2: When I set up the original SVM+UFS
> filesystem, I knew a zfs migration was coming so I held back sufficient
> storage to permit me to create the first raidz group.
>
> rsync worked nicely to copy the bits. once the move was complete, the
> SVM+UFS filesystem was taken apart and the underlying disks added to the
> pool.
>
> It took a few months before usage levelled out between the first raidz
> group and the ones added later.

I did a similar thing. I originally had an SVM mirror, 6 disks in each
side of the mirror with a UFS file system on it. I broke the mirror and
created a 6 disk raidz out of that for the new ZFS pool.

After creating a ZFS file system for each of the old top-level
directories (which represented users' build areas) in the ZFS pool, I
used rsync to copy the data from the UFS filesystem to the new ZFS ones.
The users were warned that this was going to happen.

When the first rsync was done I used lockfs to write-lock the original
UFS file system, then ran rsync again to pick up the changes that
happened during the first pass. I then remounted the original UFS file
system read-only and left it that way for a week while the users worked
on their new ZFS file systems.

I then destroyed the original SVM+UFS config and added those 6 disks
into the ZFS pool as a second 6 disk raidz group.

--
Darren J Moffat
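For the archives, the steps above look roughly like this. Every name in the sketch is hypothetical (d10 as the SVM mirror, d12 as the detached submirror, invented disk and path names); check your own metastat output before trying anything similar.

```shell
# Break the mirror: d12's 6 disks are then free for the first raidz group.
metadetach d10 d12
zpool create tank raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0
zfs create tank/build1          # one ZFS file system per old top-level dir

rsync -aHx /export/build/ /tank/    # first pass, users still active
lockfs -w /export/build             # write-lock UFS so nothing changes
rsync -aHx /export/build/ /tank/    # second pass picks up the delta
mount -o remount,ro /export/build   # old data stays readable for a week

# Later: metaclear the remaining SVM config, then add the freed 6 disks
# as a second raidz group:
zpool add tank raidz c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0
```

The lockfs write-lock is what makes the second rsync pass safe: the source cannot change underneath it, so after that pass the copy is exact.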