So I had an E450 running Solaris 8 with a VxVM-encapsulated root disk. I upgraded it to Solaris 10 ZFS root using this method:

- Unencapsulate the root disk
- Remove VxVM components from the second disk
- Live Upgrade from 8 to 10 on the now-unused second disk
- Boot to the new Solaris 10 install
- Create a ZFS pool on the now-unused first disk
- Use Live Upgrade to migrate the root filesystems to the ZFS pool
- Add the now-unused second disk to the ZFS pool as a mirror

Now my E450 is running Solaris 10 5/09 with ZFS root, and all the same users, software, and configuration that it had previously. That is pretty slick in itself. But the server itself is dog slow and more than half the disks are failing, so maybe I want to clone the server onto new(er) hardware.

With ZFS, this should be a lot simpler than it used to be, right? A new server has new hardware, and new disks with different names and different sizes. But that doesn't matter anymore. There's a procedure in the ZFS manual to recover a corrupted server by using zfs receive to reinstall a copy of the boot environment into a newly created pool on the same server. But what if I used zfs send to save a recursive snapshot of my root pool on the old server, booted my new server (with the same architecture) from the DVD in single user mode, created a ZFS pool on its local disks, and did zfs receive to install the boot environments there? The filesystems don't care about the underlying disks. The pool hides the disk specifics. There's no vfstab to edit.

Off the top of my head, the only thing I can think of that would have to change is the network interfaces. And that change is as simple as "cd /etc ; mv hostname.hme0 hostname.qfe0" or whatever. Is there anything else I'm not thinking of?
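A rough sketch of the send side of that idea (the pool name "rpool", the snapshot name, and the NFS path are illustrative assumptions, not details from the thread):

  # On the old server: take a recursive snapshot of the whole root pool
  zfs snapshot -r rpool@migrate
  # Write the full recursive replication stream to a file on an NFS server
  zfs send -R rpool@migrate > /net/nfsserver/export/homer-rpool.zfsstream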
On Thu, Jun 18, 2009 at 10:56 AM, Dave Ringkor <no-reply at opensolaris.org> wrote:
> But what if I used zfs send to save a recursive snapshot of my root pool on the old server, booted my new server (with the same architecture) from the DVD in single user mode and created a ZFS pool on its local disks, and did zfs receive to install the boot environments there? The filesystems don't care about the underlying disks. The pool hides the disk specifics. There's no vfstab to edit.

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide#ZFS_Root_Pool_Recovery

--
Fajar
Hi Dave,

Until the ZFS/flash support integrates into an upcoming Solaris 10 release, I don't think we have an easy way to clone a root pool/dataset from one system to another system, because system-specific info is still maintained. Your manual solution sounds plausible but probably won't work because of that system-specific info.

Here are some options:

1. Wait for the ZFS/flash support in an upcoming Solaris 10 release. You can track CR 6690473 for this support.

2. Review interim solutions that involve UFS-to-ZFS migration; they might give you some ideas:

http://blogs.sun.com/scottdickson/entry/flashless_system_cloning_with_zfs
http://blogs.sun.com/scottdickson/entry/a_much_better_way_to

3. Do an initial installation of your new server with a two-disk mirrored root pool. Set up a separate pool for data/applications. Snapshot data from the E450 and send/receive it over to the data/app pool on the new server.

Cindy

Dave Ringkor wrote:
> Off the top of my head, all I can think to have to change is the network interfaces. And that change is as simple as "cd /etc ; mv hostname.hme0 hostname.qfe0" or whatever. Is there anything else I'm not thinking of?
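A minimal sketch of the data-pool send/receive in option 3 (the pool name "datapool", the snapshot name, and the host name "newserver" are assumptions for illustration):

  # On the E450: snapshot the data pool and stream it into the new server's pool
  zfs snapshot -r datapool@xfer
  zfs send -R datapool@xfer | ssh newserver zfs receive -Fd datapool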
Cindy, my question is: what "system specific info" is maintained that would need to be changed? To take my example, my E450, "homer", has disks that are failing and it's a big clunky server anyway, and management wants to decommission it. But we have an old 220R racked up doing nothing, and it's not scheduled for disposal.

What would be wrong with this:
1) Create a recursive snapshot of the root pool on homer.
2) zfs send this snapshot to a file on some NFS server.
3) Boot my 220R (same architecture as the E450) into single user mode from a DVD.
4) Create a zpool on the 220R's local disks.
5) zfs receive the snapshot created in step 2 to the new pool.
6) Set the bootfs property.
7) Reboot the 220R.

Now my 220R comes up as "homer", with its IP address, users, root pool filesystems, any software that was installed in the old homer's root pool, etc.

Since ZFS filesystems don't care about the underlying disk structure -- they only care about the pool, and I've already created a pool for them on the 220R using the disks it has -- there shouldn't be any storage-type "system specific info" to change, right? And sure, the 220R might have a different number and speed of CPUs, and more or less RAM than the E450 had. But when you upgrade a server in place you don't have to manually configure the CPUs or RAM, so how is this different?

The only thing I can think of that I might need to change, in order to bring up my 220R and have it "be" homer, is the network interfaces, from hme to bge or whatever. And that's a simple config setting.

I don't care about Flash. Actually, if you wanted to provision new servers based on a golden image like you can with Flash, couldn't you just take a recursive snapshot of a zpool as above, "receive" it into an empty zpool on another server, set your bootfs, and do a sys-unconfig?

So my big question is, with a server on ZFS root, what "system specific info" would still need to be changed?
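Roughly, those seven steps as commands might look like the following (everything here is an assumption for illustration: the pool name "rpool", the boot environment name, the device name, and the NFS paths), keeping in mind the follow-ups below about installboot and device links:

  # 1-2) On homer: recursive snapshot, stream the root pool to a file on NFS
  zfs snapshot -r rpool@clone
  zfs send -R rpool@clone > /net/nfshost/export/homer-rpool.zfsstream

  # 3-4) On the 220R, booted single user from the DVD: create the new root pool
  #      (a SPARC root pool needs to sit on a slice of a disk with an SMI label)
  zpool create -f rpool c0t0d0s0

  # 5) Receive the saved stream into the new pool
  zfs receive -Fd rpool < /net/nfshost/export/homer-rpool.zfsstream

  # 6) Point the pool at the boot environment dataset
  zpool set bootfs=rpool/ROOT/s10u7_be rpool

  # 7) Reboot
  init 6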
The device tree for your 220R might be different, so you may need to hack path_to_inst and /devices and /dev to make it boot successfully.

On Jun 20, 2009, at 10:18 AM, Dave Ringkor <no-reply at opensolaris.org> wrote:
> So my big question is, with a server on ZFS root, what "system specific info" would still need to be changed?
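If the device trees really do differ, one standard way to handle it is to force a reconfiguration boot rather than editing path_to_inst by hand. A hedged sketch, assuming the received boot environment is rpool/ROOT/s10u7_be and you are still in the DVD-booted miniroot:

  # Mount the received boot environment at a temporary mount point
  zfs set mountpoint=/a rpool/ROOT/s10u7_be
  zfs mount rpool/ROOT/s10u7_be
  # Ask for a reconfiguration boot so /devices, /dev and path_to_inst
  # get rebuilt for the 220R's hardware on the first boot
  touch /a/reconfigure
  # Put the mountpoint back before rebooting
  zfs umount rpool/ROOT/s10u7_be
  zfs set mountpoint=/ rpool/ROOT/s10u7_be

Booting with "boot -r" at the ok prompt accomplishes the same reconfiguration.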
On Sat, Jun 20, 2009 at 9:18 AM, Dave Ringkor <no-reply at opensolaris.org> wrote:
> What would be wrong with this:
> 1) Create a recursive snapshot of the root pool on homer.
> 2) zfs send this snapshot to a file on some NFS server.
> 3) Boot my 220R (same architecture as the E450) into single user mode from a DVD.
> 4) Create a zpool on the 220R's local disks.
> 5) zfs receive the snapshot created in step 2 to the new pool.
> 6) Set the bootfs property.
> 7) Reboot the 220R.

No, your 220R will most likely be unbootable, because you haven't run installboot. See the link I sent earlier.

Other than that, the steps should work out fine. I've only tested it on two servers of the same type, though (both were T2000s).

--
Fajar
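For reference, the installboot invocation for a SPARC ZFS root looks like this (the disk device is an assumption; use the slice the new root pool actually lives on, and repeat for each disk of a mirrored pool):

  # Run from the DVD-booted environment before the first boot off the new pool
  installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t0d0s0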
Dave,

If I knew, I would tell you, which is the problem. :-) I see a good follow-up about device links, but probably more is lurking. I generally don't trust anything I haven't tested myself, and I know that the manual process hasn't always worked. I think Scott Dickson's instructions would have a higher success rate.

Maybe this is why the request to get Flash working with ZFS has been a high priority with our customers.

Cindy

----- Original Message -----
From: Dave Ringkor <no-reply at opensolaris.org>
Date: Friday, June 19, 2009 8:20 pm
Subject: Re: [zfs-discuss] Server Cloning With ZFS?
To: zfs-discuss at opensolaris.org

> So my big question is, with a server on ZFS root, what "system
> specific info" would still need to be changed?