When I set up my OpenSolaris system at home, I just grabbed a 160 GB drive
that I had sitting around to use for the rpool.

Now I'm thinking of moving the rpool to another disk, probably an ssd, and I
don't really want to shell out the money for two 160 GB drives. I'm currently
using ~18 GB in the rpool, so any of the ssd "boot drives" being sold are
large enough. I know I can't attach a device that much smaller to the rpool,
however.

Would it be possible to do the following?
1. Attach the new drives.
2. Reboot from LiveCD.
3. zpool create new_rpool on the ssd
4. zfs send all datasets from rpool to new_rpool
5. installgrub /boot/grub/stage1 /boot/grub/stage2 on the ssd
6. zpool export the rpool and new_rpool
7. 'zpool import new_rpool rpool' (This should rename it to rpool, right?)
8. Shut down and disconnect the old rpool drive

This should work, right? I plan to test it on a VirtualBox instance first,
but does anyone see a problem with the general steps I've laid out?

-B
--
Brandon High : bhigh at freaks.com
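A sketch of what steps 4, 6 and 7 might look like at the command line, using
an arbitrary snapshot name; a recursive replication stream is one way to send
"all datasets", and importing the pool under a different name is what
performs the rename:

# zfs snapshot -r rpool@migrate
# zfs send -R rpool@migrate | zfs recv -Fdu new_rpool
...
# zpool export rpool
# zpool export new_rpool
# zpool import new_rpool rpool

The recv flags force a rollback if needed (-F), drop the source pool name
from the received dataset paths (-d), and leave the received filesystems
unmounted (-u).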
On 04/17/10 11:41 AM, Brandon High wrote:
> When I set up my OpenSolaris system at home, I just grabbed a 160 GB
> drive that I had sitting around to use for the rpool.
>
> Now I'm thinking of moving the rpool to another disk, probably an ssd,
> and I don't really want to shell out the money for two 160 GB drives.
> I'm currently using ~18 GB in the rpool, so any of the ssd "boot
> drives" being sold are large enough. I know I can't attach a device
> that much smaller to the rpool, however.
>
> Would it be possible to do the following?
> 1. Attach the new drives.
> 2. Reboot from LiveCD.
> 3. zpool create new_rpool on the ssd
> 4. zfs send all datasets from rpool to new_rpool
> 5. installgrub /boot/grub/stage1 /boot/grub/stage2 on the ssd
> 6. zpool export the rpool and new_rpool
> 7. 'zpool import new_rpool rpool' (This should rename it to rpool, right?)
> 8. Shut down and disconnect the old rpool drive
>
> This should work, right? I plan to test it on a VirtualBox instance
> first, but does anyone see a problem with the general steps I've laid
> out?

It should work. You aren't changing your current rpool (and you could
probably import it read only for the copy), so it's there if things go tits
up.

--
Ian.
On 04/16/10 07:41 PM, Brandon High wrote:
> 1. Attach the new drives.
> 2. Reboot from LiveCD.
> 3. zpool create new_rpool on the ssd

Is step 2 actually necessary? Couldn't you create a new BE

# beadm create old_rpool
# beadm activate old_rpool
# reboot
# beadm delete rpool

It's the same number of steps but saves the bother of making a
zpool-version-compatible live CD.

Also, how attached are you to the pool name rpool? I have systems with root
pools called spool, tpool, etc., even one rpool-1 (because the text installer
detected an earlier rpool on an iscsi volume I was overwriting), and they all
seem to work fine. Actually, my preferred method (if you really want the new
pool to be called rpool) would be to do the 4-step rename on the ssd after
all the other steps are done and you've successfully booted it. Then you
always have the untouched old disk in case you mess up.

Also (gurus please correct here), you might need to change step 3 to
something like

# zpool create -f -o failmode=continue -R /mnt -m legacy rpool <ssd>

in which case you can recv to it without rebooting at all, and

# zpool set bootfs=...

You might also consider where you want swap to be, and make sure that vfstab
is correct on the old disk now that the root pool has a different name. There
was detailed documentation on how to zfs send/recv root pools on the Sun ZFS
documentation site, but right now it doesn't seem to be Googleable. I'm not
sure your original set of steps will work without at least doing the above
two.

You might need to check to be sure the ssd has an SMI label. AFAIK the
"official" syntax for installing the MBR is

# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/<ssd>

Finally, you should check or delete /etc/zfs/zpool.cache because it will
likely be incorrect on the ssd after recv'ing the snapshot.

HTH -- Frank
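Putting Frank's two additions together, the create step might look something
like this, with the bootfs property set once the boot environment dataset has
been received. The pool is created here under a temporary name, since a
second pool called rpool can't be imported while the original is still in
use, and renamed later by export/import as Frank suggests; the device and
dataset names are examples, not from the thread:

# zpool create -f -o failmode=continue -R /mnt -m legacy new_rpool c6t0d0s0
... (send/recv the datasets) ...
# zpool set bootfs=new_rpool/ROOT/snv_133 new_rpool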
On 04/16/10 08:57 PM, Frank Middleton wrote:
> AFAIK the "official" syntax for installing the MBR is
> # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/<ssd>

Sorry, that's for SPARC. You had the installgrub down correctly...
On Fri, Apr 16, 2010 at 5:57 PM, Frank Middleton <f.middleton at apogeect.com> wrote:
> Is step 2 actually necessary? Couldn't you create a new BE
>
> # beadm create old_rpool
> # beadm activate old_rpool
> # reboot
> # beadm delete rpool

Right now, my boot environments are named after the build they're running.
I'm guessing that by 'rpool' you mean the current BE above.

bhigh at basestar:~$ beadm list
BE      Active Mountpoint Space  Policy Created
--      ------ ---------- -----  ------ -------
snv_129 -      -          1.47M  static 2009-12-14 13:33
snv_133 NR     /          16.37G static 2010-02-23 18:54

So what you're suggesting is creating a new BE and booting to that to do the
send | recv? Why would I want to destroy my current BE?

> You might also consider where you want swap to be and make sure
> that vfstab is correct on the old disk now that the root pool has
> a different name. There was detailed documentation on how to zfs

That's the main reason for making sure the new pool is named rpool, so I
don't have to chase down any references to the old name.

> send/recv root pools on the Sun ZFS documentation site, but right
> now it doesn't seem to be Googleable. I'm not sure your original
> set of steps will work without at least doing the above two.

I figure that by booting to a live cd / live usb, the pool will not be in
use, so there shouldn't be any special steps involved.

I'll try out a few variations on a VM and see how it goes.

-B
--
Brandon High : bhigh at freaks.com
On 04/16/10 09:53 PM, Brandon High wrote:
> Right now, my boot environments are named after the build they're running.
> I'm guessing that by 'rpool' you mean the current BE above.

No, I didn't :-(. Please ignore that part - too much caffeine :-).

> I figure that by booting to a live cd / live usb, the pool will not be
> in use, so there shouldn't be any special steps involved.

Might be the easiest way. But I've never found having a different name for
the root pool to be a problem. The lack, until recently, of a bootable CD for
SPARC may have something to do with living with different names. Makes it
easier to recv snapshots from different hosts and architectures, too.

> I'll try out a few variations on a VM and see how it goes.

You'll need to do the zpool create with the legacy mount option, and set the
bootfs property. Otherwise it looks like you are on the right path.

Cheers -- Frank
Hi Brandon,

I think I've done a similar migration before by creating a second root pool,
and then creating a new BE in the new root pool, like this:

# zpool create rpool2 mirror disk-1 disk-2
# lucreate -n newzfsBE -p rpool2
# luactivate newzfsBE
# installgrub ...
<reboot to newzfsBE>

I don't think LU cares that the disks in the new pool are smaller, obviously
they need to be large enough to contain the BE.

Thanks,

Cindy

On 04/16/10 17:41, Brandon High wrote:
> When I set up my OpenSolaris system at home, I just grabbed a 160 GB
> drive that I had sitting around to use for the rpool.
>
> Now I'm thinking of moving the rpool to another disk, probably an ssd,
> and I don't really want to shell out the money for two 160 GB drives.
> I'm currently using ~18 GB in the rpool, so any of the ssd "boot
> drives" being sold are large enough. I know I can't attach a device
> that much smaller to the rpool, however.
>
> Would it be possible to do the following?
> 1. Attach the new drives.
> 2. Reboot from LiveCD.
> 3. zpool create new_rpool on the ssd
> 4. zfs send all datasets from rpool to new_rpool
> 5. installgrub /boot/grub/stage1 /boot/grub/stage2 on the ssd
> 6. zpool export the rpool and new_rpool
> 7. 'zpool import new_rpool rpool' (This should rename it to rpool, right?)
> 8. Shut down and disconnect the old rpool drive
>
> This should work, right? I plan to test it on a VirtualBox instance
> first, but does anyone see a problem with the general steps I've laid
> out?
>
> -B
On Mon, Apr 19, 2010 at 7:42 AM, Cindy Swearingen
<cindy.swearingen at oracle.com> wrote:
> I don't think LU cares that the disks in the new pool are smaller,
> obviously they need to be large enough to contain the BE.

It doesn't look like OpenSolaris includes LU, at least on x86-64. Anyhow,
wouldn't the method you mention fail because zfs would use the wrong
partition table for booting?

basestar:~$ lucreate
-bash: lucreate: command not found
bhigh at basestar:~$ man lucreate
No manual entry for lucreate.
bhigh at basestar:~$ pkgsearch lucreate
-bash: pkgsearch: command not found
bhigh at basestar:~$ pkg search lucreate
bhigh at basestar:~$ pkg search SUNWluu
bhigh at basestar:~$

I think I remember someone posting a method to copy the boot drive's layout
with prtvtoc and fmthard, but I don't remember the exact syntax.

-B
--
Brandon High : bhigh at freaks.com
On Mon, Apr 19, 2010 at 4:21 PM, Brandon High <bhigh at freaks.com> wrote:
> I think I remember someone posting a method to copy the boot drive's
> layout with prtvtoc and fmthard, but I don't remember the exact syntax.

Apparently Google and the man pages know the answer.

prtvtoc /dev/rdsk/c5t0d0s2 | fmthard -s - /dev/rdsk/c6t0d0s2

With drives of different sizes, I expect I may need to change the input to
fmthard though. There's also a note in the man page: "On x86 systems,
fdisk(1M) must be run on the drive before fmthard."

-B
--
Brandon High : bhigh at freaks.com
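A possible sequence for the x86 case the man page note refers to, using the
same example device names; fdisk -B writes a default single Solaris partition
spanning the whole disk, after which the VTOC copy can be applied to slice 2:

# fdisk -B /dev/rdsk/c6t0d0p0
# prtvtoc /dev/rdsk/c5t0d0s2 | fmthard -s - /dev/rdsk/c6t0d0s2

As noted above, with drives of different sizes the slice sizes in the copied
VTOC would still need adjusting.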
On Apr 19, 2010, at 4:33 PM, Brandon High wrote:
> On Mon, Apr 19, 2010 at 4:21 PM, Brandon High <bhigh at freaks.com> wrote:
>> I think I remember someone posting a method to copy the boot drive's
>> layout with prtvtoc and fmthard, but I don't remember the exact syntax.
>
> Apparently Google and the man pages know the answer.
>
> prtvtoc /dev/rdsk/c5t0d0s2 | fmthard -s - /dev/rdsk/c6t0d0s2
>
> With drives of different sizes, I expect I may need to change the
> input to fmthard though. There's also a note in the man page: "On x86
> systems, fdisk(1M) must be run on the drive before fmthard."

IMHO, this is a virus. It is a lazy way for you to copy a vtoc to another
disk. If the other disk is of a different size (as in sectors), then you've
wasted either your time or your disk space. A better idea is to learn how to
use the format(1M) command to manage your disk partitions and slices.

-- richard

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
Las Vegas, April 29-30, 2010 http://nexenta-vegas.eventbrite.com
I have certainly moved a root pool from one disk to another, with the same
basic process, ie:

- fuss with fdisk and SMI labels (sigh)
- zpool create
- snapshot, send and recv
- installgrub
- swap disks

I looked over the "root pool recovery" section in the Best Practices guide at
the time; it has details of all these steps. In my case, it was to move to a
larger disk (in my laptop) rather than a smaller, but as long as it all fits
it won't matter.

(I did it this way, instead of by attach and detach of mirror, in order to go
through dedup and upgrade checksums, and also to get comfortable with the
process for the time when I'm really doing a recovery.)

--
Dan.
Yes, I apologize. I didn't notice you were running the OpenSolaris release.
What I outlined below would work on a Solaris 10 system.

I wonder if beadm supports a similar migration. I will find out and let you
know.

Thanks,

Cindy

On 04/19/10 17:22, Brandon High wrote:
> On Mon, Apr 19, 2010 at 7:42 AM, Cindy Swearingen
> <cindy.swearingen at oracle.com> wrote:
>> I don't think LU cares that the disks in the new pool are smaller,
>> obviously they need to be large enough to contain the BE.
>
> It doesn't look like OpenSolaris includes LU, at least on x86-64.
> Anyhow, wouldn't the method you mention fail because zfs would use the
> wrong partition table for booting?
>
> basestar:~$ lucreate
> -bash: lucreate: command not found
> bhigh at basestar:~$ man lucreate
> No manual entry for lucreate.
> bhigh at basestar:~$ pkgsearch lucreate
> -bash: pkgsearch: command not found
> bhigh at basestar:~$ pkg search lucreate
> bhigh at basestar:~$ pkg search SUNWluu
> bhigh at basestar:~$
>
> I think I remember someone posting a method to copy the boot drive's
> layout with prtvtoc and fmthard, but I don't remember the exact syntax.
>
> -B
Brandon,

You can use the OpenSolaris beadm command to migrate a ZFS BE over to another
root pool, but you will also need to perform some manual migration steps,
such as:

- copy over your other rpool datasets
- recreate swap and dump devices
- install bootblocks
- update BIOS and GRUB entries to boot from the new root pool

The BE recreation gets you part of the way, and it's fast, anyway.

Thanks,

Cindy

1. Create the second root pool.
   # zpool create rpool2 c5t1d0s0
2. Create the new BE in the second root pool.
   # beadm create -p rpool2 osol2BE
3. Activate the new BE.
   # beadm activate osol2BE
4. Install the boot blocks.
5. Test that the system boots from the second root pool.
6. Update BIOS and GRUB to boot from the new pool.

On 04/20/10 08:36, Cindy Swearingen wrote:
> Yes, I apologize. I didn't notice you were running the OpenSolaris
> release. What I outlined below would work on a Solaris 10 system.
>
> I wonder if beadm supports a similar migration. I will find out
> and let you know.
>
> Thanks,
>
> Cindy
>
> On 04/19/10 17:22, Brandon High wrote:
>> On Mon, Apr 19, 2010 at 7:42 AM, Cindy Swearingen
>> <cindy.swearingen at oracle.com> wrote:
>>> I don't think LU cares that the disks in the new pool are smaller,
>>> obviously they need to be large enough to contain the BE.
>>
>> It doesn't look like OpenSolaris includes LU, at least on x86-64.
>> Anyhow, wouldn't the method you mention fail because zfs would use the
>> wrong partition table for booting?
>>
>> basestar:~$ lucreate
>> -bash: lucreate: command not found
>> bhigh at basestar:~$ man lucreate
>> No manual entry for lucreate.
>> bhigh at basestar:~$ pkgsearch lucreate
>> -bash: pkgsearch: command not found
>> bhigh at basestar:~$ pkg search lucreate
>> bhigh at basestar:~$ pkg search SUNWluu
>> bhigh at basestar:~$
>>
>> I think I remember someone posting a method to copy the boot drive's
>> layout with prtvtoc and fmthard, but I don't remember the exact syntax.
>>
>> -B
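A rough sketch of what the manual steps might involve on x86, assuming the
new pool is rpool2 on c5t1d0s0 and using example volume sizes (none of the
sizes or device names here come from Cindy's message):

# zfs create -V 2G rpool2/swap
# zfs create -V 2G rpool2/dump
# swap -a /dev/zvol/dsk/rpool2/swap
# dumpadm -d /dev/zvol/dsk/rpool2/dump
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t1d0s0

The swap entry in /etc/vfstab would also need to point at the new zvol.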
On Tue, Apr 20, 2010 at 12:55:10PM -0600, Cindy Swearingen wrote:
> You can use the OpenSolaris beadm command to migrate a ZFS BE over
> to another root pool, but you will also need to perform some manual
> migration steps, such as:
> - copy over your other rpool datasets
> - recreate swap and dump devices
> - install bootblocks
> - update BIOS and GRUB entries to boot from the new root pool

I've also found it handy to use different names for each rpool. Sometimes
it's handy to boot an image that's entirely on a removable disk, for example,
and move that between hosts. The last thing you want is a name clash or
confusion about which pool is which.

In addition to the "import name" of the pool, there's another name that needs
to be changed. This is the "boot name" of the pool; it's the name grub looks
for in the "findroot(pool_rpool,...)" line. That name is found in the root fs
of the pool, in ./etc/bootsign (so typically mounted at
/poolname/etc/bootsign).

--
Dan.
On Fri, Apr 16, 2010 at 4:41 PM, Brandon High <bhigh at freaks.com> wrote:
> When I set up my OpenSolaris system at home, I just grabbed a 160 GB
> drive that I had sitting around to use for the rpool.

Just to follow up: after testing in VirtualBox, my initial plan is very close
to what worked. This is what I did:

1. Shut down the system and attach the new drives.
2. Reboot from LiveCD or USB installer.
3. Run 'format' to set up the new drive(s).
4. zpool create -f -R /mnt/rpool_new rpool_new ${NEWDRIVE_DEV}s0
5. zpool import -o ro -R /mnt/rpool_old -f rpool
6. zfs send all datasets from rpool to rpool_new
7. installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/${NEWDRIVE_DEV}s0
8. zfs mount rpool_new/ROOT/snv_133 and delete /mnt/rpool_new/etc/zfs/zpool.cache
9. zpool export both rpool and rpool_new
10. 'zpool import -R /mnt/rpool rpool_new rpool' to rename the pool. (Not
    needed except to be OCD.)
11. 'zpool export rpool'
12. Disconnect the original drive and boot from your new root.

After that, it just worked. I tested it again with a physical box that boots
off of USB thumb drives as well. The only caveat with that is you must use
'format -e' to partition the thumb drives. Oh, and wait a LONG time, because
most flash drives are really, really slow.

You could also do this from a non-LiveCD environment, but the name rpool may
already be in use. If you move the new drive to the original's port, you
don't need to delete the zpool.cache. It would be nice if there was a boot
flag you could use to ignore the zpool.cache, so you don't have to boot into
another environment when the device moves.

Another benefit of doing the above is that you can enable compression and
dedup on the rpool prior to the send, which gives you creamy compressed dedup
goodness on your entire rpool. No matter how tempting, don't use gzip-9
compression. I learned the hard way that grub doesn't support it.

-B
--
Brandon High : bhigh at freaks.com
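One reading of the compression/dedup tip: set the properties on the source
rpool before taking the snapshot that gets sent, so the replication stream
carries them and the data is rewritten compressed and deduped as it lands on
the new pool. A sketch (compression=on selects lzjb, which grub can boot
from, unlike gzip-9):

# zfs set compression=on rpool
# zfs set dedup=on rpool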