Johan Hartzenberg
2008-Aug-08 14:38 UTC
[zfs-discuss] [install-discuss] lucreate into New ZFS pool
Hello,

Since I've got my disk partitioning sorted out now, I want to move my BE
from the old disk to the new disk.

I created a new zpool, named RPOOL for distinction with the existing
"rpool", and then ran lucreate -p RPOOL -n new95. This completed without
error; the log is at the bottom of this mail.

I have not yet dared to run luactivate. I also have not yet dared to set
the ACTIVE flag on any partition on the new disk (I had some interesting
times with that previously). Before I complete these steps to set the
active partition and run luactivate, I have a few questions:

1. I somehow doubt that the lucreate process installed a boot block on the
new disk... How can I confirm this? Or is luactivate supposed to do this?
(See the sketch after this list.)

2. There are a number of open issues still with ZFS root. I saw some notes
about leaving the first cylinder of the disk out of the root pool slice.
What is that all about?

3. I have a remnant of the lucreate process in my mounts, which prevents,
for example, lumount, and previously caused problems with luactivate.

4. I see the vdev for dump got created in the new pool, but not one for
swap. Is this to be expected? (Also covered in the sketch below.)

5. There were notes about errors recorded in /tmp/lucopy.errors... I've
rebooted my machine since, so I can't review those any more. I guess I need
to run the lucreate again to see whether the errors recur, and to read
those logs before they get lost again.

6. Since SHARED is an entirely independent pool, and since the purpose of
this lucreate is to move root from one disk to another, I don't see why
lucreate needed to make snapshots of the zone!

7. Despite the messages that the GRUB menu has been distributed and
populated successfully, the new boot environment has not been added to the
GRUB menu list. My experience, though, is that this happens during
luactivate, so I'm not concerned about this just yet.
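For points 1 and 4 I am thinking of checking along the following lines
before running luactivate. This is an untested sketch: it assumes an x86
system with GRUB (which the log below suggests), c0d0s0 is the slice from
the zpool create in that log, and the 2G swap size is only an example.

  # One rough way to look for a GRUB stage1 boot block in the first
  # sector of the slice:
  $ dd if=/dev/rdsk/c0d0s0 bs=512 count=1 2>/dev/null | strings | grep GRUB

  # luactivate normally takes care of installing it, but installgrub(1M)
  # can also write it by hand:
  $ installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0d0s0

  # If lucreate really did skip the swap vdev, a swap zvol can be created
  # and added manually:
  $ zfs create -V 2G RPOOL/swap
  $ swap -a /dev/zvol/dsk/RPOOL/swap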
Below are some bits showing the current status of the system:

$ zfs list -r RPOOL
NAME               USED  AVAIL  REFER  MOUNTPOINT
RPOOL             7.97G  24.0G  26.5K  /RPOOL
RPOOL/ROOT        6.47G  24.0G    18K  /RPOOL/ROOT
RPOOL/ROOT/new95  6.47G  24.0G  6.47G  /.alt.new95
RPOOL/dump        1.50G  25.5G    16K  -

/RPOOL/boot/grub $ lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
snv_94                     yes      no     no        yes    -
snv_95                     yes      yes    yes       no     -
new95                      yes      no     no        yes    -

/RPOOL/boot/grub $ luumount new95
ERROR: boot environment <new95> is not mounted

$ zfs list -r RPOOL
NAME               USED  AVAIL  REFER  MOUNTPOINT
RPOOL             7.97G  24.0G  26.5K  /RPOOL
RPOOL/ROOT        6.47G  24.0G    18K  /RPOOL/ROOT
RPOOL/ROOT/new95  6.47G  24.0G  6.47G  /.alt.new95
RPOOL/dump        1.50G  25.5G    16K  -

$ lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
snv_94                     yes      no     no        yes    -
snv_95                     yes      yes    yes       no     -
new95                      yes      no     no        yes    -

Thank you,
  _Johan

=======
For what it is worth, below is the log of the lucreate session.

/dev/dsk $ zpool create -f RPOOL c0d0s0
/dev/dsk $ timex lucreate -p RPOOL -n new95
Checking GRUB menu...
System has findroot enabled GRUB
Analyzing system configuration.
Comparing source boot environment <snv_95> file systems with the file
system(s) you specified for the new boot environment.
Determining which file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <new95>.
Source boot environment is <snv_95>.
Creating boot environment <new95>.
Creating file systems on boot environment <new95>.
Creating <zfs> file system for </> in zone <global> on <RPOOL/ROOT/new95>.
Populating file systems on boot environment <new95>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
WARNING: The file </tmp/lucopy.errors.3488> contains a list of <2>
potential problems (issues) that were encountered while populating boot
environment <new95>.
INFORMATION: You must review the issues listed in </tmp/lucopy.errors.3488>
and determine if any must be resolved. In general, you can ignore warnings
about files that were skipped because they did not exist or could not be
opened. You cannot ignore errors such as directories or files that could
not be created, or file systems running out of disk space. You must
manually resolve any such problems before you activate boot environment
<new95>.
Creating shared file system mount points.
Creating snapshot for <SHARED/zones/sp1> on <SHARED/zones/sp1@new95>.
Creating clone for <SHARED/zones/sp1@new95> on <SHARED/zones/sp1-new95>.
Creating compare databases for boot environment <new95>.
Creating compare database for file system </>.
Updating compare databases on boot environment <new95>.
Updating compare databases on boot environment <snv_94>.
Making boot environment <new95> bootable.
Updating bootenv.rc on ABE <new95>.
Saving existing file </boot/grub/menu.lst> in top level dataset for BE
<snv_94> as <mount-point>//boot/grub/menu.lst.prev.
File </boot/grub/menu.lst> propagation successful
Copied GRUB menu from PBE to ABE
No entry for BE <new95> in GRUB menu
Population of boot environment <new95> successful.
Creation of boot environment <new95> successful.

real    35:48.77
user     2:38.00
sys       6:12.22

--
Any sufficiently advanced technology is indistinguishable from magic.
   Arthur C. Clarke

Afrikaanse Stap Website: http://www.bloukous.co.za
My blog: http://initialprogramload.blogspot.com
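PS: Since /tmp on Solaris is swap-backed tmpfs and is cleared at boot, next
time I will copy the error list somewhere persistent before rebooting,
along these lines (the file name is the one from the log above):

  $ cp /tmp/lucopy.errors.3488 /var/tmp/

and list what lucreate actually created in the shared pool (point 6):

  $ zfs list -t snapshot -r SHARED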
Johan Hartzenberg
2008-Aug-10 08:33 UTC
[zfs-discuss] [install-discuss] lucreate into New ZFS pool
My upgrade has been completed. Comments are interleaved below.

On Fri, Aug 8, 2008 at 4:38 PM, Johan Hartzenberg <jhartzen@gmail.com>
wrote:

> Since I've got my disk partitioning sorted out now, I want to move my BE
> from the old disk to the new disk.
> ...
> 1. I somehow doubt that the lucreate process installed a boot block on
> the new disk... How can I confirm this? Or is luactivate supposed to do
> this?

This was properly taken care of by luactivate.

> 2. There are a number of open issues still with ZFS root. I saw some
> notes about leaving the first cylinder of the disk out of the root pool
> slice. What is that all about?

I can't find the references to this. I came across it while reading up on
ZFS root mirroring, but can't find it again. At any rate, whatever the
issue was, it seems not to affect me.

> 3. I have a remnant of the lucreate process in my mounts, which
> prevents, for example, lumount, and previously caused problems with
> luactivate.

I had to run lucreate three times before it worked. After the first
attempt I had the stuck mount points. This caused some files from the zone
to be copied directly into /.alt.*, which in turn caused lumount and
luactivate to fail. It took me two attempts to clean everything out
manually, because ludelete also refuses to delete a BE which it cannot
mount. (A rough sketch of the cleanup is in the PS below.)

> 4. I see the vdev for dump got created in the new pool, but not one for
> swap. Is this to be expected?

On the second and third attempts lucreate did in fact create the swap
vdev.

> 5. There were notes about errors recorded in /tmp/lucopy.errors... I've
> rebooted my machine since, so I can't review those any more.

These did not recur. Note, however, that between the second and third
attempts I removed the zone, so I performed the "upgrade" without any
zones configured.

> 6. Since SHARED is an entirely independent pool, and since the purpose
> of this lucreate is to move root from one disk to another, I don't see
> why lucreate needed to make snapshots of the zone!

And this became a non-issue, as I completed the move with no zones
configured.

> 7. Despite the messages that the GRUB menu has been distributed and
> populated successfully, the new boot environment has not been added to
> the GRUB menu list. My experience, though, is that this happens during
> luactivate, so I'm not concerned about this just yet.

This also became a non-issue on subsequent runs.

--
Any sufficiently advanced technology is indistinguishable from magic.
   Arthur C. Clarke

Afrikaanse Stap Website: http://www.bloukous.co.za
My blog: http://initialprogramload.blogspot.com
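PS: The manual cleanup for the stuck mounts amounted to something like the
following. Treat it as a sketch rather than a transcript of the actual
session; the BE name and mount point are the ones from this thread.

  # Find leftover alternate-BE mounts:
  $ df -k | grep /.alt

  # Unmount them by hand (deepest first if there are nested mounts), and
  # clear out any files that were copied into the mount point itself:
  $ umount /.alt.new95

  # Then retry deleting the half-created BE:
  $ ludelete new95

  # If the BE is gone from lustatus but its datasets remain, they can be
  # destroyed directly:
  $ zfs destroy -r RPOOL/ROOT/new95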