I've been testing the ZFS root recovery using 10u6 and have come across a very odd problem. When following this procedure, the disk I am setting up my rpool on keeps reverting to an EFI label.

http://docs.sun.com/app/docs/doc/819-5461/ghzur?l=en&a=view

Here are the exact steps I am doing:

boot net -s

# mount -F nfs remote-system:/ipool/snapshots /mnt
# format -e
(Change the label to SMI and check after)

# prtvtoc /dev/rdsk/c1t0d0s0
* /dev/rdsk/c1t0d0s0 partition map
*
* Dimensions:
*     512 bytes/sector
*     424 sectors/track
*      24 tracks/cylinder
*   10176 sectors/cylinder
*   14089 cylinders
*   14087 accessible cylinders
*
* Flags:
*   1: unmountable
*  10: read-only
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       2      5    01          0 143349312 143349311
       6      4    00          0 143349312 143349311

# zpool create -f -o failmode=continue -R /a -m legacy -o cachefile=/etc/zfs/zpool.cache rpool c1t0d0
# cat /mnt/rpool.2406 | zfs receive -Fdu rpool
# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
rpool                   5.55G  61.4G    96K  /a/rpool
rpool@2406                  0      -    96K  -
rpool/ROOT              4.40G  61.4G    20K  legacy
rpool/ROOT@2406             0      -    20K  -
rpool/ROOT/beroot       4.40G  61.4G  4.40G  /a
rpool/ROOT/beroot@2406      0      -  4.40G  -
rpool/dump              1.00G  61.4G  1.00G  -
rpool/dump@2406             0      -  1.00G  -
rpool/export             147M  61.4G   147M  /a/export
rpool/export@2406           0      -   147M  -
rpool/export/home         20K  61.4G    20K  /a/export/home
rpool/export/home@2406      0      -    20K  -
rpool/swap                16K  61.4G    16K  -
rpool/swap@2406             0      -    16K  -

# zpool set bootfs=rpool/ROOT/beroot rpool
cannot set property for 'rpool': property 'bootfs' not supported on EFI labeled devices

# prtvtoc /dev/rdsk/c1t0d0s0
* /dev/rdsk/c1t0d0s0 partition map
*
* Dimensions:
*     512 bytes/sector
*     143374738 sectors
*     143374671 accessible sectors
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*       First     Sector    Last
*       Sector     Count    Sector
*          34       222       255
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       0      4    00        256 143358065 143358320
       8     11    00  143358321     16384 143374704

If I destroy the rpool and go back into format -e, the label is set to EFI, as confirmed by the vtoc above! I'm stumped by this one. Is this a known problem with 10u6?

Cheers.
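PS: a quick way to tell the two labels apart in the prtvtoc listings above: an SMI label reports cylinder geometry (sectors/track, cylinders), while an EFI label reports only a raw sector total plus the reserved slice 8. So something rough like

# prtvtoc /dev/rdsk/c1t0d0s0 | grep -c cylinders

prints a non-zero count for an SMI-labeled disk and 0 (with exit status 1) for an EFI-labeled one.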
I've discovered the source of the problem.

zpool create -f -o failmode=continue -R /a -m legacy -o cachefile=/etc/zfs/zpool.cache rpool c1t0d0

It seems a root pool must only be created on a slice: handed a whole disk, zpool create writes an EFI label on it. Therefore

zpool create -f -o failmode=continue -R /a -m legacy -o cachefile=/etc/zfs/zpool.cache rpool c1t0d0s0

will work. I've been reading through some of the ZFS root installation docs and can't find a note that explicitly states this, although a bit of bing'ing turned up a thread that confirmed it.
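For completeness, the corrected recovery sequence then looks roughly like this (same device and snapshot names as above; the zpool get is just a sanity check, and the installboot line is the standard SPARC ZFS boot block step from the docs):

# zpool create -f -o failmode=continue -R /a -m legacy \
    -o cachefile=/etc/zfs/zpool.cache rpool c1t0d0s0
# cat /mnt/rpool.2406 | zfs receive -Fdu rpool
# zpool set bootfs=rpool/ROOT/beroot rpool
# zpool get bootfs rpool
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t0d0s0

And if the disk has already been flipped to EFI, it has to go back to SMI first: either interactively via format -e as above, or, if your format(1M) build supports the -L option, non-interactively with something like:

# format -L vtoc -d c1t0d0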
Richard Elling
2010-Jun-25 14:41 UTC
[zfs-discuss] ZFS root recovery SMI/EFI label weirdness
On Jun 25, 2010, at 4:44 AM, Sean . wrote:

> I've discovered the source of the problem.
>
> zpool create -f -o failmode=continue -R /a -m legacy -o cachefile=/etc/zfs/zpool.cache rpool c1t0d0
>
> It seems a root pool must only be created on a slice: handed a whole disk, zpool create writes an EFI label on it. Therefore
>
> zpool create -f -o failmode=continue -R /a -m legacy -o cachefile=/etc/zfs/zpool.cache rpool c1t0d0s0
>
> will work. I've been reading through some of the ZFS root installation docs and can't find a note that explicitly states this, although a bit of bing'ing turned up a thread that confirmed it.

See the ZFS Administration Guide section on Creating a ZFS Root Pool, first bullet:

+ Disks used for the root pool must have a VTOC (SMI) label and the pool must be created with disk slices

 -- richard

--
Richard Elling
richard@nexenta.com   +1-760-896-4422
ZFS and NexentaStor training, Rotterdam, July 13-15, 2010
http://nexenta-rotterdam.eventbrite.com/
Cindy Swearingen
2010-Jun-25 14:58 UTC
[zfs-discuss] ZFS root recovery SMI/EFI label weirdness
Sean,

If you review the doc section you included previously, you will see that all the root pool examples include slice 0. The slice is a long-standing boot requirement and is described in the boot chapter, in this section:

http://docs.sun.com/app/docs/doc/819-5461/ggrko?l=en&a=view

ZFS Storage Pool Configuration Requirements

The pool must exist either on a disk slice or on disk slices that are mirrored. (A sketch of the mirrored case follows the quoted message below.)

Thanks,

Cindy

On 06/25/10 05:44, Sean . wrote:
> I've discovered the source of the problem.
>
> zpool create -f -o failmode=continue -R /a -m legacy -o cachefile=/etc/zfs/zpool.cache rpool c1t0d0
>
> It seems a root pool must only be created on a slice: handed a whole disk, zpool create writes an EFI label on it. Therefore
>
> zpool create -f -o failmode=continue -R /a -m legacy -o cachefile=/etc/zfs/zpool.cache rpool c1t0d0s0
>
> will work. I've been reading through some of the ZFS root installation docs and can't find a note that explicitly states this, although a bit of bing'ing turned up a thread that confirmed it.
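For reference, a minimal sketch of the mirrored case, assuming a hypothetical second disk c1t1d0 that has also been given an SMI label with slice 0 covering the disk (the flags are the ones from Sean's recovery procedure; a plain "zpool create rpool mirror c1t0d0s0 c1t1d0s0" is the everyday form):

# zpool create -f -o failmode=continue -R /a -m legacy \
    -o cachefile=/etc/zfs/zpool.cache rpool mirror c1t0d0s0 c1t1d0s0

With a mirror, the boot block also has to be installed on each side of the mirror so the machine can boot from either disk.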