Hello,
A question on putting ZFS on EMC pseudo-devices:
I have a T1000 where we were given 100 GB of SAN space from EMC:
# format < /dev/null
Searching for disks...done
AVAILABLE DISK SELECTIONS:
       0. c0t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
          /pci@7c0/pci@0/pci@8/scsi@2/sd@0,0
       1. c0t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
          /pci@7c0/pci@0/pci@8/scsi@2/sd@1,0
       2. c1t5006016030602568d0 <DGC-RAID5-0219 cyl 51198 alt 2 hd 256 sec 16>
          /pci@780/lpfc@0/fp@0,0/ssd@w5006016030602568,0
       3. c1t5006016830602568d0 <DGC-RAID5-0219 cyl 51198 alt 2 hd 256 sec 16>
          /pci@780/lpfc@0/fp@0,0/ssd@w5006016830602568,0
       4. emcpower0a <DGC-RAID5-0219 cyl 51198 alt 2 hd 256 sec 16>
          /pseudo/emcp@0
Specify disk (enter its number):
# powermt display dev=all
Pseudo name=emcpower0a
CLARiiON ID=APM00052300875 [.HOSTNAME.]
Logical device ID=60060160B1221300084781BEAFAADD11 [LUN 87]
state=alive; policy=BasicFailover; priority=0; queued-IOs=0
Owner: default=SP A, current=SP A       Array failover mode: 1
==============================================================================
---------------- Host ---------------   - Stor -   -- I/O Path -  -- Stats ---
###  HW Path                I/O Paths    Interf.   Mode    State  Q-IOs Errors
==============================================================================
3073 pci@780/lpfc@0/fp@0,0     c1t5006016030602568d0s0 SP A0     active  alive      0      0
3073 pci@780/lpfc@0/fp@0,0     c1t5006016830602568d0s0 SP B0     active  alive      0      0
When I tried to create a pool on the straight device I got an error:
# zpool create ldom-sparc-111 emcpower0a
cannot open '/dev/dsk/emcpower0a': I/O error

The same command run again under truss(1) shows where it fails:

#  zpool create ldom-sparc-111 emcpower0a
[...]
open("/dev/zfs", O_RDWR)                        = 3
open("/etc/mnttab", O_RDONLY)                   = 4
open("/etc/dfs/sharetab", O_RDONLY)             Err#2 ENOENT
stat64("/dev/dsk/emcpower0as2", 0xFFBFB2D8)     Err#2 ENOENT
stat64("/dev/dsk/emcpower0a", 0xFFBFB2D8)       = 0
brk(0x000B2000)                                 = 0
open("/dev/dsk/emcpower0a", O_RDONLY)           Err#5 EIO
fstat64(2, 0xFFBF9F90)                          = 0
cannot open 'write(2, " c a n n o t   o p e n  ".., 13)  = 13
/dev/dsk/emcpower0awrite(2, " / d e v / d s k / e m c".., 19)   = 19
': write(2, " ' :  ", 3)                                 = 3
I/O errorwrite(2, " I / O   e r r o r", 9)               = 9
write(2, "\n", 1)                               = 1
close(3)                                        = 0
llseek(4, 0, SEEK_CUR)                          = 0
close(4)                                        = 0
brk(0x000C2000)                                 = 0
_exit(1)
The interesting line is the open(2) of /dev/dsk/emcpower0a itself returning
EIO. I then put a label on it, and things work fine:
Current partition table (original):
Total disk cylinders available: 51198 + 2 (reserved cylinders)
Part      Tag    Flag     Cylinders         Size            Blocks
  0        usr    wm       0 - 51174       99.95GB    (51175/0/0) 209612800
  1 unassigned    wu       0                0         (0/0/0)             0
  2     backup    wu       0 - 51197      100.00GB    (51198/0/0) 209707008
  3 unassigned    wm       0                0         (0/0/0)             0
  4 unassigned    wm       0                0         (0/0/0)             0
  5 unassigned    wm       0                0         (0/0/0)             0
  6 unassigned    wm       0                0         (0/0/0)             0
  7 unassigned    wm       0                0         (0/0/0)             0
# zpool status
  pool: ldom-sparc-111
 state: ONLINE
 scrub: none requested
config:
        NAME          STATE     READ WRITE CKSUM
        ldom-sparc-111  ONLINE       0     0     0
          emcpower0a  ONLINE       0     0     0
errors: No known data errors
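In case anyone wants to reproduce this, a VTOC label like the one above can
be written with the usual format(1M) dance, roughly:

   # format
     (select emcpower0a, then partition -> modify, give the usable
      cylinders to slice 0, and write the label with "label")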
We have another T1000 with SAN space as well, and I don't remember having
to label the disk (though I could be misremembering):
Total disk sectors available: 524271582 + 16384 (reserved sectors)
Part      Tag    Flag     First Sector         Size         Last Sector
  0        usr    wm                34      249.99GB          524271582
  1 unassigned    wm                 0           0               0
  2 unassigned    wm                 0           0               0
  3 unassigned    wm                 0           0               0
  4 unassigned    wm                 0           0               0
  5 unassigned    wm                 0           0               0
  6 unassigned    wm                 0           0               0
  8   reserved    wm         524271583        8.00MB          524287966
# zpool status
  pool: ldom-sparc-110
 state: ONLINE
 scrub: none requested
config:
        NAME          STATE     READ WRITE CKSUM
        ldom-sparc-110  ONLINE       0     0     0
          emcpower0a  ONLINE       0     0     0
errors: No known data errors
Any way to find out why there are differences?
Thanks for any info.
-- 
David Magda <dmagda@ee.ryerson.ca>
Vimes pulled out his watch and stared at it. It was turning out to be one
of those days...the sort that you got every day.
                           -- Terry Pratchett, _The_Fifth_Elephant_
David Magda
2008-Nov-06  16:18 UTC
[zfs-discuss] ZFS and VTOC/EFI labelling mystery (was: ZFS on emcpower0a and labels)
Answering myself because I've gotten things to work, but it's a mystery as
to why they're working (I have a Sun case number if anyone at Sun.com is
interested).
Steps:
   1. Try to create a pool on a pseudo-device:
        # zpool create mypool emcpower0a
      This fails with an I/O error (see the previous message).
   2. Create a pool on the LUN using the traditional device name:
        # zpool create mypool c1tFOO...
   3. Destroy the pool:
        # zpool destroy mypool
   4. Go back to the EMC PowerPath pseudo-device and create the pool:
        # zpool create mypool emcpower0a
      This now works.
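Put together, the workaround is roughly this (using the first FC path from
the format(1M) listing as an example; substitute your own cXtYdZ name):

        # zpool create mypool c1t5006016030602568d0
        # zpool destroy mypool
        # zpool create mypool emcpower0a
        # zpool status mypool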
The only difference I can see is that before, the emcpower0a device had
slices numbered 0-7 according to format(1M). Now it has slices 0-6, plus a
slice numbered 8 that is "reserved":
AVAILABLE DISK SELECTIONS:
       0. c0t0d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
          /pci@7c0/pci@0/pci@8/scsi@2/sd@0,0
       1. c0t1d0 <SUN72G cyl 14087 alt 2 hd 24 sec 424>
          /pci@7c0/pci@0/pci@8/scsi@2/sd@1,0
       2. c1t5006016030602568d0 <DGC-RAID 5-0219-100.00GB>
          /pci@780/lpfc@0/fp@0,0/ssd@w5006016030602568,0
       3. c1t5006016830602568d0 <DGC-RAID 5-0219-100.00GB>
          /pci@780/lpfc@0/fp@0,0/ssd@w5006016830602568,0
       4. emcpower0a <DGC-RAID 5-0219-100.00GB>
          /pseudo/emcp@0
Specify disk (enter its number): 4
selecting emcpower0a
[disk formatted]
FORMAT MENU:
[...]
format> p
PARTITION MENU:
[...]
partition> p
Current partition table (original):
Total disk sectors available: 209698782 + 16384 (reserved sectors)
Part      Tag    Flag     First Sector         Size         Last Sector
  0        usr    wm                34       99.99GB          209698782
  1 unassigned    wm                 0           0               0
  2 unassigned    wm                 0           0               0
  3 unassigned    wm                 0           0               0
  4 unassigned    wm                 0           0               0
  5 unassigned    wm                 0           0               0
  6 unassigned    wm                 0           0               0
  8   reserved    wm         209698783        8.00MB          209715166
Also, if I select disks 2 or 3 in the format(1M) menu I get a warning that
the device is part of a ZFS pool and that I should see zpool(1M).
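I assume that warning is left over from the pool created on the cXt... name
in step 2; dumping the ZFS labels from the slice should confirm it, e.g.
(a check I have not actually run here):

   # zdb -l /dev/rdsk/c1t5006016030602568d0s0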
From the "ZFS Administration Guide", partition 8 seems to indicate that an
EFI label is now being used on the LUN. Furthermore, the Admin Guide says:
> To use whole disks, the disks must be named using the standard Solaris
> convention, such as /dev/dsk/cXtXdXsX. Some third-party drivers use a
> different naming convention or place disks in a location other than the
> /dev/dsk directory. To use these disks, you must manually label the disk
> and provide a slice to ZFS.
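For completeness, the manual route the Guide describes would presumably look
something like this (assuming I have the PowerPath naming right, i.e. that
emcpower0a is slice 0 of the pseudo-device):

   # format -e
     (select emcpower0a, partition it, then "label" and choose
      "[0] SMI Label")
   # zpool create mypool emcpower0a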
I'm not manually labeling the disk, but things are now working on the
pseudo-device.
Is this expected behaviour? Is there a reason why ZFS cannot access the
pseudo-device in a "raw" manner, even though /dev/dsk/emcpower0a exists
(see truss(1) output in previous message)?
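(A quick way to tell whether the EIO comes from the driver rather than from
ZFS itself would presumably be a plain raw read of the first block,
something like:

   # dd if=/dev/rdsk/emcpower0a of=/dev/null bs=512 count=1

assuming the rdsk node exists alongside the dsk one.)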
The Sun support guys deal more in the break-fix aspect of this, and not so
much in the "why?" part, but I asked them to follow up internally if they
can. I figured I'd post publicly in case a wider audience finds some of
this useful (and it'll be in the archives for anyone doing future
searches).
Thanks for any info.
Hi David,

Don't know whether my info is still helpful, but here it is anyway. I had
the same problem and solved it using the "format -e" command. When you then
enter the label option, you will get two choices:

format> label
[0] SMI Label
[1] EFI Label
Specify Label type[0]:

Choose zero and your disk will be a "SUN" disk again.

Grtz,
Philip.