Please reply to david.curtis at sun.com

******** Background / configuration **************

zpool will not create a storage pool on fibre channel storage. I'm
attached to an IBM SVC using the IBMsdd driver. I have no problem using
SVM metadevices and UFS on these devices.

List steps to reproduce the problem (if applicable):
Build a Solaris 10 Update 2 server
Attach to an external storage array via IBM SVC
Load the lpfc driver (6.02h)
Load the IBMsdd software (1.6.1.0-2)
Attempt to use zpool create to make a storage pool:
# zpool create -f extdisk vpath1c
internal error: unexpected error 22 at line 446 of ../common/libzfs_pool.c

********* reply to customer ********************

It looks like you have an additional, unwanted software layer between
Solaris and the disk hardware. Currently ZFS needs to access the
physical device to work correctly. Something like:

# zpool create -f extdisk c5t0d0 c5t1d0 ...

Let me know if this works for you.

************* follow-up question from customer ************

Yes, using the c#t#d# disks works, but anyone using fibre-channel storage
on something like an IBM Shark or EMC Clariion will want multiple paths to
disk using either IBMsdd, EMCpower or Solaris native MPIO. Does ZFS
work with any of these fibre channel multipathing drivers?

Thanks for any assistance you can provide.
--

David Curtis - TSE                      Sun Microsystems
303-272-6628                            Enterprise Services
david.curtis at sun.com                 OS / Installation Support
Monday to Friday 9:00 AM to 6:00 PM Mountain
This suggests that there is some kind of bug in the layered storage
software. ZFS doesn't do anything special to the underlying storage
device; it merely relies on a few ldi_*() routines. I would try running
the following dtrace script:

#!/usr/sbin/dtrace -s

vdev_disk_open:return,
ldi_open_by_name:return,
ldi_open_by_path:return,
ldi_get_size:return
{
        trace(arg1);
}

And then re-run your 'zpool create' command. That will at least get
us pointed in the right direction.

- Eric

On Wed, Jul 26, 2006 at 09:47:03AM -0600, David Curtis wrote:
> zpool will not create a storage pool on fibre channel storage. I'm
> attached to an IBM SVC using the IBMsdd driver.
> [...]
> # zpool create -f extdisk vpath1c
> internal error: unexpected error 22 at line 446 of ../common/libzfs_pool.c
> [...]

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
zfs should work fine with disks under the control of solaris mpxio.
i don't know about any of the other multipathing solutions.

if you're trying to use a device that's controlled by another
multipathing solution, you might want to try specifying the full path
to the device, ex:

        zpool create -f extdisk /dev/foo2/vpath1c

ed

On Wed, Jul 26, 2006 at 09:47:03AM -0600, David Curtis wrote:
> Does ZFS work with any of these fibre channel multipathing drivers?
> [...]
Eric,

Here is what the customer gets trying to create the pool using the
software alias (I added all the ldi_open's to the script):

# zpool create -f extdisk vpath1c

# ./dtrace.script
dtrace: script './dtrace.script' matched 6 probes
CPU     ID                    FUNCTION:NAME
  0   7233             ldi_open_by_vp:return         0
  0  15801            ldi_open_by_dev:return         0
  0   7233             ldi_open_by_vp:return         0
  0  15801            ldi_open_by_dev:return         0
  0   7233             ldi_open_by_vp:return         0
  0  15801            ldi_open_by_dev:return         0
  0   7233             ldi_open_by_vp:return         0
  0  15801            ldi_open_by_dev:return         0
  0   7233             ldi_open_by_vp:return         0
  0  15801            ldi_open_by_dev:return         0
  0   7233             ldi_open_by_vp:return         0
  0  15801            ldi_open_by_dev:return         0
  0   7233             ldi_open_by_vp:return         0
  0  15801            ldi_open_by_dev:return         0
  0   7233             ldi_open_by_vp:return         0
  0  15801            ldi_open_by_dev:return         0
  0   7233             ldi_open_by_vp:return         0
  0  17817           ldi_open_by_name:return         0
  0  16191               ldi_get_size:return        -1
  0  44942             vdev_disk_open:return        22

Thanks,
David

Eric Schrock wrote On 07/26/06 10:03 AM:
> This suggests that there is some kind of bug in the layered storage
> software. ZFS doesn't do anything special to the underlying storage
> device; it merely relies on a few ldi_*() routines.
> [...]

--

David Curtis - TSE                      Sun Microsystems
303-272-6628                            Enterprise Services
david.curtis at sun.com                 OS / Installation Support
Monday to Friday 9:00 AM to 6:00 PM Mountain
So it does look like something's messed up here. Before we pin this
down as a driver bug, we should double check that we are indeed opening
what we think we're opening, and try to track down why ldi_get_size is
failing. Try this:

#!/usr/sbin/dtrace -s

ldi_open_by_name:entry
{
        trace(stringof(args[0]));
}

ldi_prop_exists:entry
{
        trace(stringof(args[2]));
}

ldi_prop_exists:return
{
        trace(arg1);
}

ldi_get_otyp:return
{
        trace(arg1);
}

- Eric

On Wed, Jul 26, 2006 at 12:49:35PM -0600, David Curtis wrote:
> Here is what the customer gets trying to create the pool using the
> software alias (I added all the ldi_open's to the script):
> [...]
>   0  17817           ldi_open_by_name:return         0
>   0  16191               ldi_get_size:return        -1
>   0  44942             vdev_disk_open:return        22

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
Eric,

Here is the output:

# ./dtrace2.dtr
dtrace: script './dtrace2.dtr' matched 4 probes
CPU     ID                    FUNCTION:NAME
  0  17816           ldi_open_by_name:entry    /dev/dsk/vpath1c
  0  16197               ldi_get_otyp:return         0
  0  15546            ldi_prop_exists:entry     Nblocks
  0  15547           ldi_prop_exists:return          0
  0  15546            ldi_prop_exists:entry     nblocks
  0  15547           ldi_prop_exists:return          0
  0  15546            ldi_prop_exists:entry     Size
  0  15547           ldi_prop_exists:return          0
  0  15546            ldi_prop_exists:entry     size
  0  15547           ldi_prop_exists:return          0

Thanks,
David

Eric Schrock wrote On 07/26/06 01:01 PM:
> So it does look like something's messed up here. Before we pin this
> down as a driver bug, we should double check that we are indeed opening
> what we think we're opening, and try to track down why ldi_get_size is
> failing.
> [...]

--

David Curtis - TSE                      Sun Microsystems
303-272-6628                            Enterprise Services
david.curtis at sun.com                 OS / Installation Support
Monday to Friday 9:00 AM to 6:00 PM Mountain
On Wed, Jul 26, 2006 at 02:11:44PM -0600, David Curtis wrote:
> Here is the output:
>
> # ./dtrace2.dtr
> dtrace: script './dtrace2.dtr' matched 4 probes
> CPU     ID                    FUNCTION:NAME
>   0  17816           ldi_open_by_name:entry    /dev/dsk/vpath1c
>   0  16197               ldi_get_otyp:return         0
>   0  15546            ldi_prop_exists:entry     Nblocks
>   0  15547           ldi_prop_exists:return          0
>   0  15546            ldi_prop_exists:entry     nblocks
>   0  15547           ldi_prop_exists:return          0
>   0  15546            ldi_prop_exists:entry     Size
>   0  15547           ldi_prop_exists:return          0
>   0  15546            ldi_prop_exists:entry     size
>   0  15547           ldi_prop_exists:return          0

OK, this definitely seems to be a driver bug. I'm no driver expert, but
it seems that exporting none of the above properties is a problem - ZFS
has no idea how big this disk is! Perhaps someone more familiar with
the DDI/LDI interfaces can explain the appropriate way to implement
these on the driver end.

But at this point it's safe to say that ZFS isn't doing anything wrong.
The layered driver is exporting a device in /dev/dsk, but not exporting
basic information (such as the size or number of blocks) that ZFS (and
potentially the rest of Solaris) needs to interact with the device.

- Eric

--
Eric Schrock, Solaris Kernel Development       http://blogs.sun.com/eschrock
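For anyone following along at the code level, the all-zero ldi_prop_exists()
returns in the trace above map directly onto the decision ldi_get_size()
makes. The following is a simplified, editorial paraphrase of that logic,
not the actual OpenSolaris sunldi.c source (the real routine also checks
the open type and honors a driver-supplied "blksize" property); it shows
why a device that answers "no" to all four property lookups ends up as the
"unexpected error 22" (EINVAL) that zpool printed.

#include <sys/param.h>
#include <sys/sunddi.h>
#include <sys/sunldi.h>

/*
 * Simplified paraphrase of ldi_get_size(): the size of a layered-open
 * device is taken entirely from properties that the underlying driver
 * exports.  If none of Nblocks/nblocks/Size/size exists, the call
 * fails, and ZFS's vdev_disk_open() reports EINVAL (22).
 */
static int
ldi_get_size_sketch(ldi_handle_t lh, uint64_t *sizep)
{
        if (ldi_prop_exists(lh, LDI_DEV_T_ANY | DDI_PROP_DONTPASS,
            "Nblocks")) {
                /* Block count, scaled by the block size (DEV_BSIZE here;
                 * the real code also consults a "blksize" property). */
                *sizep = (uint64_t)ldi_prop_get_int64(lh,
                    LDI_DEV_T_ANY | DDI_PROP_DONTPASS, "Nblocks", 0) *
                    DEV_BSIZE;
                return (DDI_SUCCESS);
        }
        /* ...the lower-case "nblocks" property is tried the same way... */

        if (ldi_prop_exists(lh, LDI_DEV_T_ANY | DDI_PROP_DONTPASS,
            "Size")) {
                /* Size is already a byte count. */
                *sizep = (uint64_t)ldi_prop_get_int64(lh,
                    LDI_DEV_T_ANY | DDI_PROP_DONTPASS, "Size", 0);
                return (DDI_SUCCESS);
        }
        /* ...and the lower-case "size" property likewise... */

        /* The driver exports no size information at all. */
        return (DDI_FAILURE);
}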
Does format show these drives to be available and containing a non-zero size?

Eric Schrock wrote:
> OK, this definitely seems to be a driver bug. I'm no driver expert, but
> it seems that exporting none of the above properties is a problem - ZFS
> has no idea how big this disk is!
> [...]
zfs depends on ldi_get_size(), which depends on the device being
accessed exporting one of the properties below. i guess the devices
generated by IBMsdd and/or EMCpower don't generate these properties.

ed

On Wed, Jul 26, 2006 at 01:53:31PM -0700, Eric Schrock wrote:
> > # ./dtrace2.dtr
> > dtrace: script './dtrace2.dtr' matched 4 probes
> > CPU     ID                    FUNCTION:NAME
> >   0  15546            ldi_prop_exists:entry     Nblocks
> >   0  15546            ldi_prop_exists:entry     nblocks
> >   0  15546            ldi_prop_exists:entry     Size
> >   0  15546            ldi_prop_exists:entry     size
> > [...]
>
> OK, this definitely seems to be a driver bug. I'm no driver expert, but
> it seems that exporting none of the above properties is a problem - ZFS
> has no idea how big this disk is! Perhaps someone more familiar with
> the DDI/LDI interfaces can explain the appropriate way to implement
> these on the driver end.
> [...]
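To sketch what "exporting these properties" means on the driver side:
this is an editorial illustration, not code from IBMsdd or EMCpower,
and the xx_ names, soft-state structure, and capacity field are
hypothetical placeholders. A Solaris block driver normally answers
these lookups from its prop_op(9E) entry point, typically by handing
the request to ddi_prop_op_nblocks(9F) or ddi_prop_op_size(9F), much
as the sd driver does.

#include <sys/conf.h>
#include <sys/ddi.h>
#include <sys/sunddi.h>

/* Hypothetical per-instance state; a real driver keeps its own. */
struct xx_softc {
        uint64_t        xx_nblocks;     /* LUN capacity in DEV_BSIZE blocks */
};

static void     *xx_state;      /* set up with ddi_soft_state_init() in _init() */

static int
xx_prop_op(dev_t dev, dev_info_t *dip, ddi_prop_op_t prop_op,
    int mod_flags, char *name, caddr_t valuep, int *lengthp)
{
        struct xx_softc *sc;

        sc = ddi_get_soft_state(xx_state, ddi_get_instance(dip));
        if (sc == NULL || dev == DDI_DEV_T_ANY) {
                /* Capacity not known yet: fall back to the generic handler. */
                return (ddi_prop_op(dev, dip, prop_op, mod_flags,
                    name, valuep, lengthp));
        }

        /*
         * ddi_prop_op_nblocks() answers the Nblocks/nblocks/Size/size
         * requests that ldi_get_size() makes, and passes every other
         * property request on to ddi_prop_op().
         */
        return (ddi_prop_op_nblocks(dev, dip, prop_op, mod_flags,
            name, valuep, lengthp, sc->xx_nblocks));
}

The entry point is hooked up through the cb_prop_op field of the
driver's cb_ops(9S) structure; a driver that leaves that field pointing
at plain ddi_prop_op() will show exactly the all-zero ldi_prop_exists()
trace seen earlier.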
> Does ZFS work with any of these fibre channel multipathing drivers?

As a side note: EMCpower does work with ZFS:

# pkginfo -l EMCpower
[...]
   VERSION:  4.5.0_b169

# zpool create test emcpower3c
warning: device in use checking failed: No such device
# zfs list test
NAME   USED  AVAIL  REFER  MOUNTPOINT
test    76K  8.24G  24.5K  /test

ZFS does not re-label the disk though, so you have to create an EFI
label through some other means. MPxIO works out-of-the-box.