cedric briner
2007-Feb-26 14:17 UTC
[zfs-discuss] zfs and iscsi: cannot open <device>: I/O error
hello,

I'm trying to consolidate my HDs in a cheap but (I hope) reliable manner. To do so, I was thinking of using ZFS over iSCSI. Unfortunately, I'm running into trouble when I do:

# iscsi server (nexenta alpha 5)
#------------
svcadm enable iscsitgt
iscsitadm delete target --lun 0 vol-1
iscsitadm list target                            # empty
iscsitadm create target -b /dev/dsk/c0d0s5 vol-1
iscsitadm list target                            # not empty
Target: vol-1
    iSCSI Name: iqn.1986-03.com.sun:02:662bd119-1660-6141-cea7-dd799d53b254.vol-1
    Connections: 0

# iscsi client (solaris 5.10, up-to-date)
#------------
iscsiadm add discovery-address 10.194.67.111     # (the iscsi server)
iscsiadm modify discovery --sendtargets enable
iscsiadm list discovery-address                  # not empty
iscsiadm list target                             # not empty
Target: iqn.1986-03.com.sun:02:662bd119-1660-6141-cea7-dd799d53b254.vol-1
    Alias: vol-1
    TPGT: 1
    ISID: 4000002a0000
    Connections: 1

devfsadm -i iscsi                                # to create the device on sf3
iscsiadm list target -Sv | egrep 'OS Device|Peer|Alias'    # not empty
    Alias: vol-1
    IP address (Peer): 10.194.67.111:3260
    OS Device Name: /dev/rdsk/c1t0100004005A267C100002A0045E2F524d0s2

zpool create tank c1t0100004005A267C100002A0045E2F524d0s2
cannot open '/dev/dsk/c1t0100004005A267C100002A0045E2F524d0s2': I/O error
#-----------------

The error above was produced with the iSCSI target type "disk". Following Roch's advice, I tried the different iSCSI target types (disk|raw|tape), but unfortunately the only type that accepts "iscsitadm create target -b /dev/dsk/c0d0s5" is "disk", and that is the one that doesn't work.

Any idea what I could do to get past this?

thanks in advance

Ced.

--

Cedric BRINER
Geneva - Switzerland
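Before suspecting the initiator, it can be worth confirming on the target host that the backing slice itself is readable and not in use by anything else. A minimal sanity check, assuming the host and slice names from the transcript above and standard Solaris tools:

# on the iscsi server
mount | grep c0d0s5                               # should print nothing
swap -l                                           # c0d0s5 should not appear as a swap device
prtvtoc /dev/rdsk/c0d0s5                          # the slice should carry a sane label and size
dd if=/dev/rdsk/c0d0s5 of=/dev/null bs=512 count=1   # one-block read test
iscsitadm list target -v vol-1                    # verbose listing of the target just created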
On 2/26/07, cedric briner <work at infomaniak.ch> wrote:
> hello,
>
> I'm trying to consolidate my HDs in a cheap but (I hope) reliable
> manner. To do so, I was thinking of using ZFS over iSCSI.
>
> Unfortunately, I'm running into trouble when I do:
>
> # iscsi server (nexenta alpha 5)
> #------------
> svcadm enable iscsitgt
> iscsitadm delete target --lun 0 vol-1
> iscsitadm list target                            # empty
> iscsitadm create target -b /dev/dsk/c0d0s5 vol-1
> iscsitadm list target                            # not empty
> Target: vol-1
>     iSCSI Name: iqn.1986-03.com.sun:02:662bd119-1660-6141-cea7-dd799d53b254.vol-1
>     Connections: 0
>
> # iscsi client (solaris 5.10, up-to-date)
> #------------
>
> iscsiadm add discovery-address 10.194.67.111     # (the iscsi server)
> iscsiadm modify discovery --sendtargets enable
> iscsiadm list discovery-address                  # not empty
> iscsiadm list target                             # not empty
> Target: iqn.1986-03.com.sun:02:662bd119-1660-6141-cea7-dd799d53b254.vol-1
>     Alias: vol-1
>     TPGT: 1
>     ISID: 4000002a0000
>     Connections: 1
>
> devfsadm -i iscsi                                # to create the device on sf3
> iscsiadm list target -Sv | egrep 'OS Device|Peer|Alias'    # not empty
>     Alias: vol-1
>     IP address (Peer): 10.194.67.111:3260
>     OS Device Name: /dev/rdsk/c1t0100004005A267C100002A0045E2F524d0s2
>
> zpool create tank c1t0100004005A267C100002A0045E2F524d0s2
> cannot open '/dev/dsk/c1t0100004005A267C100002A0045E2F524d0s2': I/O error
>
> #-----------------
> The error above was produced with the iSCSI target type "disk". Following
> Roch's advice, I tried the different iSCSI target types (disk|raw|tape),
> but unfortunately the only type that accepts "iscsitadm create target -b
> /dev/dsk/c0d0s5" is "disk", and that is the one that doesn't work.
>
> Any idea what I could do to get past this?

Does the device "/dev/dsk/c1t0100004005A267C100002A0045E2F524d0s2" exist (you can run `ls /dev/dsk/c1t0100004005A267C100002A0045E2F524d0s2' to check)? What happens if you try to access the device after running devfsadm -C?

- Ryan

--
UNIX Administrator
http://prefetch.net
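Spelled out as commands on the initiator, the checks suggested above might look like this (a sketch; it assumes devfsadm -C can be combined with -i iscsi to prune stale links, as on Solaris 10):

ls -lL /dev/dsk/c1t0100004005A267C100002A0045E2F524d0s2   # does the link still resolve to a device node?
devfsadm -C -i iscsi          # rebuild the iscsi device links and remove any stale ones
format </dev/null             # list the disks now visible; compare the GUID in the
                              # c1t<GUID>d0 name with the "OS Device Name" reported above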
cedric briner
2007-Feb-26 15:39 UTC
[zfs-discuss] zfs and iscsi: cannot open <device>: I/O error
>> devfsadm -i iscsi                                # to create the device on sf3
>> iscsiadm list target -Sv | egrep 'OS Device|Peer|Alias'    # not empty
>>     Alias: vol-1
>>     IP address (Peer): 10.194.67.111:3260
>>     OS Device Name: /dev/rdsk/c1t0100004005A267C100002A0045E2F524d0s2

This is where my confusion began. I don't know what the device c1t0....4d0s2 is for; I mean, what does it represent?

I've found that the "OS Device Name" (c1t0....4d0s2) is created after the invocation:

devfsadm -i iscsi                                # to create the device on sf3

but it is not a device that you can actually use. The usable device only shows up with the command:

format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c0t0d0 <IC35L120AVV207-0 cyl 59129 alt 2 hd 16 sec 255>
          /pci@1f,0/ide@d/dad@0,0
       1. c0t2d0 <IC35L120- VNC602A6G9E2T-0001-115.04GB>
          /pci@1f,0/ide@d/dad@2,0
       2. c1t0100004005A267C100002A0045E308D2d0 <SUN-SOLARIS-1-6.68GB>
          /scsi_vhci/ssd@g0100004005a267c100002a0045e308d2

and then, if you create the zpool with:

zpool create tank c1t0100004005A267C100002A0045E308D2d0

it works!!

BUT.. BUT... and re-BUT

With all this virtualization, how can I link a device name on my iSCSI client to the corresponding device name on my iSCSI server? Imagine that you are in my situation: I want to have (let's say) 4 iSCSI servers with at most 16 disks attached to each, and at least 2 iSCSI clients consolidating that space with ZFS. Suddenly zpool shows that a disk is dead. I have to be able to replace that disk, and for that I need to know which of the 4 machines it lives on and which disk it is.

So, do some of you know a little bit about this?

Ced.
--

Cedric BRINER
Geneva - Switzerland
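One way to narrow a failed pool device down to a particular server from the initiator side is to match the GUID embedded in its cXt<GUID>d0 name against the initiator's target list. A sketch, assuming the pool and target names used in this thread:

zpool status -v tank          # note the c1t<GUID>d0 device reported as faulted
iscsiadm list target -Sv | egrep 'Target:|Alias|Peer|OS Device'
# the entry whose "OS Device Name" carries the same GUID gives you that target's
# Alias and its "IP address (Peer)", i.e. which of the 4 iscsi servers holds the disk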
Rick McNeal
2007-Mar-05 17:28 UTC
[zfs-discuss] zfs and iscsi: cannot open <device>: I/O error
If you have questions about iSCSI, I would suggest sending them to storage-discuss at opensolaris.org. I read that mailing list a little more often, so you'll get a quicker response.

On Feb 26, 2007, at 8:39 AM, cedric briner wrote:

> >> devfsadm -i iscsi                                # to create the device on sf3
> >> iscsiadm list target -Sv | egrep 'OS Device|Peer|Alias'    # not empty
> >>     Alias: vol-1
> >>     IP address (Peer): 10.194.67.111:3260
> >>     OS Device Name: /dev/rdsk/c1t0100004005A267C100002A0045E2F524d0s2
>
> This is where my confusion began. I don't know what the device
> c1t0....4d0s2 is for; I mean, what does it represent?
>

Normally the "OS Device Name:" would be exactly the same name that you see when you run format. I don't know why you're seeing two different names. What version of Solaris are you running on the initiator?

The device names contain the Globally Unique IDentifier (GUID). The main benefit is that if you have multiple Solaris machines which can attach to the same device, the pathname will be consistent across those machines.

> I've found that the "OS Device Name" (c1t0....4d0s2) is created after
> the invocation:
>
> devfsadm -i iscsi                                # to create the device on sf3
>
> but it is not a device that you can actually use. The usable device only
> shows up with the command:
>
> format
> Searching for disks...done
>
> AVAILABLE DISK SELECTIONS:
>        0. c0t0d0 <IC35L120AVV207-0 cyl 59129 alt 2 hd 16 sec 255>
>           /pci@1f,0/ide@d/dad@0,0
>        1. c0t2d0 <IC35L120- VNC602A6G9E2T-0001-115.04GB>
>           /pci@1f,0/ide@d/dad@2,0
>        2. c1t0100004005A267C100002A0045E308D2d0 <SUN-SOLARIS-1-6.68GB>
>           /scsi_vhci/ssd@g0100004005a267c100002a0045e308d2
>
> and then, if you create the zpool with:
>
> zpool create tank c1t0100004005A267C100002A0045E308D2d0
>
> it works!!
>
> BUT.. BUT... and re-BUT
>
> With all this virtualization, how can I link a device name on my iSCSI
> client to the corresponding device name on my iSCSI server?
>

Look at the "Alias" value which is reported by the initiator. You can use that to find the device on the storage array. This assumes that you don't create duplicate "Alias" strings, of course.

> Imagine that you are in my situation: I want to have (let's say) 4 iSCSI
> servers with at most 16 disks attached to each, and at least 2 iSCSI
> clients consolidating that space with ZFS. Suddenly zpool shows that a
> disk is dead. I have to be able to replace that disk, and for that I need
> to know which of the 4 machines it lives on and which disk it is.
>
> So, do some of you know a little bit about this?
>

If you post iSCSI related questions to storage-discuss you'll find many people who've been using both the initiator and target and are quite knowledgeable. Also, the Solaris iSCSI developers read the storage-discuss list more frequently than this one.

> Ced.
> --
>
> Cedric BRINER
> Geneva - Switzerland

----
Rick McNeal

"If ignorance is bliss, this lesson would appear to be a deliberate attempt on your part to deprive me of happiness, the pursuit of which is my unalienable right according to the Declaration of Independence. I therefore assert my patriotic prerogative not to know this material. I'll be out on the playground." -- Calvin
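Once the right server has been picked out (via the Alias, as suggested above, or via the Peer address), the alias can be mapped back to the physical backing store on that server. A sketch, assuming the verbose iscsitadm listing on the Nexenta/Solaris target reports the backing store for each LUN:

# on the iscsi server identified by the Alias / Peer address
iscsitadm list target -v vol-1
# the verbose output should include the target's iSCSI Name (whose GUID matches the
# initiator-side device name) and the backing store, e.g. /dev/dsk/c0d0s5; that
# backing device is the disk to pull and replace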