Tano
2008-Oct-15 04:39 UTC
[zfs-discuss] HELP! SNV_97,98,99 zfs with iscsitadm and VMWare!
I'm not sure if this is a problem with the iSCSI target or ZFS; I'd greatly appreciate it if this gets moved to the proper list. I'm just about out of ideas on what might be wrong.

Quick history:

I installed OpenSolaris 2008.05 when it was SNV_86 to try out ZFS with VMware. I found that multiple LUNs were being treated as multipaths, so I waited until SNV_94 came out to fix the issues with VMware and iscsitadm/zfs shareiscsi=on.

I installed OS 2008.05 on a virtual machine as a test bed, ran pkg image-update to SNV_94 a month ago, made some thin-provisioned volumes, shared them with iscsitadm, and mounted them on VMware without any problems. I ran Storage VMotion and all went well.

With this success, I purchased a Dell 1900 with a PERC 5/i controller and 6 x 15K SAS drives in a ZFS RAIDZ1 configuration. I shared the ZFS volumes and mounted them on VMware. Everything is great until I have to write to the disks. It won't write!

Steps I took creating the disks:

1) Installed mega_sas drivers.
2) zpool create tank raidz c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0
3) zfs create -V 1TB tank/disk1
4) zfs create -V 1TB tank/disk2
5) iscsitadm create target -b /dev/zvol/rdsk/tank/disk1 LABEL1
6) iscsitadm create target -b /dev/zvol/rdsk/tank/disk2 LABEL2

Both drives end up as LUN 0, but with unique VMHBA device identifiers, so they are detected as separate drives.

I then deleted the targets from steps 5 and 6 and recreated them with explicit LUN numbers:

5) iscsitadm create target -u 0 -b /dev/zvol/rdsk/tank/disk1 LABEL1
6) iscsitadm create target -u 1 -b /dev/zvol/rdsk/tank/disk2 LABEL2

VMware discovers the separate LUNs on the device identifier, but I am still unable to write to the iSCSI LUNs.

Why is it that the steps I've conducted in SNV_94 work, but in SNV_97, 98, or 99 don't? Any ideas? Any log files I can check? I am still an ignorant Linux user, so I only know to look in /var/log :)
--
This message posted from opensolaris.org
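For reference, the provisioning steps above can be collected into one script. This is only a sketch of the commands from the post (pool, volume, and target names as given there), with a dry-run guard added so the sequence can be reviewed before touching a live system:

```shell
#!/bin/sh
# Sketch of the provisioning steps from the post.
# DRYRUN=1 (default) only prints each command; set DRYRUN=0 to execute.
DRYRUN=${DRYRUN:-1}

run() {
    if [ "$DRYRUN" -eq 1 ]; then
        echo "$@"          # show the command instead of running it
    else
        "$@" || exit 1     # stop on the first failure
    fi
}

# One RAIDZ1 pool over the six PERC-attached disks
run zpool create tank raidz c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0

# Two 1 TB zvols to back the iSCSI LUNs
run zfs create -V 1T tank/disk1
run zfs create -V 1T tank/disk2

# Explicit LUN numbers (-u) so the initiator sees two LUNs, not multipaths
run iscsitadm create target -u 0 -b /dev/zvol/rdsk/tank/disk1 LABEL1
run iscsitadm create target -u 1 -b /dev/zvol/rdsk/tank/disk2 LABEL2
```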
Ryan Arneson
2008-Oct-16 17:31 UTC
[zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!
Tano wrote:
> I'm not sure if this is a problem with the iscsitarget or zfs. I'd
> greatly appreciate it if it gets moved to the proper list.
>
> [...]
>
> I shared the zfs partitions and mounted them on VMware. Everything is
> great till I have to write to the disks.
>
> It won't write!

What's the error exactly? What step are you performing to get the error? Creating the vmfs3 filesystem? Accessing the mountpoint?

> Steps I took creating the disks
>
> 1) Installed mega_sas drivers.
> 2) zpool create tank raidz c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0
> 3) zfs create -V 1TB tank/disk1
> 4) zfs create -V 1TB tank/disk2
> 5) iscsitadm create target -b /dev/zvol/rdsk/tank/disk1 LABEL1
> 6) iscsitadm create target -b /dev/zvol/rdsk/tank/disk2 LABEL2
>
> [...]
>
> Why is it that the steps I've conducted in SNV_94 work but in SNV_97,
> 98, or 99 don't?
>
> Any ideas?? Any log files I can check?
> I am still an ignorant linux user so I only know to look in /var/log :)

The relevant errors from /var/log/vmkernel on the ESX server would be helpful. Also:

iscsitadm list target -v

Also, I blogged a bit on OpenSolaris iSCSI & VMware ESX; I was using b98 on an X4500:

http://blogs.sun.com/rarneson/entry/zfs_clones_iscsi_and_vmware

--
Ryan Arneson
Sun Microsystems, Inc.
303-223-6264
ryan.arneson at sun.com
http://blogs.sun.com/rarneson
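The vmkernel log is verbose, so it can help to filter it down to the lines that usually matter when iSCSI sessions or writes are failing. A minimal sketch; the pattern list here is illustrative, not exhaustive, and can be extended as other error strings appear:

```shell
#!/bin/sh
# Filter an ESX vmkernel log down to likely iSCSI/SCSI trouble lines.
# The matched strings are common failure markers, not a complete list.
scan_vmkernel() {
    grep -E 'failed to connect|connect failed|Abort failed|SCSI error|WARNING' \
        "${1:-/var/log/vmkernel}"
}

# Example (run on the ESX host):
#   scan_vmkernel /var/log/vmkernel
```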
Tano
2008-Oct-16 20:49 UTC
[zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!
Thank you, Ryan, for your response. I have included all the information you requested inline in this message. I will also be testing SNV_86 again to see whether the problem persists there; maybe it's my hardware. I will confirm that soon enough.

On Thu, October 16, 2008 10:31 am, Ryan Arneson wrote:
> Tano wrote:
>> [...]
>>
>> It won't write!
>
> What's the error exactly?

From the VMware Infrastructure front end, everything looks like it is in order. I "Send Targets" to the iSCSI IP, then rescan the HBA, and it detects all the LUNs and targets.

> What step are you performing to get the error? Creating the vmfs3
> filesystem? Accessing the mountpoint?

The error occurs when attempting to write large data sets to the mount point. Formatting the drive VMFS3 works, and manually copying 5 megabytes of data to the target works. Running cp -a of the VM folder or a cold VM migration will hang the Infrastructure client, and the ESX host lags. No timeouts of any sort occur.
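One way to see whether the hang described above is specific to VMware's copy path or hits any large sequential write is to push a big file of zeros at the datastore with dd from the ESX service console. This is a sketch; the datastore path in the example is a placeholder, not a path from this thread:

```shell
#!/bin/sh
# Minimal large-write probe: writes N one-megabyte blocks of zeros to a
# destination file, so a plain sequential write can be timed and observed
# independently of VMware's copy machinery.
write_test() {
    # $1 = destination file, $2 = number of 1 MB blocks
    dd if=/dev/zero of="$1" bs=1048576 count="$2" 2>/dev/null
}

# Example (path is a placeholder for the real VMFS mountpoint):
#   time write_test /vmfs/volumes/SOMELUN/ddtest.bin 512
```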
I waited up to an hour.

>> Steps I took creating the disks
>> [...]
>
> The relevant errors from /var/log/vmkernel on the ESX server would be
> helpful.

So I weeded out the logs from /var/log/vmkernel as best I could. Every time I initiated a command from VMware, I captured the logs, and I have noted below what I was doing at each point in them.

Again, the complete breakdown of both systems:

[b]VMware ESX 3.5 Update 2[/b]

[root at vmware-860-1 log]# uname -a
Linux vmware-860-1.ucr.edu 2.4.21-57.ELvmnix #1 Tue Aug 12 17:28:03 PDT 2008 i686 i686 i386 GNU/Linux
[root at vmware-860-1 log]# arch
i686

[b]Opensolaris:[/b]

Dell Poweredge 1900
PERC 5/i
6 disks, 450GB each, SAS 15kRPM
Broadcomm BNX driver: no conflicts.
Quadcore 1600 Mhz 1066 FSB
8 GB RAM

root at iscsi-sas:~# uname -a
SunOS iscsi-sas 5.11 snv_99 i86pc i386 i86pc Solaris
root at iscsi-sas:~# isainfo -v
64-bit amd64 applications
        ssse3 cx16 mon sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu
32-bit i386 applications
        ssse3 ahf cx16 mon sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu
root at iscsi-sas:~# zpool status -v
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c3t0d0s0  ONLINE       0     0     0
            c3t1d0    ONLINE       0     0     0

errors: No known data errors

  pool: vdrive
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        vdrive      ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c5t0d0  ONLINE       0     0     0
            c5t1d0  ONLINE       0     0     0
            c5t2d0  ONLINE       0     0     0
            c5t3d0  ONLINE       0     0     0
            c5t4d0  ONLINE       0     0     0
            c5t5d0  ONLINE       0     0     0

errors: No known data errors
root at iscsi-sas:~#
root at iscsi-sas:~# zfs create -V 750G vdrive/LUNA
root at iscsi-sas:~# zfs create -V 1250G vdrive/LUNB
root at iscsi-sas:~# zfs list
NAME                           USED   AVAIL  REFER  MOUNTPOINT
rpool                         5.12G    140G  56.5K  /rpool
rpool/ROOT                    5.11G    140G    18K  /rpool/ROOT
rpool/ROOT/opensolaris        3.07G    140G  2.90G  legacy
rpool/ROOT/opensolaris-1      2.04G    140G  3.22G  /tmp/tmp4qmK4_
rpool/ROOT/opensolaris-1/opt   216K    140G  3.60M  /tmp/tmp4qmK4_/opt
rpool/ROOT/opensolaris/opt    3.61M    140G  3.61M  /opt
rpool/export                  14.3M    140G    19K  /export
rpool/export/home             14.3M    140G  14.2M  /export/home
vdrive                        1.95T   39.5G  29.9K  /vdrive
vdrive/LUNA                    750G    790G  26.6K  -
vdrive/LUNB                   1.22T   1.26T  26.6K  -
root at iscsi-sas:~# iscsitadm create target -b /dev/zvol/rdsk/vdrive/LUNA LUNA
root at iscsi-sas:~# iscsitadm create target -b /dev/zvol/rdsk/vdrive/LUNB LUNB
root at iscsi-sas:~# iscsitadm list target -v
Target: luna
    iSCSI Name: iqn.1986-03.com.sun:02:de84a16a-aa8e-c51f-ad5c-a198289b0bc6.luna
    Connections: 0
    ACL list:
    TPGT list:
    LUN information:
        LUN: 0
            GUID: 0
            VID: SUN
            PID: SOLARIS
            Type: disk
            Size: 750G
            Backing store: /dev/zvol/rdsk/vdrive/LUNA
            Status: online
Target: lunb
    iSCSI Name:
iqn.1986-03.com.sun:02:764f3402-8be3-ee7f-ee04-d0052421c44a.lunb
    Connections: 0
    ACL list:
    TPGT list:
    LUN information:
        LUN: 0
            GUID: 0
            VID: SUN
            PID: SOLARIS
            Type: disk
            Size: 1.2T
            Backing store: /dev/zvol/rdsk/vdrive/LUNB
            Status: online
root at iscsi-sas:~#

Logs before attempting to access the iSCSI target daemon from VMware:

[root at vmware-860-1 log]# tail -f /var/log/vmkernel
Oct 16 04:37:36 vmware-860-1 vmkernel: 0:00:02:07.345 cpu1:1033)Config: 414: "HostLocalSwapDirEnabled" = 0, Old Value: 0, (Status: 0x0)
Oct 16 04:47:18 vmware-860-1 vmkernel: 0:00:11:48.805 cpu1:1035)BC: 814: FileIO failed with 0x0xbad0006(Limit exceeded)

[b]Once I connect VMWare to the iscsi target on Opensolaris:[/b]

Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:00.951 cpu0:1035)iSCSI: bus 0 target 35 = iqn.1986-03.com.sun:02:764f3402-8be3-ee7f-ee04-d0052421c44a.lunb
Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:00.951 cpu0:1035)iSCSI: bus 0 target 35 portal 0 = address 138.23.117.32 port 3260 group 1
Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:00.951 cpu0:1035)LinStubs: 2045: fn=0x9bb1d8 arg=0x0
Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:00.951 cpu0:1035)World: vm 1068: 900: Starting world driver with flags 1
Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:00.951 cpu1:1068)LinStubs: 2114: fn=0x9bb1d8 arg=0x0
Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:00.951 cpu0:1035)LinStubs: 2045: fn=0x9b186c arg=0x9a03f90
Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:00.951 cpu0:1035)World: vm 1069: 900: Starting world driver with flags 1
Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:00.951 cpu0:1035)LinStubs: 2045: fn=0x9b7cd0 arg=0x9a03f90
Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:00.951 cpu0:1035)World: vm 1070: 900: Starting world driver with flags 1
Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:00.952 cpu0:1070)LinStubs: 2114: fn=0x9b7cd0 arg=0x9a03f90
Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:00.952 cpu0:1070)iSCSI: bus 0 target 35 trying to establish session 0x9a03f90 to
portal 0, address 138.23.117.32 port 3260 group 1 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:00.952 cpu0:1069)LinStubs: 2114: fn=0x9b186c arg=0x9a03f90 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:00.962 cpu0:1070)iSCSI: bus 0 target 35 established session 0x9a03f90 #1 to portal 0, address 138.23.117.32 port 3260 group 1, alias lunb Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:00.971 cpu0:1035)iSCSI: bus 0 target 36 = iqn.1986-03.com.sun:02:764f3402-8be3-ee7f-ee04-d0052421c44a.lunb Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:00.971 cpu0:1035)iSCSI: bus 0 target 36 portal 0 = address 0.0.0.0 port 3260 group 1 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:00.971 cpu0:1035)LinStubs: 2045: fn=0x9b186c arg=0x9a180a0 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:00.971 cpu0:1035)World: vm 1071: 900: Starting world driver with flags 1 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:00.971 cpu0:1035)LinStubs: 2045: fn=0x9b7cd0 arg=0x9a180a0 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:00.971 cpu0:1035)World: vm 1072: 900: Starting world driver with flags 1 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:00.971 cpu1:1072)LinStubs: 2114: fn=0x9b7cd0 arg=0x9a180a0 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:00.971 cpu1:1072)iSCSI: bus 0 target 36 trying to establish session 0x9a180a0 to portal 0, address 0.0.0.0 port 3260 group 1 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:00.971 cpu1:1071)LinStubs: 2114: fn=0x9b186c arg=0x9a180a0 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:00.971 cpu1:1072)iSCSI: session 0x9a180a0 to iqn.1986-03.com.sun:02:764f3402-8be3-ee7f-ee04-d0052421c44a.lunb failed to connect, rc -5, I/O error Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:00.971 cpu1:1072)iSCSI: session 0x9a180a0 connect failed at 606098 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:00.971 cpu1:1072)iSCSI: session 0x9a180a0 to iqn.1986-03.com.sun:02:764f3402-8be3-ee7f-ee04-d0052421c44a.lunb waiting 1 seconds before next login attempt Oct 16 06:16:30 vmware-860-1 
vmkernel: 0:01:41:00.991 cpu0:1035)iSCSI: bus 0 target 37 = iqn.1986-03.com.sun:02:764f3402-8be3-ee7f-ee04-d0052421c44a.lunb Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:00.991 cpu0:1035)iSCSI: bus 0 target 37 portal 0 = address 0.0.0.0 port 3260 group 1 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:00.991 cpu0:1035)LinStubs: 2045: fn=0x9b186c arg=0x9a2c1b0 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:00.991 cpu0:1035)World: vm 1073: 900: Starting world driver with flags 1 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:00.991 cpu0:1035)LinStubs: 2045: fn=0x9b7cd0 arg=0x9a2c1b0 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:00.991 cpu0:1035)World: vm 1074: 900: Starting world driver with flags 1 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:00.992 cpu0:1074)LinStubs: 2114: fn=0x9b7cd0 arg=0x9a2c1b0 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:00.992 cpu0:1074)iSCSI: bus 0 target 37 trying to establish session 0x9a2c1b0 to portal 0, address 0.0.0.0 port 3260 group 1 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:00.992 cpu0:1073)LinStubs: 2114: fn=0x9b186c arg=0x9a2c1b0 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:00.992 cpu0:1074)iSCSI: session 0x9a2c1b0 to iqn.1986-03.com.sun:02:764f3402-8be3-ee7f-ee04-d0052421c44a.lunb failed to connect, rc -5, I/O error Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:00.992 cpu0:1074)iSCSI: session 0x9a2c1b0 connect failed at 606100 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:00.992 cpu0:1074)iSCSI: session 0x9a2c1b0 to iqn.1986-03.com.sun:02:764f3402-8be3-ee7f-ee04-d0052421c44a.lunb waiting 1 seconds before next login attempt Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:01.011 cpu0:1034)iSCSI: bus 0 target 38 = iqn.1986-03.com.sun:02:de84a16a-aa8e-c51f-ad5c-a198289b0bc6.luna Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:01.011 cpu0:1034)iSCSI: bus 0 target 38 portal 0 = address 138.23.117.32 port 3260 group 1 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:01.011 cpu0:1034)LinStubs: 2045: fn=0x9b186c arg=0x9a402c0 
Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:01.011 cpu0:1034)World: vm 1075: 900: Starting world driver with flags 1 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:01.011 cpu1:1075)LinStubs: 2114: fn=0x9b186c arg=0x9a402c0 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:01.012 cpu0:1034)LinStubs: 2045: fn=0x9b7cd0 arg=0x9a402c0 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:01.012 cpu0:1034)World: vm 1076: 900: Starting world driver with flags 1 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:01.012 cpu1:1076)LinStubs: 2114: fn=0x9b7cd0 arg=0x9a402c0 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:01.012 cpu1:1076)iSCSI: bus 0 target 38 trying to establish session 0x9a402c0 to portal 0, address 138.23.117.32 port 3260 group 1 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:01.021 cpu1:1076)iSCSI: bus 0 target 38 established session 0x9a402c0 #1 to portal 0, address 138.23.117.32 port 3260 group 1, alias luna Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:01.031 cpu0:1034)iSCSI: bus 0 target 39 = iqn.1986-03.com.sun:02:de84a16a-aa8e-c51f-ad5c-a198289b0bc6.luna Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:01.031 cpu0:1034)iSCSI: bus 0 target 39 portal 0 = address 0.0.0.0 port 3260 group 1 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:01.031 cpu0:1034)LinStubs: 2045: fn=0x9b186c arg=0x9a543d0 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:01.031 cpu0:1034)World: vm 1077: 900: Starting world driver with flags 1 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:01.032 cpu0:1034)LinStubs: 2045: fn=0x9b7cd0 arg=0x9a543d0 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:01.032 cpu0:1034)World: vm 1078: 900: Starting world driver with flags 1 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:01.032 cpu0:1078)LinStubs: 2114: fn=0x9b7cd0 arg=0x9a543d0 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:01.032 cpu0:1078)iSCSI: bus 0 target 39 trying to establish session 0x9a543d0 to portal 0, address 0.0.0.0 port 3260 group 1 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:01.032 
cpu0:1077)LinStubs: 2114: fn=0x9b186c arg=0x9a543d0 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:01.032 cpu0:1078)iSCSI: session 0x9a543d0 to iqn.1986-03.com.sun:02:de84a16a-aa8e-c51f-ad5c-a198289b0bc6.luna failed to connect, rc -5, I/O error Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:01.032 cpu0:1078)iSCSI: session 0x9a543d0 connect failed at 606104 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:01.032 cpu0:1078)iSCSI: session 0x9a543d0 to iqn.1986-03.com.sun:02:de84a16a-aa8e-c51f-ad5c-a198289b0bc6.luna waiting 1 seconds before next login attempt Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:01.051 cpu0:1034)iSCSI: bus 0 target 40 = iqn.1986-03.com.sun:02:de84a16a-aa8e-c51f-ad5c-a198289b0bc6.luna Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:01.051 cpu0:1034)iSCSI: bus 0 target 40 portal 0 = address 0.0.0.0 port 3260 group 1 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:01.051 cpu0:1034)LinStubs: 2045: fn=0x9b186c arg=0x9a684e0 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:01.051 cpu0:1034)World: vm 1079: 900: Starting world driver with flags 1 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:01.052 cpu1:1079)LinStubs: 2114: fn=0x9b186c arg=0x9a684e0 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:01.052 cpu0:1034)LinStubs: 2045: fn=0x9b7cd0 arg=0x9a684e0 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:01.052 cpu0:1034)World: vm 1080: 900: Starting world driver with flags 1 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:01.052 cpu1:1080)LinStubs: 2114: fn=0x9b7cd0 arg=0x9a684e0 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:01.052 cpu1:1080)iSCSI: bus 0 target 40 trying to establish session 0x9a684e0 to portal 0, address 0.0.0.0 port 3260 group 1 Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:01.052 cpu1:1080)iSCSI: session 0x9a684e0 to iqn.1986-03.com.sun:02:de84a16a-aa8e-c51f-ad5c-a198289b0bc6.luna failed to connect, rc -5, I/O error Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:01.052 cpu1:1080)iSCSI: session 0x9a684e0 connect failed at 606106 Oct 16 
06:16:30 vmware-860-1 vmkernel: 0:01:41:01.052 cpu1:1080)iSCSI: session 0x9a684e0 to iqn.1986-03.com.sun:02:de84a16a-aa8e-c51f-ad5c-a198289b0bc6.luna waiting 1 seconds before next login attempt Oct 16 06:16:31 vmware-860-1 vmkernel: 0:01:41:01.973 cpu1:1072)iSCSI: bus 0 target 36 trying to establish session 0x9a180a0 to portal 0, address 0.0.0.0 port 3260 group 1 Oct 16 06:16:31 vmware-860-1 vmkernel: 0:01:41:01.973 cpu1:1072)iSCSI: session 0x9a180a0 to iqn.1986-03.com.sun:02:764f3402-8be3-ee7f-ee04-d0052421c44a.lunb failed to connect, rc -5, I/O error Oct 16 06:16:31 vmware-860-1 vmkernel: 0:01:41:01.973 cpu1:1072)iSCSI: session 0x9a180a0 connect failed at 606198 Oct 16 06:16:31 vmware-860-1 vmkernel: 0:01:41:01.973 cpu1:1072)<5>iSCSI: session 0x9a180a0 iSCSI: session 0x9a180a0 retrying all the portals again, since the portal list got exhausted Oct 16 06:16:31 vmware-860-1 vmkernel: 0:01:41:01.973 cpu1:1072)iSCSI: session 0x9a180a0 to iqn.1986-03.com.sun:02:764f3402-8be3-ee7f-ee04-d0052421c44a.lunb waiting 1 seconds before next login attempt Oct 16 06:16:31 vmware-860-1 vmkernel: 0:01:41:01.994 cpu0:1074)iSCSI: bus 0 target 37 trying to establish session 0x9a2c1b0 to portal 0, address 0.0.0.0 port 3260 group 1 Oct 16 06:16:31 vmware-860-1 vmkernel: 0:01:41:01.994 cpu0:1074)iSCSI: session 0x9a2c1b0 to iqn.1986-03.com.sun:02:764f3402-8be3-ee7f-ee04-d0052421c44a.lunb failed to connect, rc -5, I/O error Oct 16 06:16:31 vmware-860-1 vmkernel: 0:01:41:01.994 cpu0:1074)iSCSI: session 0x9a2c1b0 connect failed at 606200 Oct 16 06:16:31 vmware-860-1 vmkernel: 0:01:41:01.994 cpu0:1074)<5>iSCSI: session 0x9a2c1b0 iSCSI: session 0x9a2c1b0 retrying all the portals again, since the portal list got exhausted Oct 16 06:16:31 vmware-860-1 vmkernel: 0:01:41:01.994 cpu0:1074)iSCSI: session 0x9a2c1b0 to iqn.1986-03.com.sun:02:764f3402-8be3-ee7f-ee04-d0052421c44a.lunb waiting 1 seconds before next login attempt Oct 16 06:16:31 vmware-860-1 vmkernel: 0:01:41:02.034 cpu0:1078)iSCSI: bus 
0 target 39 trying to establish session 0x9a543d0 to portal 0, address 0.0.0.0 port 3260 group 1 Oct 16 06:16:31 vmware-860-1 vmkernel: 0:01:41:02.034 cpu0:1078)iSCSI: session 0x9a543d0 to iqn.1986-03.com.sun:02:de84a16a-aa8e-c51f-ad5c-a198289b0bc6.luna failed to connect, rc -5, I/O error Oct 16 06:16:31 vmware-860-1 vmkernel: 0:01:41:02.034 cpu0:1078)iSCSI: session 0x9a543d0 connect failed at 606204 [b]Adding and Formatting LUNB with VMFS[/b] Oct 16 06:34:46 vmware-860-1 vmkernel: 0:01:59:17.244 cpu1:1035)LVM: 6505: LVMProbeDevice failed with status "Device does not contain a logical volume". Oct 16 06:34:46 vmware-860-1 vmkernel: 0:01:59:17.586 cpu1:1035)LVM: 4866: Initialized vmhba33:35:0:1, devID 48f742f6-af6382c0-581e-0019b9f7edde Oct 16 06:34:46 vmware-860-1 vmkernel: 0:01:59:17.591 cpu1:1035)LVM: 5016: Zero volumeSize specified: using available space (1342158623232). Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.284 cpu1:1035)FS3: 130: <START rootDir> Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.284 cpu1:1035)Lock [type 10c00001 offset 8486912 v 1, hb offset 0 Oct 16 06:34:54 vmware-860-1 vmkernel: gen 0, mode 0, owner 00000000-00000000-0000-000000000000 mtime 7165] Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.284 cpu1:1035)Addr <4, 0, 0>, gen 1, links 2, type dir, flags 0x0, uid 0, gid 0, mode 1755 Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.284 cpu1:1035)len 980, nb 1 tbz 0, zla 1, bs 8388608 Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.284 cpu1:1035)FS3: 132: <END rootDir> Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.284 cpu1:1035)FS3: 157: <START fbb> Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.284 cpu1:1035)159968 resources, each of size 0 Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.284 cpu1:1035)Organized as 25 CGs, 32 C/CG and 200 R/C Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.284 cpu1:1035)CGsize 32768. 
0th CG at 65536 Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.284 cpu1:1035)FS3: 159: <END fbb> Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.353 cpu1:1035)FS3: 130: <START fbb> Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.353 cpu1:1035)Lock [type 10c00001 offset 8488960 v 1, hb offset 0 Oct 16 06:34:54 vmware-860-1 vmkernel: gen 0, mode 0, owner 00000000-00000000-0000-000000000000 mtime 7165] Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.353 cpu1:1035)Addr <4, 0, 1>, gen 1, links 1, type sys, flags 0x0, uid 0, gid 0, mode 400 Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.353 cpu1:1035)len 884736, nb 1 tbz 0, zla 1, bs 8388608 Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.353 cpu1:1035)FS3: 132: <END fbb> Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.353 cpu1:1035)FS3: 157: <START fdc> Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.353 cpu1:1035)30720 resources, each of size 2048 Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.353 cpu1:1035)Organized as 5 CGs, 32 C/CG and 200 R/C Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.353 cpu1:1035)CGsize 13139968. 
0th CG at 65536 Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.353 cpu1:1035)FS3: 159: <END fdc> Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.406 cpu1:1035)FS3: 130: <START fdc> Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.406 cpu1:1035)Lock [type 10c00001 offset 8491008 v 1, hb offset 0 Oct 16 06:34:54 vmware-860-1 vmkernel: gen 0, mode 0, owner 00000000-00000000-0000-000000000000 mtime 7165] Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.406 cpu1:1035)Addr <4, 0, 2>, gen 1, links 1, type sys, flags 0x0, uid 0, gid 0, mode 400 Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.406 cpu1:1035)len 63143936, nb 8 tbz 0, zla 1, bs 8388608 Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.406 cpu1:1035)FS3: 132: <END fdc> Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.406 cpu1:1035)FS3: 157: <START pbc> Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.406 cpu1:1035)61440 resources, each of size 4096 Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.406 cpu1:1035)Organized as 120 CGs, 32 C/CG and 16 R/C Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.406 cpu1:1035)CGsize 2129920. 
0th CG at 65536 Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.406 cpu1:1035)FS3: 159: <END pbc> Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.556 cpu1:1035)FS3: 130: <START pbc> Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.556 cpu1:1035)Lock [type 10c00001 offset 8493056 v 1, hb offset 0 Oct 16 06:34:54 vmware-860-1 vmkernel: gen 0, mode 0, owner 00000000-00000000-0000-000000000000 mtime 7165] Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.556 cpu1:1035)Addr <4, 0, 3>, gen 1, links 1, type sys, flags 0x0, uid 0, gid 0, mode 400 Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.556 cpu1:1035)len 255655936, nb 31 tbz 0, zla 1, bs 8388608 Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.556 cpu1:1035)FS3: 132: <END pbc> Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.556 cpu1:1035)FS3: 157: <START sbc> Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.556 cpu1:1035)3968 resources, each of size 65536 Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.556 cpu1:1035)Organized as 8 CGs, 32 C/CG and 16 R/C Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.556 cpu1:1035)CGsize 33587200. 
0th CG at 65536 Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.556 cpu1:1035)FS3: 159: <END sbc> Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.611 cpu1:1035)FS3: 130: <START sbc> Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.611 cpu1:1035)Lock [type 10c00001 offset 8495104 v 1, hb offset 0 Oct 16 06:34:54 vmware-860-1 vmkernel: gen 0, mode 0, owner 00000000-00000000-0000-000000000000 mtime 7165] Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.611 cpu1:1035)Addr <4, 0, 4>, gen 1, links 1, type sys, flags 0x0, uid 0, gid 0, mode 400 Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.611 cpu1:1035)len 260374528, nb 32 tbz 0, zla 1, bs 8388608 Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.611 cpu1:1035)FS3: 132: <END sbc> Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.611 cpu1:1035)FS3: 130: <START vh> Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.611 cpu1:1035)Lock [type 10c00001 offset 8497152 v 1, hb offset 0 Oct 16 06:34:54 vmware-860-1 vmkernel: gen 0, mode 0, owner 00000000-00000000-0000-000000000000 mtime 7165] Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.611 cpu1:1035)Addr <4, 0, 5>, gen 1, links 1, type sys, flags 0x0, uid 0, gid 0, mode 400 Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.611 cpu1:1035)len 8388608, nb 1 tbz 0, zla 1, bs 8388608 Oct 16 06:34:54 vmware-860-1 vmkernel: 0:01:59:25.611 cpu1:1035)FS3: 132: <END vh> [b]Adding and Formatting LUNB[/b] Oct 16 06:36:19 vmware-860-1 vmkernel: 0:02:00:50.176 cpu1:1034)LVM: 6505: LVMProbeDevice failed with status "Device does not contain a logical volume". Oct 16 06:36:19 vmware-860-1 vmkernel: 0:02:00:50.508 cpu1:1034)LVM: 4866: Initialized vmhba33:38:0:1, devID 48f74353-97bf70f2-8731-0019b9f7edde Oct 16 06:36:19 vmware-860-1 vmkernel: 0:02:00:50.513 cpu1:1034)LVM: 5016: Zero volumeSize specified: using available space (805286372352). 
Oct 16 06:36:24 vmware-860-1 vmkernel: 0:02:00:55.442 cpu1:1034)FS3: 130: <START rootDir> Oct 16 06:36:24 vmware-860-1 vmkernel: 0:02:00:55.442 cpu1:1034)Lock [type 10c00001 offset 4292608 v 1, hb offset 0 Oct 16 06:36:24 vmware-860-1 vmkernel: gen 0, mode 0, owner 00000000-00000000-0000-000000000000 mtime 7255] Oct 16 06:36:24 vmware-860-1 vmkernel: 0:02:00:55.442 cpu1:1034)Addr <4, 0, 0>, gen 1, links 2, type dir, flags 0x0, uid 0, gid 0, mode 1755 Oct 16 06:36:24 vmware-860-1 vmkernel: 0:02:00:55.442 cpu1:1034)len 980, nb 1 tbz 0, zla 1, bs 1048576 Oct 16 06:36:24 vmware-860-1 vmkernel: 0:02:00:55.442 cpu1:1034)FS3: 132: <END rootDir> Oct 16 06:36:24 vmware-860-1 vmkernel: 0:02:00:55.442 cpu1:1034)FS3: 157: <START fbb> Oct 16 06:36:24 vmware-860-1 vmkernel: 0:02:00:55.442 cpu1:1034)767744 resources, each of size 0 Oct 16 06:36:24 vmware-860-1 vmkernel: 0:02:00:55.442 cpu1:1034)Organized as 120 CGs, 32 C/CG and 200 R/C Oct 16 06:36:24 vmware-860-1 vmkernel: 0:02:00:55.442 cpu1:1034)CGsize 32768. 
0th CG at 65536 Oct 16 06:36:24 vmware-860-1 vmkernel: 0:02:00:55.442 cpu1:1034)FS3: 159: <END fbb> Oct 16 06:36:24 vmware-860-1 vmkernel: 0:02:00:55.765 cpu1:1034)FS3: 130: <START fbb> Oct 16 06:36:24 vmware-860-1 vmkernel: 0:02:00:55.765 cpu1:1034)Lock [type 10c00001 offset 4294656 v 1, hb offset 0 Oct 16 06:36:24 vmware-860-1 vmkernel: gen 0, mode 0, owner 00000000-00000000-0000-000000000000 mtime 7255] Oct 16 06:36:24 vmware-860-1 vmkernel: 0:02:00:55.765 cpu1:1034)Addr <4, 0, 1>, gen 1, links 1, type sys, flags 0x0, uid 0, gid 0, mode 400 Oct 16 06:36:24 vmware-860-1 vmkernel: 0:02:00:55.765 cpu1:1034)len 3997696, nb 4 tbz 0, zla 1, bs 1048576 Oct 16 06:36:24 vmware-860-1 vmkernel: 0:02:00:55.765 cpu1:1034)FS3: 132: <END fbb> Oct 16 06:36:24 vmware-860-1 vmkernel: 0:02:00:55.765 cpu1:1034)FS3: 157: <START fdc> Oct 16 06:36:24 vmware-860-1 vmkernel: 0:02:00:55.765 cpu1:1034)30720 resources, each of size 2048 Oct 16 06:36:24 vmware-860-1 vmkernel: 0:02:00:55.765 cpu1:1034)Organized as 5 CGs, 32 C/CG and 200 R/C Oct 16 06:36:24 vmware-860-1 vmkernel: 0:02:00:55.765 cpu1:1034)CGsize 13139968. 
0th CG at 65536 Oct 16 06:36:24 vmware-860-1 vmkernel: 0:02:00:55.765 cpu1:1034)FS3: 159: <END fdc> Oct 16 06:36:25 vmware-860-1 vmkernel: 0:02:00:55.820 cpu1:1034)FS3: 130: <START fdc> Oct 16 06:36:25 vmware-860-1 vmkernel: 0:02:00:55.820 cpu1:1034)Lock [type 10c00001 offset 4296704 v 1, hb offset 0 Oct 16 06:36:25 vmware-860-1 vmkernel: gen 0, mode 0, owner 00000000-00000000-0000-000000000000 mtime 7255] Oct 16 06:36:25 vmware-860-1 vmkernel: 0:02:00:55.820 cpu1:1034)Addr <4, 0, 2>, gen 1, links 1, type sys, flags 0x0, uid 0, gid 0, mode 400 Oct 16 06:36:25 vmware-860-1 vmkernel: 0:02:00:55.820 cpu1:1034)len 63143936, nb 61 tbz 0, zla 1, bs 1048576 Oct 16 06:36:25 vmware-860-1 vmkernel: 0:02:00:55.820 cpu1:1034)FS3: 132: <END fdc> Oct 16 06:36:25 vmware-860-1 vmkernel: 0:02:00:55.820 cpu1:1034)FS3: 157: <START pbc> Oct 16 06:36:25 vmware-860-1 vmkernel: 0:02:00:55.820 cpu1:1034)61440 resources, each of size 4096 Oct 16 06:36:25 vmware-860-1 vmkernel: 0:02:00:55.820 cpu1:1034)Organized as 120 CGs, 32 C/CG and 16 R/C Oct 16 06:36:25 vmware-860-1 vmkernel: 0:02:00:55.820 cpu1:1034)CGsize 2129920. 
0th CG at 65536 Oct 16 06:36:25 vmware-860-1 vmkernel: 0:02:00:55.820 cpu1:1034)FS3: 159: <END pbc> Oct 16 06:36:25 vmware-860-1 vmkernel: 0:02:00:55.970 cpu1:1034)FS3: 130: <START pbc> Oct 16 06:36:25 vmware-860-1 vmkernel: 0:02:00:55.970 cpu1:1034)Lock [type 10c00001 offset 4298752 v 1, hb offset 0 Oct 16 06:36:25 vmware-860-1 vmkernel: gen 0, mode 0, owner 00000000-00000000-0000-000000000000 mtime 7255] Oct 16 06:36:25 vmware-860-1 vmkernel: 0:02:00:55.970 cpu1:1034)Addr <4, 0, 3>, gen 1, links 1, type sys, flags 0x0, uid 0, gid 0, mode 400 Oct 16 06:36:25 vmware-860-1 vmkernel: 0:02:00:55.970 cpu1:1034)len 255655936, nb 244 tbz 0, zla 1, bs 1048576 Oct 16 06:36:25 vmware-860-1 vmkernel: 0:02:00:55.970 cpu1:1034)FS3: 132: <END pbc> Oct 16 06:36:25 vmware-860-1 vmkernel: 0:02:00:55.970 cpu1:1034)FS3: 157: <START sbc> Oct 16 06:36:25 vmware-860-1 vmkernel: 0:02:00:55.970 cpu1:1034)3968 resources, each of size 65536 Oct 16 06:36:25 vmware-860-1 vmkernel: 0:02:00:55.970 cpu1:1034)Organized as 8 CGs, 32 C/CG and 16 R/C Oct 16 06:36:25 vmware-860-1 vmkernel: 0:02:00:55.970 cpu1:1034)CGsize 33587200. 
0th CG at 65536 Oct 16 06:36:25 vmware-860-1 vmkernel: 0:02:00:55.970 cpu1:1034)FS3: 159: <END sbc> Oct 16 06:36:25 vmware-860-1 vmkernel: 0:02:00:56.027 cpu1:1034)FS3: 130: <START sbc> Oct 16 06:36:25 vmware-860-1 vmkernel: 0:02:00:56.027 cpu1:1034)Lock [type 10c00001 offset 4300800 v 1, hb offset 0 Oct 16 06:36:25 vmware-860-1 vmkernel: gen 0, mode 0, owner 00000000-00000000-0000-000000000000 mtime 7255] Oct 16 06:36:25 vmware-860-1 vmkernel: 0:02:00:56.027 cpu1:1034)Addr <4, 0, 4>, gen 1, links 1, type sys, flags 0x0, uid 0, gid 0, mode 400 Oct 16 06:36:25 vmware-860-1 vmkernel: 0:02:00:56.027 cpu1:1034)len 260374528, nb 249 tbz 0, zla 1, bs 1048576 Oct 16 06:36:25 vmware-860-1 vmkernel: 0:02:00:56.027 cpu1:1034)FS3: 132: <END sbc> Oct 16 06:36:25 vmware-860-1 vmkernel: 0:02:00:56.027 cpu1:1034)FS3: 130: <START vh> Oct 16 06:36:25 vmware-860-1 vmkernel: 0:02:00:56.027 cpu1:1034)Lock [type 10c00001 offset 4302848 v 1, hb offset 0 Oct 16 06:36:25 vmware-860-1 vmkernel: gen 0, mode 0, owner 00000000-00000000-0000-000000000000 mtime 7256] Oct 16 06:36:25 vmware-860-1 vmkernel: 0:02:00:56.027 cpu1:1034)Addr <4, 0, 5>, gen 1, links 1, type sys, flags 0x0, uid 0, gid 0, mode 400 Oct 16 06:36:25 vmware-860-1 vmkernel: 0:02:00:56.027 cpu1:1034)len 4194304, nb 4 tbz 0, zla 1, bs 1048576 Oct 16 06:36:25 vmware-860-1 vmkernel: 0:02:00:56.027 cpu1:1034)FS3: 132: <END vh> [b]When attempting to copy the VM from the local VMFS store to the ISCSI target[/b] Oct 16 06:38:00 vmware-860-1 vmkernel: 0:02:02:31.108 cpu0:1061)LinSCSI: 3201: Abort failed for cmd with serial=1195, status=bad0001, retval=bad0001 Oct 16 06:38:00 vmware-860-1 last message repeated 3 times Oct 16 06:38:00 vmware-860-1 vmkernel: 0:02:02:31.109 cpu0:1061)LinSCSI: 3201: Abort failed for cmd with serial=1195, status=bad0001, retval=bad0001 Oct 16 06:38:28 vmware-860-1 vmkernel: 0:02:02:59.674 cpu1:1063)WARNING: FS3: 2649: Writing lock [type 10c00002 offset 9019392 v 3, hb offset 3460096 Oct 16 06:38:28 
vmware-860-1 vmkernel: gen 25, mode 0, owner 00000000-00000000-0000-000000000000 mtime 7339] failed: Timeout (ok to retry) Oct 16 06:38:28 vmware-860-1 vmkernel: 0:02:02:59.675 cpu1:1063)Res3: 585: Failed to unlock cluster 2600 (typeID 1): Timeout (ok to retry) Oct 16 06:38:28 vmware-860-1 vmkernel: 0:02:02:59.675 cpu1:1063)FS3: 5032: Waiting for timed-out heartbeat [HB state abcdef02 offset 3460096 gen 25 stamp 7354900048 uuid 48f72706-f36533bb-10a9-0019b9f7eddd jrnl <FB 499802> drv 4.31] Oct 16 06:38:29 vmware-860-1 vmkernel: 0:02:03:00.092 cpu1:1072)iSCSI: bus 0 target 36 trying to establish session 0x9a180a0 to portal 0, address 0.0.0.0 port 3260 group 1 Oct 16 06:38:29 vmware-860-1 vmkernel: 0:02:03:00.093 cpu1:1072)iSCSI: session 0x9a180a0 to iqn.1986-03.com.sun:02:764f3402-8be3-ee7f-ee04-d0052421c44a.lunb failed to connect, rc -5, I/O error Oct 16 06:38:29 vmware-860-1 vmkernel: 0:02:03:00.093 cpu1:1072)iSCSI: session 0x9a180a0 connect failed at 738010 Oct 16 06:38:29 vmware-860-1 vmkernel: 0:02:03:00.093 cpu1:1072)<5>iSCSI: session 0x9a180a0 iSCSI: session 0x9a180a0 retrying all the portals again, since the portal list got exhausted Oct 16 06:38:29 vmware-860-1 vmkernel: 0:02:03:00.093 cpu1:1072)iSCSI: session 0x9a180a0 to iqn.1986-03.com.sun:02:764f3402-8be3-ee7f-ee04-d0052421c44a.lunb waiting 60 seconds before next login attempt Oct 16 06:38:29 vmware-860-1 vmkernel: 0:02:03:00.098 cpu1:1074)iSCSI: bus 0 target 37 trying to establish session 0x9a2c1b0 to portal 0, address 0.0.0.0 port 3260 group 1 Oct 16 06:38:29 vmware-860-1 vmkernel: 0:02:03:00.098 cpu1:1074)iSCSI: session 0x9a2c1b0 to iqn.1986-03.com.sun:02:764f3402-8be3-ee7f-ee04-d0052421c44a.lunb failed to connect, rc -5, I/O error Oct 16 06:38:29 vmware-860-1 vmkernel: 0:02:03:00.099 cpu1:1074)iSCSI: session 0x9a2c1b0 connect failed at 738010 Oct 16 06:38:29 vmware-860-1 vmkernel: 0:02:03:00.099 cpu1:1074)<5>iSCSI: session 0x9a2c1b0 iSCSI: session 0x9a2c1b0 retrying all the portals again, since the 
portal list got exhausted Oct 16 06:38:29 vmware-860-1 vmkernel: 0:02:03:00.099 cpu1:1074)iSCSI: session 0x9a2c1b0 to iqn.1986-03.com.sun:02:764f3402-8be3-ee7f-ee04-d0052421c44a.lunb waiting 60 seconds before next login attempt Oct 16 06:38:29 vmware-860-1 vmkernel: 0:02:03:00.150 cpu1:1078)iSCSI: bus 0 target 39 trying to establish session 0x9a543d0 to portal 0, address 0.0.0.0 port 3260 group 1 Oct 16 06:38:29 vmware-860-1 vmkernel: 0:02:03:00.151 cpu1:1078)iSCSI: session 0x9a543d0 to iqn.1986-03.com.sun:02:de84a16a-aa8e-c51f-ad5c-a198289b0bc6.luna failed to connect, rc -5, I/O error Oct 16 06:38:29 vmware-860-1 vmkernel: 0:02:03:00.151 cpu1:1078)iSCSI: session 0x9a543d0 connect failed at 738016 Oct 16 06:38:29 vmware-860-1 vmkernel: 0:02:03:00.151 cpu1:1078)<5>iSCSI: session 0x9a543d0 iSCSI: session 0x9a543d0 retrying all the portals again, since the portal list got exhausted Oct 16 06:38:29 vmware-860-1 vmkernel: 0:02:03:00.151 cpu1:1078)iSCSI: session 0x9a543d0 to iqn.1986-03.com.sun:02:de84a16a-aa8e-c51f-ad5c-a198289b0bc6.luna waiting 60 seconds before next login attempt Oct 16 06:38:29 vmware-860-1 vmkernel: 0:02:03:00.166 cpu1:1080)iSCSI: bus 0 target 40 trying to establish session 0x9a684e0 to portal 0, address 0.0.0.0 port 3260 group 1 Oct 16 06:38:29 vmware-860-1 vmkernel: 0:02:03:00.167 cpu1:1080)iSCSI: session 0x9a684e0 to iqn.1986-03.com.sun:02:de84a16a-aa8e-c51f-ad5c-a198289b0bc6.luna failed to connect, rc -5, I/O error Oct 16 06:38:29 vmware-860-1 vmkernel: 0:02:03:00.167 cpu1:1080)iSCSI: session 0x9a684e0 connect failed at 738017 Oct 16 06:38:29 vmware-860-1 vmkernel: 0:02:03:00.167 cpu1:1080)<5>iSCSI: session 0x9a684e0 iSCSI: session 0x9a684e0 retrying all the portals again, since the portal list got exhausted Oct 16 06:38:29 vmware-860-1 vmkernel: 0:02:03:00.167 cpu1:1080)iSCSI: session 0x9a684e0 to iqn.1986-03.com.sun:02:de84a16a-aa8e-c51f-ad5c-a198289b0bc6.luna waiting 60 seconds before next login attempt Oct 16 06:38:37 vmware-860-1 vmkernel: 
0:02:03:08.108 cpu1:1068)iSCSI: 50 second timeout expired for session 0x9a402c0, rx 733811, ping 738310, now 738811 Oct 16 06:38:37 vmware-860-1 vmkernel: 0:02:03:08.109 cpu1:1075)iSCSI: session 0x9a402c0 xmit_data failed to send 131120 bytes, rc -512 Oct 16 06:38:37 vmware-860-1 vmkernel: 0:02:03:08.109 cpu1:1076)iSCSI: session 0x9a402c0 to luna dropped Oct 16 06:38:37 vmware-860-1 vmkernel: 0:02:03:08.109 cpu1:1076)iSCSI: session 0x9a402c0 for (2 0 38 *) rx thread 1076handled timeout, notify tx Oct 16 06:38:37 vmware-860-1 vmkernel: 0:02:03:08.109 cpu1:1076)iSCSI: bus 0 target 38 trying to establish session 0x9a402c0 to portal 0, address 138.23.117.32 port 3260 group 1 Oct 16 06:38:37 vmware-860-1 vmkernel: 0:02:03:08.115 cpu1:1076)iSCSI: bus 0 target 38 established session 0x9a402c0 #2 to portal 0, address 138.23.117.32 port 3260 group 1, alias luna Oct 16 06:38:37 vmware-860-1 vmkernel: VMWARE SCSI Id: Supported VPD pages for vmhba33:C0:T35:L0 : 0x0 0x80 0x83 0x86 Oct 16 06:38:37 vmware-860-1 vmkernel: VMWARE SCSI Id: Device id info for vmhba33:C0:T35:L0: 0x51 0x95 0x0 0x4 0x0 0x0 0x0 0x1 0x51 0x94 0x0 0x4 0x0 0x0 0x0 0x1 0x51 0x3 0x0 0x10 0x60 0x1 0x44 0xf0 0x48 0xf7 0xa0 0xfa 0x0 0x0 0x1e 0xc9 0xd5 0x75 0xd2 0x0 0x53 0xa8 0x0 0x44 0x69 0x71 0x6e 0x2e 0x Oct 16 06:38:37 vmware-860-1 vmkernel: 31 0x39 0x38 0x36 0x2d 0x30 0x33 0x2e 0x63 0x6f 0x6d 0x2e 0x73 0x75 0x6e 0x3a 0x30 0x32 0x3a 0x37 0x36 0x34 0x66 0x33 0x34 0x30 0x32 0x2d 0x38 0x62 0x65 0x33 0x2d 0x65 0x65 0x37 0x66 0x2d 0x65 0x65 0x30 0x34 0x2d 0x64 0x30 0x30 0x35 0x32 0x34 0x32 0x31 0x63 Oct 16 06:38:37 vmware-860-1 vmkernel: 0x34 0x34 0x61 0x2e 0x6c 0x75 0x6e 0x62 0x0 0x0 0x0 0x0 0x51 0xa3 0x0 0x10 0x60 0x1 0x44 0xf0 0x48 0xf7 0xa0 0xfa 0x0 0x0 0x1e 0xc9 0xd5 0x75 0xd2 0x0 0x51 0x93 0x0 0x10 0x60 0x1 0x44 0xf0 0x48 0xf7 0xa0 0xfa 0x0 0x0 0x1e 0xc9 0xd5 0x75 0xd2 0x0 Oct 16 06:38:37 vmware-860-1 vmkernel: VMWARE SCSI Id: Id for vmhba33:C0:T35:L0 0x60 0x01 0x44 0xf0 0x48 0xf7 0xa0 0xfa 0x00 0x00 0x1e 
0xc9 0xd5 0x75 0xd2 0x00 0x53 0x4f 0x4c 0x41 0x52 0x49 Oct 16 06:38:40 vmware-860-1 vmkernel: 0:02:03:11.110 cpu0:1061)LinSCSI: 3201: Abort failed for cmd with serial=1195, status=bad0001, retval=bad0001 Oct 16 06:38:40 vmware-860-1 last message repeated 4 times Oct 16 06:38:44 vmware-860-1 vmkernel: 0:02:03:14.901 cpu1:1037)WARNING: FS3: 4785: Reservation error: Timeout Oct 16 06:38:44 vmware-860-1 vmkernel: 0:02:03:14.901 cpu1:1037)WARNING: FS3: 4979: Reclaiming timed out heartbeat [HB state abcdef02 offset 3460096 gen 25 stamp 7354900048 uuid 48f72706-f36533bb-10a9-0019b9f7eddd jrnl <FB 499802> drv 4.31] failed: Timeout Oct 16 06:38:44 vmware-860-1 vmkernel: 0:02:03:14.901 cpu1:1063)Res3: 585: Failed to unlock cluster 2599 (typeID 1): Busy Oct 16 06:38:44 vmware-860-1 vmkernel: 0:02:03:14.901 cpu1:1063)FS3: 5032: Waiting for timed-out heartbeat [HB state abcdef02 offset 3460096 gen 25 stamp 7354900048 uuid 48f72706-f36533bb-10a9-0019b9f7eddd jrnl <FB 499802> drv 4.31] Oct 16 06:39:17 vmware-860-1 vmkernel: 0:02:03:48.119 cpu1:1047)SCSI: 2285: Could not verify that the disk id of path vmhba33:38:0 matches the id of the target Oct 16 06:39:20 vmware-860-1 vmkernel: 0:02:03:51.110 cpu0:1061)LinSCSI: 3201: Abort failed for cmd with serial=1195, status=bad0001, retval=bad0001 Oct 16 06:39:20 vmware-860-1 last message repeated 4 times Oct 16 06:39:24 vmware-860-1 vmkernel: 0:02:03:54.903 cpu1:1037)WARNING: FS3: 4785: Reservation error: Timeout Oct 16 06:39:24 vmware-860-1 vmkernel: 0:02:03:54.903 cpu1:1037)WARNING: FS3: 4979: Reclaiming timed out heartbeat [HB state abcdef02 offset 3460096 gen 25 stamp 7354900048 uuid 48f72706-f36533bb-10a9-0019b9f7eddd jrnl <FB 499802> drv 4.31] failed: Timeout Oct 16 06:39:24 vmware-860-1 vmkernel: 0:02:03:54.903 cpu1:1063)Res3: 585: Failed to unlock cluster 2598 (typeID 1): Busy Oct 16 06:39:24 vmware-860-1 vmkernel: 0:02:03:54.903 cpu1:1063)FS3: 5032: Waiting for timed-out heartbeat [HB state abcdef02 offset 3460096 gen 25 stamp 
7354900048 uuid 48f72706-f36533bb-10a9-0019b9f7eddd jrnl <FB 499802> drv 4.31] Oct 16 06:39:28 vmware-860-1 vmkernel: 0:02:03:59.121 cpu1:1068)iSCSI: 50 second timeout expired for session 0x9a402c0, rx 738912, ping 743411, now 743912 Oct 16 06:39:28 vmware-860-1 vmkernel: 0:02:03:59.121 cpu1:1075)iSCSI: session 0x9a402c0 xmit_data failed to send 131120 bytes, rc -512 Oct 16 06:39:28 vmware-860-1 vmkernel: 0:02:03:59.121 cpu1:1076)iSCSI: session 0x9a402c0 to luna dropped Oct 16 06:39:28 vmware-860-1 vmkernel: 0:02:03:59.121 cpu1:1076)iSCSI: session 0x9a402c0 for (2 0 38 *) rx thread 1076handled timeout, notify tx Oct 16 06:39:28 vmware-860-1 vmkernel: 0:02:03:59.121 cpu1:1076)iSCSI: bus 0 target 38 trying to establish session 0x9a402c0 to portal 0, address 138.23.117.32 port 3260 group 1 Oct 16 06:39:28 vmware-860-1 vmkernel: 0:02:03:59.125 cpu1:1076)iSCSI: bus 0 target 38 established session 0x9a402c0 #3 to portal 0, address 138.23.117.32 port 3260 group 1, alias luna Oct 16 06:39:28 vmware-860-1 vmkernel: VMWARE SCSI Id: Supported VPD pages for vmhba33:C0:T35:L0 : 0x0 0x80 0x83 0x86 Oct 16 06:39:28 vmware-860-1 vmkernel: VMWARE SCSI Id: Device id info for vmhba33:C0:T35:L0: 0x51 0x95 0x0 0x4 0x0 0x0 0x0 0x1 0x51 0x94 0x0 0x4 0x0 0x0 0x0 0x1 0x51 0x3 0x0 0x10 0x60 0x1 0x44 0xf0 0x48 0xf7 0xa0 0xfa 0x0 0x0 0x1e 0xc9 0xd5 0x75 0xd2 0x0 0x53 0xa8 0x0 0x44 0x69 0x71 0x6e 0x2e 0x Oct 16 06:39:28 vmware-860-1 vmkernel: 31 0x39 0x38 0x36 0x2d 0x30 0x33 0x2e 0x63 0x6f 0x6d 0x2e 0x73 0x75 0x6e 0x3a 0x30 0x32 0x3a 0x37 0x36 0x34 0x66 0x33 0x34 0x30 0x32 0x2d 0x38 0x62 0x65 0x33 0x2d 0x65 0x65 0x37 0x66 0x2d 0x65 0x65 0x30 0x34 0x2d 0x64 0x30 0x30 0x35 0x32 0x34 0x32 0x31 0x63 Oct 16 06:39:28 vmware-860-1 vmkernel: 0x34 0x34 0x61 0x2e 0x6c 0x75 0x6e 0x62 0x0 0x0 0x0 0x0 0x51 0xa3 0x0 0x10 0x60 0x1 0x44 0xf0 0x48 0xf7 0xa0 0xfa 0x0 0x0 0x1e 0xc9 0xd5 0x75 0xd2 0x0 0x51 0x93 0x0 0x10 0x60 0x1 0x44 0xf0 0x48 0xf7 0xa0 0xfa 0x0 0x0 0x1e 0xc9 0xd5 0x75 0xd2 0x0 Oct 16 06:39:28 
vmware-860-1 vmkernel: VMWARE SCSI Id: Id for vmhba33:C0:T35:L0 0x60 0x01 0x44 0xf0 0x48 0xf7 0xa0 0xfa 0x00 0x00 0x1e 0xc9 0xd5 0x75 0xd2 0x00 0x53 0x4f 0x4c 0x41 0x52 0x49 Oct 16 06:39:29 vmware-860-1 vmkernel: 0:02:04:00.095 cpu1:1072)iSCSI: bus 0 target 36 trying to establish session 0x9a180a0 to portal 0, address 0.0.0.0 port 3260 group 1 Oct 16 06:39:29 vmware-860-1 vmkernel: 0:02:04:00.095 cpu1:1072)iSCSI: session 0x9a180a0 to iqn.1986-03.com.sun:02:764f3402-8be3-ee7f-ee04-d0052421c44a.lunb failed to connect, rc -5, I/O error Oct 16 06:39:29 vmware-860-1 vmkernel: 0:02:04:00.095 cpu1:1072)iSCSI: session 0x9a180a0 connect failed at 744010 Oct 16 06:39:29 vmware-860-1 vmkernel: 0:02:04:00.095 cpu1:1072)<5>iSCSI: session 0x9a180a0 iSCSI: session 0x9a180a0 retrying all the portals again, since the portal list got exhausted Oct 16 06:39:29 vmware-860-1 vmkernel: 0:02:04:00.095 cpu1:1072)iSCSI: session 0x9a180a0 to iqn.1986-03.com.sun:02:764f3402-8be3-ee7f-ee04-d0052421c44a.lunb waiting 60 seconds before next login attempt Oct 16 06:39:29 vmware-860-1 vmkernel: 0:02:04:00.101 cpu1:1074)iSCSI: bus 0 target 37 trying to establish session 0x9a2c1b0 to portal 0, address 0.0.0.0 port 3260 group 1 Oct 16 06:39:29 vmware-860-1 vmkernel: 0:02:04:00.101 cpu1:1074)iSCSI: session 0x9a2c1b0 to iqn.1986-03.com.sun:02:764f3402-8be3-ee7f-ee04-d0052421c44a.lunb failed to connect, rc -5, I/O error Oct 16 06:39:29 vmware-860-1 vmkernel: 0:02:04:00.101 cpu1:1074)iSCSI: session 0x9a2c1b0 connect failed at 744011 Oct 16 06:39:29 vmware-860-1 vmkernel: 0:02:04:00.101 cpu1:1074)<5>iSCSI: session 0x9a2c1b0 iSCSI: session 0x9a2c1b0 retrying all the portals again, since the portal list got exhausted Oct 16 06:39:29 vmware-860-1 vmkernel: 0:02:04:00.101 cpu1:1074)iSCSI: session 0x9a2c1b0 to iqn.1986-03.com.sun:02:764f3402-8be3-ee7f-ee04-d0052421c44a.lunb waiting 60 seconds before next login attempt Oct 16 06:39:29 vmware-860-1 vmkernel: 0:02:04:00.151 cpu1:1078)iSCSI: bus 0 target 39 trying 
to establish session 0x9a543d0 to portal 0, address 0.0.0.0 port 3260 group 1 Oct 16 06:39:29 vmware-860-1 vmkernel: 0:02:04:00.151 cpu1:1078)iSCSI: session 0x9a543d0 to iqn.1986-03.com.sun:02:de84a16a-aa8e-c51f-ad5c-a198289b0bc6.luna failed to connect, rc -5, I/O error Oct 16 06:39:29 vmware-860-1 vmkernel: 0:02:04:00.151 cpu1:1078)iSCSI: session 0x9a543d0 connect failed at 744016 Oct 16 06:39:29 vmware-860-1 vmkernel: 0:02:04:00.151 cpu1:1078)<5>iSCSI: session 0x9a543d0 iSCSI: session 0x9a543d0 retrying all the portals again, since the portal list got exhausted Oct 16 06:39:29 vmware-860-1 vmkernel: 0:02:04:00.151 cpu1:1078)iSCSI: session 0x9a543d0 to iqn.1986-03.com.sun:02:de84a16a-aa8e-c51f-ad5c-a198289b0bc6.luna waiting 60 seconds before next login attempt Oct 16 06:39:29 vmware-860-1 vmkernel: 0:02:04:00.167 cpu1:1080)iSCSI: bus 0 target 40 trying to establish session 0x9a684e0 to portal 0, address 0.0.0.0 port 3260 group 1 Oct 16 06:39:29 vmware-860-1 vmkernel: 0:02:04:00.167 cpu1:1080)iSCSI: session 0x9a684e0 to iqn.1986-03.com.sun:02:de84a16a-aa8e-c51f-ad5c-a198289b0bc6.luna failed to connect, rc -5, I/O error Oct 16 06:39:29 vmware-860-1 vmkernel: 0:02:04:00.167 cpu1:1080)iSCSI: session 0x9a684e0 connect failed at 744017 Oct 16 06:39:29 vmware-860-1 vmkernel: 0:02:04:00.167 cpu1:1080)<5>iSCSI: session 0x9a684e0 iSCSI: session 0x9a684e0 retrying all the portals again, since the portal list got exhausted Oct 16 06:39:29 vmware-860-1 vmkernel: 0:02:04:00.167 cpu1:1080)iSCSI: session 0x9a684e0 to iqn.1986-03.com.sun:02:de84a16a-aa8e-c51f-ad5c-a198289b0bc6.luna waiting 60 seconds before next login attempt Oct 16 06:39:29 vmware-860-1 vmkernel: 0:02:04:00.237 cpu1:1031)Fil3: 5004: WRITE error 0xbad00e5 Oct 16 06:39:29 vmware-860-1 vmkernel: 0:02:04:00.237 cpu1:1037)FS3: 4829: Reclaimed timed out heartbeat [HB state abcdef02 offset 3460096 gen 25 stamp 7336683462 uuid 48f72706-f36533bb-10a9-0019b9f7eddd jrnl <FB 499802> drv 4.31] Oct 16 06:39:29 vmware-860-1 
vmkernel: VMWARE SCSI Id: Supported VPD pages for vmhba33:C0:T38:L0 : 0x0 0x80 0x83 0x86 Oct 16 06:39:29 vmware-860-1 vmkernel: VMWARE SCSI Id: Device id info for vmhba33:C0:T38:L0: 0x51 0x95 0x0 0x4 0x0 0x0 0x0 0x1 0x51 0x94 0x0 0x4 0x0 0x0 0x0 0x1 0x51 0x3 0x0 0x10 0x60 0x1 0x44 0xf0 0x48 0xf7 0xa0 0xff 0x0 0x0 0x1e 0xc9 0xd5 0x75 0xd2 0x0 0x53 0xa8 0x0 0x44 0x69 0x71 0x6e 0x2e 0x Oct 16 06:39:29 vmware-860-1 vmkernel: 31 0x39 0x38 0x36 0x2d 0x30 0x33 0x2e 0x63 0x6f 0x6d 0x2e 0x73 0x75 0x6e 0x3a 0x30 0x32 0x3a 0x64 0x65 0x38 0x34 0x61 0x31 0x36 0x61 0x2d 0x61 0x61 0x38 0x65 0x2d 0x63 0x35 0x31 0x66 0x2d 0x61 0x64 0x35 0x63 0x2d 0x61 0x31 0x39 0x38 0x32 0x38 0x39 0x62 0x30 Oct 16 06:39:29 vmware-860-1 vmkernel: 0x62 0x63 0x36 0x2e 0x6c 0x75 0x6e 0x61 0x0 0x0 0x0 0x0 0x51 0xa3 0x0 0x10 0x60 0x1 0x44 0xf0 0x48 0xf7 0xa0 0xff 0x0 0x0 0x1e 0xc9 0xd5 0x75 0xd2 0x0 0x51 0x93 0x0 0x10 0x60 0x1 0x44 0xf0 0x48 0xf7 0xa0 0xff 0x0 0x0 0x1e 0xc9 0xd5 0x75 0xd2 0x0 Oct 16 06:39:29 vmware-860-1 vmkernel: VMWARE SCSI Id: Id for vmhba33:C0:T38:L0 0x60 0x01 0x44 0xf0 0x48 0xf7 0xa0 0xff 0x00 0x00 0x1e 0xc9 0xd5 0x75 [root at vmware-860-1 log]# Hope this helps! Tano> Also, iscsitadm list target -v > > Also, I blogged a bit on OpenSolaris iSCSI & VMware ESX ....I was using > b98 on a X4500. > > http://blogs.sun.com/rarneson/entry/zfs_clones_iscsi_and_vmware > >> -- >> This message posted from opensolaris.org >> _______________________________________________ >> zfs-discuss mailing list >> zfs-discuss at opensolaris.org >> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss >> > > > -- > Ryan Arneson > Sun Microsystems, Inc. > 303-223-6264 > ryan.arneson at sun.com > http://blogs.sun.com/rarneson > >-- This message posted from opensolaris.org
Tano
2008-Oct-16 20:51 UTC
[zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!
Also, I had read your blog post previously. I will take advantage of the cloning/snapshot section of your blog once I can successfully write to the targets. Thanks again!
Nigel Smith
2008-Oct-16 23:30 UTC
[zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!
I googled some substrings from your ESX logs and found these threads on the VMware forum, which list similar error messages and suggest some actions to try on the ESX server:

http://communities.vmware.com/message/828207

Also, see this thread:

http://communities.vmware.com/thread/131923

Are you using multiple Ethernet connections between the OpenSolaris box and the ESX server?

Your 'iscsitadm list target -v' is showing "Connections: 0", so run that command again after the ESX server's initiator has successfully connected to the OpenSolaris iSCSI target, and post that output.

The log files seem to show that the iSCSI session has dropped out and the initiator is automatically retrying the connection to the target, but failing. It may help to get a packet capture at this stage to see why the login is failing.

Regards
Nigel Smith
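For the packet capture suggested above, Solaris snoop on the target host is one way to do it. This is only a sketch; the interface name bnx0 and the capture file path are assumptions based on this setup, so substitute your own.

```shell
# Capture iSCSI traffic (TCP port 3260) on the target's NIC while the
# ESX initiator retries its login. bnx0 and the file path are assumptions.
snoop -d bnx0 -o /var/tmp/iscsi-login.cap port 3260

# Later, replay the capture with verbose decoding to inspect the login
# exchange (the capture file can also be opened in Wireshark):
snoop -i /var/tmp/iscsi-login.cap -V | less
```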
Ryan Arneson
2008-Oct-17 01:10 UTC
[zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!
Nigel Smith wrote:
> I googled on some sub-strings from your ESX logs
> and found these threads on the VmWare forum
> which lists similar error messages,
> & suggests some actions to try on the ESX server:
>
> http://communities.vmware.com/message/828207
>
> Also, see this thread:
>
> http://communities.vmware.com/thread/131923
>
> Are you using multiple Ethernet connections between the OpenSolaris box
> and the ESX server?

Indeed, I think there might be some notion of 2 separate interfaces. I see the 0.0.0.0 and the 138.xx.xx.xx networks:

Oct 16 06:38:29 vmware-860-1 vmkernel: 0:02:03:00.166 cpu1:1080)iSCSI: bus 0 target 40 trying to establish session 0x9a684e0 to portal 0, address 0.0.0.0 port 3260 group 1
Oct 16 06:16:30 vmware-860-1 vmkernel: 0:01:41:01.021 cpu1:1076)iSCSI: bus 0 target 38 established session 0x9a402c0 #1 to portal 0, address 138.23.117.32 port 3260 group 1, alias luna

Do you have an active interface on the OpenSolaris box that is configured for 0.0.0.0 right now? By default, since you haven't configured a TPGT on the iSCSI target, Solaris will advertise all active interfaces in its SendTargets response. On the ESX side, ESX will attempt to log into all addresses in that SendTargets response, even though you may have put only one address in the software initiator config.

If that is the case, you have a few options:

a) disable that bogus interface
b) fully configure it, and also create a vmkernel interface that can connect to it
c) configure a TPGT mask on the iSCSI target (iscsitadm create tpgt) so that only the valid address is used

Also, I never see target 40 log into anything... is that still a valid target number? You may want to delete everything in /var/lib/iscsi and reboot the host. The vmkbinding and vmkdiscovery files will be rebuilt and it will start over with target 0. Sometimes things get a bit crufty.
-ryan

> Your 'iscsitadm list target -v' is showing "Connections: 0",
> so run that command after the ESX server initiator has
> successfully connected to the OpenSolaris iscsi target,
> and post that output.
> The log files seem to show the iscsi session has dropped out,
> and the initiator is auto retrying to connect to the target,
> but failing. It may help to get a packet capture at this stage
> to try & see why the logon is failing.
> Regards
> Nigel Smith
> --
> This message posted from opensolaris.org
> _______________________________________________
> zfs-discuss mailing list
> zfs-discuss at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
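Option (c) above can be sketched as follows. This is a minimal example: the TPGT number (1), the portal address (138.23.117.32) and the target name (LABEL1) are taken from examples elsewhere in this thread, so substitute your own values.

```shell
# Create a target portal group, bind it to the one valid portal address,
# then attach the target to that TPGT so the SendTargets response only
# advertises that address.
iscsitadm create tpgt 1
iscsitadm modify tpgt -i 138.23.117.32 1
iscsitadm modify target -p 1 LABEL1

# Verify the TPGT and the target's TPGT list:
iscsitadm list tpgt -v
iscsitadm list target -v LABEL1
```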
Tano
2008-Oct-17 21:07 UTC
[zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!
>Do you have an active interface on the OpenSolaris box that is configured for
>0.0.0.0 right now?

Not anymore.

>By default, since you haven't configured the tpgt on the iscsi target, solaris will
>broadcast all active interfaces in its SendTargets response. On the ESX side,
>ESX will attempt to log into all addresses in that SendTargets response, even
>though you may only put 1 address in the sw initiator config.

This made a lot of sense; I had been flirting with the TPGT idea, and this motivated me to try it.

>If that is the case, you have a few options
>a) disable that bogus interface

It was a physical interface, and it has been removed.

>b) fully configure it and also create a vmkernel interface that can
>connect to it

Disabled and removed.

>c) configure a tpgt mask on the iscsi target (iscsitadm create tpgt) to
>only use the valid address

Configured... see the information below.

>Also, I never see target 40 log into anything...is that still a valid
>target number?
>You may want to delete everything in /var/lib/iscsi and reboot the host.
>The vmkbinding and vmkdiscovery files will be rebuilt and it will start
>over with target 0. Sometimes, things get a bit crufty

Deleted the /var/lib/iscsi contents.

========================

Now for more information. Here is the result of what I tried next. I removed all extra interfaces from both the ESX host and the Solaris iSCSI machine. ESX is now configured with a single interface:

Virtual Switch: vSwitch0
Server: PowerEdge 850
Service Console IP: 138.23.117.20
VMKERNEL (iSCSI) IP: 138.23.117.21
VMNETWORK (VLAN ID 25) ==== VMNIC 0 1000 FULL

iSCSI target server: PowerEdge 1900, Broadcom Gigabit TOE interface
bnx0: 138.23.117.32

Steps below:

ISCSI TARGET SERVER INTERFACE LIST:

root at iscsi-sas:~# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
bnx0: flags=201000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,CoS> mtu 1500 index 2
        inet 138.23.117.32 netmask ffffff00 broadcast 138.23.117.255
        ether 0:1e:c9:d5:75:d2
lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
        inet6 ::1/128
root at iscsi-sas:~#

ESX HOST INTERFACE LIST

[root at vmware-860-1 log]# ifconfig -a
lo        Link encap:Local Loopback
          inet addr:127.0.0.1 Mask:255.0.0.0
          UP LOOPBACK RUNNING MTU:16436 Metric:1
          RX packets:21265 errors:0 dropped:0 overruns:0 frame:0
          TX packets:21265 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:11963350 (11.4 Mb) TX bytes:11963350 (11.4 Mb)

vmnic0    Link encap:Ethernet HWaddr 00:19:B9:F7:ED:DD
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:212272 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3354 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:16606213 (15.8 Mb) TX bytes:2131622 (2.0 Mb)
          Interrupt:97

vswif0    Link encap:Ethernet HWaddr 00:50:56:40:0D:17
          inet addr:138.23.117.20 Bcast:138.23.117.255 Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:2027 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3336 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:495940 (484.3 Kb) TX bytes:2123882 (2.0 Mb)
[root at vmware-860-1 log]#

iscsitadm create target -b /dev/zvol/rdsk/vdrive/LUNA iscsi
iscsitadm create target -b /dev/zvol/rdsk/vdrive/LUNB vscsi
iscsitadm create tpgt 1
iscsitadm modify tpgt -i 138.23.117.32 1

root at iscsi-sas:~# iscsitadm list tpgt -v
TPGT: 1
    IP Address: 138.23.117.32
root at iscsi-sas:~#

iscsitadm modify target -p 1 iscsi
iscsitadm modify target -p 1 vscsi

After assigning a TPGT value to each iSCSI target it seemed a little promising, but no luck.

root at iscsi-sas:~# iscsitadm list target -v
Target: vscsi
    iSCSI Name: iqn.1986-03.com.sun:02:35ec26d8-f173-6dd5-b239-93a9690ffe46.vscsi
    Connections: 0
    ACL list:
    TPGT list:
        TPGT: 1
    LUN information:
        LUN: 0
            GUID: 0
            VID: SUN
            PID: SOLARIS
            Type: disk
            Size: 1.3T
            Backing store: /dev/zvol/rdsk/vdrive/LUNB
            Status: online
Target: iscsi
    iSCSI Name: iqn.1986-03.com.sun:02:4d469663-2304-4796-87a5-dffa03cd14ea.iscsi
    Connections: 0
    ACL list:
    TPGT list:
        TPGT: 1
    LUN information:
        LUN: 0
            GUID: 0
            VID: SUN
            PID: SOLARIS
            Type: disk
            Size: 750G
            Backing store: /dev/zvol/rdsk/vdrive/LUNA
            Status: online
root at iscsi-sas:~#

Now for the test and logs. I logged into ESX Infrastructure and added the iSCSI target IP under Dynamic Discovery in the ESX iSCSI server settings:

LOGS:

Oct 17 06:49:18 vmware-860-1 vmkernel: 0:02:03:01.499 cpu1:1035)iSCSI: bus 0 target 0 = iqn.1986-03.com.sun:02:4d469663-2304-4796-87a5-dffa03cd14ea.iscsi
Oct 17 06:49:18 vmware-860-1 vmkernel: 0:02:03:01.499 cpu1:1035)iSCSI: bus 0 target 0 portal 0 = address 138.23.117.32 port 3260 group 1
Oct 17 06:49:18 vmware-860-1 vmkernel: 0:02:03:01.499 cpu1:1035)LinStubs: 2045: fn=0x9bb1d8 arg=0x0
Oct 17 06:49:18 vmware-860-1 vmkernel: 0:02:03:01.499 cpu1:1035)World: vm 1068: 900: Starting world driver with flags 1
Oct 17 06:49:18 vmware-860-1 vmkernel: 0:02:03:01.499 cpu0:1068)LinStubs: 2114: fn=0x9bb1d8 arg=0x0
Oct 17 06:49:18 vmware-860-1 vmkernel: 0:02:03:01.499 cpu1:1035)LinStubs: 2045: fn=0x9b186c arg=0x9203f90
Oct 17 06:49:18 vmware-860-1 vmkernel: 
0:02:03:01.499 cpu1:1035)World: vm 1069: 900: Starting world driver with flags 1 Oct 17 06:49:18 vmware-860-1 vmkernel: 0:02:03:01.499 cpu1:1035)LinStubs: 2045: fn=0x9b7cd0 arg=0x9203f90 Oct 17 06:49:18 vmware-860-1 vmkernel: 0:02:03:01.500 cpu1:1035)World: vm 1070: 900: Starting world driver with flags 1 Oct 17 06:49:18 vmware-860-1 vmkernel: 0:02:03:01.500 cpu0:1069)LinStubs: 2114: fn=0x9b186c arg=0x9203f90 Oct 17 06:49:18 vmware-860-1 vmkernel: 0:02:03:01.500 cpu0:1070)LinStubs: 2114: fn=0x9b7cd0 arg=0x9203f90 Oct 17 06:49:18 vmware-860-1 vmkernel: 0:02:03:01.500 cpu0:1070)iSCSI: bus 0 target 0 trying to establish session 0x9203f90 to portal 0, address 138.23.117.32 port 3260 group 1 Oct 17 06:49:18 vmware-860-1 vmkernel: 0:02:03:01.501 cpu0:1070)iSCSI: bus 0 target 0 established session 0x9203f90 #1 to portal 0, address 138.23.117.32 port 3260 group 1, alias iscsi Oct 17 06:49:18 vmware-860-1 vmkernel: 0:02:03:01.512 cpu1:1033)iSCSI: bus 0 target 1 = iqn.1986-03.com.sun:02:35ec26d8-f173-6dd5-b239-93a9690ffe46.vscsi Oct 17 06:49:18 vmware-860-1 vmkernel: 0:02:03:01.512 cpu1:1033)iSCSI: bus 0 target 1 portal 0 = address 138.23.117.32 port 3260 group 1 Oct 17 06:49:18 vmware-860-1 vmkernel: 0:02:03:01.512 cpu1:1033)LinStubs: 2045: fn=0x9b186c arg=0x92180a0 Oct 17 06:49:18 vmware-860-1 vmkernel: 0:02:03:01.512 cpu1:1033)World: vm 1071: 900: Starting world driver with flags 1 Oct 17 06:49:18 vmware-860-1 vmkernel: 0:02:03:01.512 cpu0:1071)LinStubs: 2114: fn=0x9b186c arg=0x92180a0 Oct 17 06:49:18 vmware-860-1 vmkernel: 0:02:03:01.512 cpu1:1033)LinStubs: 2045: fn=0x9b7cd0 arg=0x92180a0 Oct 17 06:49:18 vmware-860-1 vmkernel: 0:02:03:01.512 cpu1:1033)World: vm 1072: 900: Starting world driver with flags 1 Oct 17 06:49:18 vmware-860-1 vmkernel: 0:02:03:01.512 cpu0:1072)LinStubs: 2114: fn=0x9b7cd0 arg=0x92180a0 Oct 17 06:49:18 vmware-860-1 vmkernel: 0:02:03:01.512 cpu0:1072)iSCSI: bus 0 target 1 trying to establish session 0x92180a0 to portal 0, address 138.23.117.32 port 
3260 group 1 Oct 17 06:49:18 vmware-860-1 vmkernel: 0:02:03:01.524 cpu0:1072)iSCSI: bus 0 target 1 established session 0x92180a0 #1 to portal 0, address 138.23.117.32 port 3260 group 1, alias vscsi Note: Adding the ISCSI host was significantly quicker than before. Must be TPGT. Now I rescan the host (recommended) after adding the iscsi target server: /var/log/vmkernel Oct 17 06:51:11 vmware-860-1 vmkernel: 0:02:04:53.940 cpu1:1034)SCSI: 861: GetInfo for adapter vmhba32, [0x3f041100], max_vports=0, vports_inuse=0, linktype=0, state=0, failreason=0, rv=-19, sts=bad0001 Oct 17 06:51:11 vmware-860-1 vmkernel: 0:02:04:53.942 cpu1:1035)SCSI: 861: GetInfo for adapter vmhba32, [0x3f041100], max_vports=0, vports_inuse=0, linktype=0, state=0, failreason=0, rv=-19, sts=bad0001 Oct 17 06:51:11 vmware-860-1 vmkernel: 0:02:04:53.946 cpu1:1033)SCSI: 861: GetInfo for adapter vmhba32, [0x3f041100], max_vports=0, vports_inuse=0, linktype=0, state=0, failreason=0, rv=-19, sts=bad0001 Oct 17 06:51:16 vmware-860-1 vmkernel: 0:02:04:59.149 cpu1:1034)SCSI: 861: GetInfo for adapter vmhba33, [0x3f058a00], max_vports=0, vports_inuse=0, linktype=0, state=0, failreason=0, rv=-1, sts=bad001f Oct 17 06:51:16 vmware-860-1 vmkernel: 0:02:04:59.155 cpu0:1035)SCSI: 861: GetInfo for adapter vmhba33, [0x3f058a00], max_vports=0, vports_inuse=0, linktype=0, state=0, failreason=0, rv=-1, sts=bad001f Oct 17 06:51:16 vmware-860-1 vmkernel: 0:02:04:59.155 cpu0:1035)ScsiScan: 395: Path ''vmhba33:C0:T0:L0'': Vendor: ''SUN '' Model: ''SOLARIS '' Rev: ''1 '' Oct 17 06:51:16 vmware-860-1 vmkernel: 0:02:04:59.155 cpu0:1035)ScsiScan: 396: Type: 0x0, ANSI rev: 5 Oct 17 06:51:16 vmware-860-1 vmkernel: VMWARE SCSI Id: Supported VPD pages for vmhba33:C0:T0:L0 : 0x0 0x80 0x83 0x86 Oct 17 06:51:16 vmware-860-1 vmkernel: VMWARE SCSI Id: Device id info for vmhba33:C0:T0:L0: 0x51 0x95 0x0 0x4 0x0 0x0 0x0 0x1 0x51 0x94 0x0 0x4 0x0 0x0 0x0 0x1 0x51 0x3 0x0 0x10 0x60 0x1 0x44 0xf0 0x48 0xf8 0xfa 0x2a 0x0 0x0 0x1e 0xc9 0xd5 
0x75 0xd2 0x0 0x53 0xa8 0x0 0x44 0x69 0x71 0x6e 0x2e 0x3 Oct 17 06:51:16 vmware-860-1 vmkernel: 1 0x39 0x38 0x36 0x2d 0x30 0x33 0x2e 0x63 0x6f 0x6d 0x2e 0x73 0x75 0x6e 0x3a 0x30 0x32 0x3a 0x34 0x64 0x34 0x36 0x39 0x36 0x36 0x33 0x2d 0x32 0x33 0x30 0x34 0x2d 0x34 0x37 0x39 0x36 0x2d 0x38 0x37 0x61 0x35 0x2d 0x64 0x66 0x66 0x61 0x30 0x33 0x63 0x64 0x31 Oct 17 06:51:16 vmware-860-1 vmkernel: 0x34 0x65 0x61 0x2e 0x69 0x73 0x63 0x73 0x69 0x0 0x0 0x0 0x51 0xa3 0x0 0x10 0x60 0x1 0x44 0xf0 0x48 0xf8 0xfa 0x2a 0x0 0x0 0x1e 0xc9 0xd5 0x75 0xd2 0x0 0x51 0x93 0x0 0x10 0x60 0x1 0x44 0xf0 0x48 0xf8 0xfa 0x2a 0x0 0x0 0x1e 0xc9 0xd5 0x75 0xd2 0x0 Oct 17 06:51:16 vmware-860-1 vmkernel: VMWARE SCSI Id: Id for vmhba33:C0:T0:L0 0x60 0x01 0x44 0xf0 0x48 0xf8 0xfa 0x2a 0x00 0x00 0x1e 0xc9 0xd5 0x75 0xd2 0x00 0x53 0x4f 0x4c 0x41 0x52 0x49 Oct 17 06:51:16 vmware-860-1 vmkernel: 0:02:04:59.158 cpu0:1035)ScsiScan: 641: Discovered path vmhba33:C0:T0:L0 Oct 17 06:51:16 vmware-860-1 vmkernel: 0:02:04:59.159 cpu0:1035)ScsiScan: 395: Path ''vmhba33:C0:T1:L0'': Vendor: ''SUN '' Model: ''SOLARIS '' Rev: ''1 '' Oct 17 06:51:16 vmware-860-1 vmkernel: 0:02:04:59.159 cpu0:1035)ScsiScan: 396: Type: 0x0, ANSI rev: 5 Oct 17 06:51:16 vmware-860-1 vmkernel: VMWARE SCSI Id: Supported VPD pages for vmhba33:C0:T1:L0 : 0x0 0x80 0x83 0x86 Oct 17 06:51:16 vmware-860-1 vmkernel: VMWARE SCSI Id: Device id info for vmhba33:C0:T1:L0: 0x51 0x95 0x0 0x4 0x0 0x0 0x0 0x1 0x51 0x94 0x0 0x4 0x0 0x0 0x0 0x1 0x51 0x3 0x0 0x10 0x60 0x1 0x44 0xf0 0x48 0xf8 0xfa 0x2b 0x0 0x0 0x1e 0xc9 0xd5 0x75 0xd2 0x0 0x53 0xa8 0x0 0x44 0x69 0x71 0x6e 0x2e 0x3 Oct 17 06:51:16 vmware-860-1 vmkernel: 1 0x39 0x38 0x36 0x2d 0x30 0x33 0x2e 0x63 0x6f 0x6d 0x2e 0x73 0x75 0x6e 0x3a 0x30 0x32 0x3a 0x33 0x35 0x65 0x63 0x32 0x36 0x64 0x38 0x2d 0x66 0x31 0x37 0x33 0x2d 0x36 0x64 0x64 0x35 0x2d 0x62 0x32 0x33 0x39 0x2d 0x39 0x33 0x61 0x39 0x36 0x39 0x30 0x66 0x66 Oct 17 06:51:16 vmware-860-1 vmkernel: 0x65 0x34 0x36 0x2e 0x76 0x73 0x63 0x73 0x69 0x0 0x0 
0x0 0x51 0xa3 0x0 0x10 0x60 0x1 0x44 0xf0 0x48 0xf8 0xfa 0x2b 0x0 0x0 0x1e 0xc9 0xd5 0x75 0xd2 0x0 0x51 0x93 0x0 0x10 0x60 0x1 0x44 0xf0 0x48 0xf8 0xfa 0x2b 0x0 0x0 0x1e 0xc9 0xd5 0x75 0xd2 0x0 Oct 17 06:51:16 vmware-860-1 vmkernel: VMWARE SCSI Id: Id for vmhba33:C0:T1:L0 0x60 0x01 0x44 0xf0 0x48 0xf8 0xfa 0x2b 0x00 0x00 0x1e 0xc9 0xd5 0x75 0xd2 0x00 0x53 0x4f 0x4c 0x41 0x52 0x49 Oct 17 06:51:16 vmware-860-1 vmkernel: 0:02:04:59.162 cpu0:1035)ScsiScan: 641: Discovered path vmhba33:C0:T1:L0 Oct 17 06:51:16 vmware-860-1 vmkernel: 0:02:04:59.179 cpu0:1035)SCSI: 1917: Device vmhba33:0:0 has not been identified as being attached to an active/passive SAN. It is either attached to an active/active SAN or is a local device. Oct 17 06:51:16 vmware-860-1 vmkernel: 0:02:04:59.179 cpu0:1035)ScsiPath: 3178: Plugin ''legacyMP'' claimed path ''vmhba33:C0:T0:L0'' Oct 17 06:51:16 vmware-860-1 vmkernel: 0:02:04:59.179 cpu0:1035)SCSI: 1917: Device vmhba33:1:0 has not been identified as being attached to an active/passive SAN. It is either attached to an active/active SAN or is a local device. Oct 17 06:51:16 vmware-860-1 vmkernel: 0:02:04:59.179 cpu0:1035)ScsiPath: 3178: Plugin ''legacyMP'' claimed path ''vmhba33:C0:T1:L0'' Oct 17 06:51:16 vmware-860-1 vmkernel: 0:02:04:59.200 cpu0:1035)ScsiDevice: 3479: Adding Device "vml.0200000000600144f048f8fa2a00001ec9d575d200534f4c415249" from Plugin "legacyMP", Device Type 0 Oct 17 06:51:16 vmware-860-1 vmkernel: 0:02:04:59.200 cpu0:1035)SCSI: 5495: Logical device vml.0200000000600144f048f8fa2a00001ec9d575d200534f4c415249 for target vmhba33:0:0 was registered successfully. 
Oct 17 06:51:16 vmware-860-1 vmkernel: 0:02:04:59.201 cpu0:1035)ScsiDevice: 3479: Adding Device "vml.0200000000600144f048f8fa2b00001ec9d575d200534f4c415249" from Plugin "legacyMP", Device Type 0 Oct 17 06:51:16 vmware-860-1 vmkernel: 0:02:04:59.201 cpu0:1035)SCSI: 5495: Logical device vml.0200000000600144f048f8fa2b00001ec9d575d200534f4c415249 for target vmhba33:1:0 was registered successfully. Oct 17 06:51:16 vmware-860-1 vmkernel: 0:02:04:59.204 cpu1:1033)SCSI: 861: GetInfo for adapter vmhba33, [0x3f058a00], max_vports=0, vports_inuse=0, linktype=0, state=0, failreason=0, rv=-1, sts=bad001f Oct 17 06:51:16 vmware-860-1 vmkernel: VMWARE SCSI Id: Supported VPD pages for vmhba33:C0:T0:L0 : 0x0 0x80 0x83 0x86 Oct 17 06:51:16 vmware-860-1 vmkernel: VMWARE SCSI Id: Device id info for vmhba33:C0:T0:L0: 0x51 0x95 0x0 0x4 0x0 0x0 0x0 0x1 0x51 0x94 0x0 0x4 0x0 0x0 0x0 0x1 0x51 0x3 0x0 0x10 0x60 0x1 0x44 0xf0 0x48 0xf8 0xfa 0x2a 0x0 0x0 0x1e 0xc9 0xd5 0x75 0xd2 0x0 0x53 0xa8 0x0 0x44 0x69 0x71 0x6e 0x2e 0x3 Oct 17 06:51:16 vmware-860-1 vmkernel: 1 0x39 0x38 0x36 0x2d 0x30 0x33 0x2e 0x63 0x6f 0x6d 0x2e 0x73 0x75 0x6e 0x3a 0x30 0x32 0x3a 0x34 0x64 0x34 0x36 0x39 0x36 0x36 0x33 0x2d 0x32 0x33 0x30 0x34 0x2d 0x34 0x37 0x39 0x36 0x2d 0x38 0x37 0x61 0x35 0x2d 0x64 0x66 0x66 0x61 0x30 0x33 0x63 0x64 0x31 Oct 17 06:51:16 vmware-860-1 vmkernel: 0x34 0x65 0x61 0x2e 0x69 0x73 0x63 0x73 0x69 0x0 0x0 0x0 0x51 0xa3 0x0 0x10 0x60 0x1 0x44 0xf0 0x48 0xf8 0xfa 0x2a 0x0 0x0 0x1e 0xc9 0xd5 0x75 0xd2 0x0 0x51 0x93 0x0 0x10 0x60 0x1 0x44 0xf0 0x48 0xf8 0xfa 0x2a 0x0 0x0 0x1e 0xc9 0xd5 0x75 0xd2 0x0 Oct 17 06:51:16 vmware-860-1 vmkernel: VMWARE SCSI Id: Id for vmhba33:C0:T0:L0 0x60 0x01 0x44 0xf0 0x48 0xf8 0xfa 0x2a 0x00 0x00 0x1e 0xc9 0xd5 0x75 0xd2 0x00 0x53 0x4f 0x4c 0x41 0x52 0x49 Oct 17 06:51:16 vmware-860-1 vmkernel: VMWARE SCSI Id: Supported VPD pages for vmhba33:C0:T1:L0 : 0x0 0x80 0x83 0x86 Oct 17 06:51:16 vmware-860-1 vmkernel: VMWARE SCSI Id: Device id info for vmhba33:C0:T1:L0: 0x51 
0x95 0x0 0x4 0x0 0x0 0x0 0x1 0x51 0x94 0x0 0x4 0x0 0x0 0x0 0x1 0x51 0x3 0x0 0x10 0x60 0x1 0x44 0xf0 0x48 0xf8 0xfa 0x2b 0x0 0x0 0x1e 0xc9 0xd5 0x75 0xd2 0x0 0x53 0xa8 0x0 0x44 0x69 0x71 0x6e 0x2e 0x3 Oct 17 06:51:16 vmware-860-1 vmkernel: 1 0x39 0x38 0x36 0x2d 0x30 0x33 0x2e 0x63 0x6f 0x6d 0x2e 0x73 0x75 0x6e 0x3a 0x30 0x32 0x3a 0x33 0x35 0x65 0x63 0x32 0x36 0x64 0x38 0x2d 0x66 0x31 0x37 0x33 0x2d 0x36 0x64 0x64 0x35 0x2d 0x62 0x32 0x33 0x39 0x2d 0x39 0x33 0x61 0x39 0x36 0x39 0x30 0x66 0x66 Oct 17 06:51:16 vmware-860-1 vmkernel: 0x65 0x34 0x36 0x2e 0x76 0x73 0x63 0x73 0x69 0x0 0x0 0x0 0x51 0xa3 0x0 0x10 0x60 0x1 0x44 0xf0 0x48 0xf8 0xfa 0x2b 0x0 0x0 0x1e 0xc9 0xd5 0x75 0xd2 0x0 0x51 0x93 0x0 0x10 0x60 0x1 0x44 0xf0 0x48 0xf8 0xfa 0x2b 0x0 0x0 0x1e 0xc9 0xd5 0x75 0xd2 0x0 Oct 17 06:51:16 vmware-860-1 vmkernel: VMWARE SCSI Id: Id for vmhba33:C0:T1:L0 0x60 0x01 0x44 0xf0 0x48 0xf8 0xfa 0x2b 0x00 0x00 0x1e 0xc9 0xd5 0x75 0xd2 0x00 0x53 0x4f 0x4c 0x41 0x52 0x49 Oct 17 06:51:16 vmware-860-1 vmkernel: 0:02:04:59.453 cpu1:1034)SCSI: 861: GetInfo for adapter vmhba0, [0x3f040d00], max_vports=0, vports_inuse=0, linktype=0, state=0, failreason=0, rv=-25, sts=bad0001 Oct 17 06:51:16 vmware-860-1 vmkernel: 0:02:04:59.453 cpu1:1034)ScsiScan: 395: Path ''vmhba0:C0:T0:L0'': Vendor: ''ATA '' Model: ''Maxtor 7Y250M0 '' Rev: ''YAR5'' Oct 17 06:51:16 vmware-860-1 vmkernel: 0:02:04:59.453 cpu1:1034)ScsiScan: 396: Type: 0x0, ANSI rev: 5 Oct 17 06:51:16 vmware-860-1 vmkernel: 0:02:04:59.453 cpu1:1034)ScsiUid: 653: No supported identifier types found in VPD device id page. 
Oct 17 06:51:16 vmware-860-1 vmkernel: 0:02:04:59.453 cpu1:1034)ScsiScan: 516: Path ''vmhba0:C0:T0:L0'': No standard UID: Failure Oct 17 06:51:16 vmware-860-1 vmkernel: 0:02:04:59.455 cpu1:1034)SCSI: 861: GetInfo for adapter vmhba0, [0x3f040d00], max_vports=0, vports_inuse=0, linktype=0, state=0, failreason=0, rv=-25, sts=bad0001 Oct 17 06:51:16 vmware-860-1 vmkernel: 0:02:04:59.461 cpu1:1034)SCSI: 861: GetInfo for adapter vmhba0, [0x3f040d00], max_vports=0, vports_inuse=0, linktype=0, state=0, failreason=0, rv=-25, sts=bad0001 Oct 17 06:51:16 vmware-860-1 vmkernel: VMWARE SCSI Id: Supported VPD pages for vmhba0:C0:T0:L0 : 0x0 0x80 0x83 Oct 17 06:51:16 vmware-860-1 vmkernel: VMWARE SCSI Id: Device id info for vmhba0:C0:T0:L0: 0x2 0x0 0x0 0x18 0x4c 0x69 0x6e 0x75 0x78 0x20 0x41 0x54 0x41 0x2d 0x53 0x43 0x53 0x49 0x20 0x73 0x69 0x6d 0x75 0x6c 0x61 0x74 0x6f 0x72 Oct 17 06:51:16 vmware-860-1 vmkernel: VMWARE SCSI Id: Id for vmhba0:C0:T0:L0 0x59 0x36 0x33 0x39 0x36 0x52 0x30 0x45 0x20 0x20 0x20 0x20 0x20 0x20 0x20 0x20 0x20 0x20 0x20 0x20 0x4d 0x61 0x78 0x74 0x6f 0x72 Again discovering the disks were significantly quicker. Which was leading me to a glimmer of hope that this may actually work. Now we add the disks to the list of available storage units in ESX. First I add LUNA (750Gigabytes) or target name iqn.xxxxx.iscsi /var/log/vmkernel Oct 17 06:53:48 vmware-860-1 vmkernel: 0:02:07:31.591 cpu1:1035)LVM: 6505: LVMProbeDevice failed with status "Device does not contain a logical volume". Oct 17 06:53:49 vmware-860-1 vmkernel: 0:02:07:31.962 cpu1:1035)LVM: 4866: Initialized vmhba33:0:0:1, devID 48f898ed-04041295-632a-0019b9f7eddd Oct 17 06:53:49 vmware-860-1 vmkernel: 0:02:07:31.967 cpu1:1035)LVM: 5016: Zero volumeSize specified: using available space (805286372352). 
Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:35.878 cpu1:1035)FS3: 130: <START rootDir> Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:35.878 cpu1:1035)Lock [type 10c00001 offset 4292608 v 1, hb offset 0 Oct 17 06:53:53 vmware-860-1 vmkernel: gen 0, mode 0, owner 00000000-00000000-0000-000000000000 mtime 7655] Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:35.878 cpu1:1035)Addr <4, 0, 0>, gen 1, links 2, type dir, flags 0x0, uid 0, gid 0, mode 1755 Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:35.878 cpu1:1035)len 980, nb 1 tbz 0, zla 1, bs 1048576 Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:35.878 cpu1:1035)FS3: 132: <END rootDir> Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:35.878 cpu1:1035)FS3: 157: <START fbb> Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:35.878 cpu1:1035)767744 resources, each of size 0 Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:35.878 cpu1:1035)Organized as 120 CGs, 32 C/CG and 200 R/C Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:35.878 cpu1:1035)CGsize 32768. 
0th CG at 65536 Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:35.878 cpu1:1035)FS3: 159: <END fbb> Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.011 cpu1:1035)FS3: 130: <START fbb> Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.011 cpu1:1035)Lock [type 10c00001 offset 4294656 v 1, hb offset 0 Oct 17 06:53:53 vmware-860-1 vmkernel: gen 0, mode 0, owner 00000000-00000000-0000-000000000000 mtime 7655] Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.011 cpu1:1035)Addr <4, 0, 1>, gen 1, links 1, type sys, flags 0x0, uid 0, gid 0, mode 400 Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.011 cpu1:1035)len 3997696, nb 4 tbz 0, zla 1, bs 1048576 Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.011 cpu1:1035)FS3: 132: <END fbb> Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.011 cpu1:1035)FS3: 157: <START fdc> Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.011 cpu1:1035)30720 resources, each of size 2048 Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.011 cpu1:1035)Organized as 5 CGs, 32 C/CG and 200 R/C Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.011 cpu1:1035)CGsize 13139968. 
0th CG at 65536 Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.011 cpu1:1035)FS3: 159: <END fdc> Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.056 cpu1:1035)FS3: 130: <START fdc> Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.056 cpu1:1035)Lock [type 10c00001 offset 4296704 v 1, hb offset 0 Oct 17 06:53:53 vmware-860-1 vmkernel: gen 0, mode 0, owner 00000000-00000000-0000-000000000000 mtime 7656] Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.056 cpu1:1035)Addr <4, 0, 2>, gen 1, links 1, type sys, flags 0x0, uid 0, gid 0, mode 400 Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.056 cpu1:1035)len 63143936, nb 61 tbz 0, zla 1, bs 1048576 Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.056 cpu1:1035)FS3: 132: <END fdc> Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.056 cpu1:1035)FS3: 157: <START pbc> Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.056 cpu1:1035)61440 resources, each of size 4096 Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.056 cpu1:1035)Organized as 120 CGs, 32 C/CG and 16 R/C Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.056 cpu1:1035)CGsize 2129920. 
0th CG at 65536 Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.056 cpu1:1035)FS3: 159: <END pbc> Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.193 cpu1:1035)FS3: 130: <START pbc> Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.193 cpu1:1035)Lock [type 10c00001 offset 4298752 v 1, hb offset 0 Oct 17 06:53:53 vmware-860-1 vmkernel: gen 0, mode 0, owner 00000000-00000000-0000-000000000000 mtime 7656] Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.193 cpu1:1035)Addr <4, 0, 3>, gen 1, links 1, type sys, flags 0x0, uid 0, gid 0, mode 400 Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.193 cpu1:1035)len 255655936, nb 244 tbz 0, zla 1, bs 1048576 Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.193 cpu1:1035)FS3: 132: <END pbc> Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.193 cpu1:1035)FS3: 157: <START sbc> Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.193 cpu1:1035)3968 resources, each of size 65536 Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.193 cpu1:1035)Organized as 8 CGs, 32 C/CG and 16 R/C Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.193 cpu1:1035)CGsize 33587200. 
0th CG at 65536 Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.193 cpu1:1035)FS3: 159: <END sbc> Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.241 cpu1:1035)FS3: 130: <START sbc> Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.241 cpu1:1035)Lock [type 10c00001 offset 4300800 v 1, hb offset 0 Oct 17 06:53:53 vmware-860-1 vmkernel: gen 0, mode 0, owner 00000000-00000000-0000-000000000000 mtime 7656] Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.241 cpu1:1035)Addr <4, 0, 4>, gen 1, links 1, type sys, flags 0x0, uid 0, gid 0, mode 400 Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.241 cpu1:1035)len 260374528, nb 249 tbz 0, zla 1, bs 1048576 Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.241 cpu1:1035)FS3: 132: <END sbc> Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.241 cpu1:1035)FS3: 130: <START vh> Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.241 cpu1:1035)Lock [type 10c00001 offset 4302848 v 1, hb offset 0 Oct 17 06:53:53 vmware-860-1 vmkernel: gen 0, mode 0, owner 00000000-00000000-0000-000000000000 mtime 7656] Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.241 cpu1:1035)Addr <4, 0, 5>, gen 1, links 1, type sys, flags 0x0, uid 0, gid 0, mode 400 Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.241 cpu1:1035)len 4194304, nb 4 tbz 0, zla 1, bs 1048576 Oct 17 06:53:53 vmware-860-1 vmkernel: 0:02:07:36.241 cpu1:1035)FS3: 132: <END vh> Now I add the storage unit LUNB iqn.xxxxx.vscsi Oct 17 06:55:20 vmware-860-1 vmkernel: 0:02:09:03.219 cpu1:1034)LVM: 6505: LVMProbeDevice failed with status "Device does not contain a logical volume". Oct 17 06:55:20 vmware-860-1 vmkernel: 0:02:09:03.549 cpu1:1034)LVM: 4866: Initialized vmhba33:1:0:1, devID 48f89948-fdafdfa6-af31-0019b9f7eddd Oct 17 06:55:20 vmware-860-1 vmkernel: 0:02:09:03.554 cpu1:1034)LVM: 5016: Zero volumeSize specified: using available space (1384033523712). 
Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.215 cpu1:1034)FS3: 130: <START rootDir> Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.215 cpu1:1034)Lock [type 10c00001 offset 8486912 v 1, hb offset 0 Oct 17 06:55:27 vmware-860-1 vmkernel: gen 0, mode 0, owner 00000000-00000000-0000-000000000000 mtime 7750] Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.215 cpu1:1034)Addr <4, 0, 0>, gen 1, links 2, type dir, flags 0x0, uid 0, gid 0, mode 1755 Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.215 cpu1:1034)len 980, nb 1 tbz 0, zla 1, bs 8388608 Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.215 cpu1:1034)FS3: 132: <END rootDir> Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.215 cpu1:1034)FS3: 157: <START fbb> Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.215 cpu1:1034)164960 resources, each of size 0 Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.215 cpu1:1034)Organized as 26 CGs, 32 C/CG and 200 R/C Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.215 cpu1:1034)CGsize 32768. 
0th CG at 65536 Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.215 cpu1:1034)FS3: 159: <END fbb> Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.280 cpu1:1034)FS3: 130: <START fbb> Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.280 cpu1:1034)Lock [type 10c00001 offset 8488960 v 1, hb offset 0 Oct 17 06:55:27 vmware-860-1 vmkernel: gen 0, mode 0, owner 00000000-00000000-0000-000000000000 mtime 7750] Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.280 cpu1:1034)Addr <4, 0, 1>, gen 1, links 1, type sys, flags 0x0, uid 0, gid 0, mode 400 Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.280 cpu1:1034)len 917504, nb 1 tbz 0, zla 1, bs 8388608 Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.280 cpu1:1034)FS3: 132: <END fbb> Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.280 cpu1:1034)FS3: 157: <START fdc> Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.280 cpu1:1034)30720 resources, each of size 2048 Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.280 cpu1:1034)Organized as 5 CGs, 32 C/CG and 200 R/C Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.280 cpu1:1034)CGsize 13139968. 
0th CG at 65536 Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.280 cpu1:1034)FS3: 159: <END fdc> Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.325 cpu1:1034)FS3: 130: <START fdc> Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.325 cpu1:1034)Lock [type 10c00001 offset 8491008 v 1, hb offset 0 Oct 17 06:55:27 vmware-860-1 vmkernel: gen 0, mode 0, owner 00000000-00000000-0000-000000000000 mtime 7750] Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.325 cpu1:1034)Addr <4, 0, 2>, gen 1, links 1, type sys, flags 0x0, uid 0, gid 0, mode 400 Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.325 cpu1:1034)len 63143936, nb 8 tbz 0, zla 1, bs 8388608 Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.325 cpu1:1034)FS3: 132: <END fdc> Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.325 cpu1:1034)FS3: 157: <START pbc> Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.325 cpu1:1034)61440 resources, each of size 4096 Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.325 cpu1:1034)Organized as 120 CGs, 32 C/CG and 16 R/C Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.325 cpu1:1034)CGsize 2129920. 
0th CG at 65536 Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.325 cpu1:1034)FS3: 159: <END pbc> Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.460 cpu1:1034)FS3: 130: <START pbc> Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.460 cpu1:1034)Lock [type 10c00001 offset 8493056 v 1, hb offset 0 Oct 17 06:55:27 vmware-860-1 vmkernel: gen 0, mode 0, owner 00000000-00000000-0000-000000000000 mtime 7750] Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.460 cpu1:1034)Addr <4, 0, 3>, gen 1, links 1, type sys, flags 0x0, uid 0, gid 0, mode 400 Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.460 cpu1:1034)len 255655936, nb 31 tbz 0, zla 1, bs 8388608 Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.460 cpu1:1034)FS3: 132: <END pbc> Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.460 cpu1:1034)FS3: 157: <START sbc> Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.460 cpu1:1034)3968 resources, each of size 65536 Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.460 cpu1:1034)Organized as 8 CGs, 32 C/CG and 16 R/C Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.461 cpu1:1034)CGsize 33587200. 
0th CG at 65536 Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.461 cpu1:1034)FS3: 159: <END sbc> Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.507 cpu1:1034)FS3: 130: <START sbc> Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.507 cpu1:1034)Lock [type 10c00001 offset 8495104 v 1, hb offset 0 Oct 17 06:55:27 vmware-860-1 vmkernel: gen 0, mode 0, owner 00000000-00000000-0000-000000000000 mtime 7750] Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.507 cpu1:1034)Addr <4, 0, 4>, gen 1, links 1, type sys, flags 0x0, uid 0, gid 0, mode 400 Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.507 cpu1:1034)len 260374528, nb 32 tbz 0, zla 1, bs 8388608 Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.507 cpu1:1034)FS3: 132: <END sbc> Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.507 cpu1:1034)FS3: 130: <START vh> Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.507 cpu1:1034)Lock [type 10c00001 offset 8497152 v 1, hb offset 0 Oct 17 06:55:27 vmware-860-1 vmkernel: gen 0, mode 0, owner 00000000-00000000-0000-000000000000 mtime 7750] Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.507 cpu1:1034)Addr <4, 0, 5>, gen 1, links 1, type sys, flags 0x0, uid 0, gid 0, mode 400 Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.507 cpu1:1034)len 8388608, nb 1 tbz 0, zla 1, bs 8388608 Oct 17 06:55:27 vmware-860-1 vmkernel: 0:02:09:10.507 cpu1:1034)FS3: 132: <END vh> Now lets try to do a cold migration of an existing virtual server from the local storage to the iscsi SAN. Oct 17 06:57:14 vmware-860-1 vmkernel: 0:02:10:57.309 cpu1:1061)LinSCSI: 3201: Abort failed for cmd with serial=1294, status=bad0001, retval=bad0001 Nothing on the ESX side. 
Migration is still "in progress:" Then: Oct 17 06:57:14 vmware-860-1 last message repeated 4 times Oct 17 06:57:51 vmware-860-1 vmkernel: 0:02:11:34.305 cpu0:1068)iSCSI: 50 second timeout expired for session 0x9203f90, rx 784431, ping 788930, now 789431 Oct 17 06:57:51 vmware-860-1 vmkernel: 0:02:11:34.306 cpu0:1069)iSCSI: session 0x9203f90 xmit_data failed to send 131120 bytes, rc -512 Oct 17 06:57:51 vmware-860-1 vmkernel: 0:02:11:34.306 cpu1:1070)iSCSI: session 0x9203f90 to iscsi dropped Oct 17 06:57:51 vmware-860-1 vmkernel: 0:02:11:34.306 cpu1:1070)iSCSI: session 0x9203f90 for (2 0 0 *) rx thread 1070handled timeout, notify tx Oct 17 06:57:51 vmware-860-1 vmkernel: 0:02:11:34.306 cpu1:1070)iSCSI: bus 0 target 0 trying to establish session 0x9203f90 to portal 0, address 138.23.117.32 port 3260 group 1 Oct 17 06:57:51 vmware-860-1 vmkernel: 0:02:11:34.307 cpu1:1070)iSCSI: bus 0 target 0 established session 0x9203f90 #2 to portal 0, address 138.23.117.32 port 3260 group 1, alias iscsi Oct 17 06:57:52 vmware-860-1 vmkernel: 0:02:11:35.427 cpu1:1031)Fil3: 5004: WRITE error 0xbad00e5 Oct 17 06:57:52 vmware-860-1 vmkernel: 0:02:11:35.427 cpu1:1037)FS3: 4829: Reclaimed timed out heartbeat [HB state abcdef02 offset 3225088 gen 29 stamp 7842338662 uuid 48f87b0e-ec804c43-8088-0019b9f7eddd jrnl <FB 499805> drv 4.31] Oct 17 06:57:52 vmware-860-1 vmkernel: VMWARE SCSI Id: Supported VPD pages for vmhba33:C0:T0:L0 : 0x0 0x80 0x83 0x86 Oct 17 06:57:52 vmware-860-1 vmkernel: VMWARE SCSI Id: Device id info for vmhba33:C0:T0:L0: 0x51 0x95 0x0 0x4 0x0 0x0 0x0 0x1 0x51 0x94 0x0 0x4 0x0 0x0 0x0 0x1 0x51 0x3 0x0 0x10 0x60 0x1 0x44 0xf0 0x48 0xf8 0xfa 0x2a 0x0 0x0 0x1e 0xc9 0xd5 0x75 0xd2 0x0 0x53 0xa8 0x0 0x44 0x69 0x71 0x6e 0x2e 0x3 Oct 17 06:57:52 vmware-860-1 vmkernel: 1 0x39 0x38 0x36 0x2d 0x30 0x33 0x2e 0x63 0x6f 0x6d 0x2e 0x73 0x75 0x6e 0x3a 0x30 0x32 0x3a 0x34 0x64 0x34 0x36 0x39 0x36 0x36 0x33 0x2d 0x32 0x33 0x30 0x34 0x2d 0x34 0x37 0x39 0x36 0x2d 0x38 0x37 0x61 0x35 0x2d 
0x64 0x66 0x66 0x61 0x30 0x33 0x63 0x64 0x31 Oct 17 06:57:52 vmware-860-1 vmkernel: 0x34 0x65 0x61 0x2e 0x69 0x73 0x63 0x73 0x69 0x0 0x0 0x0 0x51 0xa3 0x0 0x10 0x60 0x1 0x44 0xf0 0x48 0xf8 0xfa 0x2a 0x0 0x0 0x1e 0xc9 0xd5 0x75 0xd2 0x0 0x51 0x93 0x0 0x10 0x60 0x1 0x44 0xf0 0x48 0xf8 0xfa 0x2a 0x0 0x0 0x1e 0xc9 0xd5 0x75 0xd2 0x0 Oct 17 06:57:52 vmware-860-1 vmkernel: VMWARE SCSI Id: Id for vmhba33:C0:T0:L0 0x60 0x01 0x44 0xf0 0x48 0xf8 0xfa 0x2a 0x00 0x00 0x1e 0xc9 0xd5 0x75 0xd2 0x00 0x53 0x4f 0x4c 0x41 0x52 0x49 Oct 17 06:57:52 vmware-860-1 vmkernel: VMWARE SCSI Id: Supported VPD pages for vmhba33:C0:T1:L0 : 0x0 0x80 0x83 0x86 Oct 17 06:57:52 vmware-860-1 vmkernel: VMWARE SCSI Id: Device id info for vmhba33:C0:T1:L0: 0x51 0x95 0x0 0x4 0x0 0x0 0x0 0x1 0x51 0x94 0x0 0x4 0x0 0x0 0x0 0x1 0x51 0x3 0x0 0x10 0x60 0x1 0x44 0xf0 0x48 0xf8 0xfa 0x2b 0x0 0x0 0x1e 0xc9 0xd5 0x75 0xd2 0x0 0x53 0xa8 0x0 0x44 0x69 0x71 0x6e 0x2e 0x3 Oct 17 06:57:52 vmware-860-1 vmkernel: 1 0x39 0x38 0x36 0x2d 0x30 0x33 0x2e 0x63 0x6f 0x6d 0x2e 0x73 0x75 0x6e 0x3a 0x30 0x32 0x3a 0x33 0x35 0x65 0x63 0x32 0x36 0x64 0x38 0x2d 0x66 0x31 0x37 0x33 0x2d 0x36 0x64 0x64 0x35 0x2d 0x62 0x32 0x33 0x39 0x2d 0x39 0x33 0x61 0x39 0x36 0x39 0x30 0x66 0x66 Oct 17 06:57:52 vmware-860-1 vmkernel: 0x65 0x34 0x36 0x2e 0x76 0x73 0x63 0x73 0x69 0x0 0x0 0x0 0x51 0xa3 0x0 0x10 0x60 0x1 0x44 0xf0 0x48 0xf8 0xfa 0x2b 0x0 0x0 0x1e 0xc9 0xd5 0x75 0xd2 0x0 0x51 0x93 0x0 0x10 0x60 0x1 0x44 0xf0 0x48 0xf8 0xfa 0x2b 0x0 0x0 0x1e 0xc9 0xd5 0x75 0xd2 0x0 Oct 17 06:57:52 vmware-860-1 vmkernel: VMWARE SCSI Id: Id for vmhba33:C0:T1:L0 0x60 0x01 0x44 0xf0 0x48 0xf8 0xfa 0x2b 0x00 0x00 0x1e 0xc9 0xd5 0x75 0xd2 0x00 0x53 0x4f 0x4c 0x41 0x52 0x49 Rinse and repeat till VMWARE times out and gotta reboot the ESX host so I can at least work on it again. Any further ideas? I''m going to study the error messages some more. 
However, it's extremely frustrating that a virtual OpenSolaris 2008.05 server is able to host iSCSI targets which the existing ESX host can use, while this physical box cannot. Tano -- This message posted from opensolaris.org
Mike La Spina
2008-Oct-17 21:49 UTC
[zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!
Hello Tano,

The issue here is not the target or VMware but a missing GUID on the target. Observe the target SMF properties using:

iscsitadm list target -v

You have:

Target: vscsi
    iSCSI Name: iqn.1986-03.com.sun:02:35ec26d8-f173-6dd5-b239-93a9690ffe46.vscsi
    Connections: 0
    ACL list:
    TPGT list:
        TPGT: 1
    LUN information:
        LUN: 0
            GUID: 0
            VID: SUN
            PID: SOLARIS
            Type: disk
            Size: 1.3T
            Backing store: /dev/zvol/rdsk/vdrive/LUNB
            Status: online
Target: iscsi
    iSCSI Name: iqn.1986-03.com.sun:02:4d469663-2304-4796-87a5-dffa03cd14ea.iscsi
    Connections: 0
    ACL list:
    TPGT list:
        TPGT: 1
    LUN information:
        LUN: 0
            GUID: 0
            VID: SUN
            PID: SOLARIS
            Type: disk
            Size: 750G
            Backing store: /dev/zvol/rdsk/vdrive/LUNA
            Status: online

Both targets have the same invalid GUID of zero, and this will prevent NAA from working properly. To fix this you can create two new temporary targets and export the SMF properties to an XML file, e.g.

svccfg -export iscsitgt > /iscsibackup/myiscsitargetbu.xml

then edit the XML file, switching the newly generated GUIDs onto your valid targets and zeroing the temporary ones. Now you can import the file with

svccfg import /iscsibackup/myiscsitargetbu.xml

When you restart your iscsitgt service you should have the GUIDs in place and it should work with VMware. Then you can delete the temporary targets.

http://blog.laspina.ca
-- This message posted from opensolaris.org
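[Editor's note] Mike's recovery procedure, sketched end to end. The target names, backing-store paths, and backup location below are placeholders, and the `--lun` option on `delete target` is from memory of the SNV-era iscsitadm(1M) man page, so verify it locally before running:

```shell
# 1. Create two throw-away targets so iscsitgt generates fresh GUIDs
#    (the backing stores here are hypothetical scratch zvols):
iscsitadm create target -b /dev/zvol/rdsk/vdrive/tmp1 TEMP1
iscsitadm create target -b /dev/zvol/rdsk/vdrive/tmp2 TEMP2

# 2. Export the target service configuration to XML:
svccfg export iscsitgt > /iscsibackup/myiscsitargetbu.xml

# 3. Hand-edit the XML, moving the freshly generated GUIDs onto the real
#    targets (LUNA/LUNB) and zeroing the GUIDs on TEMP1/TEMP2, then re-import:
svccfg import /iscsibackup/myiscsitargetbu.xml

# 4. Restart the target service so the new GUIDs take effect, then
#    drop the temporary targets:
svcadm restart iscsitgt
iscsitadm delete target --lun 0 TEMP1
iscsitadm delete target --lun 0 TEMP2
```

Note the `svccfg` subcommand is `export`, with no leading dash, per svccfg(1M).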
Tano
2008-Oct-17 22:12 UTC
[zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!
Hi,

I rebooted the server after I submitted the information, to release the locks set up on my ESX host. After the reboot I reran iscsitadm list target -v and the GUIDs showed up. Only interesting problem: the GUIDs are identical (any problems with that?)

root@iscsi-sas:~# iscsitadm list target -v
Target: iscsi
    iSCSI Name: iqn.1986-03.com.sun:02:4d469663-2304-4796-87a5-dffa03cd14ea.iscsi
    Connections: 1
    Initiator:
        iSCSI Name: iqn.1998-01.com.vmware:vmware-860-1-4403d26f
        Alias: vmware-860-1.ucr.edu
    ACL list:
    TPGT list:
        TPGT: 1
    LUN information:
        LUN: 0
            GUID: 600144f048f8fa2a00001ec9d575d200
            VID: SUN
            PID: SOLARIS
            Type: disk
            Size: 750G
            Backing store: /dev/zvol/rdsk/vdrive/LUNA
            Status: online
Target: vscsi
    iSCSI Name: iqn.1986-03.com.sun:02:35ec26d8-f173-6dd5-b239-93a9690ffe46.vscsi
    Connections: 1
    Initiator:
        iSCSI Name: iqn.1998-01.com.vmware:vmware-860-1-4403d26f
        Alias: vmware-860-1.ucr.edu
    ACL list:
    TPGT list:
        TPGT: 1
    LUN information:
        LUN: 0
            GUID: 600144f048f8fa2b00001ec9d575d200
            VID: SUN
            PID: SOLARIS
            Type: disk
            Size: 1.3T
            Backing store: /dev/zvol/rdsk/vdrive/LUNB
            Status: online
root@iscsi-sas:~#

When I attempted to run

svccfg -export iscsitgt > /iscsibackup/myiscsibackup.xml

it told me that -e is an illegal option. svccfg does not have an --export or -export option. I checked the man pages.

-- This message posted from opensolaris.org
Nigel Smith
2008-Oct-18 13:43 UTC
[zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!
According to the svccfg(1M) man page:
http://docs.sun.com/app/docs/doc/819-2240/svccfg-1m?a=view
...it should be just 'export', without a leading '-' or '--'.

I've been googling on NAA: this is the 'Network Address Authority'. It seems to be yet another way of uniquely identifying a target & LUN, and is apparently intended to be compatible with the way that Fibre Channel & SAS do this. For further details, see:
http://tools.ietf.org/html/rfc3980
"T11 Network Address Authority (NAA) Naming Format for iSCSI Node Names"

I also found this blog post:
http://timjacobs.blogspot.com/2008/08/matching-luns-between-esx-hosts-and-vcb.html
...which talks about VMware ESX and NAA.

For anyone interested in the code fixes to the Solaris iSCSI target to support VMware ESX server, take a look at these links:
http://hg.genunix.org/onnv-gate.hg/rev/29862a7558ef
http://hg.genunix.org/onnv-gate.hg/rev/5b422642546a

Tano, based on the above, I would say you need unique GUIDs for two separate targets/LUNs.

Best Regards
Nigel Smith
http://nwsmith.blogspot.com/
-- This message posted from opensolaris.org
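[Editor's note] As a concrete illustration of the NAA identifiers Nigel describes, the leading fields of the GUID that VMware logged for LUN A can be pulled apart like this (a sketch; field layout per the NAA "IEEE Registered Extended" format described in RFC 3980 and SPC-3):

```shell
# Decode the NAA fields of the device identifier from the vmkernel log.
# The GUID is taken verbatim from the iscsitadm output earlier in the thread.
guid="600144f048f8fa2a00001ec9d575d200"

naa_type=$(echo "$guid" | cut -c1)    # first nibble: NAA format type
oui=$(echo "$guid" | cut -c2-7)       # next 24 bits: IEEE company ID (OUI)

echo "NAA type: $naa_type"            # 6 = IEEE Registered Extended
echo "OUI:      $oui"                 # 00144f = 00:14:4F, registered to Sun
```

A zero GUID therefore gives VMware no valid NAA type or company ID to hang a unique device identity on, which is why the two LUNs collide.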
Mike La Spina
2008-Oct-18 15:08 UTC
[zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!
Ciao,

Your GUIDs must not be the same. An NAA is already established on the targets, and if you previously tried to initialize the LUN with VMware it would have assigned the value in the VMFS header, which is now stored on your raw ZFS backing store. This will confuse VMware, and it will now remember it somewhere in its definitions.

You need to remove the second datastore from VMware and delete the target definition and ZFS backing store. Once you recreate the backing store and target you should have a new GUID and IQN, which should cure the issue.

Regards,
Mike
-- This message posted from opensolaris.org
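[Editor's note] A sketch of Mike's clean-slate suggestion, using the dataset and target names that appear earlier in the thread. The `--lun` option on `delete target` and the 1.3T size are assumptions from the thread's listings, not verified commands; check your iscsitadm(1M) man page first:

```shell
# Remove the second datastore in the VMware UI first, then on the Solaris box:
iscsitadm delete target --lun 0 vscsi    # drop the stale target definition
zfs destroy vdrive/LUNB                  # destroy the tainted backing store
zfs create -V 1.3T vdrive/LUNB           # recreate the zvol
iscsitadm create target -b /dev/zvol/rdsk/vdrive/LUNB vscsi
```

Recreating the target this way should cause iscsitgt to generate a fresh IQN and GUID for the LUN.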
Tano
2008-Oct-20 23:34 UTC
[zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!
A couple of updates:

Installed OpenSolaris on a PowerEdge 1850 with a single network card and a default iscsitarget configuration (no special tweaks or TPGT settings). VMotion was about 10 percent successful before I received write errors on disk. That is 10 percent better than the PowerEdge 1900 iscsitarget.

The GUIDs are set by VMware when the iSCSI initiator connects to the OpenSolaris target. Therefore I have no control over what the GUIDs are, and from my observations it doesn't matter whether the GUIDs are identical, unless there is a bug in VMware around GUIDs.

I have followed the instructions to delete the backing stores and the zfs partitions and start anew. I even went as far as rebooting the machine after I created a single LUN and connected to the VMware initiator. I then repeated the same steps when creating the second LUN. Overall, VMware determined the GUID of the iSCSI target.

Right now I am applying a ton of VMware patches that have iSCSI connectivity repairs and other security updates. If the patches do not work, I will revert to a Linux iSCSI target to check whether the physical machines or the networking have an abnormality that may be causing problems.

I'll be submitting more updates as I continue testing!

Cliff notes: nothing has worked so far :(
-- This message posted from opensolaris.org
Nigel Smith
2008-Oct-21 09:17 UTC
[zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!
Well, my colleague and I have recently had a basic VMware ESX cluster working with the Solaris iSCSI target in the lab at work, so I know it does work. We used ESX 3.5i on two Dell Precision 390 workstations, booted from USB memory sticks. We used snv_97 and no special tweaks were required. We used VMotion to move a running Windows XP guest from one ESX host to the other, while Windows XP was playing a video feed. It all worked fine, and we repeated the operation three times. My colleague is the ESX expert, but I believe it was Update 2 with all the latest patches applied. We only had a single iSCSI target set up on the Solaris box; the target size was 200GB, formatted with VMFS.

Ok, another thing you could try, which may give a clue to what is going wrong, is to run the 'iscsisnoop.d' script on the Solaris box:
http://www.solarisinternals.com/wiki/index.php/DTrace_Topics_iSCSI
This is a DTrace script which shows what iSCSI target events are happening, so it would be interesting if it shows anything unusual at the point of failure.

But I'm beginning to think it could be one of your hardware components that is playing up, though there is no clue so far; it could be anywhere on the path. Maybe you could check that the Solaris iSCSI target works ok under stress from something other than ESX, like, say, the Windows iSCSI initiator.

Regards
Nigel Smith
-- This message posted from opensolaris.org
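[Editor's note] For reference, running the script is straightforward, assuming iscsisnoop.d has been downloaded from the wiki page above onto the Solaris target (it needs to run as root, since it uses DTrace):

```shell
# Make the DTrace script executable, then leave it running in a terminal
# while the VMware write failure is reproduced; it prints iSCSI target
# events (logins, commands, data transfers) as they happen.
chmod +x iscsisnoop.d
./iscsisnoop.d
```

Capturing its output around the moment the session drops should show whether the target ever sees the failing WRITE commands.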
Eugene Chupriyanov
2008-Oct-21 17:56 UTC
[zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!
I have a very similar problem with SNV_99 and Virtual Iron (http://www.opensolaris.org/jive/thread.jspa?threadID=79831&tstart=0). I am using an IBM x3650 server with 6 SAS drives. What we have in common is Broadcom network cards (bnx driver). From previous experience I know these cards had a driver problem in Linux, so as a wild guess, maybe the problem is here? Can you try another card in your server? Unfortunately I don't have a compatible spare card to check it.

Regards,
Eugene
-- This message posted from opensolaris.org
Tano
2008-Oct-21 20:50 UTC
[zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!
The PowerEdge 1850 has an Intel EtherExpress Pro/1000 internal card in it. However, some new updates: even the Microsoft initiator hung while writing a 1.5 gigabyte file to the iscsitarget on the OpenSolaris box. I've installed the Linux iscsitarget on the same box and will reattempt the iSCSI targets with the Microsoft and ESX servers. I'll also get the DTrace output from the iSCSI box later this afternoon. Sorry for the delay.
-- This message posted from opensolaris.org
Tano
2008-Oct-21 21:57 UTC
[zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!
One more update: the common hardware between all my machines so far has been the PERC (PowerEdge RAID Controller), also known as the LSI MegaRAID controller. The 1850 has a PERC 4d/i; the 1900 has a PERC 5/i.

I'll be testing the iSCSI target with a SATA controller to test my hypothesis.
Nigel Smith
2008-Oct-21 23:16 UTC
[zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!
Hi Tano

I hope you can try the 'iscsisnoop.d' script, so we can see if your problem is the same as what Eugene is seeing.

Please can you also check the contents of the file /var/svc/log/system-iscsitgt\:default.log, just to make sure that the iscsi target is not core dumping and restarting.

I've also done a post on the storage-discuss forum on how to enable a debug log on the iscsi target, which may also give some clues:
http://mail.opensolaris.org/pipermail/storage-discuss/2008-October/006423.html

It may also be worth trying with a smaller target size, just to see if that is a factor. (There have in the past been bugs, now fixed, which were triggered by 'large' targets.) As I said, it worked ok for me with a 200Gb target.

Many thanks for all your testing. Please bear with us on this one. If it is a problem with the Solaris iscsi target, we need to get to the bottom of the root cause. Following Eugene's report, I'm beginning to fear that some sort of regression has been introduced into the iscsi target code...

Regards
Nigel Smith
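For anyone following along, a rough way to automate the check Nigel describes (did SMF restart the target daemon more often than expected?) is to count the start/stop method lines in that SMF log. This is just a sketch; the log path comes from the post, and the function name is made up here.

```shell
# count_restarts: print how many times SMF ran the start and stop
# methods recorded in an SMF service log. Many more starts than
# deliberate enables hints that the daemon is dumping core and being
# restarted by SMF.
count_restarts() {
  log=$1
  starts=$(grep -c 'Executing start method' "$log")
  stops=$(grep -c 'Executing stop method' "$log")
  echo "starts=$starts stops=$stops"
}

# Example (path as shown later in this thread):
# count_restarts /var/svc/log/system-iscsitgt:default.log
```

If the counts climb while no one is administering the service, the target is crash-looping even though each individual log entry looks routine.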
Tano
2008-Oct-22 20:29 UTC
[zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!
root at iscsi-sas:/tmp# cat /var/svc/log/system-iscsitgt\:default.log
[ Oct 21 09:17:49 Enabled. ]
[ Oct 21 09:17:49 Executing start method ("/lib/svc/method/svc-iscsitgt start"). ]
[ Oct 21 09:17:49 Method "start" exited with status 0. ]
[ Oct 21 17:02:12 Disabled. ]
[ Oct 21 17:02:12 Rereading configuration. ]
[ Oct 22 12:40:13 Disabled. ]
[ Oct 22 12:40:34 Rereading configuration. ]
[ Oct 22 12:53:35 Enabled. ]
[ Oct 22 12:53:35 Executing start method ("/lib/svc/method/svc-iscsitgt start"). ]
[ Oct 22 12:53:35 Method "start" exited with status 0. ]
[ Oct 22 12:54:02 Rereading configuration. ]
[ Oct 22 12:54:02 No 'refresh' method defined. Treating as :true. ]
[ Oct 22 12:54:06 Stopping because service restarting. ]
[ Oct 22 12:54:06 Executing stop method ("/lib/svc/method/svc-iscsitgt stop 90"). ]
[ Oct 22 12:54:06 Method "stop" exited with status 0. ]
[ Oct 22 12:54:06 Executing start method ("/lib/svc/method/svc-iscsitgt start"). ]
[ Oct 22 12:54:06 Method "start" exited with status 0. ]
[ Oct 22 12:59:15 Rereading configuration. ]
[ Oct 22 12:59:15 No 'refresh' method defined. Treating as :true. ]
[ Oct 22 12:59:19 Rereading configuration. ]
[ Oct 22 12:59:19 No 'refresh' method defined. Treating as :true.
]
root at iscsi-sas:/tmp#

CPU REMOTE IP EVENT BYTES ITT SCSIOP
0 138.23.117.29 login-response 0 0 -
2 138.23.117.29 login-command 587 0 -
2 138.23.117.29 login-command 587 0 -
0 138.23.117.29 login-response 0 0 -
2 138.23.117.29 login-command 587 0 -
0 138.23.117.29 login-response 0 0 -
0 138.23.117.29 login-command 587 0 -
0 138.23.117.29 login-response 0 0 -
0 138.23.117.29 login-command 600 0 -
3 138.23.117.29 scsi-command 131072 2201616384 write(10)
3 138.23.117.29 scsi-command 131072 2218393600 write(10)
0 138.23.117.29 login-response 466 0 -
3 138.23.117.29 scsi-command 0 3226992640 0x0
0 138.23.117.29 scsi-response 0 3226992640 -
3 138.23.117.29 scsi-command 0 3243769856 0x12
3 138.23.117.29 data-send 8 3243769856 -
3 138.23.117.29 scsi-response 8 3243769856 -
0 138.23.117.29 scsi-command 0 3260547072 0x12
3 138.23.117.29 data-send 152 3260547072 -
3 138.23.117.29 scsi-response 152 3260547072 -
0 138.23.117.29 scsi-command 0 3277324288 0x12
3 138.23.117.29 data-send 8 3277324288 -
3 138.23.117.29 scsi-response 8 3277324288 -
0 138.23.117.29 nop-receive 0 2268725248 -
3 138.23.117.29 nop-send 0 2268725248 -
0 138.23.117.29 scsi-command 131072 2285502464 write(10)
3 138.23.117.29 login-command 587 0 -
3 138.23.117.29 login-response 0 0 -
3 138.23.117.29 login-command 587 0 -
3 138.23.117.29 login-response 0 0 -
0 138.23.117.29 login-command 587 0 -
0 138.23.117.29 login-response 0 0 -
2 138.23.117.29 login-command 587 0 -
2 138.23.117.29 login-response 0 0 -
2 138.23.117.29 login-command 587 0 -
0 138.23.117.29 login-response 0 0 -
2 138.23.117.29 scsi-command 131072 2302279680 write(10)
1 138.23.117.29 login-command 600 0 -
2 138.23.117.29 scsi-command 131072 2319056896 write(10)
1 138.23.117.29 login-response 466 0 -
1 138.23.117.29 scsi-command 0 3294101504 0x0
1 138.23.117.29 scsi-response 0 3294101504 -
1 138.23.117.29 scsi-command 0 3310878720 0x12
1 138.23.117.29 data-send 8 3310878720 -
1 138.23.117.29 scsi-response 8 3310878720 -
1 138.23.117.29 scsi-command 0 3327655936 0x12
0 138.23.117.29 scsi-command 0 3344433152 0x12
2 138.23.117.29 data-send 152 3327655936 -
2 138.23.117.29 scsi-response 152 3327655936 -
0 138.23.117.29 data-send 8 3344433152 -
0 138.23.117.29 scsi-response 8 3344433152 -
0 138.23.117.29 nop-receive 0 2369388544 -
1 138.23.117.29 nop-send 0 2369388544 -
0 138.23.117.29 scsi-command 131072 2386165760 write(10)
1 138.23.117.29 login-command 587 0 -
1 138.23.117.29 login-response 0 0 -
1 138.23.117.29 login-command 587 0 -
1 138.23.117.29 login-response 0 0 -
1 138.23.117.29 login-command 587 0 -
1 138.23.117.29 login-response 0 0 -
1 138.23.117.29 login-command 587 0 -
1 138.23.117.29 login-response 0 0 -
0 138.23.117.29 login-response 0 0 -
2 138.23.117.29 login-command 587 0 -
0 138.23.117.29 scsi-command 131072 2402942976 write(10)
0 138.23.117.29 scsi-command 131072 2419720192 write(10)
3 138.23.117.29 login-command 600 0 -
3 138.23.117.29 login-response 466 0 -
3 138.23.117.29 scsi-command 0 3361210368 0x0
3 138.23.117.29 scsi-response 0 3361210368 -
3 138.23.117.29 scsi-command 0 3377987584 0x12
3 138.23.117.29 data-send 8 3377987584 -
3 138.23.117.29 scsi-response 8 3377987584 -
3 138.23.117.29 scsi-command 0 3394764800 0x12
3 138.23.117.29 data-send 152 3394764800 -
1 138.23.117.29 scsi-command 0 3411542016 0x12
1 138.23.117.29 data-send 8 3411542016 -
1 138.23.117.29 scsi-response 8 3411542016 -
3 138.23.117.29 scsi-response 152 3394764800 -
1 138.23.117.29 nop-receive 0 2470051840 -
2 138.23.117.29 nop-send 0 2470051840 -
3 138.23.117.29 scsi-command 131072 2486829056 write(10)
3 138.23.117.29 login-command 587 0 -
3 138.23.117.29 login-response 0 0 -
1 138.23.117.29 login-response 0 0 -
3 138.23.117.29 login-command 587 0 -
2 138.23.117.29 login-command 587 0 -
0 138.23.117.29 login-response 0 0 -
1 138.23.117.29 login-response 0 0 -
3 138.23.117.29 login-command 587 0 -
0 138.23.117.29 login-command 587 0 -
2 138.23.117.29 login-response 0 0 -
1 138.23.117.29 scsi-command 0 3428319232 0x0
1 138.23.117.29 scsi-response 0 3428319232 -
2 138.23.117.29 scsi-command 131072 2503606272 write(10)
2 138.23.117.29 scsi-command 131072 2520383488 write(10)
3 138.23.117.29 login-command 600 0 -
3 138.23.117.29 login-response 466 0 -
3 138.23.117.29 scsi-command 0 3445096448 0x0
3 138.23.117.29 scsi-response 0 3445096448 -
3 138.23.117.29 scsi-command 0 3461873664 0x12
3 138.23.117.29 data-send 8 3461873664 -
3 138.23.117.29 scsi-response 8 3461873664 -
3 138.23.117.29 scsi-command 0 3478650880 0x12
3 138.23.117.29 data-send 152 3478650880 -
3 138.23.117.29 scsi-response 152 3478650880 -
3 138.23.117.29 scsi-command 0 3495428096 0x12
2 138.23.117.29 data-send 8 3495428096 -
2 138.23.117.29 scsi-response 8 3495428096 -
1 138.23.117.29 nop-receive 0 2570715136 -
2 138.23.117.29 nop-send 0 2570715136 -
1 138.23.117.29 scsi-command 131072 2587492352 write(10)
1 138.23.117.29 login-command 587 0 -
1 138.23.117.29 login-response 0 0 -
0 138.23.117.29 login-command 587 0 -
0 138.23.117.29 login-response 0 0 -
0 138.23.117.29 login-command 587 0 -
0 138.23.117.29 login-response 0 0 -
0 138.23.117.29 login-response 0 0 -
2 138.23.117.29 login-command 587 0 -
2 138.23.117.29 login-command 587 0 -
0 138.23.117.29 login-response 0 0 -
2 138.23.117.29 scsi-command 131072 2604269568 write(10)
2 138.23.117.29 scsi-command 131072 2621046784 write(10)
3 138.23.117.29 login-command 600 0 -
3 138.23.117.29 login-response 466 0 -
2 138.23.117.29 scsi-command 0 3512205312 0x0
2 138.23.117.29 scsi-response 0 3512205312 -
0 138.23.117.29 scsi-command 0 3545759744 0x12
2 138.23.117.29 scsi-command 0 3528982528 0x12
2 138.23.117.29 data-send 8 3528982528 -
2 138.23.117.29 scsi-response 8 3528982528 -
3 138.23.117.29 data-send 152 3545759744 -
3 138.23.117.29 scsi-response 152 3545759744 -
1 138.23.117.29 scsi-command 0 3562536960 0x12
1 138.23.117.29 data-send 8 3562536960 -
1 138.23.117.29 scsi-response 8 3562536960 -
0 138.23.117.29 nop-receive 0 2671378432 -
0 138.23.117.29 scsi-command 131072 2688155648 write(10)
2 138.23.117.29 nop-send 0 2671378432 -
0 138.23.117.29 login-command 587 0 -
2 138.23.117.29 login-response 0 0 -
0 138.23.117.29 login-command 587 0 -
2 138.23.117.29 login-response 0 0 -
3 138.23.117.29 login-command 587 0 -
3 138.23.117.29 login-response 0 0 -
0 138.23.117.29 login-command 587 0 -
2 138.23.117.29 login-response 0 0 -
2 138.23.117.29 login-command 587 0 -
2 138.23.117.29 login-response 0 0 -
1 138.23.117.29 scsi-command 131072 2704932864 write(10)
1 138.23.117.29 scsi-command 131072 2721710080 write(10)
2 138.23.117.29 login-command 600 0 -
2 138.23.117.29 login-response 466 0 -
2 138.23.117.29 scsi-command 0 3579314176 0x0
2 138.23.117.29 scsi-response 0 3579314176 -
2 138.23.117.29 scsi-command 0 3596091392 0x12
2 138.23.117.29 data-send 8 3596091392 -
2 138.23.117.29 scsi-response 8 3596091392 -
2 138.23.117.29 scsi-command 0 3612868608 0x12
0 138.23.117.29 data-send 8 3629645824 -
0 138.23.117.29 scsi-response 8 3629645824 -
2 138.23.117.29 data-send 152 3612868608 -
2 138.23.117.29 scsi-response 152 3612868608 -
2 138.23.117.29 scsi-command 0 3629645824 0x12
1 138.23.117.29 login-command 587 0 -
1 138.23.117.29 login-response 0 0 -
0 138.23.117.29 nop-send 0 2772041728 -
3 138.23.117.29 nop-receive 0 2772041728 -
3 138.23.117.29 scsi-command 131072 2788818944 write(10)

This is a sample of the iscsisnoop.d output while I try to VMotion a server from the local vmstore to the iSCSI target on the PowerEdge.

I will be making an iSCSI target with the built-in SATA controller of the motherboard to see if I get the same problem. If I don't, I can be fairly sure the PERC controller is the culprit of my issues, since the PowerEdge 1850 with the PERC 4d/i is experiencing the same problems. I will be attempting a firmware upgrade on the PERC controller after that.
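A reader wading through output like the above can summarise it mechanically. The sketch below (my own helper, not from the thread) tallies the EVENT column of an iscsisnoop.d capture saved to a file; a healthy session is dominated by scsi-command/scsi-response pairs, while a high proportion of login-command lines means the initiator keeps dropping and re-establishing the session, as in Tano's capture.

```shell
# tally_events: count iscsisnoop.d events by type.
# Columns in the capture are: CPU, REMOTE-IP, EVENT, BYTES, ITT, SCSIOP;
# field 3 of each data line is the event name. NR > 1 skips the header.
tally_events() {
  awk 'NR > 1 { n[$3]++ } END { for (e in n) print e, n[e] }' "$1" | sort
}

# Example:
# tally_events /tmp/iscsisnoop.out
```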
Tano
2008-Oct-22 22:13 UTC
[zfs-discuss] HELP! zfs with iscsitadm (on poweredge1900) and VMWare!
The firmware upgrade of the PERC RAID controller was unsuccessful. I tested with the Vista 64-bit initiator and it experienced the same issue as the VMware initiators.

If someone has a PowerEdge 1850/1900/1950 server that has been successful, please let me know what you've done, or your hardware profile. Unfortunately this leads me to believe that OpenSolaris and certain high-end Dell PowerEdge servers have an incompatibility issue with the iscsitarget service. I will test out anything else anyone wishes me to try. I have a copy of the snoop output available if someone wishes to parse through over 7000 lines of logs.

My Dell PowerEdge 1900 server has the following equipment:
- Intel Quad Core 1.6GHz, 1066MHz FSB
- Integrated PERC 5/i SAS RAID controller
- CERC 6-channel SATA 1.5Gb/s controller for the 2 SATA boot drives
- 2 x SATA 7200 RPM 1.5Gb/s 160GB hard drives
- 6 x SAS 15K RPM 3Gb/s 450GB hard drives
- 8 GB of 667MHz RAM
- Broadcom Gigabit card with TOE
- Switch packet size at 1500

Looking forward to any additional suggestions.
Nigel Smith
2008-Oct-22 23:46 UTC
[zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!
Well, the '/var/svc/log/system-iscsitgt\:default.log' is NOT showing any core dumps, which is good, but it means we need to look and think deeper for the answer.

The 'iscsisnoop.d' output does look similar to that captured by Eugene over on the storage forum, though Eugene only showed a short sequence:
http://mail.opensolaris.org/pipermail/storage-discuss/2008-October/006414.html
Here we have a longer sequence of 'iscsisnoop.d' output clearly showing the looping as the error occurs, causing the initiator and target to try to re-establish the session. The question is: what is the root cause, and what is just consequential effect?

Tano, if you could also get some debug log messages from the iscsi target (/tmp/target_log), that would help to confirm whether this is the same as what Eugene is seeing:
http://mail.opensolaris.org/pipermail/storage-discuss/2008-October/006428.html

It would be useful to modify 'iscsisnoop.d' to give timestamps, as this would help to show if there are any unusual delays. And the DTrace iscsi probes have an 'args[1]' which can give further details on sequence numbers and tags.

Having seen your 'iscsisnoop.d' output, and the '/tmp/target_log' from Eugene, I'm now going back to thinking this IS an iscsi issue, with the initiator and target mis-interacting in some way, and NOT a driver/hardware issue.

I know that Sun have recently been doing a lot of stress testing with the iscsi target and various initiators, including Linux. I have found the snv_93 and snv_97 iscsi target to work well with the VMware ESX and Microsoft initiators, so it is a surprise to see these problems occurring. Maybe some of the more recent builds, snv_98 and 99, have 'fixes' that have caused the problem...

Regards
Nigel Smith
Nigel Smith
2008-Oct-22 23:49 UTC
[zfs-discuss] HELP! zfs with iscsitadm (on poweredge1900) and VMWare!
Hi Tano

I will have a look at your snoop file. (Tomorrow now, as it's late in the UK!) I will send you my email address.

Thanks
Nigel Smith
Nigel Smith
2008-Oct-27 01:11 UTC
[zfs-discuss] HELP! zfs with iscsitadm (on poweredge1900) and VMWare!
I asked Tano to use the 'snoop' command to capture the Ethernet packets to a file while he attempted VMware's 'VMotion':

# snoop -d {device} -o (unknown) tcp port 3260

This file was made available to me on Tano's web server. The file size was nearly 85 Mbytes, capturing over 100,000 packets. I have downloaded the capture file and been looking at it with Ethereal and Wireshark.

I do not have a corresponding 'iscsisnoop.d' file, but from the pattern of activity that I see, I can well imagine that it would show the same pattern that we saw from Eugene, which I reported on here:
http://mail.opensolaris.org/pipermail/storage-discuss/2008-October/006444.html
(So here I'm looking at what's happening at the lower TCP level, rather than at the iSCSI level.)

In the Ethernet capture file, I can see the pattern of bursts of writes from the initiator. The target can accept so many of these, and then needs to slow things down by reducing the TCP window size. Eventually the target says the TCP window size is zero, effectively asking the initiator to stop.

Now to start with, the target only leaves the 'TCP ZeroWindow' in place for a fraction of a second. Then it opens things up again by sending a 'TCP Window Update', restoring the window to 65160 bytes, and transfer resumes. This is normal and expected.

But eventually we get to a stage where the target sets the TCP 'ZeroWindow' and leaves it there for an extended period of time; I'm talking about seconds here. The initiator starts to send 'TCP ZeroWindowProbe' packets every 5 seconds, and the target promptly responds with a 'TCP ZeroWindowProbeAck' packet. (Presumably, this is the initiator just confirming that the target is still alive.) This cycle of probes and acks repeats for 50 seconds, and during this period the target shows no sign of wanting to accept any more data.
Then the initiator seems to decide it has had enough and cannot be bothered to wait any longer: it [RST,ACK]'s the TCP session and then starts a fresh iscsi login. (And then we go around the whole cycle of the pattern again.)

The question is why the target has refused to accept any more data for over 50 seconds! The obvious conclusion would be that the OpenSolaris box is so busy that it does not have any time left to empty the network stack buffers. But this just leads to another question: why? So the mystery deepens, and I am running out of ideas!

Tano, maybe you could check the network performance with the 'iperf' programs, as mentioned here:
http://mail.opensolaris.org/pipermail/zfs-discuss/2008-October/052136.html

Does the OpenSolaris box give any indication of being busy with other things? Try running 'prstat' to see if it gives any clues. Presumably you are using ZFS as the backing store for iSCSI; in which case, maybe try with a UFS-formatted disk to see if that is a factor.

Regards
Nigel Smith
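The stall pattern Nigel describes (ZeroWindow held open until the initiator resets) can be measured rather than eyeballed. The helper below is my own sketch, not from the thread: it assumes a one-line-per-packet text export of the capture where the first field is a relative timestamp and Wireshark's "[TCP ZeroWindow]" / "[TCP Window Update]" annotations appear in the info column, and prints how long each zero-window stall lasted.

```shell
# stall_lengths: print the duration (in the capture's time units) of each
# zero-window stall, i.e. the gap between the first "[TCP ZeroWindow]"
# packet and the next "[TCP Window Update]" that reopens the window.
stall_lengths() {
  awk '
    /\[TCP ZeroWindow\]/    { if (start == 0) start = $1 }
    /\[TCP Window Update\]/ { if (start != 0) { print $1 - start; start = 0 } }
  ' "$1"
}
```

In a healthy trace every printed stall is a fraction of a second; values in the tens of seconds correspond to the 50-second freezes seen here.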
Tano
2008-Oct-27 21:17 UTC
[zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!
So I found some more information and have been at it diligently.

Checking my hardware BIOS: Dell likes to share a lot of its IRQs with other peripherals. Back in the old days, when we were limited to just 15 IRQs, it was imperative that certain critical hardware had its own IRQ. It may be the same case here. I have disabled everything that I can from the BIOS and removed all additional RAID or boot cards. Also, I have turned off the I/OAT DMA settings (http://en.wikipedia.org/wiki/Direct_memory_access). I also have changed the network card from the Broadcom TOE adapter to an Intel EtherExpress Pro 1000G card with its own IRQs.

I reinstalled the server and have started to try VMotion again. It's copying! VMotion is actually working, but at a snail's pace. In 1 hour it has copied only 28% of a 15 GB VMDK folder. That's slow, but I don't know if it is my disk subsystem (using the internal SATA controller) or that TCP is having issues. Going to be sitting on the logs and watching it. 'iostat -xn 1' reports activity every 10 to 15 seconds.

More information soon, but it seems that the IRQ conflicts or I/OAT DMA may be the culprit.
Tano
2008-Oct-28 00:17 UTC
[zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!
Around 30 to 40% it really starts to slow down, but no disconnection or timeout yet. The speed is unacceptable, so I'll continue with the notion that something is wrong in the TCP stack/iscsi.

Following the snoop logs, the window size on the iSCSI end is 0, and on the VMware side it is 32969. It seems that the window size is negotiated for quite some time, then finally about 10 to 20 megs of data is allowed to pass (based on the 'iostat -xn 1' report at the time of negotiation), and then rinse and repeat.

I tried to use a desktop gigabit adapter for a network card, but it doesn't seem to want to get enabled in OpenSolaris even though drivers are available and installed. Maybe the wrong drivers?

I'll continue some more, but at this time I'm also out of options.

Tano
Nigel Smith
2008-Oct-28 00:59 UTC
[zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!
Hi Tano

Please check out my post on the storage-discuss forum for another idea to try, which may give further clues:
http://mail.opensolaris.org/pipermail/storage-discuss/2008-October/006458.html

Best Regards
Nigel Smith
Tano
2008-Oct-28 20:05 UTC
[zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!
So it's finally working. Nothing special was done to get it working either, which is extremely vexing!

I disabled the I/OAT DMA feature from the BIOS, which apparently assists the network card, and enabled the TPGT option on the iscsi target. I have two iscsi targets: one 100GB on a mirror on the internal SATA controller, and a 1TB block on a RAIDZ partition.

I have confirmed, by disabling I/OAT DMA, that I can READ/WRITE to the raidz via iSCSI. With I/OAT DMA enabled I can only read from the disks; writes will lag/fail within 10 megabytes. Based on the wiki, I/OAT DMA only provides a 10% speed improvement on the network card. It seems that the Broadcom drivers supplied with Solaris may be the culprit?

I hope all those individuals who were experiencing this problem can try turning off the I/OAT DMA or a similar option to see whether their problems go away.

Transferred 100 gigs of data from the local store to the iscsi target on OpenSolaris in 26 minutes. Local store = 1 SATA 1.5Gb/s drive pushing at a 65MB/s read average; not too bad!

The I/OAT DMA feature works fine under Debian Linux, which serves iscsi targets without any issues.

Thanks Nigel for all your help and patience. I will post on this topic some more if I get anything new (basically if I have been getting extremely lucky and the problem returns all of a sudden).
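As a sanity check on the figures in this post: 100 gigs in 26 minutes does indeed work out to roughly the 65MB/s quoted (treating "100 gigs" as 100 GiB and reporting MiB/s).

```shell
# Back-of-envelope throughput check: 100 GiB moved in 26 minutes.
# 100 * 1024 MiB over 26 * 60 seconds.
awk 'BEGIN { printf "%.1f MB/s\n", (100 * 1024) / (26 * 60) }'
```

This lines up with the single local SATA source drive being the bottleneck rather than the network.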
Nigel Smith
2008-Oct-29 10:50 UTC
[zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!
Hi Tano

Great to hear that you've now got this working!!

I understand you are using a Broadcom network card; from your previous posts I can see you are using the 'bnx' driver. I will raise this as a bug, but first, please would you run '/usr/X11/bin/scanpci' to identify the exact 'vendor id' and 'device id' for the Broadcom network chipset, and report that back here.

I must admit that this is the first I have heard of 'I/OAT DMA', so I did some Googling on it and found this link:
http://opensolaris.org/os/community/arc/caselog/2008/257/onepager/

To quote from that ARC case:
"All new Sun Intel based platforms have Intel I/OAT (I/O Acceleration Technology) hardware. The first such hardware is an on-systemboard asynchronous DMA engine code named Crystal Beach. Through a set of RFEs Solaris will use this hardware to implement TCP receive side zero CPU copy via a socket."

Ok, so I think that makes some sense in the context of the problem we were seeing. It's referring to how the network adaptor transfers the data it has received out of the buffer and on to the rest of the operating system.

I've just looked to see if I can find the source code for the bnx driver, but I cannot find it. Digging deeper, we find on this page:
http://www.opensolaris.org/os/about/no_source/
..on the 'ON' tab, under "Components for which there are currently no plans to release source":

bnx driver (B) Broadcom NetXtreme II Gigabit Ethernet driver

So the bnx driver is closed source :-(

Regards
Nigel Smith
Miles Nordin
2008-Oct-29 18:24 UTC
[zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!
>>>>> "ns" == Nigel Smith <nwsmith at wilusa.freeserve.co.uk> writes:ns> the bnx driver is closed source :-( The GPL''d Linux driver is contributed by Broadcom: http://git.kernel.org/?p=linux/kernel/git/stable/linux-2.6.27.y.git;a=blob;f=drivers/net/bnx2.c;h=2486a656f12d9f47ff27ead587e084a3c337a1a3;hb=HEAD and I believe the chip itself is newer than the Solaris 10 ``all new bits will be open-source'''' pitch. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 304 bytes Desc: not available URL: <http://mail.opensolaris.org/pipermail/zfs-discuss/attachments/20081029/eb0f0da8/attachment.bin>
Tano
2008-Oct-29 21:08 UTC
[zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!
ns> I will raise this as a bug, but first please would you run
ns> '/usr/X11/bin/scanpci' to identify the exact 'vendor id' and
ns> 'device id' for the Broadcom network chipset, and report that back here

Primary network interface (embedded NIC):

pci bus 0x0005 cardnum 0x00 function 0x00: vendor 0x14e4 device 0x164c
 Broadcom Corporation NetXtreme II BCM5708 Gigabit Ethernet

Plus the two external add-on Broadcom cards (currently not in use):

pci bus 0x000b cardnum 0x00 function 0x00: vendor 0x1166 device 0x0103
 Broadcom EPB PCI-Express to PCI-X Bridge

pci bus 0x000c cardnum 0x00 function 0x00: vendor 0x14e4 device 0x164c
 Broadcom Corporation NetXtreme II BCM5708 Gigabit Ethernet

pci bus 0x000d cardnum 0x00 function 0x00: vendor 0x1166 device 0x0103
 Broadcom EPB PCI-Express to PCI-X Bridge

pci bus 0x000e cardnum 0x00 function 0x00: vendor 0x14e4 device 0x164c
 Broadcom Corporation NetXtreme II BCM5708 Gigabit Ethernet

I will submit the information that you asked for in email very soon.

Tano
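For anyone repeating this exercise on their own box, the vendor/device pairs Nigel asked for can be pulled straight out of saved scanpci output. This is a small helper of my own (not part of the thread) that extracts just those pairs:

```shell
# pci_ids: print every "vendor 0x.... device 0x...." pair found in a
# saved scanpci output file -- the identifiers needed for a driver bug
# report.
pci_ids() {
  awk 'match($0, /vendor 0x[0-9a-f]+ device 0x[0-9a-f]+/) {
         print substr($0, RSTART, RLENGTH)
       }' "$1"
}

# Example:
# /usr/X11/bin/scanpci > /tmp/scanpci.out && pci_ids /tmp/scanpci.out
```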
Tano
2008-Nov-24 18:44 UTC
[zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!
Nigel,

I have sent you an email with the output that you were looking for. Once a solution has been discovered I'll post it here so everyone can see.

Tano