Cody Campbell
2008-Jul-03 23:18 UTC
[zfs-discuss] Poor read/write performance when using ZFS iSCSI target
Greetings,

I want to take advantage of the iSCSI target support in the latest release (snv_91) of OpenSolaris, and I'm running into some performance problems when reading from and writing to my target. I'm including as much detail as I can, so bear with me here...

I've built an x86 OpenSolaris server (Intel Xeon running snv_91) with a zpool of 15 750GB SATA disks, on which I've created and exported a ZFS volume with the shareiscsi=on property set to generate an iSCSI target.

My problem is that when I connect to this target from any initiator (tested with both Linux 2.6 and OpenSolaris snv_91, SPARC and x86), the read/write speed is dreadful (~3 megabytes/second!). When I test read/write performance locally against the backing pool, I get excellent speeds. The same is true when I use services such as NFS and FTP to move files between other hosts on the network and the volume I am exporting as a target; doing that I get the near-gigabit speeds I expect, which has me thinking this isn't a network problem (I've already disabled the Nagle algorithm, if you're wondering). It's not until I add the iSCSI target to the stack that the speeds go south, so I am concerned that I may be missing something in the configuration of the target.

Below are some details pertaining to my configuration.

OpenSolaris iSCSI Target Host:

target_host:~ # zpool status pool0
  pool: pool0
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pool0       ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c0t0d0  ONLINE       0     0     0
            c0t1d0  ONLINE       0     0     0
            c0t2d0  ONLINE       0     0     0
            c0t3d0  ONLINE       0     0     0
            c0t4d0  ONLINE       0     0     0
            c0t5d0  ONLINE       0     0     0
            c0t6d0  ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c0t7d0  ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
            c1t4d0  ONLINE       0     0     0
            c1t5d0  ONLINE       0     0     0
        spares
          c1t6d0    AVAIL

errors: No known data errors

target_host:~ # zfs get all pool0/vol0
NAME        PROPERTY        VALUE                  SOURCE
pool0/vol0  type            volume                 -
pool0/vol0  creation        Wed Jul  2 18:16 2008  -
pool0/vol0  used            5T                     -
pool0/vol0  available       7.92T                  -
pool0/vol0  referenced      34.2G                  -
pool0/vol0  compressratio   1.00x                  -
pool0/vol0  reservation     none                   default
pool0/vol0  volsize         5T                     -
pool0/vol0  volblocksize    8K                     -
pool0/vol0  checksum        on                     default
pool0/vol0  compression     off                    default
pool0/vol0  readonly        off                    default
pool0/vol0  shareiscsi      on                     local
pool0/vol0  copies          1                      default
pool0/vol0  refreservation  5T                     local

target_host:~ # iscsitadm list target -v pool0/vol0
Target: pool0/vol0
    iSCSI Name: iqn.1986-03.com.sun:02:fb1c7071-8f35-eb03-9efb-b950d5bdd1ab
    Alias: pool0/vol0
    Connections: 1
        Initiator:
            iSCSI Name: iqn.1986-03.com.sun:01:0003ba681e7f.486c0829
            Alias: unknown
    ACL list:
    TPGT list:
        TPGT: 1
    LUN information:
        LUN: 0
            GUID: 010000304865b1b400002a00486c29d2
            VID: SUN
            PID: SOLARIS
            Type: disk
            Size: 5.0T
            Backing store: /dev/zvol/rdsk/pool0/vol0
            Status: online

OpenSolaris iSCSI Initiator Host:

initiator_host:~ # iscsiadm list target -vS iqn.1986-03.com.sun:02:fb1c7071-8f35-eb03-9efb-b950d5bdd1ab
Target: iqn.1986-03.com.sun:02:fb1c7071-8f35-eb03-9efb-b950d5bdd1ab
        Alias: pool0/vol0
        TPGT: 1
        ISID: 4000002a0000
        Connections: 1
                CID: 0
                  IP address (Local): 192.168.4.2:63960
                  IP address (Peer): 192.168.4.3:3260
                  Discovery Method: SendTargets
                  Login Parameters (Negotiated):
                        Data Sequence In Order: yes
                        Data PDU In Order: yes
                        Default Time To Retain: 20
                        Default Time To Wait: 2
                        Error Recovery Level: 0
                        First Burst Length: 65536
                        Immediate Data: yes
                        Initial Ready To Transfer (R2T): yes
                        Max Burst Length: 262144
                        Max Outstanding R2T: 1
                        Max Receive Data Segment Length: 8192
                        Max Connections: 1
                        Header Digest: NONE
                        Data Digest: NONE
        LUN: 0
                Vendor:  SUN
                Product: SOLARIS
                OS Device Name: /dev/rdsk/c5t0d0s2

The iostat output on the backing pool shows the awful performance when running a dd to the iSCSI disk from the initiator host:

initiator_host:~ # dd if=/dev/zero bs=1k of=/dev/dsk/c5t0d0 count=1000000

target_host:~ # zpool iostat pool0 1
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
pool0       52.2G  9.45T     26     53   211K   459K
pool0       52.2G  9.45T      0      0      0      0
pool0       52.2G  9.45T      0      0      0      0
pool0       52.2G  9.45T      0      0      0      0
pool0       52.2G  9.45T      0      0      0      0
pool0       52.2G  9.45T      0  4.53K      0  35.3M
pool0       52.2G  9.45T      0      0      0      0
pool0       52.2G  9.45T      0      0      0      0
pool0       52.2G  9.45T      0      0      0      0
pool0       52.2G  9.45T      0      0      0      0
pool0       52.2G  9.45T      0      0      0      0
pool0       52.2G  9.45T      0      0      0      0
pool0       52.2G  9.45T      0      0      0      0
pool0       52.2G  9.45T      0      0      0      0
pool0       52.2G  9.45T      0      0      0      0
pool0       52.2G  9.45T      0      0      0      0
pool0       52.2G  9.45T      0      0      0      0
pool0       52.2G  9.45T      0      0      0      0
pool0       52.2G  9.45T      0  1.83K      0  14.5M
pool0       52.2G  9.45T      0  4.01K      0  30.6M
pool0       52.2G  9.45T      0      0      0      0
pool0       52.2G  9.45T      0      0      0      0
pool0       52.2G  9.45T      0      0      0      0
pool0       52.2G  9.45T      0      0      0      0
pool0       52.2G  9.45T      0      0      0      0
pool0       52.2G  9.45T      0      0      0      0
pool0       52.2G  9.45T      0      0      0      0
pool0       52.2G  9.45T      0      0      0      0
pool0       52.2G  9.45T      0      0      0      0
pool0       52.2G  9.45T      0      0      0      0
pool0       52.2G  9.45T      0      0      0      0

iPerf results when connecting to the target from the initiator:

initiator_host:~ # iperf -c target_host -f MB
------------------------------------------------------------
Client connecting to target_host, TCP port 5001
TCP window size: 0.05 MByte (default)
------------------------------------------------------------
[  4] local 192.168.4.2 port 36309 connected with 192.168.4.3 port 5001
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  1097 MBytes  110 MBytes/sec

Traceroute info for these hosts (they're both on the same physical and logical subnet):

initiator_host:~ # traceroute target_host
traceroute: Warning: Multiple interfaces found; using 192.168.4.2 @ ce1
traceroute to target_host (192.168.4.3), 30 hops max, 40 byte packets
 1  sr1521.carnegie (192.168.4.3)  0.294 ms  0.205 ms  0.156 ms

target_host:~ # traceroute initiator_host
traceroute to initiator_host (192.168.4.2), 30 hops max, 40 byte packets
 1  strauss-san.carnegie (192.168.4.2)  0.234 ms  0.170 ms  0.182 ms
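
For reference, the volume was created and shared roughly like this (reconstructed from the properties and discovery method shown above, so the exact commands may have differed slightly):

target_host:~ # zfs create -V 5T pool0/vol0            # create the 5T zvol backing the LUN
target_host:~ # zfs set shareiscsi=on pool0/vol0       # have iscsitgtd export it as a target
target_host:~ # iscsitadm list target                  # confirm the target was created

and on the initiator side, the usual SendTargets discovery steps:

initiator_host:~ # iscsiadm add discovery-address 192.168.4.3:3260
initiator_host:~ # iscsiadm modify discovery --sendtargets enable
initiator_host:~ # devfsadm -i iscsi                   # create the /dev/[r]dsk nodes for the new LUN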
Roch - PAE
2008-Aug-18 15:41 UTC
[zfs-discuss] Poor read/write performance when using ZFS iSCSI target
> initiator_host:~ # dd if=/dev/zero bs=1k of=/dev/dsk/c5t0d0 count=1000000

So this is going at about 3000 x 1K writes per second, i.e. roughly 330 usec per write. The iSCSI target is probably doing an over-the-wire operation for each request, so at first glance this looks about right for 1K writes.

-r
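
P.S. One quick way to confirm that per-request latency, rather than bandwidth, is the limiter would be to rerun the dd with a much larger block size (a suggestion only, not tested on this setup; the device path is the one from the original post):

initiator_host:~ # dd if=/dev/zero bs=128k of=/dev/dsk/c5t0d0 count=10000

If throughput climbs well above 3 MB/s, the 1K request size is the culprit: each SCSI command then carries far more data per round trip, so the gigabit link and the pool, not per-command latency, become the limit.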