Hello,

I just made a setup in our lab which should make ZFS fly, but unfortunately performance is significantly lower than expected: for large sequential data transfers the write speed is about 50 MB/s, while I was expecting at least 150 MB/s.

Setup
-----
The setup consists of five servers in total: one OpenSolaris ZFS server and four SAN servers. ZFS accesses the SAN servers via iSCSI and IPoIB.

* ZFS Server
  Operating system: OpenSolaris build 78.
  CPU: two Intel Xeon CPUs, eight cores in total.
  RAM: 16 GB.
  Disks: not relevant for this test.

* SAN Servers
  Operating system: Linux 2.6.22.18 kernel, 64-bit + iSCSI Enterprise Target (IET). IET has been configured such that it performs both read and write caching.
  CPU: Intel Xeon CPU E5310, 1.60 GHz, four cores in total.
  RAM: two servers with 8 GB RAM, one with 4 GB RAM, one with 2 GB RAM.
  Disks: 16 disks in total: two disks with the Linux OS and 14 set up in RAID-0 via LVM. The LVM volume is exported via iSCSI and used by ZFS.

  These SAN servers give excellent performance results when accessed via Linux's open-iscsi initiator.

* Network
  4x SDR InfiniBand. The raw transfer speed of this network is 8 Gbit/s. Netperf reports 1.6 Gbit/s between the ZFS server and one SAN server (IPoIB, single-threaded). iSCSI transfer speed between the ZFS server and one SAN server is about 150 MB/s.

Performance test
----------------
Software: xdd (see also http://www.ioperformance.com/products.htm). I modified xdd such that the -dio command line option enables O_RSYNC and O_DSYNC in open() instead of calling directio().

Test command:

  xdd -verbose -processlock -dio -op write -targets 1 testfile -reqsize 1 -blocksize $((2**20)) -mbytes 1000 -passes 3

This test command triggers synchronous writes with a block size of 1 MB (verified this with truss). I am using synchronous writes because these give the same performance results as very large buffered writes (large compared to ZFS's cache size).

Write performance reported by xdd for synchronous sequential writes: 50 MB/s, which is lower than expected.

Any help with improving the performance of this setup is highly appreciated.

Bart Van Assche.
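For anyone who wants to repeat the flag check, here is a minimal sketch of the truss verification, assuming xdd is in the PATH and that truss decodes the open() flags symbolically (the syscall names passed to -t may need adjusting):

  # trace only the open() calls of a short xdd pass and look for the sync flags
  $ truss -f -t open,open64 -o /tmp/xdd.truss \
        xdd -verbose -processlock -dio -op write -targets 1 testfile \
            -reqsize 1 -blocksize $((2**20)) -mbytes 100 -passes 1
  $ grep SYNC /tmp/xdd.truss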
Hi Bart,

Your setup is composed of a lot of components. I'd suggest the following:

1) Check the system with one SAN server and see the performance.
2) Check the internal performance of one SAN server.
3) Try using Solaris instead of Linux, as the Solaris iSCSI target could offer more performance.
4) For performance over IB I strongly suggest Lustre.
5) Check your network setup.

Regards,
Mertol

Mertol Ozyoney
Storage Practice - Sales Manager
Sun Microsystems, TR
Istanbul TR
Phone +902123352200
Mobile +905339310752
Fax +902123352222
Email mertol.ozyoney at Sun.COM
On 3/20/08, Bart Van Assche <bart.vanassche at qlayer.com> wrote:
> I just made a setup in our lab which should make ZFS fly, but
> unfortunately performance is significantly lower than expected: for large
> sequential data transfers write speed is about 50 MB/s while I was
> expecting at least 150 MB/s.
> [...]

Have you considered building one Solaris system and using its iSCSI target? When it comes to software iSCSI, you tend to get VERY different results when moving from one platform to the next. In my experience, Linux is notorious on iSCSI for working well with itself, and nothing else.

Have you tested this with VxVM or UFS before blindly pointing the finger at ZFS? It seems very unlikely ZFS is the source of your problem.

--Tim
----- "Tim" <tim at tcsac.net> wrote:
> Have you considered building one Solaris system and using its iSCSI
> target? When it comes to software iSCSI, you tend to get VERY
> different results when moving from one platform to the next. In my
> experience, Linux is notorious on iSCSI for working well with itself,
> and nothing else.
>
> Have you tested this with VxVM or UFS before blindly pointing the
> finger at ZFS? It seems very unlikely ZFS is the source of your
> problem.

I have verified the iSCSI communication between the ZFS server and the SAN servers by capturing the network traffic on port 3260. What I see in Wireshark is that all iSCSI traffic looks normal, with one exception: there are delays in the network traffic. It looks like the ZFS server is communicating with only one SAN server at a time.

Bart.
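For reference, a minimal sketch of how such a capture can be made on the initiator side, assuming snoop can attach to the IPoIB interface and that the interface is called ibd0 (the interface name is an assumption; check with dladm show-link):

  # capture the iSCSI traffic on the IPoIB interface into a file Wireshark can read
  $ pfexec snoop -d ibd0 -o /tmp/iscsi.snoop port 3260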
> It looks like the ZFS server is communicating with only one SAN server at a time.

This leads to the following question: is there a setting in ZFS that enables concurrent writes to the ZFS storage targets instead of serializing all write actions?

Bart.
Bart Van Assche wrote:
> I just made a setup in our lab which should make ZFS fly, but unfortunately
> performance is significantly lower than expected: for large sequential data
> transfers write speed is about 50 MB/s while I was expecting at least 150 MB/s.
> [...]

If I understand this correctly, you've striped the disks together w/ Linux LVM, then exported a single iSCSI volume to ZFS (or two for mirroring; which isn't clear).

I don't know how many concurrent I/Os Solaris thinks your iSCSI volumes will handle, but that's one area to examine. The only way to realize full performance is going to be to get ZFS to issue multiple I/Os to the iSCSI boxes at once.

I'd also suggest just exporting the raw disks to ZFS, and have it do the striping. On 4 commodity 500 GB SATA drives set up w/ RAID-Z, my 2.6 GHz dual-core AMD box sustains 100+ MB/sec read or write... it happily saturates a GB NIC w/ multiple concurrent reads over Samba. W/ 16 drives direct attached you should see close to 500 MB/sec sustained I/O throughput.

- Bart

--
Bart Smaalders			Solaris Kernel Performance
barts at cyber.eng.sun.com	http://blogs.sun.com/barts
"You will contribute more with mercurial than with thunderbird."
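As a rough illustration of the "give ZFS the raw disks" suggestion, a minimal sketch, assuming each SAN box exports its data disks as individual LUNs and they show up as separate cXtYd0 devices on the initiator (all device names below are placeholders):

  # let ZFS build the dynamic stripe itself instead of layering it on one LVM volume
  $ zpool create tank c4t1d0 c4t2d0 c4t3d0 c4t4d0
  # redundancy can be added by grouping the LUNs into mirror or raidz vdevs instead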
Bart Van Assche wrote:
> Test command: xdd -verbose -processlock -dio -op write -targets 1 testfile
> -reqsize 1 -blocksize $((2**20)) -mbytes 1000 -passes 3
> This test command triggers synchronous writes with a block size of 1 MB
> (verified this with truss). I am using synchronous writes because these give
> the same performance results as very large buffered writes (large compared
> to ZFS's cache size).

This makes extensive use of the ZIL. Check the ZFS Best Practices Guide for info on sync loads and the ZIL. IMHO, it makes little sense to do such performance tests without using a slog.
 -- richard
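For completeness, a minimal sketch of adding a slog to the existing pool, assuming a fast, low-latency device is available (the device names are placeholders):

  # add a separate intent log device to absorb the synchronous writes
  $ zpool add storagepoola log c3t0d0
  # or, mirrored:
  $ zpool add storagepoola log mirror c3t0d0 c3t1d0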
Bart Smaalders wrote:
> On 4 commodity 500 GB SATA drives set up w/ RAID-Z, my 2.6 GHz dual-core
> AMD box sustains 100+ MB/sec read or write... it happily saturates a GB NIC
> w/ multiple concurrent reads over Samba.

This leads me to a question I've been meaning to ask for a while. I'm waiting for the parts to come in to set up 3 new servers.

What should I expect for read or write bandwidth local to the server for 5x300GB 10K RPM U320 drives? (Old drives, I know, but it's what I inherited.)

Ignoring for the moment the sync semantics with NFS, what might that translate to for maximum read/write bandwidth for NFS access?

Is there a good rule of thumb for how many 1Gb Ethernet interfaces I might want to 'dladm' together to make sure the network interface isn't the 'weak link'? Each of these machines has 2 quad-port Intel e1000 cards. Will the PCI (PCI-X, 133 MHz, 64-bit) bus be a bottleneck?

-Kyle
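A minimal sketch of the aggregation itself, using the pre-Crossbow dladm syntax that should match builds of that era (interface names and the aggregation key are placeholders, and the switch ports have to be configured for link aggregation as well):

  # bundle two e1000g ports into aggregation key 1
  $ dladm create-aggr -d e1000g0 -d e1000g1 1
  $ dladm show-aggr

Keep in mind that a single TCP connection still travels over one physical link; aggregation only helps when there are multiple concurrent flows.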
On 3/20/08, Kyle McDonald <KMcDonald at egenera.com> wrote:
> What should I expect for read or write bandwidth local to the server for
> 5x300GB 10K RPM U320 drives? (Old drives, I know, but it's what I inherited.)
> [...]
> Each of these machines has 2 quad-port Intel e1000 cards. Will the PCI
> (PCI-X, 133 MHz, 64-bit) bus be a bottleneck?

Your PCI bus won't be an issue: PCI-X at 133 MHz x 64 bits is about 8.5 Gbit/s, or roughly 1.06 GB/s.

As for the drives... it depends entirely on the age/model. Are you looking for raw throughput numbers, or IOPS? If it's throughput numbers, it will also depend on the type of workload.
> If I understand this correctly, you've striped the disks together
> w/ Linux LVM, then exported a single iSCSI volume to ZFS (or two for
> mirroring; which isn't clear).

The disks in the SAN servers were indeed striped together with Linux LVM and exported as a single volume to ZFS. The ZFS pool was set up as follows:

$ zpool create -f storagepoola raidz1 c4t8d0 c4t6d0 c4t12d0 c4t10d0
$ zpool get all storagepoola
NAME          PROPERTY     VALUE                 SOURCE
storagepoola  size         12.5T                 -
storagepoola  used         71.6G                 -
storagepoola  available    12.4T                 -
storagepoola  capacity     0%                    -
storagepoola  altroot      -                     default
storagepoola  health       ONLINE                -
storagepoola  guid         13384031601381355037  -
storagepoola  version      10                    default
storagepoola  bootfs       -                     default
storagepoola  delegation   on                    default
storagepoola  autoreplace  off                   default
storagepoola  cachefile    -                     default
storagepoola  failmode     wait                  default

> I don't know how many concurrent I/Os Solaris thinks your iSCSI volumes
> will handle, but that's one area to examine. The only way to realize full
> performance is going to be to get ZFS to issue multiple I/Os to the iSCSI
> boxes at once.

I have read the zpool and zfs man pages, but it's still not clear to me how I can configure ZFS such that it issues concurrent I/Os to the iSCSI boxes?

> I'd also suggest just exporting the raw disks to ZFS, and have it do the
> striping.

Thanks, I'll try this configuration as soon as I have the time.

Bart.
Bart Van Assche wrote:
> The disks in the SAN servers were indeed striped together with Linux LVM
> and exported as a single volume to ZFS. The ZFS pool was set up as follows:
>
> $ zpool create -f storagepoola raidz1 c4t8d0 c4t6d0 c4t12d0 c4t10d0

Please see the discussions on raidz performance in the Best Practices Guide.

> I have read the zpool and zfs man pages, but it's still not clear to me how
> I can configure ZFS such that it issues concurrent I/Os to the iSCSI boxes?

I don't know how to tell ZFS to not issue concurrent I/Os. By default, it will try to queue up to 35 I/Os to each vdev. You should be able to use iostat to examine the queues and vdev performance.
 -- richard
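As a minimal sketch of that kind of check, assuming it is run while the xdd test is active:

  # per-vdev bandwidth and operation counts for the pool, every 5 seconds
  $ zpool iostat -v storagepoola 5
  # per-device view; the actv column shows how many I/Os are outstanding on each LUN
  $ iostat -xnz 5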
> The disks in the SAN servers were indeed striped together with Linux LVM
> and exported as a single volume to ZFS.

That is really going to hurt. In general, you're much better off giving ZFS access to all the individual LUNs. The intermediate LVM layer kills the concurrency that's native to ZFS.

Jeff
> > The disks in the SAN servers were indeed striped together with Linux LVM
> > and exported as a single volume to ZFS.
>
> That is really going to hurt. In general, you're much better off
> giving ZFS access to all the individual LUNs. The intermediate
> LVM layer kills the concurrency that's native to ZFS.

Thanks for the hint. By this time I have upgraded from OpenSolaris build 78 to build 87, and I gave ZFS access to all individual LUNs. zpool iostat now reports a write speed of 133 MB/s in the test I ran. This means that ZFS sends out data at a rate of (4/3 * 133) = 177 MB/s. Each of the iSCSI target systems has to write about (177/4) = 44 MB/s to disk. The iSCSI target implementation I used in this test (SCST) can receive data at a rate of 300 MB/s, and the target systems are capable of writing to disk at a speed of 700 MB/s (16 independent disks). In other words: the bottleneck in this setup is the speed at which OpenSolaris' iSCSI initiator can send data out.

Setup details:

Zpool setup (64 disks in total, organized in 16 raidz1 groups, where the four disks of each raidz1 group are mounted in a different server):

zpool create storagepoola \
    raidz1 c4t19d0 c4t21d0 c4t15d0 c4t17d0 \
    raidz1 c4t20d0 c4t22d0 c4t16d0 c4t18d0 \
    raidz1 c4t37d0 c4t51d0 c4t65d0 c4t23d0 \
    raidz1 c4t38d0 c4t52d0 c4t66d0 c4t24d0 \
    raidz1 c4t39d0 c4t53d0 c4t67d0 c4t25d0 \
    raidz1 c4t40d0 c4t54d0 c4t68d0 c4t26d0 \
    raidz1 c4t41d0 c4t55d0 c4t69d0 c4t27d0 \
    raidz1 c4t42d0 c4t56d0 c4t70d0 c4t28d0 \
    raidz1 c4t43d0 c4t57d0 c4t71d0 c4t29d0 \
    raidz1 c4t44d0 c4t58d0 c4t72d0 c4t30d0 \
    raidz1 c4t45d0 c4t59d0 c4t73d0 c4t31d0 \
    raidz1 c4t46d0 c4t60d0 c4t74d0 c4t32d0 \
    raidz1 c4t47d0 c4t61d0 c4t75d0 c4t33d0 \
    raidz1 c4t48d0 c4t62d0 c4t76d0 c4t34d0 \
    raidz1 c4t49d0 c4t63d0 c4t77d0 c4t35d0 \
    raidz1 c4t50d0 c4t64d0 c4t78d0 c4t36d0

Test command I ran on the storage pool:

( cd /storagepoola && while true; do rm -f blk; dd if=/dev/zero of=blk bs=10M count=1000; done )

iSCSI parameters:

# iscsiadm list target -v
Target: ...
        Alias: -
        TPGT: 1
        ISID: 4000002a0000
        Connections: 1
                CID: 0
                  IP address (Local): 192.168.65.10:52665
                  IP address (Peer): 192.168.65.15:3260
                  Discovery Method: Static
                  Login Parameters (Negotiated):
                        Data Sequence In Order: yes
                        Data PDU In Order: yes
                        Default Time To Retain: 20
                        Default Time To Wait: 2
                        Error Recovery Level: 0
                        First Burst Length: 65536
                        Immediate Data: yes
                        Initial Ready To Transfer (R2T): yes
                        Max Burst Length: 262144
                        Max Outstanding R2T: 1
                        Max Receive Data Segment Length: 1048576
                        Max Connections: 1
                        Header Digest: NONE
                        Data Digest: NONE

IPoIB throughput as reported by netperf (OpenSolaris to Linux): about 2100 Mbit/s = 262 MB/s.
IPoIB throughput as reported by netperf (Linux to Linux): about 2950 Mbit/s = 368 MB/s.
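To double-check that conclusion independently of ZFS, here is a minimal, non-destructive sketch: read from a handful of the raw LUNs in parallel and watch whether the initiator can sustain more than the ~177 MB/s aggregate seen above (device and slice names are placeholders; reads do not modify the pool):

  # read several iSCSI LUNs in parallel, bypassing ZFS
  $ for d in c4t19d0s0 c4t20d0s0 c4t37d0s0 c4t38d0s0
    do
        dd if=/dev/rdsk/$d of=/dev/null bs=1024k count=2000 &
    done
    wait
  # watch the aggregate rate in another terminal with: iostat -xnz 5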