I would like some input on this one please. Two CentOS 5.5 Xen servers, with 1Gb NICs, connected to a 1Gb switch, transfer files to each other at about 30MB/s. Both servers have the following setup:

CentOS 5.5 x64
Xen 3.0 (from xm info: xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64)
kernel 2.6.18-194.11.3.el5xen
1Gb NICs
7200rpm SATA HDDs

The hardware configuration can't change; I need to use these servers as they are. They are both in production with a few Xen domU virtual machines running on them. I want to connect them both to a SAN over gigabit connectivity, and would like to know how I can increase network performance a bit, as is. The upstream datacentre only supplies a 100Mb network connection, so the internet side of it isn't much of a problem; if I do manage to reach 100Mb that will be my limit in any case.

root@zaxen02.securehosting.co.za:/vm/xen/template/centos-5-x64-cpanel
root@zaxen02.securehosting.co.za:/
root@zaxen02.securehosting.co.za's password:
centos-5-x64-cpanel.tar.gz     100% 1163MB  29.1MB/s   00:40

iperf indicates that the network throughput is about 930Mbit/s though:

root@zaxen01:[~]$ iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 196.34.x.x port 5001 connected with 196.34.x.x port 45453
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  1.09 GBytes   935 Mbits/sec

root@zaxen02:[~]$ iperf -c zaxen01
------------------------------------------------------------
Client connecting to zaxen01, TCP port 5001
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[  3] local 196.34.x.x port 45453 connected with 196.34.x.x port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.09 GBytes   936 Mbits/sec

Is iperf really that accurate, or reliable in this instance, since
the test data is so small that it probably goes straight to memory, instead of the HDD? But at the same time, raising the TCP window size to 10MB, 100MB and 1000MB respectively doesn't seem to degrade performance much either:

root@zaxen02:[~]$ iperf -w 10M -c zaxen01
------------------------------------------------------------
Client connecting to zaxen01, TCP port 5001
TCP window size:  256 KByte (WARNING: requested 10.0 MByte)
------------------------------------------------------------
[  3] local 196.34.x.x port 36756 connected with 196.34.x.x port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.07 GBytes   921 Mbits/sec

root@zaxen02:[~]$ iperf -w 100M -c zaxen01
------------------------------------------------------------
Client connecting to zaxen01, TCP port 5001
TCP window size:  256 KByte (WARNING: requested 100 MByte)
------------------------------------------------------------
[  3] local 196.34.x.x port 36757 connected with 196.34.x.x port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.08 GBytes   927 Mbits/sec

root@zaxen02:[~]$ iperf -w 1000M -c zaxen01
------------------------------------------------------------
Client connecting to zaxen01, TCP port 5001
TCP window size:  256 KByte (WARNING: requested 1000 MByte)
------------------------------------------------------------
[  3] local 196.34.x.x port 36758 connected with 196.34.x.x port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.04 GBytes   895 Mbits/sec

--
Kind Regards
Rudi Ahlers
SoftDux
Website: http://www.SoftDux.com
Technical Blog: http://Blog.SoftDux.com
Office: 087 805 9573
Cell: 082 554 7532

_______________________________________________
Xen-users mailing list
Xen-users@lists.xensource.com
http://lists.xensource.com/xen-users
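The scp and iperf figures above are in different units (megabytes vs megabits per second); converting both to a common unit makes the gap explicit. A quick sketch with awk, ignoring protocol overhead:

```shell
# Convert the scp rate (29.1 MB/s) to megabits, and the iperf rate
# (935 Mbit/s) to megabytes, so the two can be compared directly.
awk 'BEGIN { printf "scp  : %.1f Mbit/s\n", 29.1 * 8 }'   # 232.8 Mbit/s
awk 'BEGIN { printf "iperf: %.1f MB/s\n",   935 / 8 }'    # 116.9 MB/s
```

So the disk-to-disk copy is running at roughly a quarter of what the wire can carry.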
On 11/24/2010 03:03 AM, Rudi Ahlers wrote:
> 100% 1163MB  29.1MB/s   00:40
>
> iperf indicates that the network throughput is about 930Mbit/s though:
>
> root@zaxen01:[~]$ iperf -s
> ------------------------------------------------------------
> Server listening on TCP port 5001
> TCP window size: 85.3 KByte (default)
> ------------------------------------------------------------
> [  4] local 196.34.x.x port 5001 connected with 196.34.x.x port 45453
> [ ID] Interval       Transfer     Bandwidth
> [  4]  0.0-10.0 sec  1.09 GBytes   935 Mbits/sec

29.1MB/s (megabytes) is 232.8 megabit. Yes, you still have room for improvement in that regard. You said SATA drives, but what spec? Any idea of the throughput of your source/destination hard drives? Would they both sustain higher than 29.1MB/s?
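One way to answer that question is to measure the drives directly while keeping the page cache out of the picture. A rough sketch (the file and device paths are examples; adjust for your own layout):

```shell
# Sequential write test: conv=fdatasync makes dd flush to disk before
# reporting, so the page cache doesn't inflate the figure.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=256 conv=fdatasync
rm -f /tmp/ddtest

# Sequential read test on the raw device (run as root; /dev/sda is an
# example device name):
# hdparm -t /dev/sda
```

If either drive sustains well under wire speed here, the disks rather than the network set the ceiling for scp.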
Rudi Ahlers
2010-Nov-24 17:29 UTC
Re: [Xen-users] slow network throughput, how to improve?
On Wed, Nov 24, 2010 at 7:15 PM, Richie <listmail@triad.rr.com> wrote:
> On 11/24/2010 03:03 AM, Rudi Ahlers wrote:
>> 100% 1163MB  29.1MB/s   00:40
>>
>> iperf indicates that the network throughput is about 930Mbit/s though:
>>
>> root@zaxen01:[~]$ iperf -s
>> ------------------------------------------------------------
>> Server listening on TCP port 5001
>> TCP window size: 85.3 KByte (default)
>> ------------------------------------------------------------
>> [  4] local 196.34.x.x port 5001 connected with 196.34.x.x port 45453
>> [ ID] Interval       Transfer     Bandwidth
>> [  4]  0.0-10.0 sec  1.09 GBytes   935 Mbits/sec
>
> 29.1MB/s (megabytes) is 232.8 megabit. Yes, you still have room for
> improvement in that regard. You said SATA drives, but what spec? Any idea
> of the throughput of your source/destination hard drives? Would they both
> sustain higher than 29.1MB/s?

Hi Richie,

zaxen01: Seagate ST31000340AS 1TB HDD, and zaxen02:

root@zaxen02:[~]$ cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: ST3250318AS      Rev: HP11
  Type:   Direct-Access                    ANSI SCSI revision: 05
Host: scsi1 Channel: 00 Id: 00 Lun: 00
  Vendor: ATA      Model: ST3250318AS      Rev: HP11
  Type:   Direct-Access                    ANSI SCSI revision: 05

These drives are set up in software RAID1.

Some more testing:

zaxen01, with 1x SATAII HDD, 8GB RAM, Core2Quad CPU, Intel server-grade motherboard:

root@zaxen01:[~]$ dd if=/dev/zero of=filename bs=1024 count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 12.2736 seconds, 83.4 MB/s

zaxen02, with 2x SATAII HDD set up in software RAID1, 8GB RAM, Core2Quad CPU, SuperMicro server-grade motherboard:

root@zaxen02:[~]$ dd if=/dev/zero of=filename bs=1024 count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 6.0765 seconds, 169 MB/s

Inhouse server, with 2x SATAII HDD in software RAID1, 2GB RAM, Core2Duo CPU, using a Gigabyte
motherboard:

[root@intranet ~]# dd if=/dev/zero of=filename bs=1024 count=1000000
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 10.381 seconds, 98.6 MB/s

--
Kind Regards
Rudi Ahlers
SoftDux
Website: http://www.SoftDux.com
Technical Blog: http://Blog.SoftDux.com
Office: 087 805 9573
Cell: 082 554 7532
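Since iperf already shows the wire running near line rate, most of the remaining headroom is in the disks and in per-connection buffering rather than the link itself. If the default 2.6.18-era TCP buffer limits turn out to cap a single stream to the SAN, a common starting point is to raise them in /etc/sysctl.conf. The values below are illustrative, not tuned for this hardware:

```
# /etc/sysctl.conf - illustrative TCP buffer limits for gigabit links
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
```

Apply with `sysctl -p`. Worth noting too that scp itself is often CPU-bound by its cipher on hardware of this generation, so an scp figure can sit well below what iperf reports even when nothing else is wrong.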