similar to: Expected network throughput

Displaying 20 results from an estimated 12000 matches similar to: "Expected network throughput"

2006 Apr 04
9
Very slow domU network performance
I set up a domU as a backup server, but it has very, very poor network performance with external computers. I ran some tests with iperf and found some very weird results. Using iperf, I get these approximate numbers (the left column is the iperf client and the right column is the iperf server): domU --> domU 1.77 Gbits/sec (using 127.0.0.1) domU --> domU 1.85 Gbits/sec (using domU
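For readers unfamiliar with the tool, the measurements above follow the usual iperf client/server pattern; a minimal sketch (the server address is a placeholder):

    iperf -s                      # on the receiving end (e.g. inside the domU)
    iperf -c <server-ip> -t 30    # on the sending end (dom0 or an external host)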
2009 Jan 17
25
GPLPV network performance
Just reporting some iperf results. In each case, Dom0 is iperf server, DomU is iperf client: (1) Dom0: Intel Core2 3.16 GHz, CentOS 5.2, xen 3.0.3. DomU: Windows XP SP3, GPLPV 0.9.12-pre13, file based. Iperf: 1.17 Gbits/sec (2) Dom0: Intel Core2 2.33 GHz, CentOS 5.2, xen 3.0.3. DomU: Windows XP SP3, GPLPV 0.9.12-pre13, file based. Iperf: 725 Mbits/sec (3) Dom0: Intel Core2 2.33 GHz,
2011 Jan 11
1
Bonding performance question
I have a Dell server with four bonded gigabit interfaces. Bonding mode is 802.3ad, xmit_hash_policy=layer3+4. When testing this setup with iperf, I never get more than a total of about 3Gbps throughput. Is there anything to tweak to get better throughput? Or am I running into other limits (e.g. I was reading about tcp retransmit limits for mode 0)? The iperf test was run with iperf -s on the
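For context, with 802.3ad and layer3+4 hashing each individual flow still maps to a single slave, so a single-stream test tends to top out at one link's speed. A hedged sketch of the bonding options described, plus a multi-stream test (the miimon value, the config path and the server address are assumptions):

    # /etc/modprobe.conf style entry for the bond described above
    alias bond0 bonding
    options bond0 mode=802.3ad xmit_hash_policy=layer3+4 miimon=100

    # drive several parallel flows so the layer3+4 hash can spread across slaves
    iperf -c <server-ip> -P 8 -t 30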
2010 May 29
1
IFB0 throughput 3-4% lower than expected
I have two boxes for the purpose of testing traffic control and my knowledge thereof (which is at the inkling stage). The boxes are connected by 100Mbit ethernet cards via a switch. For egress traffic via eth0 I achieve a throughput that is close to the specified CEILing, particularly for values above 1mbit. Ingress traffic does not seem so well behaved. Above about 1mbit rates achieved are
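A hedged sketch of the usual way ingress shaping is hooked through ifb0 with tc (device names and rates are assumptions, not the poster's exact setup):

    modprobe ifb numifbs=1
    ip link set dev ifb0 up
    tc qdisc add dev eth0 handle ffff: ingress
    tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
        action mirred egress redirect dev ifb0
    tc qdisc add dev ifb0 root handle 1: htb default 10
    tc class add dev ifb0 parent 1: classid 1:10 htb rate 50mbit ceil 50mbit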
2010 Nov 24
1
slow network throughput, how to improve?
Would like some input on this one please. Two CentOS 5.5 XEN servers with 1 Gbit NICs, connected to a 1 Gbit switch, transfer files to each other at about 30 MB/s. Both servers have the following setup: CentOS 5.5 x64, XEN, 1 Gbit NICs, 7200rpm SATA HDDs. The hardware configuration can't change; I need to use these servers as they are. They are both used in production
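One way such threads usually narrow this down is to measure the network path and the disks separately; a hedged sketch (addresses and paths are placeholders):

    iperf -s                                        # on one server
    iperf -c <other-server-ip> -t 30                # on the other: raw network rate
    dd if=/path/to/a/large/file of=/dev/null bs=1M  # rough sequential read rate of the source disk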
2014 Nov 28
1
poor throughput with tinc
Hi, I am testing tinc for a very large scale deployment. I am using tinc-1.1 for testing. The test results below are for tinc in switch mode; all other settings are default. The test is performed in a LAN environment between 2 different hosts. I am getting only 24.6 Mbits/sec when tinc is used. Without tinc, on the same hosts/link, I get 95 to 100 Mbits/sec using iperf. Over tinc: iperf -c 192.168.9.9 -b 100m -l 32k -w
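For reference, a minimal sketch of a tinc 1.1 switch-mode configuration of the kind described (node names are placeholders):

    # /etc/tinc/<netname>/tinc.conf on one node
    Name = hostA
    Mode = switch
    ConnectTo = hostB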
2006 Jul 05
1
kernel udp rate limit
Hi List. First post, be gentle please. Is there any limit on the Linux UDP rate? I am using Linux kernel 2.6 and iperf to measure bandwidth between two endpoints connected by 100 Mbit Ethernet. Running (as root) iperf -u -s and iperf -u -c always gives me 1.05 Mbits/sec, even when run on the same machine. Can somebody clarify this? Thanks in advance. Sebastian
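For context, iperf's UDP mode targets roughly 1 Mbit/s unless a rate is requested with -b, which matches the 1.05 Mbits/sec figure above; a minimal sketch (the server address is a placeholder):

    iperf -u -s                        # UDP server
    iperf -u -c <server-ip> -b 90M     # ask for ~90 Mbit/s instead of the ~1 Mbit/s default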
2010 Nov 15
5
Poor performance on bandwidth, Xen 4.0.1 kernel pvops 2.6.32.24
Hello list, I have two different Xen Hypervisor installations on two identical physical servers, on the same switch. The problem is on my new server (Xen 4.0.1 with pvops kernel 2.6.32.24): I get poor bandwidth, tested with a file copy and "iperf". Average iperf result: Transfer Bandwidth XEN-A -> Windows
2011 Mar 11
1
UDP Performance tuning
Hi, We are running 5.5 on an HP ProLiant DL360 G6. Kernel version is 2.6.18-194.17.1.el5 (we have also tested with the latest available kernel, kernel-2.6.18-238.1.1.el5.x86_64). We are running some performance tests using the "iperf" utility. We are seeing very poor and inconsistent performance in the UDP testing. The maximum we could get was 440 Mbits/sec, and it varies from 250 to 440
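If the thread is chasing UDP throughput on these kernels, socket buffer limits are a common knob; a hedged sketch, with example values that are assumptions rather than the thread's fix:

    sysctl -w net.core.rmem_max=8388608
    sysctl -w net.core.wmem_max=8388608
    iperf -u -c <server-ip> -b 900M -w 256K    # ask for larger socket buffers on the test itself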
2006 Jan 04
3
TC/CBQ shaping problems
Hello everyone, I'm a newbie experimenting with CBQ shaping and am facing a few problems. Can any of you please help? TEST SETUP: 10.0.0.103 (Linux, 100Mbit/s NIC) -----------> 10.0.0.102 (Windows, 100Mbit/s NIC, iperf tcp server on ports 2000 and 2001). WHAT I
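For reference, a generic CBQ sketch in the spirit of the LARTC examples (rates, port and device name are placeholders, not the poster's actual setup):

    tc qdisc add dev eth0 root handle 1: cbq bandwidth 100mbit avpkt 1000
    tc class add dev eth0 parent 1: classid 1:1 cbq bandwidth 100mbit rate 10mbit \
        allot 1514 prio 5 bounded isolated avpkt 1000
    tc filter add dev eth0 parent 1: protocol ip prio 1 \
        u32 match ip dport 2000 0xffff flowid 1:1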
2010 Aug 03
1
performance with libvirt and kvm
Hi, I am seeing a performance degradation while using libvirt to start my VM (kvm). The VM is Fedora 12 and the host is also Fedora 12, both with 2.6.32.10-90.fc12.i686. Here are the statistics from iperf: From VM: [ 3] 0.0-30.0 sec 199 MBytes 55.7 Mbits/sec From host: [ 3] 0.0-30.0 sec 331 MBytes 92.6 Mbits/sec libvirt command as seen from ps output: /usr/bin/qemu-kvm -S -M
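A common first check in these libvirt/KVM threads is whether the guest NIC is emulated (e1000/rtl8139) rather than virtio; a hedged fragment of the domain XML, editable via virsh edit (the bridge name is a placeholder):

    <interface type='bridge'>
      <source bridge='br0'/>
      <model type='virtio'/>
    </interface>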
2008 Jul 10
1
TX tcp checksum errors with Xen GPLPV 0.9.9 Drivers (xen 3.2.1 and windows Server x86 2003 R2)
Hello, My first post on Xen-Users, so .. I've discovered a strange problem. Setup: - A Windows Server 2003 R2 (x86) with GPL PV driver 0.9.9, iperf 1.7.0 (from 2003) - dom0: an openSUSE 11.0 system. xen: # rpm -q -a |grep -i xen kqemu-kmp-xen-1.3.0pre11_2.6.25.5_1.1-7.1 kiwi-desc-xenboot-2.38-67.1 xen-3.2.1_16881_04-4.2 xen-tools-3.2.1_16881_04-4.2 xen-libs-3.2.1_16881_04-4.2
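Checksum errors with PV drivers are often tied to checksum offload; a hedged sketch of turning TX checksum offload off as an experiment (the vif name is a placeholder, and this is not presented as the thread's confirmed fix):

    ethtool -K <vif-device> tx off    # run in dom0 against the guest's vif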
2004 Oct 12
3
Performance Issues with GBit LAN
Hi. I have 2 PCs connected with 1 GBit NICs. When I transfer a file from my file server (Red Hat 9.0, 256 MB SD-RAM, 300 MHz PII, RTL8169 NIC, 2x Western Digital WD200JB RAID 0) to my Windows PC (AMD Athlon XP 1800+, 1024 MB DDR-RAM, WinXP Pro, RTL8169 NIC, 2x Western Digital WD080JB RAID 0) with Samba, I get speeds around 8-9 MB/sec. I think this is too low for a GBit network, so I tested the
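8-9 MB/sec is in the ballpark of a Fast Ethernet link, so it may be worth confirming what the NICs actually negotiated before blaming Samba; a minimal check (device name is an assumption):

    ethtool eth0    # look for "Speed: 1000Mb/s" and "Duplex: Full"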
2018 Jun 30
1
[PATCH net-next v3 4/4] net: vhost: add rx busy polling in tx path
On Fri, 29 Jun 2018 23:33:58 -0700 xiangxia.m.yue at gmail.com wrote: > From: Tonghao Zhang <xiangxia.m.yue at gmail.com> > > This patch improves the guest receive and transmit performance. > On the handle_tx side, we poll the sock receive queue at the > same time. handle_rx do that in the same way. > > We set the poll-us=100us and use the iperf3 to test Where/how do
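A hedged sketch of the kind of iperf3 run referenced in the patch description (address, duration and stream count are assumptions):

    iperf3 -s                          # in the guest
    iperf3 -c <guest-ip> -t 60 -P 4    # from the peer, several parallel streams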
2013 Apr 04
1
Freenas domU network performance issue
Hi guys, I'm running a freenas domU (FreeBSD 8.3 based, ZFS v28, 2 vcpus mapped to the same HT capable core) to serve storage for all purposes, including other domUs running on the same host. I did some study to understand how well it works, and the result is kind of confusing. In summary, the network performance between domains on the same host is worse than expected. And NFS service to
2004 Nov 17
9
serious networking (em) performance (ggate and NFS) problem
Dear best guys, I really love 5.3 in many ways, but here are some unbelievable transfer rates, after I went out and bought a pair of Intel GigaBit Ethernet cards to solve my performance problem (*laugh*): (In short, see *** below) Tests were done with two Intel GigaBit Ethernet cards (82547EI, 32bit PCI Desktop adapter MT) connected directly without a switch/hub and "device
2006 May 10
11
HTB at 100+ Mbits/sec
Hello all, I've been trying to test HTB performance for different link bandwidths to find potential limits, and this is what I have so far: http://home.comcast.net/~msethuraman/htbtest/ Can members please go over the setup, test procedure and the results and answer a few questions? 1. Is the testing methodology okay and can the results be considered accurate? If so, is this a decent
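For comparison, a bare-bones HTB hierarchy of the sort such tests usually shape against (device, rates and the iperf target are placeholders):

    tc qdisc add dev eth0 root handle 1: htb default 20
    tc class add dev eth0 parent 1: classid 1:1 htb rate 900mbit
    tc class add dev eth0 parent 1:1 classid 1:20 htb rate 450mbit ceil 900mbit
    iperf -c <server-ip> -P 4 -t 60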
2014 Apr 29
2
Degraded performance when using GRE over tinc
Hi, In a setup where OpenVSwitch is used with GRE tunnels on top of an interface provided by tinc, I'm experiencing significant performance degradation (from 100Mb/s down to 1Mb/s in the worst case) and I'm not sure how to fix it. From the user's point of view, the problem manifests as iperf reporting ~100Mb/s while rsync reports ~1Mb/s: $ iperf -c 91.224.149.132
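With GRE stacked on top of tinc, MTU/fragmentation is one common culprit for "iperf fast, rsync slow" symptoms; a hedged way to probe it (the tunnel device name and MTU value are assumptions, the address is the one quoted above):

    ping -M do -s 1472 91.224.149.132      # 1472 + 28 = 1500; shrink the size until it passes
    ip link set dev <tunnel-dev> mtu 1400  # then clamp the tunnel MTU accordingly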
2015 Mar 13
3
Network throughput testing software available for CentOS/Linux
On 12-03-2015 17:39, Digimer wrote: > On 12/03/15 04:29 PM, Gilbert Sebenste wrote: >> Hello everyone, >> >> A network engineer buddy of mine brought up for discussion with me >> that he'd like to do some throughput testing, but he's new to >> Linux/RedHat. Is there any software I can recommend to