similar to: kernel udp rate limit

Displaying 20 results from an estimated 6000 matches similar to: "kernel udp rate limit"

2011 Mar 11
1
UDP Performance tuning
Hi, We are running 5.5 on an HP ProLiant DL360 G6. The kernel version is 2.6.18-194.17.1.el5 (we have also tested with the latest available kernel, 2.6.18-238.1.1.el5.x86_64). We are running some performance tests using the "iperf" utility. We are seeing very poor and inconsistent performance in the UDP tests. The maximum we could get was 440 Mbits/sec, and it varies from 250 to 440
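For reference, a UDP throughput test of the kind described above is normally run with an iperf server/client pair, and poor UDP numbers are often chased by raising the kernel socket buffer limits; the address and values below are illustrative, not taken from this thread:
  # receiver
  iperf -s -u
  # sender: offer 1 Gbit/s of UDP traffic, report every second
  iperf -c <receiver-ip> -u -b 1000M -i 1
  # illustrative socket buffer increase (sysctl), not a fix suggested in the thread
  sysctl -w net.core.rmem_max=8388608
  sysctl -w net.core.wmem_max=8388608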
2006 Jan 04
3
TC/CBQ shaping problems
Hello everyone, I'm a newbie experimenting with CBQ shaping and am facing a few problems. Can any of you please help? TEST SETUP:
+---------------+            +----------------+
|  10.0.0.103   |----------->|   10.0.0.102   |
+---------------+            +----------------+
10.0.0.103: Linux, 100Mbit/s NIC
10.0.0.102: Windows, 100Mbit/s NIC, iperf tcp server (ports 2000 and 2001)
WHAT I
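A minimal CBQ setup of the sort being experimented with here might look roughly like the following; the interface name, rates and port are assumptions for illustration, not the poster's actual values:
  # root CBQ qdisc on a 100 Mbit link (eth0 assumed)
  tc qdisc add dev eth0 root handle 1: cbq bandwidth 100mbit avpkt 1000
  # one bounded class limited to 10 Mbit
  tc class add dev eth0 parent 1: classid 1:1 cbq bandwidth 100mbit rate 10mbit \
    weight 1mbit prio 5 allot 1514 cell 8 maxburst 20 avpkt 1000 bounded
  # steer traffic to the iperf server port 2000 into that class
  tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip dport 2000 0xffff flowid 1:1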
2011 Jan 11
1
Bonding performance question
I have a Dell server with four bonded gigabit interfaces. Bonding mode is 802.3ad, xmit_hash_policy=layer3+4. When testing this setup with iperf, I never get more than a total of about 3Gbps throughput. Is there anything to tweak to get better throughput? Or am I running into other limits (e.g. I was reading about TCP retransmit limits for mode 0)? The iperf test was run with iperf -s on the
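Worth noting for this kind of setup: with 802.3ad and layer3+4 hashing, each individual flow is hashed onto a single slave, so one iperf stream cannot exceed a single link's ~1 Gbit/s; the aggregate only climbs when several flows hash onto different slaves. A quick way to check this (address illustrative):
  # server
  iperf -s
  # client: 4 parallel streams so the layer3+4 hash can spread flows across slaves
  iperf -c <server-ip> -P 4 -t 30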
2006 Apr 04
9
Very slow domU network performance
I set up a domU as a backup server, but it has very, very poor network performance with external computers. I ran some tests with iperf and found some very weird results. Using iperf, I get these approximate numbers (the left column is the iperf client and the right column is the iperf server):
domU --> domU   1.77 Gbits/sec (using 127.0.0.1)
domU --> domU   1.85 Gbits/sec (using domU
2006 Jun 21
1
Expected network throughput
Hi, I have just started to work with Xen and have a question regarding the expected network throughput. Here is my configuration:
Processor: 2.8 GHz Intel Celeron (Socket 775)
Motherboard: Gigabyte 8I865GVMF-775
Memory: 1.5 GB
Basic system: Kubuntu 6.06 Dapper Drake
Xen version: 3.02 (Latest 3.0 stable download)
I get the following iperf results:
Src    Dest    Throughput
Dom0   Dom0
2010 Nov 15
5
Poor performance on bandwidth, Xen 4.0.1 kernel pvops 2.6.32.24
Hello list, I have two different Xen Hypervisor installations on two identical physical servers, on the same switch. The problem is on my new server (Xen 4.0.1 with pvops kernel 2.6.32.24): I get poor bandwidth performance. I have tested with a file copy and "iperf". Average iperf result:
Transfer    Bandwidth
XEN-A -> Windows
2010 Aug 03
1
performance with libvirt and kvm
Hi, I am seeing a performance degradation while using libvirt to start my vm (kvm). The vm is fedora 12 and the host is also fedora 12, both with 2.6.32.10-90.fc12.i686. Here are the statistics from iperf:
From VM:   [ 3]  0.0-30.0 sec   199 MBytes   55.7 Mbits/sec
From host: [ 3]  0.0-30.0 sec   331 MBytes   92.6 Mbits/sec
libvirt command as seen from ps output: /usr/bin/qemu-kvm -S -M
2009 Jan 17
25
GPLPV network performance
Just reporting some iperf results. In each case, Dom0 is iperf server, DomU is iperf client:
(1) Dom0: Intel Core2 3.16 GHz, CentOS 5.2, xen 3.0.3. DomU: Windows XP SP3, GPLPV 0.9.12-pre13, file based. Iperf: 1.17 Gbits/sec
(2) Dom0: Intel Core2 2.33 GHz, CentOS 5.2, xen 3.0.3. DomU: Windows XP SP3, GPLPV 0.9.12-pre13, file based. Iperf: 725 Mbits/sec
(3) Dom0: Intel Core2 2.33 GHz,
2013 Apr 04
1
Freenas domU network performance issue
Hi guys, I'm running a freenas domU (FreeBSD 8.3 based, ZFS v28, 2 vcpus mapped to the same HT-capable core) to serve storage for all purposes, including other domUs running on the same host. I did some testing to understand how well it works, and the results are somewhat confusing. In summary, the network performance between domains on the same host is worse than expected, and NFS service to
2014 Apr 29
2
Degraded performance when using GRE over tinc
Hi, In a setup where Open vSwitch is used with GRE tunnels on top of an interface provided by tinc, I'm experiencing significant performance degradation (from 100Mb/s down to 1Mb/s in the worst case) and I'm not sure how to fix this. From the user's point of view, the problem manifests as iperf reporting ~100Mb/s while rsync reports ~1Mb/s: $ iperf -c 91.224.149.132
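One thing sometimes checked in stacked-tunnel setups like this is the effective path MTU, since GRE over tinc shrinks it twice; a simple probe (size and address illustrative, not a confirmed diagnosis of this thread) is:
  # 1472 = 1500 minus 28 bytes of IP/ICMP header; lower -s until the ping gets through
  ping -M do -s 1472 <remote-ip>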
2013 Oct 21
2
Very slow network speed using Tinc
Hi all, We are using Tinc 2.0.22 as a layer 2 VPN between nodes over the Internet. We are experiencing very slow network speed using Tinc. Between 2 nodes, we have 150 Mbit/s network speed without Tinc (public IPv4 to public IPv4 using iperf), and only 3 Mbit/s using Tinc (private IPv4 to private IPv4). Here is the Tinc configuration we use:
AddressFamily = ipv4
BindToInterface = vmbr1
2008 Jul 10
1
TX TCP checksum errors with Xen GPLPV 0.9.9 drivers (xen 3.2.1 and Windows Server 2003 R2 x86)
Hello, My first post on xen-users, so... I've discovered a strange problem. Setup:
- a Windows Server 2003 R2 (x86) with GPL PV driver 0.9.9, iperf 1.7.0 (from 2003)
- dom0: an openSUSE 11.0 xen host:
# rpm -q -a | grep -i xen
kqemu-kmp-xen-1.3.0pre11_2.6.25.5_1.1-7.1
kiwi-desc-xenboot-2.38-67.1
xen-3.2.1_16881_04-4.2
xen-tools-3.2.1_16881_04-4.2
xen-libs-3.2.1_16881_04-4.2
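Checksum errors reported on traffic from PV guests are often an interaction with checksum offload rather than real corruption on the wire; a workaround frequently tried in such setups (interface names assumed, and not necessarily what resolved this thread) is to disable the offloads:
  # in dom0, on the guest's backend vif (name assumed)
  ethtool -K vif1.0 tx off
  # or on the NIC inside the guest / the physical NIC
  ethtool -K eth0 tx off rx off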
2013 Sep 06
1
Problem with lost packets
Hello, I have a problem with lost packets on a tinc connection. I have a tinc VPN with about 60 nodes (a central node with tinc 1.0.20 and 59 nodes with tinc 1.0.9 connecting to it). Everything works fine except for 2 nodes. On these nodes I observed about 90% packet loss on the connection to the central node. In the logs (debug level 5) I don't see any errors. I tested the connection with iperf (server in
2012 Dec 03
1
Strange QoS behavior
Hi, I'm having a weird problem with the setup of QoS on a bridged network. As the docs state, the outbound/inbound average speed should be expressed in KBps (KBytes per second), but in order to get a maximum speed of 10Mbps (megabits per second), surprisingly enough, I have to use 2560 on the guest (not 1280 as expected). Using 1280 I get a speed of 5Mbps. I'm aware of peak and
2012 Dec 03
1
Strange behavior of QoS
Hi, I'm having a weird problem with the setup of QoS on a bridged network. As the docs state, the outbound/inbound average speed should be expressed in KBps (KBytes per second), but in order to get a maximum speed of 10Mbps (megabits per second), surprisingly enough, I have to use 2560 on the guest (not 1280 as expected). Using 1280 I get a speed of 5Mbps. I'm aware of peak and
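For reference, this setting lives in the libvirt domain XML under the guest's interface element, and the straight arithmetic is 10 Mbit/s / 8 = 1.25 MB/s, i.e. roughly 1250-1280 KBps depending on whether a kilobyte is counted as 1000 or 1024 bytes; the factor of two reported above is not explained by that. A sketch with illustrative values:
  <interface type='bridge'>
    ...
    <bandwidth>
      <inbound average='1280' peak='2560' burst='1280'/>
      <outbound average='1280'/>
    </bandwidth>
  </interface>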
2004 Nov 17
9
serious networking (em) performance (ggate and NFS) problem
Dear best guys, I really love 5.3 in many ways, but here are some unbelievable transfer rates, after I went out and bought a pair of Intel Gigabit Ethernet cards to solve my performance problem (*laugh*): (In short, see *** below.) Tests were done with two Intel Gigabit Ethernet cards (82547EI, 32bit PCI desktop adapter MT) connected directly without a switch/hub and "device
2014 Nov 28
1
poor throughput with tinc
Hi, I am testing tinc for a very large scale deployment, using tinc-1.1. The test results below are for tinc in switch mode; all other settings are default. The test is performed in a LAN environment between 2 different hosts. I am getting only 24.6 Mbits/sec when tinc is used. Without tinc, on the same hosts/link, I get 95 to 100 Mbits/sec using iperf. Over Tinc: iperf -c 192.168.9.9 -b 100m -l 32k -w
2010 Nov 24
1
slow network throughput, how to improve?
Would like some input on this one, please. Two CentOS 5.5 Xen servers with 1Gb NICs, connected to a 1Gb switch, transfer files to each other at about 30MB/s. Both servers have the following setup:
CentOS 5.5 x64
Xen
1Gb NICs
7200rpm SATA HDDs
The hardware configuration can't change; I need to use these servers as they are. They are both used in production
2006 Feb 01
0
prio test results
Hi, below are some test results from implementing a prio qdisc 'that is also below'. The qdisc is attached to a vlan interface for my external network. Both tests were run at the same time. The links are policed at 6.0M 'by our provider'. 192.168.70.1 --> 192.168.30.1 My question is: if using a prio qdisc, shouldn't the iperf run with a ToS of b8 have
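A test of this kind is usually a three-band prio qdisc on the egress interface plus iperf runs with and without the ToS byte set; a rough sketch, with the interface name and address as assumptions rather than the poster's actual setup:
  # three-band prio qdisc on the VLAN interface (name assumed)
  tc qdisc add dev eth0.100 root handle 1: prio
  # classify ToS 0xb8 (EF) traffic into the highest-priority band
  tc filter add dev eth0.100 parent 1: protocol ip prio 1 u32 match ip tos 0xb8 0xff flowid 1:1
  # iperf client marking its traffic with ToS 0xb8
  iperf -c <server-ip> -S 0xb8 -t 30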