similar to: Network optimization ?

Displaying 20 results from an estimated 6000 matches similar to: "Network optimization ?"

2006 Jun 21
1
Expected network throughput
Hi, I have just started to work with Xen and have a question regarding the expected network throughput. Here is my configuration:
Processor: 2.8 GHz Intel Celeron (Socket 775)
Motherboard: Gigabyte 8I865GVMF-775
Memory: 1.5 GB
Basic system: Kubuntu 6.06 Dapper Drake
Xen version: 3.02 (Latest 3.0 stable download)
I get the following iperf results:
Src     Dest    Throughput
Dom0    Dom0
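For context, results like the Src/Dest/Throughput matrix above are usually collected by running an iperf server in one domain and pointing a client at it from the other. A minimal sketch, assuming the receiving domain's IP is 192.168.1.10 (a made-up address):
  iperf -s                      # in the receiving domain (e.g. dom0)
  iperf -c 192.168.1.10 -t 30   # in the sending domain, 30-second TCP test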
2006 Apr 04
9
Very slow domU network performance
I set up a domU as a backup server, but it has very, very poor network performance with external computers. I ran some tests with iperf and found some very weird results. Using iperf, I get these approximate numbers (the left column is the iperf client and the right column is the iperf server):
domU --> domU   1.77 Gbits/sec (using 127.0.0.1)
domU --> domU   1.85 Gbits/sec (using domU
2009 Jan 17
25
GPLPV network performance
Just reporting some iperf results. In each case, Dom0 is iperf server, DomU is iperf client:
(1) Dom0: Intel Core2 3.16 GHz, CentOS 5.2, xen 3.0.3. DomU: Windows XP SP3, GPLPV 0.9.12-pre13, file based. Iperf: 1.17 Gbits/sec
(2) Dom0: Intel Core2 2.33 GHz, CentOS 5.2, xen 3.0.3. DomU: Windows XP SP3, GPLPV 0.9.12-pre13, file based. Iperf: 725 Mbits/sec
(3) Dom0: Intel Core2 2.33 GHz,
2011 Jan 11
1
Bonding performance question
I have a Dell server with four bonded gigabit interfaces. Bonding mode is 802.3ad, xmit_hash_policy=layer3+4. When testing this setup with iperf, I never get more than a total of about 3 Gbps throughput. Is there anything to tweak to get better throughput? Or am I running into other limits (e.g. I was reading about TCP retransmit limits for mode 0)? The iperf test was run with iperf -s on the
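A note on the entry above: with 802.3ad bonding each flow is hashed to a single slave, so one TCP stream can never exceed one link's bandwidth, and an aggregate test needs several parallel flows (which may still collide on the same slave depending on the layer3+4 hash). A rough sketch, assuming a RHEL/CentOS-style config and hypothetical interface names:
  # /etc/sysconfig/network-scripts/ifcfg-bond0 (excerpt)
  BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"
  # drive multiple flows so they can hash across slaves; -P sets parallel streams
  iperf -c <server> -P 8 -t 30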
2014 Nov 28
1
poor throughput with tinc
Hi, I am testing tinc for a very large scale deployment. I am using tinc-1.1 for testing. The test results below are for tinc in switch mode; all other settings are default. The test is performed in a LAN environment between 2 different hosts. I am getting only 24.6 Mbits/sec when tinc is used. Without tinc, on the same hosts/link, I get 95 to 100 Mbits/sec using iperf. Over tinc: iperf -c 192.168.9.9 -b 100m -l 32k -w
2010 Nov 24
1
slow network throughput, how to improve?
Would like some input on this one, please. Two CentOS 5.5 Xen servers with 1 Gbit NICs, connected to a 1 Gbit switch, transfer files to each other at about 30 MB/s. Both servers have the following setup: CentOS 5.5 x64, Xen, 1 Gbit NICs, 7200 rpm SATA HDDs. The hardware configuration can't change; I need to use these servers as they are. They are both used in production
2006 Feb 01
0
prio test results
Hi, below are some test results from implementing a prio qdisc 'that is also below'. The qdisc is attached to a vlan interface for my external network. Both tests were run at the same time. The links are policed at 6.0M 'by our provider'. 192.168.70.1 --> 192.168.30.1 My question is: if using a prio qdisc, shouldn't the iperf run with a TOS of b8 have
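Regarding the prio question above: the default prio qdisc classifies purely by the kernel's TOS-to-band priomap, so it is worth checking with per-band counters where the 0xb8-marked run actually lands before drawing conclusions. A minimal sketch, assuming the vlan interface is eth0.100 (hypothetical name) and iperf 2's -S option for setting the TOS byte:
  tc qdisc add dev eth0.100 root handle 1: prio          # 3 bands, default priomap
  tc qdisc add dev eth0.100 parent 1:1 handle 10: pfifo  # per-band child qdiscs
  tc qdisc add dev eth0.100 parent 1:2 handle 20: pfifo  #   so each band gets its
  tc qdisc add dev eth0.100 parent 1:3 handle 30: pfifo  #   own packet counters
  iperf -c 192.168.30.1 -S 0xb8                          # client run marked with TOS 0xb8
  tc -s qdisc show dev eth0.100                          # check which band saw the traffic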
2010 Sep 20
0
No subject
connection will remain a TCP connection unless it is broken and restarted. Usually if I stop the client and wait for about 30 seconds to reconnect, there is a much greater chance that the MTU probes work fine, and in about 30 seconds the MTU settles at 1416. Every time the MTU probing fails, I see latency between 700 and 1000 ms with 32 byte pings over a LAN. Every time the MTU probing does
2006 Jul 05
1
kernel udp rate limit
Hi List. First post, be gentle please. Is there any limit on the Linux UDP rate? I am using Linux kernel 2.6 and iperf to measure bandwidth between two endpoints connected by 100 Mbit/s Ethernet. Running (as root) iperf -u -s and iperf -u -c always gives me 1.05 Mbits/sec, even when run on the same machine. Can somebody clarify this? Thanks in advance. Sebastian
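The 1.05 Mbits/sec figure above is almost certainly not a kernel limit: iperf's UDP client defaults to a 1 Mbit/s target bandwidth, so it has to be told explicitly to send faster with -b. For example:
  iperf -u -s                     # server
  iperf -u -c 127.0.0.1 -b 90M    # client, request 90 Mbit/s of UDP traffic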
2010 May 29
1
IFB0 throughput 3-4% lower than expected
I have two boxes for the purpose of testing traffic control and my knowledge thereof (which is at the inkling stage). The boxes are connected by 100 Mbit Ethernet cards via a switch. For egress traffic via eth0 I achieve a throughput that is close to the specified CEILing, particularly for values above 1 Mbit. Ingress traffic does not seem so well behaved. Above about 1 Mbit, the rates achieved are
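For reference, the usual ingress setup redirects traffic to ifb0 and shapes there; note also that iperf reports TCP payload throughput while the tc ceiling is enforced on whole packets including TCP/IP headers, so a roughly 3-4% gap on 1500-byte frames is about the expected overhead. A rough sketch, assuming eth0 and an HTB ceiling somewhat below line rate:
  modprobe ifb
  ip link set dev ifb0 up
  tc qdisc add dev eth0 handle ffff: ingress
  tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
      action mirred egress redirect dev ifb0
  tc qdisc add dev ifb0 root handle 1: htb default 10
  tc class add dev ifb0 parent 1: classid 1:10 htb rate 90mbit ceil 90mbit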
2013 Dec 17
1
Speed issue in only one direction
Hi all, I'm back again with my speed issues. The past issues were dependent on the network I used. Now I run my tests in a lab, with 2 configurations linked by a Gigabit switch:
node1: Intel Core i5-2400 with Debian 7.2
node2: Intel Core i5-3570 with Debian 7.2
Both have AES and PCLMULQDQ announced in /proc/cpuinfo. I use Tinc 1.1 from Git. When I run an iperf test from node2 (client) to
2011 Mar 11
1
UDP Perfomance tuning
Hi, We are running 5.5 on an HP ProLiant DL360 G6. Kernel version is 2.6.18-194.17.1.el5 (we had also tested with the latest available kernel, kernel-2.6.18-238.1.1.el5.x86_64). We are running some performance tests using the "iperf" utility. We are seeing very bad and inconsistent performance in the UDP testing. The maximum we could get was 440 Mbits/sec, and it varies from 250 to 440
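Two things worth ruling out before blaming the hardware: iperf's UDP client only sends at the rate requested with -b, and default socket buffers are often too small at high UDP rates, which shows up as drops and erratic results. A hedged sketch with illustrative values:
  sysctl -w net.core.rmem_max=16777216          # allow larger socket buffers (example values)
  sysctl -w net.core.wmem_max=16777216
  iperf -u -s -w 4M -l 1400                     # server with a bigger socket buffer
  iperf -u -c <server> -b 900M -w 4M -l 1400    # client requesting 900 Mbit/s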
2016 Jul 16
1
Tinc 1.0.24 regulary disconnected
Proxmox 4.2 running on 2 nodes + 1 quorum = 3 servers in total. All of them have tinc 1.0.24 running. On very rare occasions (every few days or 1-2 weeks), my website hosted on this Proxmox node will throw a Cloudflare 522 connection timed out error for a few seconds or a few minutes: https://support.cloudflare.com/hc/en-us/articles/200171906-Error-522-Connection-timed-out This problem has been driving me
2015 Jul 02
0
Samba server read issues
Hi all, I set up a Samba server on Debian (3.2.0-4-amd64). It runs as a guest OS in a VirtualBox machine on an OS X host. The connection seems pretty good:
$ iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 192.168.0.21 port 5001
2008 Jul 10
1
TX tcp checksum errors with Xen GPLPV 0.9.9 Drivers (xen 3.2.1 and windows Server x86 2003 R2)
Hello, my first post on Xen-Users, so .. I've discovered a strange problem. Setup:
- A Windows Server 2003 R2 (x86) with GPL PV driver 0.9.9, iperf 1.7.0 (from 2003)
- dom0: an openSUSE 11.0, xen:
# rpm -q -a |grep -i xen
kqemu-kmp-xen-1.3.0pre11_2.6.25.5_1.1-7.1
kiwi-desc-xenboot-2.38-67.1
xen-3.2.1_16881_04-4.2
xen-tools-3.2.1_16881_04-4.2
xen-libs-3.2.1_16881_04-4.2
2010 Aug 03
1
performance with libvirt and kvm
Hi, I am seeing a performance degradation while using libvirt to start my VM (KVM). The VM is Fedora 12 and the host is also Fedora 12, both with 2.6.32.10-90.fc12.i686. Here are the statistics from iperf:
From VM:   [ 3]  0.0-30.0 sec  199 MBytes  55.7 Mbits/sec
From host: [ 3]  0.0-30.0 sec  331 MBytes  92.6 Mbits/sec
libvirt command as seen from ps output: /usr/bin/qemu-kvm -S -M
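A gap like 55.7 vs 92.6 Mbits/sec is commonly caused by the guest using an emulated NIC model rather than virtio, so it may be worth checking what model libvirt configured for the interface (and switching it to virtio via virsh edit if not). A minimal check, assuming the domain is named f12guest (hypothetical):
  virsh dumpxml f12guest | grep -A4 '<interface'   # look for <model type='virtio'/>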
2004 Oct 12
3
Performance Issues with GBit LAN
Hi. I have 2 PCs connected with 1 Gbit NICs. When I transfer a file from my file server (Redhat 9.0, 256 MB SD-RAM, 300 MHz PII, RTL8169 NIC, 2x Western Digital WD200JB RAID 0) to my Windows PC (AMD Athlon XP 1800+, 1024 MB DDR-RAM, WinXP Pro, RTL8169 NIC, 2x Western Digital WD080JB RAID 0) with Samba, I get speeds around 8-9 MB/sec. I think this is too low for a Gbit network, so I tested the
2014 Apr 29
2
Degraded performance when using GRE over tinc
Hi, In a setup where OpenVSwitch is used with GRE tunnels on top of an interface provided by tinc, I'm experiencing significant performance degradation (from 100Mb/s down to 1Mb/s in the worst case) and I'm not sure how to fix it. The manifestation of the problem, from the user's point of view, is that iperf reports ~100Mb/s while rsync reports ~1Mb/s: $ iperf -c 91.224.149.132
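With GRE nested inside a tinc interface, one usual suspect for "iperf looks fine but rsync crawls" is MTU/fragmentation along the stacked tunnels; probing with the DF bit set can confirm or rule that out quickly. A hedged sketch, assuming 10.0.0.2 (made up) is the remote end of the GRE-over-tinc path:
  # 1472 = 1500 - 20 (IP) - 8 (ICMP); step the size down until replies come back
  ping -M do -s 1472 10.0.0.2
  ping -M do -s 1300 10.0.0.2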
2006 Jan 04
3
TC/CBQ shaping problems
Hello everyone, I'm a newbie experimenting with CBQ shaping and am facing a few problems. Can any of you please help?
TEST SETUP:
+---------------+            +----------------+
|  10.0.0.103   |----------->|   10.0.0.102   |
+---------------+            +----------------+
10.0.0.103: Linux, 100Mbit/s NIC
10.0.0.102: Windows, 100Mbit/s NIC, iperf tcp server (ports 2000 and 2001)
WHAT I