Displaying 20 results from an estimated 3000 matches similar to: "TX tcp checksum errors with Xen GPLPV 0.9.9 Drivers (xen 3.2.1 and windows Server x86 2003 R2)"

2008 Jul 11
8
Another GPLPV pre-release 0.9.11-pre7
I've just uploaded 0.9.11-pre7. Save/restore should be working for 32-bit on both SMP and UP, and maybe for 64-bit, although that is untested. If someone could test migration it would be much appreciated. The installer does not seem to install the drivers in a 64-bit environment, but they can then be installed manually. Not sure why at this point.
2008 Oct 16
1
GPLPV 0.9.10 & 0.9.11.pre17/18 Network Issues
Hello, I have been testing James' GPLPV drivers and found excellent performance when using iperf, but I have been having issues when trying to download a file from a shared folder on my Windows 2003 Enterprise HVM to any other system, whether it is Linux or Windows. Basically, my initial iperf tests were showing 937 Mbits/sec down and 345 Mbits/sec up, but when I try to copy a 2GB file
2009 Jan 17
25
GPLPV network performance
Just reporting some iperf results. In each case, Dom0 is the iperf server and DomU is the iperf client:
(1) Dom0: Intel Core2 3.16 GHz, CentOS 5.2, xen 3.0.3. DomU: Windows XP SP3, GPLPV 0.9.12-pre13, file based. Iperf: 1.17 Gbits/sec
(2) Dom0: Intel Core2 2.33 GHz, CentOS 5.2, xen 3.0.3. DomU: Windows XP SP3, GPLPV 0.9.12-pre13, file based. Iperf: 725 Mbits/sec
(3) Dom0: Intel Core2 2.33 GHz,
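Results like these are typically produced with iperf running in server mode on Dom0 and client mode in the guest; a minimal sketch of that setup (the hostname is a placeholder, not from the thread):

    # on Dom0: start the iperf server
    iperf -s
    # in the DomU: run a 30-second TCP test against Dom0
    iperf -c dom0.example.net -t 30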
2010 Aug 24
4
Slow windows network with gplpv driver.
I have 4 Xen servers with Windows domains. 2 work OK and 2 have XP domains with slow network performance. I am doing most of the testing on a Xen system that is currently not in production, running Xen 4.0 with the 2.6.32 kernel from lenny backports. XP Service Pack 3 is freshly installed with no updates. I installed gplpv_XP_0.11.0.213.msi. HD Tach gives 60 MB/sec, which is good; iperf gives ~15 Mbits/sec
2015 Mar 13
3
Network throughput testing software available for CentOS/Linux
On 12-03-2015 17:39, Digimer wrote: > On 12/03/15 04:29 PM, Gilbert Sebenste wrote: >> Hello everyone, >> >> A network engineer buddy of mine brought up for discussion with me >> that he'd like to do some throughput testing, but he's new to >> Linux/RedHat. Is there any software I can recommend to
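iperf is the usual recommendation in threads like this; a minimal sketch of a basic two-host TCP throughput test on CentOS (the package source and hostname are assumptions, not from the thread):

    # install on both hosts (iperf is available from the EPEL repository on CentOS)
    yum install iperf
    # on the receiving host
    iperf -s
    # on the sending host: 30-second test, reporting every 5 seconds
    iperf -c receiver.example.net -t 30 -i 5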
2004 Apr 07
1
(no subject)
Hello, I was testing HTB using iperf TCP traffic and the results were very good, until I tried to add some UDP traffic; then the results were a little strange. This is my setup:
tc qdisc del dev eth1 root
tc qdisc add dev eth1 handle 1:0 root htb default 2
tc class add dev eth1 parent 1:0 classid 1:1 htb rate 1mbit
tc class add dev eth1 parent 1:1 classid 1:2 htb rate 500kbit ceil 1mbit
tc class add
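The excerpt cuts off mid-script. A plausible completion of a setup like this adds a second leaf class and a filter that steers UDP into it; the rates, address, and match below are illustrative assumptions, not the poster's actual values:

    # second leaf class for UDP traffic (assumed rates)
    tc class add dev eth1 parent 1:1 classid 1:3 htb rate 500kbit ceil 1mbit
    # steer UDP (IP protocol 17) into class 1:3
    tc filter add dev eth1 parent 1:0 protocol ip u32 match ip protocol 17 0xff flowid 1:3
    # offer UDP load with iperf; -b sets the target rate
    iperf -u -c 192.0.2.1 -b 800k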
2011 Mar 07
1
IPERF Server
When starting iperf with "iperf -s" or "iperf -sD", it seems to stop after the client runs its first test. I would like to leave it running for a few hours to give someone a chance to run a few tests. Is there a way to leave it active on the server and kill it manually later?
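One common approach, sketched here under the assumption of a reasonably recent iperf 2.x, is to keep the server alive in the background and kill it by name once the testing window is over:

    # keep the server running after logout; log output for later inspection
    nohup iperf -s > /tmp/iperf-server.log 2>&1 &
    # ... hours later, stop it manually
    pkill -f 'iperf -s'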
2011 Jan 11
1
Bonding performance question
I have a Dell server with four bonded gigabit interfaces. Bonding mode is 802.3ad, xmit_hash_policy=layer3+4. When testing this setup with iperf, I never get more than a total of about 3 Gbps throughput. Is there anything to tweak to get better throughput, or am I running into other limits (e.g. I was reading about TCP retransmit limits for mode 0)? The iperf test was run with iperf -s on the
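Worth keeping in mind when reading numbers like these: 802.3ad hashes each flow onto a single slave, so one TCP stream can never exceed one link. A sketch of a multi-stream test that can exercise all four slaves (the hostname is a placeholder):

    # eight parallel streams so the layer3+4 hash can spread flows across slaves
    iperf -c server.example.net -P 8 -t 30
    # check slave status and traffic distribution
    cat /proc/net/bonding/bond0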
2006 Jul 05
1
kernel udp rate limit
Hi list. First post, be gentle please. Is there any limit on the Linux UDP rate? I am using Linux kernel 2.6 and iperf to measure bandwidth between two endpoints connected by 100 Mbit/s Ethernet. Running (as root) iperf -u -s and iperf -u -c always gives me 1.05 Mbits/sec, even when run on the same machine. Can somebody clarify this? Thanks in advance. Sebastian
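The 1.05 Mbits/sec figure is not a kernel limit: iperf's UDP mode offers a fixed 1 Mbit/s load by default, so the sender has to be told how much to send with -b. A minimal sketch (the address is a placeholder):

    # receiver
    iperf -u -s
    # sender: offer 90 Mbit/s, appropriate for a 100 Mbit link
    iperf -u -c 192.0.2.1 -b 90M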
2006 Apr 04
9
Very slow domU network performance
I set up a domU as a backup server, but it has very, very poor network performance with external computers. I ran some tests with iperf and found some very weird results. Using iperf, I get these approximate numbers (the left column is the iperf client and the right column is the iperf server):
domU --> domU 1.77 Gbits/sec (using 127.0.0.1)
domU --> domU 1.85 Gbits/sec (using domU
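Threads like this on the Xen lists often end up pointing at checksum or segmentation offload on the guest's virtual interface. A diagnostic sketch (the interface name is an assumption):

    # in the domU: inspect current offload settings
    ethtool -k eth0
    # as an experiment, disable TX checksum and segmentation offload
    ethtool -K eth0 tx off tso off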
2002 Nov 06
1
help, strange question about tcp and udp traffic control?
Hi;

+--------+          +-----------+                     +--------+
| server |----------| linux box |---------------------| Client |
+--------+          +-----------+                     +--------+

My script:
tc-htb3 qdisc del dev eth1 root
ipchains -F
tc-htb3 qdisc add dev eth1 root handle 10: htb default 20 r2q 40
tc-htb3 class add dev eth1 parent 10: classid 10:1 htb
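The script is truncated here; a plausible continuation, separating TCP and UDP into their own classes with u32 filters (the rates are illustrative assumptions, and tc-htb3 is assumed to accept standard tc syntax):

    tc-htb3 class add dev eth1 parent 10:1 classid 10:10 htb rate 256kbit ceil 512kbit
    tc-htb3 class add dev eth1 parent 10:1 classid 10:20 htb rate 256kbit ceil 512kbit
    # TCP (protocol 6) to 10:10, UDP (protocol 17) to 10:20
    tc-htb3 filter add dev eth1 parent 10: protocol ip u32 match ip protocol 6 0xff flowid 10:10
    tc-htb3 filter add dev eth1 parent 10: protocol ip u32 match ip protocol 17 0xff flowid 10:20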
2017 May 12
2
Poor network performance
Hello, I have some problems with poor network performance on libvirt with qemu and openvswitch. I'm using libvirt 1.3.1, qemu 2.5 and openvswitch 2.6.0 on Ubuntu 16.04 currently. My connection diagram looks like below: [ASCII diagram truncated in this excerpt]
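Common first diagnostics for slow virtio-on-OVS setups, sketched with assumed interface names:

    # confirm the vhost-net fast path is in use on the host
    lsmod | grep vhost_net
    # inspect offload state on the tap device attached to the OVS bridge
    ethtool -k vnet0
    # inside the guest: enable extra virtio queue pairs if the domain XML defines them
    ethtool -L eth0 combined 4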
2014 Nov 18
2
vhost + multiqueue + RSS question.
On 11/17/2014 07:58 PM, Michael S. Tsirkin wrote:
> On Mon, Nov 17, 2014 at 01:22:07PM +0200, Gleb Natapov wrote:
>> On Mon, Nov 17, 2014 at 12:38:16PM +0200, Michael S. Tsirkin wrote:
>>> On Mon, Nov 17, 2014 at 09:44:23AM +0200, Gleb Natapov wrote:
>>>> On Sun, Nov 16, 2014 at 08:56:04PM +0200, Michael S. Tsirkin wrote:
2014 Nov 17
5
vhost + multiqueue + RSS question.
On Mon, Nov 17, 2014 at 12:38:16PM +0200, Michael S. Tsirkin wrote:
> On Mon, Nov 17, 2014 at 09:44:23AM +0200, Gleb Natapov wrote:
>> On Sun, Nov 16, 2014 at 08:56:04PM +0200, Michael S. Tsirkin wrote:
>>> On Sun, Nov 16, 2014 at 06:18:18PM +0200, Gleb Natapov wrote:
>>>> Hi Michael,
>>>>
>>>> I am playing with vhost multiqueue
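For context on what "playing with vhost multiqueue" involves on the guest side: the queue count is controlled with ethtool, assuming a virtio NIC named eth0:

    # enable 4 queue pairs (must not exceed the queues the host defined)
    ethtool -L eth0 combined 4
    # confirm the active channel count
    ethtool -l eth0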
2011 Mar 11
1
UDP Perfomance tuning
Hi, we are running CentOS 5.5 on an HP ProLiant DL360 G6. The kernel version is 2.6.18-194.17.1.el5 (we have also tested with the latest available kernel, kernel-2.6.18-238.1.1.el5.x86_64). We are running some performance tests using the iperf utility and are seeing very bad and inconsistent performance in the UDP testing. The maximum we could get was 440 Mbits/sec, and it varies from 250 to 440
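Inconsistent UDP results are often bounded by socket buffer sizes rather than the NIC; a tuning sketch (the values are examples, and whether they help this particular box is untested):

    # raise the kernel's socket buffer ceilings
    sysctl -w net.core.rmem_max=8388608
    sysctl -w net.core.wmem_max=8388608
    # ask iperf for larger buffers and state the offered rate explicitly
    iperf -u -s -w 4M
    iperf -u -c 192.0.2.1 -b 900M -w 4M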
2014 Apr 29
2
Degraded performance when using GRE over tinc
Hi, in a setup where OpenVSwitch is used with GRE tunnels on top of an interface provided by tinc, I'm experiencing significant performance degradation problems (from 100Mb/s down to 1Mb/s in the worst case) and I'm not sure how to fix this. The manifestation of the problem is, from the user point of view, that iperf reports ~100Mb/s while rsync reports ~1Mb/s: $ iperf -c 91.224.149.132
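A classic suspect with stacked tunnels (GRE inside tinc) is MTU: each encapsulation layer eats header space, and when path-MTU discovery breaks, bulk transfers collapse while other benchmarks still look fine. A sketch of the usual workarounds (the interface name and MTU value are guesses, not from the thread):

    # clamp the MTU inside the tunnel to leave room for both encapsulations
    ip link set dev gre0 mtu 1300
    # or clamp TCP MSS to the path MTU on the forwarding box
    iptables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu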