similar to: Freenas domU network performance issue

Displaying 20 results from an estimated 5000 matches similar to: "Freenas domU network performance issue"

2011 Jan 11
1
Bonding performance question
I have a Dell server with four bonded gigabit interfaces. Bonding mode is 802.3ad, xmit_hash_policy=layer3+4. When testing this setup with iperf, I never get more than a total of about 3Gbps throughput. Is there anything to tweak to get better throughput? Or am I running into other limits (e.g. I was reading about TCP retransmit limits for mode 0)? The iperf test was run with iperf -s on the
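For context on the question above: with 802.3ad bonding and a layer3+4 hash, any single TCP flow is pinned to one member link, so a multi-stream test is needed before the aggregate can exceed roughly 1 Gbps. A hedged sketch of the usual configuration and test, assuming a RHEL-style system; the interface name, file path, and peer address are placeholders, not details from the post:

  # /etc/sysconfig/network-scripts/ifcfg-bond0 (path assumes a RHEL-style layout)
  BONDING_OPTS="mode=802.3ad miimon=100 xmit_hash_policy=layer3+4"

  # Receiver:
  iperf -s

  # Sender: several parallel streams so the hash can spread flows
  # across the four member links
  iperf -c 192.0.2.10 -P 8 -t 30

Even then, the switch must also hash its return traffic across the aggregate, so seeing around 3 Gbps out of 4x1GbE between a single pair of hosts is not unusual.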
2010 Nov 15
5
Poor performance on bandwidth, Xen 4.0.1 kernel pvops 2.6.32.24
Hello list, I have two different Xen Hypervisor installations on two identical physical servers, on the same switch. The problem is on my new server (Xen 4.0.1 with pvops kernel 2.6.32.24): I get poor bandwidth. I have tested with a file copy and "iperf". Result iperf average: Transfer Bandwidth XEN-A -> Windows
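One dom0-side tweak that was commonly suggested for poor pvops bandwidth was raising the transmit queue length of the guest's backend vif, which defaulted to a very small value on some kernels. A hedged sketch, with the vif name as an assumption rather than a value from the post:

  # In dom0, for the guest's backend interface (vif1.0 is hypothetical):
  ip link set dev vif1.0 txqueuelen 1000

  # Re-run the same iperf test afterwards to compare against the old server.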
2020 Oct 12
1
samba AD problem after re-join domain
On 12/10/2020 16:11, Jason Keltz wrote: > >> Hi Rowland, >> >> I did not leave the domain, but I did delete the entry by either the >> Windows AD tool or "samba-tool computer delete" option. I can't >> remember which one at this point. I think that clears up all the >> bits. Is that correct? On the local host, I also deleted the >>
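For reference, the two cleanup steps discussed in the quoted message look roughly like the commands below; the computer name is hypothetical, and whether DNS records and the local secrets/keytab also need removing depends on the setup:

  # On a DC, remove the stale machine account:
  samba-tool computer delete MYHOST

  # On the member host, re-join the domain, e.g.:
  net ads join -U Administrator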
2010 Aug 03
1
performance with libvirt and kvm
Hi, I am seeing a performance degradation while using libvirt to start my vm (kvm). The vm is Fedora 12 and the host is also Fedora 12, both with 2.6.32.10-90.fc12.i686. Here are the statistics from iperf: From VM: [ 3] 0.0-30.0 sec 199 MBytes 55.7 Mbits/sec From host: [ 3] 0.0-30.0 sec 331 MBytes 92.6 Mbits/sec libvirt command as seen from ps output: /usr/bin/qemu-kvm -S -M
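A guest-versus-host gap of this size often comes down to the NIC model libvirt gave the guest; a hedged way to check, and what a paravirtual definition would roughly look like (the domain and bridge names are assumptions, not taken from the ps output):

  # See which model libvirt configured for the guest:
  virsh dumpxml fedora12-guest | grep -A3 '<interface'

  # In 'virsh edit', a virtio interface would look roughly like:
  #   <interface type='bridge'>
  #     <source bridge='br0'/>
  #     <model type='virtio'/>
  #   </interface>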
2004 Nov 17
9
serious networking (em) performance (ggate and NFS) problem
Dear best guys, I really love 5.3 in many ways but here're some unbelievable transfer rates, after I went out and bought a pair of Intel GigaBit Ethernet Cards to solve my performance problem (*laugh*): (In short, see *** below) Tests were done with two Intel GigaBit Ethernet cards (82547EI, 32bit PCI Desktop adapter MT) connected directly without a switch/hub and "device
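Not a confirmed fix for the thread above, but the tuning usually tried first on a back-to-back em(4) gigabit link looks roughly like this; the interface name and buffer sizes are illustrative:

  # Jumbo frames on both ends of the crossover link:
  ifconfig em0 mtu 9000

  # Larger TCP buffers (FreeBSD sysctls; values are examples only):
  sysctl net.inet.tcp.sendspace=262144
  sysctl net.inet.tcp.recvspace=262144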
2008 Jul 11
8
Another GPLPV pre-release 0.9.11-pre7
I've just uploaded 0.9.11-pre7. Save/restore should be working for 32 bits on both SMP and UP, and maybe for 64 bits, although it's not tested. If someone could test migration it would be much appreciated. The installer seems to not install the drivers under a 64-bit environment... but they can then be installed manually. Not sure why at this point.
2014 Apr 29
2
Degraded performance when using GRE over tinc
Hi, In a setup where OpenVSwitch is used with GRE tunnels on top of an interface provided by tinc, I'm experiencing significant performance degradation (from 100Mb/s down to 1Mb/s in the worst case) and I'm not sure how to fix this. The manifestation of the problem, from the user's point of view, is that iperf reports ~100Mb/s while rsync reports ~1Mb/s: $ iperf -c 91.224.149.132
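Stacking GRE on top of a tinc interface adds two layers of encapsulation overhead, and MTU/fragmentation trouble is a common culprit in nested tunnels, so it is usually the first thing to rule out. A hedged sketch of checking and clamping the path MTU, not a diagnosis of this particular setup; the interface name and the chosen value are assumptions:

  # Probe the largest unfragmented payload over the tinc+GRE path:
  ping -M do -s 1300 91.224.149.132

  # Clamp the GRE/OVS side safely below the tinc link MTU:
  ip link set dev br0 mtu 1350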
2012 Dec 03
1
Strange QoS behavior
Hi, I'm having a weird problem with the setup of QoS on a bridged network. As the docs state, outbound/inbound average speed should be expressed in KBps (KBytes per second), but in order to get a maximum speed of 10Mbps (megabits per second), surprisingly enough, I have to use 2560 on the guest (not 1280 as expected). Using 1280 I get a speed of 5Mbps. I'm aware of peak and
2012 Dec 03
1
Strange behavior of QoS
Hi, I'm having a weird problem with the setup of QoS on a bridged network. As the docs state, outbound/inbound average speed should be expressed in KBps (KBytes per second), but in order to get a maximum speed of 10Mbps (megabits per second), surprisingly enough, I have to use 2560 on the guest (not 1280 as expected). Using 1280 I get a speed of 5Mbps. I'm aware of peak and
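On the arithmetic in the two posts above: libvirt's <bandwidth> averages are given in kilobytes per second, so 10 Mbit/s works out to roughly 1250 KB/s (about 1221 KiB/s if the kilobytes are binary), which is in the neighbourhood of the 1280 the poster expected; needing 2560 suggests the value is being interpreted as half the size somewhere along the way. A hedged sketch of the interface definition, with illustrative numbers rather than a confirmed fix:

  # Inside the guest's domain XML (values are examples; units are KB/s):
  #   <interface type='bridge'>
  #     <bandwidth>
  #       <inbound  average='1250' peak='1250'/>
  #       <outbound average='1250' peak='1250'/>
  #     </bandwidth>
  #   </interface>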
2010 Mar 02
9
Filebench Performance is weird
Greetings All, I am using the Filebench benchmark in "interactive mode" to test ZFS performance with a randomread workload. My Filebench settings & run results are as follows: filebench> set $filesize=5g filebench> set $dir=/hdd/fs32k filebench> set $iosize=32k filebench> set
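The flattened session above, reconstructed in the order Filebench normally expects (loading the workload first, so its defaults do not overwrite the user's set commands); the load line and the run length are assumptions, since the excerpt is truncated:

  filebench> load randomread
  filebench> set $dir=/hdd/fs32k
  filebench> set $filesize=5g
  filebench> set $iosize=32k
  filebench> run 60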
2004 Jan 26
3
Samba and Window XP write performance
I did some testing using samba-3.0.0 as a server and two identical clients, one running W2K and the other running Win XP Pro. If I write a big file using the W2K client, I get about 25 MBytes/sec, but if I run the same test using the Win XP Pro client, it is only able to get 12.5 MBytes/sec. Is there a problem between XP and Samba?
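A hedged smb.conf sketch of the socket tuning that was commonly tried for this kind of XP-versus-W2K write gap on Samba 3.0; the buffer sizes are illustrative and whether it helps depends on the client's TCP stack:

  [global]
      # Disable Nagle and enlarge socket buffers; classic 3.0-era tuning
      socket options = TCP_NODELAY SO_SNDBUF=65536 SO_RCVBUF=65536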
1999 Feb 02
2
Benchmark results
Samba digest 1966, Jeremy Allison wrote: > For people who are looking for some objective > numbers to help recommend Samba to their employers (I > know there are some of you on this list :-) you might > want to look at the following couple of articles. > > The first one is in Smart Reseller (a USA trade press > magazine) at : > >
2006 Jan 04
3
TC/CBQ shaping problems
Hello everyone, I'm a newbie experimenting with CBQ shaping and am facing a few problems. Can any of you please help? TEST SETUP:

+---------------+            +----------------+
|  10.0.0.103   |----------->|   10.0.0.102   |
+---------------+            +----------------+

10.0.0.103: Linux, 100Mbit/s NIC
10.0.0.102: Windows, 100Mbit/s NIC, iperf TCP server (ports 2000 and 2001)
WHAT I
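For the setup sketched above, a hedged minimal CBQ example on the Linux sender that shapes the iperf traffic towards 10.0.0.102 to 10 Mbit/s; eth0, the rate, and the single-port filter are assumptions (a second filter would be needed for port 2001):

  # Root CBQ qdisc on the 100 Mbit/s NIC
  tc qdisc add dev eth0 root handle 1: cbq bandwidth 100Mbit avpkt 1000 cell 8

  # A bounded 10 Mbit/s class
  tc class add dev eth0 parent 1: classid 1:1 cbq bandwidth 100Mbit \
      rate 10Mbit weight 1Mbit allot 1514 cell 8 maxburst 20 avpkt 1000 prio 5 bounded

  # Steer the iperf TCP traffic (port 2000) into that class
  tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
      match ip dport 2000 0xffff flowid 1:1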
2004 Jan 23
2
htbinit and redhat-9.0
Dear All, I'm a new student and my job is to shape bandwidth for our campus faculty network. I want to implement HTB with the Red Hat 9.0 distro. Does this distro's kernel support htb and tc well, or should I apply some patch or upgrade the kernel? Regards, Reza
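HTB went into the mainline 2.4.20 kernel, on which Red Hat 9 is based, so the scheduler is normally available as the sch_htb module; if the bundled tc does not know the htb qdisc, a newer iproute build is the usual remedy. A minimal hedged check and hierarchy, with the interface, rates, and default class as placeholders:

  modprobe sch_htb
  tc qdisc add dev eth0 root handle 1: htb default 20
  tc class add dev eth0 parent 1: classid 1:20 htb rate 10mbit ceil 100mbit
  tc qdisc show dev eth0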
2014 Nov 28
1
poor throughput with tinc
Hi, I am testing tinc for a very large scale deployment. I am using tinc-1.1 for testing. The test results below are for tinc in switch mode; all other settings are default. The test was performed in a LAN environment between 2 different hosts. I am getting only 24.6 Mbits/sec when tinc is used; without tinc, on the same hosts/link, I get 95 to 100 Mbits/sec using iperf. Over tinc: iperf -c 192.168.9.9 -b 100m -l 32k -w
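For reference, a minimal sketch of what a default switch-mode configuration like the one being benchmarked might look like; the netname, node names, and address are hypothetical, and no tuning options are set, matching the "all other settings are default" note above:

  # /etc/tinc/vpn/tinc.conf on nodeA
  Name = nodeA
  Mode = switch
  ConnectTo = nodeB

  # /etc/tinc/vpn/hosts/nodeB (public key block omitted)
  Address = 192.168.9.9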
2008 Aug 20
44
GPL PV drivers for Windows 0.9.11-pre12
I've just uploaded 0.9.11-pre12 of the GPL PV drivers for Windows. Since -pre10 (and -pre11) I've fixed a heap of crashes that were plaguing xennet under load, and also rewritten the interrupt/event distribution logic to improve performance. Under Windows 2003 I can now get network speeds of 1-2Gbit/second TX and 600Mbit/second RX, which is considerably better than I was
2006 Apr 04
9
Very slow domU network performance
I set up a domU as a backup server, but it has very, very poor network performance with external computers. I ran some tests with iperf and found some very weird results. Using iperf, I get these approximate numbers (the left column is the iperf client and the right column is the iperf server): domU --> domU 1.77 Gbits/sec (using 127.0.0.1) domU --> domU 1.85 Gbits/sec (using domU
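A frequently suggested check for exactly this pattern (fast domU-to-domU, slow to external hosts) was TX checksum offload on the virtual interfaces; a hedged sketch of disabling it, with the interface names assumed rather than taken from the post:

  # Inside the domU:
  ethtool -K eth0 tx off

  # In dom0, on the corresponding backend vif (name is hypothetical):
  ethtool -K vif1.0 tx off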