Displaying 5 results from an estimated 5 matches for "900mbps".
2013 Apr 04
1
Freenas domU network performance issue
...not reliable. The last issue makes it impossible to serve VM disk images from
the NAS. Has anybody met the same issue?
Benchmark details below:
1. iperf bench against external host (through physical network).
The bench can fully utilize the Gigabit network.
ext => nas ~930Mbps
nas => ext ~900Mbps
TSO / LRO config has no impact on the bench result.
2. iperf bench against dom0 (through virtual bridge, handled fully by the
driver stack).
The bench result depends heavily on the TSO / LRO config.
dom0 => nas (LRO=1) 60Mbps ~ 300Mbps (varies a lot from time to time; xentop
reports low CPU usage in both domains...
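(For illustration, a minimal sketch of an iperf run like the one described
above; these are iperf2 flags, and the host name "nas" is assumed:)

    iperf -s               # on the receiver (the nas), start the iperf server
    iperf -c nas -t 30     # on the sender (ext or dom0), run a 30-second TCP test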
2006 Jun 21
1
Linux QoS: PRIO qdisc works
...(pfifo, size 1000 packets): flow 1
priority 2 queue (pfifo, size 1000 packets): flow 2
priority 3 queue (pfifo, size 1000 packets): defaults
I configured the PRIO qdisc on Router A's outer interface to Router B.
The results of the test:
- TCP throughput about 80Mbps
- UDP throughput about 900Mbps (UDP tries to send 1Gbps)
First question:
The TCP stream, despite its higher priority, experienced starvation instead of
the lower-priority UDP stream. Is that correct?
Have you tested the PRIO qdisc with TCP at high priority and UDP at low priority?
Below is my configurati...
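(For illustration, a minimal sketch of a three-band PRIO setup like the one
described; the interface name eth1 and the protocol-based filters are
assumptions, since the poster's actual configuration is truncated above:)

    tc qdisc add dev eth1 root handle 1: prio bands 3
    tc qdisc add dev eth1 parent 1:1 handle 10: pfifo limit 1000   # priority 1: flow 1
    tc qdisc add dev eth1 parent 1:2 handle 20: pfifo limit 1000   # priority 2: flow 2
    tc qdisc add dev eth1 parent 1:3 handle 30: pfifo limit 1000   # priority 3: defaults
    # classify TCP (IP protocol 6) into band 1 and UDP (protocol 17) into band 2
    tc filter add dev eth1 parent 1: protocol ip prio 1 u32 match ip protocol 6 0xff flowid 1:1
    tc filter add dev eth1 parent 1: protocol ip prio 2 u32 match ip protocol 17 0xff flowid 1:2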
2010 Mar 31
1
Performance issues: have eliminated disk and network as cause
...ware RAID10, 4GB RAM and a
quad-core Intel Xeon processor. It's not live yet, so there's no load from
other tasks.
I've already eliminated the RAID (able to sustain 130-140MB/s for
reads/writes) and the network (GigE; tar | nc to this server, untarred at the
other end, sustains 800-900Mbps) as bottlenecks, which leaves me dealing
with Samba.
Samba is peaking at around 280Mbps (reading and writing a single 500MB file)
and normal performance (which I have benchmarked with a 350MB directory
containing about 1,000 files of various sizes up to 2MB) is closer to
90-100Mbps (write), 117Mbp...
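(For illustration, a minimal sketch of the tar | nc throughput test mentioned
above; netcat flags vary by variant (BSD-style shown), and the host name and
port are assumptions:)

    nc -l 5000 | tar -xf -                     # on the receiver: listen and untar
    tar -cf - /some/dir | nc otherhost 5000    # on the sender: stream a tarball over TCP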
2010 Feb 24
6
Desperately need help with multi-core NIC performance
Hi,
I am running a VoIP application on CentOS 5.3. It is a 16-core box
with 12 GB of memory, and all it does is pass packets.
At around 2K channels running the G.711 (64k) codec,
eth0 is saturated and no more traffic can go through.
I have checked Google, and it talked about the interrupt scheduler.
Does anyone know how to configure the kernel to allow it to use all
CPUs for
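(For illustration, one common approach to spreading NIC interrupts across
cores, sketched under the assumption of IRQ 24; the real IRQ number must be
read from /proc/interrupts:)

    grep eth0 /proc/interrupts             # find the NIC's IRQ and which CPU services it
    echo ff > /proc/irq/24/smp_affinity    # allow CPUs 0-7 (mask 0xff) to service IRQ 24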
2004 Nov 17
9
serious networking (em) performance (ggate and NFS) problem
Dear best guys,
I really love 5.3 in many ways, but here are some unbelievable transfer rates,
after I went out and bought a pair of Intel Gigabit Ethernet cards to solve
my performance problem (*laugh*):
(In short, see *** below)
Tests were done with two Intel Gigabit Ethernet cards (82547EI, 32-bit PCI
Desktop adapter MT) connected directly without a switch/hub and "device