search for: mbps

Displaying 20 results from an estimated 607 matches for "mbps".

2017 Apr 21
3
[PATCH net-next v2 2/5] virtio-net: transmit napi
...> Yes, I noticed this in the past too.
>> Though this is not limited to napi-tx, it is more
>> pronounced in that mode than without napi.
>> 1x TCP_RR for affinity configuration {process, rx_irq, tx_irq}:
>> upstream:
>> 1,1,1: 28985 Mbps, 278 Gcyc
>> 1,0,2: 30067 Mbps, 402 Gcyc
>> napi tx:
>> 1,1,1: 34492 Mbps, 269 Gcyc
>> 1,0,2: 36527 Mbps, 537 Gcyc (!)
>> 1,0,1: 36269 Mbps, 394 Gcyc
>> 1,0,0: 34674 Mbps, 402 Gcyc
>> This is a particularly strong example. It is a...
2017 Apr 20
2
[PATCH net-next v2 2/5] virtio-net: transmit napi
...ithout irq affinity. The cycle cost is significant without affinity regardless of whether the optimization is used. Though this is not limited to napi-tx, it is more pronounced in that mode than without napi.

1x TCP_RR for affinity configuration {process, rx_irq, tx_irq}:

upstream:
1,1,1: 28985 Mbps, 278 Gcyc
1,0,2: 30067 Mbps, 402 Gcyc

napi tx:
1,1,1: 34492 Mbps, 269 Gcyc
1,0,2: 36527 Mbps, 537 Gcyc (!)
1,0,1: 36269 Mbps, 394 Gcyc
1,0,0: 34674 Mbps, 402 Gcyc

This is a particularly strong example. It is also representative of most RR tests. It is less pronounced in other streaming tests. 10...
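
The trade-off these numbers describe is easier to see as throughput per cycle. The sketch below is illustrative only and not from the thread: it divides each quoted Mbps figure by the quoted Gcyc figure for the {process, rx_irq, tx_irq} placements above, which shows why the napi-tx 1,0,2 case is flagged with "(!)": highest throughput, but by far the worst cycle cost.

# Illustrative only: Mbps per Gcyc for the affinity results quoted above.
# The tuples are the {process, rx_irq, tx_irq} placements from the post;
# the efficiency metric itself is not something the thread computed.
results = {
    "upstream 1,1,1": (28985, 278),
    "upstream 1,0,2": (30067, 402),
    "napi tx  1,1,1": (34492, 269),
    "napi tx  1,0,2": (36527, 537),
    "napi tx  1,0,1": (36269, 394),
    "napi tx  1,0,0": (34674, 402),
}

for name, (mbps, gcyc) in results.items():
    print(f"{name}: {mbps} Mbps / {gcyc} Gcyc = {mbps / gcyc:.0f} Mbps per Gcyc")
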
2017 Apr 24
2
[PATCH net-next v2 2/5] virtio-net: transmit napi
...this is not limited to napi-tx, it is more
>> >> pronounced in that mode than without napi.
>> >> 1x TCP_RR for affinity configuration {process, rx_irq, tx_irq}:
>> >> upstream:
>> >> 1,1,1: 28985 Mbps, 278 Gcyc
>> >> 1,0,2: 30067 Mbps, 402 Gcyc
>> >> napi tx:
>> >> 1,1,1: 34492 Mbps, 269 Gcyc
>> >> 1,0,2: 36527 Mbps, 537 Gcyc (!)
>> >> 1,0,1: 36269 Mbps, 394 Gcyc
>> >> 1,0,0: 34674 Mbps, 4...
2010 Dec 27
2
E1000 eth1 link flakiness - causes??
Have you experienced this? What's going on when this occurs? What do I need to do to keep it from occurring? Please advise. Thanks.

Dec 4 10:18:17 localhost kernel: e1000: eth1 NIC Link is Down
Dec 4 10:18:19 localhost kernel: e1000: eth1 NIC Link is Up 100 Mbps Full Duplex, Flow Control: RX/TX
Dec 4 10:18:21 localhost kernel: e1000: eth1 NIC Link is Down
Dec 4 10:18:23 localhost kernel: e1000: eth1 NIC Link is Up 100 Mbps Full Duplex, Flow Control: RX/TX
Dec 4 10:18:24 localhost kernel: e1000: eth1 NIC Link is Down
Dec 4 10:18:25 localhost kernel: e10...
2016 May 10
1
weird network error
...0:34 sg1 kernel: e1000e 0000:03:00.0: eth0: Reset adapter unexpectedly
May 9 22:30:35 sg1 abrt-dump-oops: Reported 1 kernel oopses to Abrt
May 9 22:30:35 sg1 abrtd: Directory 'oops-2016-05-09-22:30:35-8763-1' creation detected
May 9 22:30:38 sg1 kernel: e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
May 9 22:30:42 sg1 kernel: Bridge firewalling registered
May 9 22:31:27 sg1 kernel: ip_tables: (C) 2000-2006 Netfilter Core Team
May 9 22:31:32 sg1 abrtd: Can't find a meaningful backtrace for hashing in '.'
May 9 22:31:32 sg1 abrtd: Preserving oop...
2017 Jul 23
2
Slow Samba
Hello friends, I have a Gigabit network with a few Windows and CentOS 7 machines, and I noticed the following when copying files via Samba:

from Windows to Windows I can copy files at roughly 120 MBps (I think this is the maximum speed a gigabit network can provide), but when copying files
from CentOS to CentOS I get only speeds of about 40 MBps
from Windows to CentOS: 40 MBps
from CentOS to Windows: 40 MBps

I tried to add these lines:
use sendfile = yes
socket options = TCP_NODELAY IPTOS_LOWDELAY
to smb.c...
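
As a sanity check on the figures in this post (this arithmetic is not the poster's): a gigabit link carries at most about 125 MB/s before protocol overhead, so roughly 120 MB/s is essentially wire speed, while 40 MB/s corresponds to only about a third of the available bandwidth.

# Rough sanity check, not from the original post: translate the copy speeds
# reported above into link utilisation on a 1 Gbit/s network.
LINK_MBIT = 1000                 # nominal gigabit line rate
line_rate_mbyte = LINK_MBIT / 8  # ~125 MB/s before protocol overhead

for label, mbyte_per_s in [("Windows -> Windows", 120), ("any path involving CentOS", 40)]:
    mbit = mbyte_per_s * 8
    print(f"{label}: {mbyte_per_s} MB/s = {mbit} Mbit/s "
          f"({mbit / LINK_MBIT:.0%} of line rate)")
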
2006 Jun 20
5
100 Mbps bandwidth
I am using a LAN card from around 7 years ago; can this LAN card support external 100 Mbps bandwidth with Shorewall? Thanks
_______________________________________
YM - Offline messages: even if you are not online, your friends can still leave you messages; you will see them as soon as you go online, and nothing gets lost. http://messenger.yahoo.com.hk
2009 Sep 04
2
Xen & netperf
...he game server is more network intensive than CPU intensive, and that will be my primary criterion for deciding whether I virtualize. I ran some naive benchmarks with netperf on my Dom0 (debian lenny w/ xen 3.2.1), DomU, and my Linux box at home. Dom0 and DomU are connected by a public network (100 Mbps link) and a private network (1 Gbps link). All netperf tests were run with defaults (w/o any extra options).

Dom0 to Dom0 (local)
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to () port 0 AF_INET : demo
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time...
2019 Sep 15
2
nfsmount default timeo=7 causes timeouts on 100 Mbps
I can't explain why 700 msecs aren't enough to avoid timeouts in 100 Mbps networks, but my tests verify it, so I'm writing to the list to request that you increase the default timeo to at least 30, or to 600 which is the default for `mount -t nfs`.

How to reproduce:
1) Cabling: server <=> 100 Mbps switch <=> client
Alternatively, one can use a 1000 M...
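
For context on the units: the NFS timeo option is given in tenths of a second, so timeo=7 is the 700 ms mentioned here and the `mount -t nfs` default of 600 is 60 s. One possible contributing factor, offered as speculation rather than as the poster's analysis, is that a single large READ or WRITE RPC already needs a sizeable fraction of 700 ms just to serialize on a 100 Mbps link, so a short queue of such RPCs can push a reply past the timeout. A back-of-the-envelope sketch, assuming a hypothetical 1 MiB rsize/wsize:

# Back-of-the-envelope illustration, not from the original report.
# timeo is in tenths of a second; a 1 MiB rsize/wsize is an assumption here.
def timeo_ms(timeo):
    return timeo * 100

LINK_BPS = 100e6             # 100 Mbps link
RPC_BYTES = 1 * 1024 * 1024  # assumed 1 MiB READ/WRITE payload

serialize_ms = RPC_BYTES * 8 / LINK_BPS * 1000
print(f"timeo=7 -> {timeo_ms(7)} ms, timeo=600 -> {timeo_ms(600)} ms")
print(f"one 1 MiB RPC needs ~{serialize_ms:.0f} ms on the wire; "
      f"{int(timeo_ms(7) // serialize_ms)} queued RPCs already approach 700 ms")
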
2004 Nov 10
5
etherbridge bottleneck
I ran some iperf tests today and it looks like the etherbridge is the limiting factor on throughput. In the beforetime, I saw great throughput to the VMs; over 800 Mbps. With the bridge, the numbers are in the 400s somewhere. Is this the speed I can expect from the bridge? Is there some tuning I should try, or another way to get more bandwidth into the VMs? This is with xen-2.0, 2.4.27-xen0 and 2.4.27-xenU. My iperf numbers: 940 Mbps stock linux -> sto...
2006 Jun 19
5
Limited write bandwidth from ext3
...storage array. The dual Xeon host (Dell 2650) with 4 GB of memory runs RHEL 4U3. We measured the write bandwidth for writes to the block device corresponding to the LUN (e.g. /dev/sdb), to a file in an ext2 filesystem, and to a file in an ext3 filesystem.

Write b/w for 512 KB writes:
Block device  312 MBps
Ext2 file     247 MBps
Ext3 file     130 MBps

We are looking for ways to improve the ext3 file write bandwidth. Tracing of I/Os at the storage array shows that in the case of the ext3 experiment, the workload does not keep the LUN busy enough. Every 5 seconds there is an increase in I/O activity that lasts...
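
The post does not say how the write bandwidth was measured. As a rough, hypothetical illustration of this kind of test (sequential 512 KB writes timed through to an fsync), a sketch along these lines could be used; the file name and total size below are made up:

import os
import time

CHUNK = 512 * 1024           # 512 KB writes, matching the post
TOTAL = 256 * 1024 * 1024    # hypothetical 256 MB test size
PATH = "testfile"            # hypothetical path on the filesystem under test

buf = b"\0" * CHUNK
start = time.time()
with open(PATH, "wb") as f:
    for _ in range(TOTAL // CHUNK):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())     # count the time until data reaches the device
elapsed = time.time() - start

print(f"{TOTAL / elapsed / 1e6:.0f} MB/s for {TOTAL // CHUNK} x 512 KB writes")
os.remove(PATH)

Without the fsync inside the timed region, a buffered test like this would mostly measure the page cache rather than the filesystem.
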
2011 Mar 25
1
Samba Tuning to increase Throughput
...me advice from the group. I am trying to use Samba to access a USB disk connected to our evaluation board, which has an Xtensa core running at 400 MHz. Samba 3.5.x is running on the board. We are getting the throughput below, as tested with the Colasoft Capsa software on the client PC:

Read:  27.9 mbps
Write: 24.5 mbps

Some memory info on my system:
# cat /proc/meminfo
MemTotal:     99788 kB
MemFree:      16872 kB
Buffers:         12 kB
Cached:       30504 kB
SwapCached:       0 kB
Active:        8880 kB
Inactive:     27952 kB
Active(anon):  6336 kB
I...
2013 Jun 06
0
cross link connection fall down
...n reach the other one. After a small time window the interface is down.

$ dmesg |grep eth4
igb 0000:41:00.2: added PHC on eth4
igb 0000:41:00.2: eth4: (PCIe:5.0Gb/s:Width x4)
igb 0000:41:00.2: eth4: PBA No: G13158-000
8021q: adding VLAN 0 to HW filter on device eth4
igb: eth4 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
igb: eth4 NIC Link is Down
igb: eth4 NIC Link is Up 10 Mbps Full Duplex, Flow Control: RX/TX
igb: eth4 NIC Link is Down
igb: eth4 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
igb: eth4 NIC Link is Down
igb: eth4 NIC Link is Up 1000 Mbps Full Duplex, Flo...
2017 Apr 21
0
[PATCH net-next v2 2/5] virtio-net: transmit napi
...f whether the
> optimization is used.

Yes, I noticed this in the past too.

> Though this is not limited to napi-tx, it is more
> pronounced in that mode than without napi.
> 1x TCP_RR for affinity configuration {process, rx_irq, tx_irq}:
> upstream:
> 1,1,1: 28985 Mbps, 278 Gcyc
> 1,0,2: 30067 Mbps, 402 Gcyc
> napi tx:
> 1,1,1: 34492 Mbps, 269 Gcyc
> 1,0,2: 36527 Mbps, 537 Gcyc (!)
> 1,0,1: 36269 Mbps, 394 Gcyc
> 1,0,0: 34674 Mbps, 402 Gcyc
> This is a particularly strong example. It is also representative
> of most RR tests...
2017 Apr 24
0
[PATCH net-next v2 2/5] virtio-net: transmit napi
...
> >> Though this is not limited to napi-tx, it is more
> >> pronounced in that mode than without napi.
> >> 1x TCP_RR for affinity configuration {process, rx_irq, tx_irq}:
> >> upstream:
> >> 1,1,1: 28985 Mbps, 278 Gcyc
> >> 1,0,2: 30067 Mbps, 402 Gcyc
> >> napi tx:
> >> 1,1,1: 34492 Mbps, 269 Gcyc
> >> 1,0,2: 36527 Mbps, 537 Gcyc (!)
> >> 1,0,1: 36269 Mbps, 394 Gcyc
> >> 1,0,0: 34674 Mbps, 402 Gcyc
> >> ...
2017 Apr 24
0
[PATCH net-next v2 2/5] virtio-net: transmit napi
...is more
> >> >> pronounced in that mode than without napi.
> >> >> 1x TCP_RR for affinity configuration {process, rx_irq, tx_irq}:
> >> >> upstream:
> >> >> 1,1,1: 28985 Mbps, 278 Gcyc
> >> >> 1,0,2: 30067 Mbps, 402 Gcyc
> >> >> napi tx:
> >> >> 1,1,1: 34492 Mbps, 269 Gcyc
> >> >> 1,0,2: 36527 Mbps, 537 Gcyc (!)
> >> >> 1,0,1: 36269 Mbps, 394 Gcyc
> >> >> ...
2008 Oct 05
2
Performance tweaking from Ubuntu to a Macbook vs. Windows through DLink DIR-655
Hi Everyone, I've been happy with my setup the last 2 years, and I just bought a new router - D-Link DIR-655. I was connecting wireless and wired and noticed the following before tweaking:

Wired 100 Mbps
==============
Macbook to Ubuntu: 56 Mbps
Windows to Ubuntu: 72 Mbps

Wireless N (Connected at 130 Mbps ... Apple's fault)
====================================================
Macbook to Ubuntu: 20 Mbps
Macbook to Windows: 44 Mbps

That's a significant change. So I changed the following l...
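
Expressed as a share of the nominal link rate (again, this arithmetic is not from the original post), the pre-tweaking numbers look like this:

# Illustrative only: the pre-tweaking throughput above as a share of the
# nominal link rate (100 Mbps wired, 130 Mbps wireless-N association rate).
measurements = [
    ("Wired,    Macbook -> Ubuntu",  56, 100),
    ("Wired,    Windows -> Ubuntu",  72, 100),
    ("Wireless, Macbook -> Ubuntu",  20, 130),
    ("Wireless, Macbook -> Windows", 44, 130),
]

for label, mbps, link in measurements:
    print(f"{label}: {mbps} Mbps = {mbps / link:.0%} of the {link} Mbps link")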