search for: mpps

Displaying 20 results from an estimated 23 matches for "mpps".

2018 Jan 09
2
[PATCH net-next] vhost_net: batch used ring update in rx
...were done between two machines with 2.40GHz Intel(R) Xeon(R) CPU E5-2630 connected back to back through ixgbe. Traffic was generated on one remote ixgbe through MoonGen, and the RX pps was measured through testpmd in the guest while doing xdp_redirect_map from the local ixgbe to tap. RX pps increased from 3.05 Mpps to 4.00 Mpps (about 31% improvement). One possible concern is the implication for TCP (especially latency-sensitive workloads). Result[1] does not show obvious changes for most of the netperf tests (RR, TX, and RX), and we do get some improvements for RX at some specific sizes. Guest RX:...
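For context, the xdp_redirect_map test mentioned in this excerpt relies on an XDP program that bounces frames from the ixgbe port into a device map containing the target tap interface. Below is a minimal sketch of such a program; the map name tx_port, its single-entry layout, and the program name are illustrative assumptions, not the exact program used in the posted benchmark.

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Device map holding the ifindex of the redirect target (e.g. the tap
 * device); name and size are assumptions for illustration. */
struct {
        __uint(type, BPF_MAP_TYPE_DEVMAP);
        __uint(max_entries, 1);
        __type(key, __u32);
        __type(value, __u32);
} tx_port SEC(".maps");

SEC("xdp")
int xdp_redirect_to_tap(struct xdp_md *ctx)
{
        /* Redirect every frame to the device stored at key 0. */
        return bpf_redirect_map(&tx_port, 0, 0);
}

char _license[] SEC("license") = "GPL";

The benchmark then measures how fast the tap consumer (here, testpmd in the guest) can drain the redirected frames.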
2017 Jan 06
5
[PATCH V4 net-next 0/3] vhost_net tx batching
...SG_MORE as a hint for the underlying socket. The backend (e.g. tap) can then batch the packets temporarily in a list and submit them all at once when the number of batched packets exceeds a limit. Tests show an obvious improvement with guest pktgen over mlx4 (noqueue) on the host:

                 Mpps   -+%
rx-frames = 0    0.91   +0%
rx-frames = 4    1.00   +9.8%
rx-frames = 8    1.00   +9.8%
rx-frames = 16   1.01   +10.9%
rx-frames = 32   1.07   +17.5%
rx-frames = 48   1.07   +17.5%...
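The batching policy this excerpt describes (queue while the producer hints that more is coming, flush when the hint goes away or a size limit is reached) can be modelled in a few lines of ordinary userspace C. The sketch below only illustrates that policy under assumed names (rx_one, flush_batch, RX_BATCH_LIMIT); it is not the vhost_net/tun kernel code.

#include <stdio.h>
#include <stdbool.h>

#define RX_BATCH_LIMIT 4   /* analogous to the rx-frames/rx_batched knob; small so the demo flushes */

struct pkt { int id; };

static struct pkt queue[RX_BATCH_LIMIT];
static int queued;

/* Hand the whole batch to the "network stack" in one go. */
static void flush_batch(void)
{
        if (!queued)
                return;
        printf("submitting %d packet(s) in one batch\n", queued);
        queued = 0;
}

/* Receive one packet; 'more' plays the role of the MSG_MORE hint. */
static void rx_one(struct pkt p, bool more)
{
        queue[queued++] = p;
        /* Flush when the producer signals no more data, or when the
         * batch reaches its configured limit. */
        if (!more || queued == RX_BATCH_LIMIT)
                flush_batch();
}

int main(void)
{
        for (int i = 0; i < 10; i++)
                rx_one((struct pkt){ .id = i }, /*more=*/i < 9);
        return 0;
}

Running it shows two full batches of 4 followed by a final batch of 2 when the "more" hint is dropped, which is the same trade-off the rx-frames numbers above are probing.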
2006 May 31
14
Linux router performance
Hi, I wonder about the performance of a Linux box used as a router (I guess I'm not the first :). Although I know it mainly depends on the hardware, I'm trying to find some references on the topic, or comparisons with other routing solutions (a FreeBSD box used as a router, Cisco, etc.). For example, http://facweb.cti.depaul.edu/jyu/Publications/Yu-Linux-TSM2004.pdf (although is
2016 Dec 30
5
[PATCH net-next V3 0/3] vhost_net tx batching
...s done by using MSG_MORE as a hint for the underlying socket. The backend (e.g. tap) can then batch the packets temporarily in a list and submit them all at once when the number of batched packets exceeds a limit. Tests show an obvious improvement with guest pktgen over mlx4 (noqueue) on the host:

                 Mpps   -+%
rx_batched=0     0.90   +0%
rx_batched=4     0.97   +7.8%
rx_batched=8     0.97   +7.8%
rx_batched=16    0.98   +8.9%
rx_batched=32    1.03   +14.4%
rx_batched=48    1.09   +21.1%
rx_batched=64    1.02   +13.3%

Changes from V2: - remove useless queue limitation chec...
2017 Jan 18
7
[PATCH net-next V5 0/3] vhost_net tx batching
...SG_MORE as a hint for the underlying socket. The backend (e.g. tap) can then batch the packets temporarily in a list and submit them all at once when the number of batched packets exceeds a limit. Tests show an obvious improvement with guest pktgen over mlx4 (noqueue) on the host:

                 Mpps   -+%
rx-frames = 0    0.91   +0%
rx-frames = 4    1.00   +9.8%
rx-frames = 8    1.00   +9.8%
rx-frames = 16   1.01   +10.9%
rx-frames = 32   1.07   +17.5%
rx-frames = 48   1.07   +17.5%...
2016 Dec 28
7
[PATCH net-next V2 0/3] vhost net tx batching
...s done by using MSG_MORE as a hint for the underlying socket. The backend (e.g. tap) can then batch the packets temporarily in a list and submit them all at once when the number of batched packets exceeds a limit. Tests show an obvious improvement with guest pktgen over mlx4 (noqueue) on the host:

                 Mpps   -+%
rx_batched=0     0.90   +0%
rx_batched=4     0.97   +7.8%
rx_batched=8     0.97   +7.8%
rx_batched=16    0.98   +8.9%
rx_batched=32    1.03   +14.4%
rx_batched=48    1.09   +21.1%
rx_batched=64    1.02   +13.3%

Changes from V1: - drop NAPI handler since we don'...
2016 Dec 30
0
[PATCH net-next V3 3/3] tun: rx batching
...to the host network stack. This is done by accepting MSG_MORE as a hint from the sendmsg() caller: if it is set, the packet is batched temporarily in a linked list, and the packets are all submitted once MSG_MORE is cleared. Tests were done with pktgen (burst=128) in the guest over mlx4 (noqueue) on the host:

                 Mpps   -+%
rx_batched=0     0.90   +0%
rx_batched=4     0.97   +7.8%
rx_batched=8     0.97   +7.8%
rx_batched=16    0.98   +8.9%
rx_batched=32    1.03   +14.4%
rx_batched=48    1.09   +21.1%
rx_batched=64    1.02   +13.3%

The maximum number of batched packets is specified through a module parameter....
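In this patch the MSG_MORE hint travels on an in-kernel sendmsg() path from vhost_net into the tun socket, so there is nothing to type from userspace; the flag itself, however, is the ordinary socket flag, and its "more is coming, hold on" semantics can be seen with a plain UDP socket, where data sent with MSG_MORE is held back and coalesced until a send without the flag arrives. The snippet below is only an analogy for the hint's semantics, not the vhost/tun code path; the target address and payload are arbitrary and error handling is omitted.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
        /* Datagram socket aimed at the local discard port; purely illustrative. */
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in dst = { .sin_family = AF_INET, .sin_port = htons(9) };
        inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);
        connect(fd, (struct sockaddr *)&dst, sizeof(dst));

        const char part[] = "chunk";

        /* With MSG_MORE set, the kernel queues the data instead of
         * emitting a datagram per call... */
        for (int i = 0; i < 3; i++)
                send(fd, part, sizeof(part) - 1, MSG_MORE);

        /* ...and the final send without MSG_MORE flushes everything as
         * one datagram, mirroring the batching hint described above. */
        send(fd, part, sizeof(part) - 1, 0);

        close(fd);
        return 0;
}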
2016 Dec 28
0
[PATCH net-next V2 3/3] tun: rx batching
...to the host network stack. This is done by accepting MSG_MORE as a hint from the sendmsg() caller: if it is set, the packet is batched temporarily in a linked list, and the packets are all submitted once MSG_MORE is cleared. Tests were done with pktgen (burst=128) in the guest over mlx4 (noqueue) on the host:

                 Mpps   -+%
rx_batched=0     0.90   +0%
rx_batched=4     0.97   +7.8%
rx_batched=8     0.97   +7.8%
rx_batched=16    0.98   +8.9%
rx_batched=32    1.03   +14.4%
rx_batched=48    1.09   +21.1%
rx_batched=64    1.02   +13.3%

The maximum number of batched packets is specified through a module parameter....
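The closing sentence about the batch limit being exposed as a module parameter corresponds to the usual module_param() pattern. Below is a hedged, stand-alone sketch of that pattern, reusing the rx_batched name from the excerpt; the real patch's declaration (name, permissions, description) may differ.

#include <linux/init.h>
#include <linux/module.h>
#include <linux/moduleparam.h>

/* The knob the excerpt refers to: maximum packets held before the batch
 * is handed to the network stack; 0 keeps batching off. */
static int rx_batched;
module_param(rx_batched, int, 0444);
MODULE_PARM_DESC(rx_batched, "Maximum number of packets batched on rx");

static int __init rx_batch_demo_init(void)
{
        pr_info("rx_batched=%d\n", rx_batched);
        return 0;
}

static void __exit rx_batch_demo_exit(void)
{
}

module_init(rx_batch_demo_init);
module_exit(rx_batch_demo_exit);
MODULE_LICENSE("GPL");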
2017 Jan 18
0
[PATCH net-next V5 3/3] tun: rx batching
...stack. This is done by accepting MSG_MORE as a hint from the sendmsg() caller: if it is set, the packet is batched temporarily in a linked list, and the packets are all submitted once MSG_MORE is cleared. Tests were done with pktgen (burst=128) in the guest over mlx4 (noqueue) on the host:

                 Mpps   -+%
rx-frames = 0    0.91   +0%
rx-frames = 4    1.00   +9.8%
rx-frames = 8    1.00   +9.8%
rx-frames = 16   1.01   +10.9%
rx-frames = 32   1.07   +17.5%
rx-frames = 48   1.07   +17.5%
rx-frames = 64...
2017 Jan 06
0
[PATCH V4 net-next 3/3] tun: rx batching
...stack. This is done by accepting MSG_MORE as a hint from the sendmsg() caller: if it is set, the packet is batched temporarily in a linked list, and the packets are all submitted once MSG_MORE is cleared. Tests were done with pktgen (burst=128) in the guest over mlx4 (noqueue) on the host:

                 Mpps   -+%
rx-frames = 0    0.91   +0%
rx-frames = 4    1.00   +9.8%
rx-frames = 8    1.00   +9.8%
rx-frames = 16   1.01   +10.9%
rx-frames = 32   1.07   +17.5%
rx-frames = 48   1.07   +17.5%
rx-frames = 64...
2017 Jan 06
2
[PATCH V4 net-next 3/3] tun: rx batching
...ccepting MSG_MORE as a hint from
> the sendmsg() caller: if it is set, the packet is batched temporarily in a
> linked list, and the packets are all submitted once MSG_MORE is cleared.
>
> Tests were done with pktgen (burst=128) in the guest over mlx4 (noqueue) on the host:
>
>                  Mpps   -+%
> rx-frames = 0    0.91   +0%
> rx-frames = 4    1.00   +9.8%
> rx-frames = 8    1.00   +9.8%
> rx-frames = 16   1.01   +10.9%
> rx-frames = 32   1.07   +17.5%
> rx-frames = 48   1.07...
2020 Jul 01
0
[PATCH RFC v8 02/11] vhost: use batched get_vq_desc version
...; a performance increment, especially with small packets. The buf API also
> increases performance compared with only batching: sending the minimum
> packet size in testpmd makes pps go from 356 kpps to 473 kpps.

What's your setup for this? The number looks rather low. I'd expected 1-2 Mpps at least.

> Sending > 1024 length UDP-PDU makes it go from 570 kpps to 64 kpps.
>
> Something strange I observe in these tests: I get more pps the bigger
> the transmitted buffer size is. Not sure why.
>
> ** Sending from the host to the VM does not make a big change with the...
2007 Jun 07
2
Bridged PRI calls - processor involvement?
On a zaptel TE410P, when a call is bridged PRI-to-PRI, how much involvement does the processor have? We're now seeing chunks of missing audio, and I can't tell whether this is due to a kernel upgrade or to a zaptel/libpri/asterisk upgrade. I'm not seeing missed interrupts (from a cat of the proc/zaptel files); any other ideas on how I could go about tracking this down? I'm
2020 Jul 01
0
[PATCH RFC v8 02/11] vhost: use batched get_vq_desc version
...small packets. The buf API also
>>> increases performance compared with only batching: sending the minimum
>>> packet size in testpmd makes pps go from 356 kpps to 473 kpps.
>>
>> What's your setup for this? The number looks rather low. I'd expected
>> 1-2 Mpps at least.
>>
> Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz, 2 NUMA nodes of 16G memory
> each, and no device assigned to the NUMA node I'm testing in. Too low
> for the testpmd AF_PACKET driver too?

I don't test AF_PACKET; I guess it should use the V3 which mmap based zerocopy i...
2020 Jul 09
0
[PATCH RFC v8 02/11] vhost: use batched get_vq_desc version
...> increases performance compared with only batching: sending the minimum
> > >>> packet size in testpmd makes pps go from 356 kpps to 473 kpps.
> > >>
> > >> What's your setup for this? The number looks rather low. I'd expected
> > >> 1-2 Mpps at least.
> > >>
> > > Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz, 2 NUMA nodes of 16G memory
> > > each, and no device assigned to the NUMA node I'm testing in. Too low
> > > for the testpmd AF_PACKET driver too?
> > >
> > > I don't test...