Displaying 14 results from an estimated 14 matches for "kpps".
2006 Jun 16
0
Re: Linux router performance (fwd)
...ubject: Re: [LARTC] Linux router performance
Jesper Dangaard Brouer writes:
>
> Hi
>
> I'm sure that Robert can provide us with some interesting numbers.
>
> I have just tested routing performance on an AMD Opteron 270 (dual core),
> where I can route 400 kpps (tg3 NICs on PCI-X). I use the kernel
> module "pktgen" to generate the packets (64 bytes in size).
400 kpps is decent, but it all depends on your setup and what you're testing.
Single flow?
Packet rate in an environment with a high number of flows? (Forces
lookup...
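The pktgen module mentioned above is driven by writing command strings into files under /proc/net/pktgen. The following is a minimal, hedged Python sketch of that flow; the interface name (eth1), destination address and MAC are placeholders, not values from the thread.

```python
#!/usr/bin/env python3
# Minimal sketch, assuming the pktgen module is already loaded (modprobe pktgen)
# and that eth1, the destination IP and the MAC are placeholders for your own
# setup, not values taken from the thread. Run as root.

PKTGEN = "/proc/net/pktgen"

def pgset(path, cmd):
    # Each pktgen control file accepts one command string per write.
    with open(path, "w") as f:
        f.write(cmd + "\n")

# Attach the transmit device to the pktgen kernel thread for CPU 0.
pgset(f"{PKTGEN}/kpktgend_0", "rem_device_all")
pgset(f"{PKTGEN}/kpktgend_0", "add_device eth1")

# 64-byte packets; count 0 means "send until stopped".
dev = f"{PKTGEN}/eth1"
pgset(dev, "pkt_size 64")
pgset(dev, "count 0")
pgset(dev, "dst 10.0.0.2")
pgset(dev, "dst_mac 00:11:22:33:44:55")

# Writing "start" blocks for the duration of the run; results (including pps)
# can be read back from /proc/net/pktgen/eth1 afterwards.
pgset(f"{PKTGEN}/pgctrl", "start")
```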
2020 Jul 01
0
[PATCH RFC v8 02/11] vhost: use batched get_vq_desc version
...est
> failure.
>
> * With testpmd and event_idx=off, if I send from the VM to host, I see
> a performance increase, especially with small packets. The buf API also
> increases performance compared with batching alone: sending the minimum
> packet size in testpmd makes pps go from 356 kpps to 473 kpps. Sending
> a 1024-byte UDP PDU makes it go from 570 kpps to 64 kpps.
>
> Something strange I observe in these tests: I get more pps the bigger
> the transmitted buffer size is. Not sure why.
>
> ** Sending from the host to the VM does not make a big change with the
> ...
2020 Jul 01
0
[PATCH RFC v8 02/11] vhost: use batched get_vq_desc version
...test
> failure.
>
> * With testpmd and event_idx=off, if I send from the VM to host, I see
> a performance increase, especially with small packets. The buf API also
> increases performance compared with batching alone: sending the minimum
> packet size in testpmd makes pps go from 356 kpps to 473 kpps.
What's your setup for this? The number looks rather low; I'd have expected
at least 1-2 Mpps.
> Sending
> a 1024-byte UDP PDU makes it go from 570 kpps to 64 kpps.
>
> Something strange I observe in these tests: I get more pps the bigger
> the transmitted buffer s...
2020 Jul 01
0
[PATCH RFC v8 02/11] vhost: use batched get_vq_desc version
...> * With testpmd and event_idx=off, if I send from the VM to host, I see
>>> a performance increase, especially with small packets. The buf API also
>>> increases performance compared with batching alone: sending the minimum
>>> packet size in testpmd makes pps go from 356 kpps to 473 kpps.
>>
>> What's your setup for this? The number looks rather low; I'd have expected
>> at least 1-2 Mpps.
>>
> Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz, 2 NUMA nodes of 16G memory
> each, and no device assigned to the NUMA node I'm testing in. Too low...
2020 Jul 09
0
[PATCH RFC v8 02/11] vhost: use batched get_vq_desc version
...idx=off, if I send from the VM to host, I see
> > >>> a performance increase, especially with small packets. The buf API also
> > >>> increases performance compared with batching alone: sending the minimum
> > >>> packet size in testpmd makes pps go from 356 kpps to 473 kpps.
> > >>
> > >> What's your setup for this? The number looks rather low; I'd have expected
> > >> at least 1-2 Mpps.
> > >>
> > > Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz, 2 NUMA nodes of 16G memory
> > > each, and no...
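The pps figures in this thread come from testpmd's own statistics. As a rough, setup-agnostic cross-check, packets per second can also be sampled from the host-side interface counters; a minimal sketch, assuming a hypothetical interface name that is not taken from the thread:

```python
#!/usr/bin/env python3
# Rough sketch of sampling packets per second from host-side interface
# counters, as a sanity check on numbers reported by testpmd. The interface
# name "vnet0" is a placeholder.
import time

IFACE = "vnet0"
INTERVAL = 1.0  # seconds between samples

def tx_packets(ifname):
    with open(f"/sys/class/net/{ifname}/statistics/tx_packets") as f:
        return int(f.read())

prev = tx_packets(IFACE)
while True:
    time.sleep(INTERVAL)
    cur = tx_packets(IFACE)
    print(f"{(cur - prev) / INTERVAL / 1000:.0f} kpps")
    prev = cur
```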
2006 Oct 04
5
Intel or AMD is better processor for router (800+ users)
Hi
I would like to ask you which processor is the better solution for a router. Please
briefly explain why.
I have about 800 users. For each I create 2 htb classes and 4 filters.
Moreover, the router also runs a DHCP server and has lots of iptables rules.
I'm interested in a P4 3 GHz HT and an AMD Athlon 64 3000+. Which is the better choice for
my needs? Which processor parameters are important: clock, cache, FSB
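A setup like the one described above (hundreds of users with a few classes and filters each) is normally generated by a script rather than typed by hand. Below is a hedged sketch of emitting such rules with tc; it is not the poster's configuration, and the interface, addresses and rates are assumptions.

```python
#!/usr/bin/env python3
# Hedged sketch: generate one HTB class and one u32 filter per user with tc.
# The interface, the 10.1.x.y addresses and the rates are assumptions, not
# details from the post (which uses two classes and four filters per user).
import subprocess

DEV = "eth0"

def tc(*args):
    subprocess.run(["tc", *args], check=True)

# Root HTB qdisc; unclassified traffic passes through unshaped.
tc("qdisc", "add", "dev", DEV, "root", "handle", "1:", "htb")

for i in range(800):
    user_ip = f"10.1.{i // 256}.{i % 256}"
    classid = f"1:{i + 0x10:x}"  # tc parses minor class IDs as hexadecimal
    # Per-user class with placeholder rate/ceil.
    tc("class", "add", "dev", DEV, "parent", "1:", "classid", classid,
       "htb", "rate", "512kbit", "ceil", "1mbit")
    # One u32 filter steering traffic addressed to this user into the class.
    tc("filter", "add", "dev", DEV, "parent", "1:", "protocol", "ip",
       "prio", "1", "u32", "match", "ip", "dst", f"{user_ip}/32",
       "flowid", classid)
# For 800 users, feeding the same commands to "tc -batch" is much faster
# than running one tc process per rule.
```

With hundreds of linear u32 filters every packet walks the whole list; the u32 hash-table sketch further down keeps the lookup cost roughly constant.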
2020 Jun 11
27
[PATCH RFC v8 00/11] vhost: ring format independence
This still causes corruption issues for people, so please don't
use it in production. Posting to expedite debugging.
This adds infrastructure required for supporting
multiple ring formats.
The idea is as follows: we convert descriptors to an
independent format first, and process that, converting to
an iov later.
The used ring is similar: we fetch into an independent struct first,
convert that to
2006 Sep 19
5
how to setup massive traffic shaping? (2 class B nets)
Hello
I have 2 class-B networks (172.22.0.0/16 and 172.23.0.0/16, over 130k
IPs) and need to set up
tbf traffic shapers at 64 kbit/s for each IP in 172.22.0.0/16 and
128 kbit/s for each IP in 172.23.0.0/16.
I have just read LARTC and don't understand how to use u32 hashing to
decrease the number of rules.
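A hedged sketch of the u32 hashing idea the post asks about, loosely following the hashing-filters section of the LARTC HOWTO. The device, handles, class IDs and the single /24 shown are illustrative assumptions, and the per-IP classes the flowids point at are not created here.

```python
#!/usr/bin/env python3
# Hedged sketch of u32 hashing: hash on the last octet of the destination
# address into a 256-bucket table, so each packet checks one short bucket
# instead of tens of thousands of linear rules. Names are illustrative only.
import subprocess

DEV = "eth0"

def tc(*args):
    subprocess.run(["tc", *args], check=True)

tc("qdisc", "add", "dev", DEV, "root", "handle", "1:", "htb")

# Root u32 filter (creates the default hash table 800:), as in the HOWTO.
tc("filter", "add", "dev", DEV, "parent", "1:", "prio", "5",
   "protocol", "ip", "u32")

# A 256-bucket hash table with handle 2:.
tc("filter", "add", "dev", DEV, "parent", "1:", "prio", "5",
   "handle", "2:", "protocol", "ip", "u32", "divisor", "256")

# Hash 172.22.0.0/16 traffic into table 2: using the last octet of the
# destination address (offset 16 in the IP header).
tc("filter", "add", "dev", DEV, "parent", "1:", "protocol", "ip", "prio", "5",
   "u32", "ht", "800::", "match", "ip", "dst", "172.22.0.0/16",
   "hashkey", "mask", "0x000000ff", "at", "16", "link", "2:")

# Per-host rules land in bucket 2:<last octet>:; shown for one /24 only.
# A full /16 would also hash on the third octet (a second table level).
for host in range(256):
    tc("filter", "add", "dev", DEV, "parent", "1:", "protocol", "ip",
       "prio", "5", "u32", "ht", f"2:{host:x}:",
       "match", "ip", "dst", f"172.22.0.{host}/32",
       "flowid", f"1:{host + 1:x}")
```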
2006 May 31
14
Linux router performance
Hi,
I wonder about the performance of a Linux box used as a router (I guess I'm
not the first :). Although I know it mainly depends on the hardware, I'm
trying to find some references on the topic or comparisons with other
routing solutions (a FreeBSD box used as a router, Cisco, etc). For example,
http://facweb.cti.depaul.edu/jyu/Publications/Yu-Linux-TSM2004.pdf
(although is
2012 Dec 07
6
[PATCH net-next v3 0/3] Multiqueue support in virtio-net
...s tap in host to test Guest RX.
2.1 Guest TX: Unfortunately the current pktgen does not support virtio-net well,
since virtio-net may not free the skb during tx completion. So I test through a
patch (https://lkml.org/lkml/2012/11/26/31) that doesn't wait for this freeing,
with a guest of 4 vcpus:
#q | kpps | +improvement%
1 | 589K | 0%
2 | 952K | 62%
3 | 1290K | 120%
4 | 1578K | 168%
2.2 Guest RX: After commit 5d097109257c03a71845729f8db6b5770c4bbedc (tun: only
queue packets on device), pktgen starts to report an unbelievably huge
kpps (>2099 kpps even for one queue). The problem is tun repo...
2012 Oct 30
6
[rfc net-next v6 0/3] Multiqueue virtio-net
...Test environment:
- Intel(R) Xeon(R) CPU E5620 @ 2.40GHz, 8 cores, 2 NUMA nodes
- Two directly connected 82599 NICs
- Host/Guest kernel: net-next with the mq virtio-net patches and mq tuntap
patches
Pktgen test:
- Local host generates 64-byte UDP packets to the guest.
- average of 20 runs
#q #vcpu kpps +improvement
1q 1vcpu: 264kpps +0%
2q 2vcpu: 451kpps +70%
3q 3vcpu: 661kpps +150%
4q 4vcpu: 941kpps +250%
Netperf Local VM to VM test:
- VM1 and its vcpu/vhost thread in NUMA node 0
- VM2 and its vcpu/vhost thread in NUMA node 1
- a script is used to launch netperf with demo mode an...