search for: pktgen

Displaying 20 results from an estimated 172 matches for "pktgen".

2012 Nov 26
1
[net-next RFC] pktgen: don't wait for the device who doesn't free skb immediately after sent
Some devices do not free old tx skbs immediately after they have been sent (this is usually done in the tx interrupt). One such example is virtio-net, which optimizes for virtualization and only frees old tx skbs during the next packet transmission. This causes pktgen to wait forever on the skb's refcount if no other packet is sent afterwards. Solve this by introducing a new flag, IFF_TX_SKB_FREE_DELAY, which notifies pktgen that the device does not free skbs immediately after they have been sent, so that it does not wait for the refcount to be...
2014 Sep 03
8
[PATCH 0/3] virtio: simplify virtio_ring.
I resurrected these patches after prompting from Andy Lutomirski's recent patches. I put them on the back-burner because vring_bench had a 15% slowdown on my laptop: pktgen testing revealed a speedup, if anything, so I've cleaned them up. Rusty Russell (3): virtio_net: pass well-formed sgs to virtqueue_add_*() virtio_ring: assume sgs are always well-formed. virtio_ring: unify direct/indirect code paths. drivers/net/virtio_net.c | 5 +- drivers/virti...
2014 Sep 03
0
[PATCH 1/3] virtio_net: pass well-formed sgs to virtqueue_add_*()
This is the only driver which doesn't hand virtqueue_add_inbuf and virtqueue_add_outbuf a well-formed, well-terminated sg. Fix it, so we can make virtio_add_* simpler. pktgen results:
modprobe pktgen
echo 'add_device eth0' > /proc/net/pktgen/kpktgend_0
echo nowait 1 > /proc/net/pktgen/eth0
echo count 1000000 > /proc/net/pktgen/eth0
echo clone_skb 100000 > /proc/net/pktgen/eth0
echo dst_mac 4e:14:25:a9:30:ac > /proc/net/pktgen/eth0
echo dst...
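For context, a complete pktgen run of this shape generally looks like the sketch below. It is a reconstruction from the standard /proc/net/pktgen interface, not the exact (truncated) script from the patch mail; the interface name, destination address and packet counts are placeholders.

    #!/bin/sh
    # Minimal pktgen run; device name, destination IP/MAC and counts are
    # placeholders, not the values from the original mail.
    modprobe pktgen

    PGDEV=/proc/net/pktgen/kpktgend_0
    echo "rem_device_all" > $PGDEV          # start the thread from a clean state
    echo "add_device eth0" > $PGDEV

    PGDEV=/proc/net/pktgen/eth0
    echo "count 1000000" > $PGDEV           # packets to send
    echo "clone_skb 100000" > $PGDEV        # reuse each skb many times
    echo "pkt_size 60" > $PGDEV             # minimum-size Ethernet frames
    echo "delay 0" > $PGDEV                 # no inter-packet delay
    echo "dst 10.0.0.2" > $PGDEV            # placeholder destination IP
    echo "dst_mac 4e:14:25:a9:30:ac" > $PGDEV

    echo "start" > /proc/net/pktgen/pgctrl  # blocks until the run completes
    cat /proc/net/pktgen/eth0               # per-device results (pps, errors)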
2020 Jul 21
0
[PATCH RFC v8 02/11] vhost: use batched get_vq_desc version
...>> and testing >>> the pps as the previous mail says. This means that we have either only >>> vhost_net batching (in base testing, like before applying this >>> patch) or both batching sizes the same. >>> >>> I've checked that the vhost process (and pktgen) also goes to 100% cpu. >>> >>> For tx: batching always decreases performance, in all cases. Not >>> sure why bufapi made things better the last time. >>> >>> Batching improves things up to 64 bufs; I see pps increases, but only around 1%. >>>...
2020 Jul 20
0
[PATCH RFC v8 02/11] vhost: use batched get_vq_desc version
...testing > > > the pps as the previous mail says. This means that we have either only > > > vhost_net batching (in base testing, like before applying this > > > patch) or both batching sizes the same. > > > > > > I've checked that the vhost process (and pktgen) also goes to 100% cpu. > > > > > > For tx: batching always decreases performance, in all cases. Not > > > sure why bufapi made things better the last time. > > > > > > Batching improves things up to 64 bufs; I see pps increases, but only around 1%. ...
2020 Jul 09
0
[PATCH RFC v8 02/11] vhost: use batched get_vq_desc version
...> > > >>>>>>>>> It was tested for throughput with DPDK's testpmd (as described in > > >>>>>>>>> http://doc.dpdk.org/guides/howto/virtio_user_as_exceptional_path.html) > > >>>>>>>>> and kernel pktgen. No latency tests were performed by me. Maybe it is > > >>>>>>>>> interesting to perform a latency test or just a different set of tests > > >>>>>>>>> over a recent version. > > >>>>>>>>> > > ...
2009 Aug 05
2
bridge vs macvlan performance (was: some veth related issues)
Ben Greear wrote: > Well, it seems we could and should fix veth to work, but it will have > to do the equivalent work of copying an skb, most likely, so either way > you'll probably get a big performance hit. Using the same pktgen script (i.e. with clone=0) I see that a veth-->bridge-->veth configuration gives about 400K PPS of forwarding performance, whereas macvlan-->veth-->macvlan gives 680K PPS (again, I made sure that the bridge had completed learning before I started the test). Basically, both the bridge and macvl...
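The two forwarding setups being compared can be approximated with iproute2 as in the sketch below. The exact topology is not shown in the excerpt, so the device names and the macvlan mode are assumptions, and pktgen is assumed to drive one end with clone_skb 0 as in the mail.

    # Rough, illustrative reconstruction; names and modes are placeholders.

    # veth --> bridge --> veth
    ip link add veth0 type veth peer name veth0p
    ip link add veth1 type veth peer name veth1p
    ip link add br0 type bridge
    ip link set veth0p master br0
    ip link set veth1p master br0
    for d in br0 veth0 veth0p veth1 veth1p; do ip link set "$d" up; done

    # macvlan --> veth --> macvlan
    ip link add veth2 type veth peer name veth2p
    ip link add link veth2  name mvl0 type macvlan mode bridge
    ip link add link veth2p name mvl1 type macvlan mode bridge
    for d in veth2 veth2p mvl0 mvl1; do ip link set "$d" up; done

    # pktgen then sends from one end without skb cloning (clone=0), i.e.:
    # echo "clone_skb 0" > /proc/net/pktgen/<device>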
2014 Sep 03
0
[PATCH 3/3] virtio_ring: unify direct/indirect code paths.
...'ed indirect table where the sg is populated. Previously vring_add_indirect() did the allocation and the simple linear layout. We replace that with alloc_indirect(), which allocates the indirect table and then chains it like the normal descriptor table, so we can reuse the core logic. This slows down pktgen (which uses direct descriptors) by less than half a percent, as well as vring_bench, but it's far neater. vring_bench before: 1061485790-1104800648(1.08254e+09+/-6.6e+06)ns vring_bench after: 1125610268-1183528965(1.14172e+09+/-8e+06)ns pktgen before: 787781-796334(793165+/-2.4e+03)pps 36...
2020 Jun 11
27
[PATCH RFC v8 00/11] vhost: ring format independence
This still causes corruption issues for some people, so please don't try to use it in production. Posting to expedite debugging. This adds the infrastructure required for supporting multiple ring formats. The idea is as follows: we convert descriptors to an independent format first, and only convert that to an iov later, when processing. The used ring is similar: we fetch into an independent struct first, convert that to...
2020 Jul 20
0
[PATCH RFC v8 02/11] vhost: use batched get_vq_desc version
...f or the number of batched descriptors? > and testing > the pps as the previous mail says. This means that we have either only > vhost_net batching (in base testing, like before applying this > patch) or both batching sizes the same. > > I've checked that the vhost process (and pktgen) also goes to 100% cpu. > > For tx: batching always decreases performance, in all cases. Not > sure why bufapi made things better the last time. > > Batching improves things up to 64 bufs; I see pps increases, but only around 1%. > > For rx: batching always improves performanc...
2018 Aug 04
2
[RFC 0/4] Virtio uses DMA API for all devices
...> > > > > the patches or the approach in general. Thank you. > > > > > > Jason did some work on profiling this. Unfortunately he reports > > > about 4% extra overhead from this switch on x86 with no vIOMMU. > > > > The test is rather simple: just run pktgen (pktgen_sample01_simple.sh) in the > > guest and measure PPS on the tap on the host. > > > > Thanks > > Could you supply the host configuration involved, please? I wonder how much of that could be caused by Spectre mitigations blowing up indirect function calls... Cheers, Ben.
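That measurement can be sketched as below. Interface names, addresses and the tap device are placeholders, and the sample-script options (-i/-d/-m/-s/-n) are the usual ones from samples/pktgen, so check the script in your tree before relying on them.

    # In the guest: drive the virtio-net interface with the in-tree sample
    # script (placeholder device, destination IP/MAC, size and count).
    ./samples/pktgen/pktgen_sample01_simple.sh -i eth0 -d 10.0.0.2 \
        -m 52:54:00:12:34:56 -s 64 -n 10000000

    # On the host: sample the tap's rx counter once per second to get PPS
    # (frames transmitted by the guest show up as rx on the host-side tap).
    TAP=tap0
    while true; do
        a=$(cat /sys/class/net/$TAP/statistics/rx_packets)
        sleep 1
        b=$(cat /sys/class/net/$TAP/statistics/rx_packets)
        echo "$((b - a)) pps"
    done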
2020 Jul 20
0
[PATCH RFC v8 02/11] vhost: use batched get_vq_desc version
...NET_BATCH affects lots of other things. > and testing > the pps as the previous mail says. This means that we have either only > vhost_net batching (in base testing, like before applying this > patch) or both batching sizes the same. > > I've checked that the vhost process (and pktgen) also goes to 100% cpu. > > For tx: batching always decreases performance, in all cases. Not > sure why bufapi made things better the last time. > > Batching improves things up to 64 bufs; I see pps increases, but only around 1%. > > For rx: batching always improves perform...
2018 Aug 06
2
[RFC 0/4] Virtio uses DMA API for all devices
...or the approach in general. Thank you. >>>>> >>>>> Jason did some work on profiling this. Unfortunately he reports >>>>> about 4% extra overhead from this switch on x86 with no vIOMMU. >>>> >>>> The test is rather simple: just run pktgen (pktgen_sample01_simple.sh) in the >>>> guest and measure PPS on the tap on the host. >>>> >>>> Thanks >>> >>> Could you supply the host configuration involved, please? >> >> I wonder how much of that could be caused by Spectre mitigations >> b...
2020 Jul 01
0
[PATCH RFC v8 02/11] vhost: use batched get_vq_desc version
...>>>>>>>>> >>>>>>>>> It was tested for throughput with DPDK's testpmd (as described in >>>>>>>>> http://doc.dpdk.org/guides/howto/virtio_user_as_exceptional_path.html) >>>>>>>>> and kernel pktgen. No latency tests were performed by me. Maybe it is >>>>>>>>> interesting to perform a latency test or just a different set of tests >>>>>>>>> over a recent version. >>>>>>>>> >>>>>>>>> Thank...
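For reference, the DPDK side of that kind of test is testpmd with a virtio_user vdev attached to the kernel vhost-net backend, as in the linked howto. The sketch below is illustrative only, with placeholder core list, paths and ring sizes; take the exact vdev arguments from the howto for the DPDK version in use.

    # DPDK side: testpmd forwarding through a virtio_user port bound to
    # the kernel vhost-net backend (placeholder cores and queue setup).
    testpmd -l 0-1 -n 4 \
        --vdev=virtio_user0,path=/dev/vhost-net,queues=1 \
        -- -i --txd=1024 --rxd=1024

    # The kernel-pktgen side then generates traffic at the other end, e.g.
    # with a /proc/net/pktgen setup like the one shown earlier in these results.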