search for: testpmd

Displaying 20 results from an estimated 85 matches for "testpmd".

2020 Jul 09
0
[PATCH RFC v8 02/11] vhost: use batched get_vq_desc version
...> >>>>>>>>> I tested this version of the patch: > > >>>>>>>>> https://lkml.org/lkml/2019/10/13/42 > > >>>>>>>>> > > >>>>>>>>> It was tested for throughput with DPDK's testpmd (as described in > > >>>>>>>>> http://doc.dpdk.org/guides/howto/virtio_user_as_exceptional_path.html) > > >>>>>>>>> and kernel pktgen. No latency tests were performed by me. Maybe it is > > >>>>>>>>>...
2020 Jul 01
0
[PATCH RFC v8 02/11] vhost: use batched get_vq_desc version
...>>>>>>>>> >>>>>>>>> I tested this version of the patch: >>>>>>>>> https://lkml.org/lkml/2019/10/13/42 >>>>>>>>> >>>>>>>>> It was tested for throughput with DPDK's testpmd (as described in >>>>>>>>> http://doc.dpdk.org/guides/howto/virtio_user_as_exceptional_path.html) >>>>>>>>> and kernel pktgen. No latency tests were performed by me. Maybe it is >>>>>>>>> interesting to perform a laten...
2020 Jul 01
0
[PATCH RFC v8 02/11] vhost: use batched get_vq_desc version
...>>>>>>> Hi Konrad. >>>>>>> >>>>>>> I tested this version of the patch: >>>>>>> https://lkml.org/lkml/2019/10/13/42 >>>>>>> >>>>>>> It was tested for throughput with DPDK's testpmd (as described in >>>>>>> http://doc.dpdk.org/guides/howto/virtio_user_as_exceptional_path.html) >>>>>>> and kernel pktgen. No latency tests were performed by me. Maybe it is >>>>>>> interesting to perform a latency test or just a differ...
2020 Jun 11
27
[PATCH RFC v8 00/11] vhost: ring format independence
This still causes corruption issues for people, so please don't use it in production. Posting to expedite debugging. This adds the infrastructure required for supporting multiple ring formats. The idea is as follows: we convert descriptors to an independent format first, and convert that to an iov later. The used ring is similar: we fetch into an independent struct first, convert that to
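(To illustrate the two-stage idea in that cover letter: first fetch the guest descriptor and normalize it into a ring-format-independent struct, then build the iov from that struct. This is only a sketch; struct and helper names below are hypothetical, not the ones used in the actual series.)

#include <stdint.h>
#include <sys/uio.h>

/* Hypothetical ring-format-independent descriptor: filled from either
 * the split or the packed guest layout before any iov translation. */
struct host_desc {
    uint64_t addr;   /* guest physical address */
    uint32_t len;
    uint16_t id;
    uint16_t flags;  /* normalized write/next bits */
};

/* Stand-in for the real GPA->HVA translation (hypothetical). */
static void *translate_gpa(uint64_t gpa, uint32_t len)
{
    (void)len;
    return (void *)(uintptr_t)gpa;   /* identity map, illustration only */
}

/* Format-independent stage: turn normalized descriptors into iovecs. */
static int host_desc_to_iov(const struct host_desc *d, int n,
                            struct iovec *iov, int iov_max)
{
    int i;

    for (i = 0; i < n && i < iov_max; i++) {
        iov[i].iov_base = translate_gpa(d[i].addr, d[i].len);
        iov[i].iov_len = d[i].len;
    }
    return i;
}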
2020 Jul 01
0
[PATCH RFC v8 02/11] vhost: use batched get_vq_desc version
...Konrad. > > > > > > > > > > > > > > I tested this version of the patch: > > > > > > > https://lkml.org/lkml/2019/10/13/42 > > > > > > > > > > > > > > It was tested for throughput with DPDK's testpmd (as described in > > > > > > > http://doc.dpdk.org/guides/howto/virtio_user_as_exceptional_path.html) > > > > > > > and kernel pktgen. No latency tests were performed by me. Maybe it is > > > > > > > interesting to perform a latency tes...
2020 Jul 20
0
[PATCH RFC v8 02/11] vhost: use batched get_vq_desc version
...> > # == > > > # pktgen results (pps) > > > 1223275,1668868,1728794,1769261,1808574,1837252,1846436 > > > 1456924,1797901,1831234,1868746,1877508,1931598,1936402 > > > 1368923,1719716,1794373,1865170,1884803,1916021,1975160 > > > > > > # Testpmd pps results > > > 1222698.143,1670604,1731040.6,1769218,1811206,1839308.75,1848478.75 > > > 1450140.5,1799985.75,1834089.75,1871290,1880005.5,1934147.25,1939034 > > > 1370621,1721858,1796287.75,1866618.5,1885466.5,1918670.75,1976173.5,1988760.75,1978316 > > > ...
2020 Jul 21
0
[PATCH RFC v8 02/11] vhost: use batched get_vq_desc version
...> # Rx >>> # == >>> # pktgen results (pps) >>> 1223275,1668868,1728794,1769261,1808574,1837252,1846436 >>> 1456924,1797901,1831234,1868746,1877508,1931598,1936402 >>> 1368923,1719716,1794373,1865170,1884803,1916021,1975160 >>> >>> # Testpmd pps results >>> 1222698.143,1670604,1731040.6,1769218,1811206,1839308.75,1848478.75 >>> 1450140.5,1799985.75,1834089.75,1871290,1880005.5,1934147.25,1939034 >>> 1370621,1721858,1796287.75,1866618.5,1885466.5,1918670.75,1976173.5,1988760.75,1978316 >>> ...
2018 Sep 07
2
[virtio-dev] Re: [PATCH net-next v2 0/5] virtio: support packed ring
...018 at 05:00:40PM +0300, Michael S. Tsirkin wrote: > Are there still plans to test the performance with vhost pmd? > vhost doesn't seem to show a performance gain ... > I tried some performance tests with vhost PMD. In the guest, the XDP program will return XDP_DROP directly. And in the host, testpmd will do txonly fwd. When burst size is 1 and packet size is 64 in testpmd and testpmd needs to iterate 5 Tx queues (but only the first two queues are enabled) to prepare and inject packets, I got ~12% performance boost (5.7Mpps -> 6.4Mpps). And if the vhost PMD is faster (e.g. just need to iter...
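(The guest-side XDP program used for this kind of test is about as small as XDP programs get. A minimal sketch, assuming libbpf's bpf_helpers.h for the SEC() macro; the program name here is illustrative, not necessarily what was used in these tests:)

/* xdp_drop.c: drop every packet at the driver level; build with clang -target bpf. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int xdp_drop_all(struct xdp_md *ctx)
{
    return XDP_DROP;   /* never pass packets up the stack */
}

char _license[] SEC("license") = "GPL";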
2020 Jun 22
0
[PATCH RFC v8 02/11] vhost: use batched get_vq_desc version
...to that now. > > > > > > What kind of testing? 100GiB? Low latency? > > > > > > > Hi Konrad. > > > > I tested this version of the patch: > > https://lkml.org/lkml/2019/10/13/42 > > > > It was tested for throughput with DPDK's testpmd (as described in > > http://doc.dpdk.org/guides/howto/virtio_user_as_exceptional_path.html) > > and kernel pktgen. No latency tests were performed by me. Maybe it is > > interesting to perform a latency test or just a different set of tests > > over a recent version. > ...
2018 Sep 10
3
[virtio-dev] Re: [PATCH net-next v2 0/5] virtio: support packed ring
...> Are there still plans to test the performance with vhost pmd? > > > vhost doesn't seem to show a performance gain ... > > > > > > > I tried some performance tests with vhost PMD. In the guest, the > > XDP program will return XDP_DROP directly. And in the host, testpmd > > will do txonly fwd. > > > > When burst size is 1 and packet size is 64 in testpmd and > > testpmd needs to iterate 5 Tx queues (but only the first two > > queues are enabled) to prepare and inject packets, I got ~12% > > performance boost (5.7Mpps -> 6.4M...
2020 Jun 22
0
[PATCH RFC v8 02/11] vhost: use batched get_vq_desc version
...tency? > > > > > > > > > > > > > Hi Konrad. > > > > > > > > I tested this version of the patch: > > > > https://lkml.org/lkml/2019/10/13/42 > > > > > > > > It was tested for throughput with DPDK's testpmd (as described in > > > > http://doc.dpdk.org/guides/howto/virtio_user_as_exceptional_path.html) > > > > and kernel pktgen. No latency tests were performed by me. Maybe it is > > > > interesting to perform a latency test or just a different set of tests > >...
2018 Nov 15
3
[PATCH net-next 1/2] vhost_net: mitigate page reference counting during page frag refill
...mitigate a per-packet atomic operation by maintaining a reference bias which is initially USHRT_MAX. Each time a page is taken, instead of calling get_page() we decrease the bias, and when we find it's time to use a new page we drop the remaining bias in one go through __page_frag_cache_drain(). Testpmd (virtio_user + vhost_net) + XDP_DROP on TAP shows about a 1.6% improvement. Before: 4.63Mpps After: 4.71Mpps Signed-off-by: Jason Wang <jasowang at redhat.com> --- drivers/vhost/net.c | 54 ++++++++++++++++++++++++++++++++++++++++++--- 1 file changed, 51 insertions(+), 3 deletions(-) diff -...
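(A rough user-space analogue of the bias trick described in that commit message, not the kernel code itself: take a large number of references up front, pay only a plain counter decrement per fragment, and drop whatever is left of the bias in one atomic step when the page is retired.)

#include <limits.h>
#include <stdatomic.h>
#include <stddef.h>

/* Illustrative user-space analogue of the page-frag reference bias. */
struct frag_cache {
    char *buf;               /* backing buffer, stands in for a page */
    size_t size, offset;
    atomic_int refcount;     /* stands in for the struct page refcount */
    unsigned int bias;       /* local, non-atomic budget of references */
};

static void cache_refill(struct frag_cache *c)
{
    /* One atomic add of USHRT_MAX instead of one get_page() per fragment. */
    atomic_fetch_add(&c->refcount, USHRT_MAX);
    c->bias = USHRT_MAX;
    c->offset = 0;
}

static void *cache_alloc(struct frag_cache *c, size_t len)
{
    if (c->offset + len > c->size || !c->bias)
        return NULL;          /* caller retires the buffer and refills */
    c->bias--;                /* cheap: no atomic per fragment */
    void *p = c->buf + c->offset;
    c->offset += len;
    return p;
}

static void cache_retire(struct frag_cache *c)
{
    /* Drop all unused references in a single atomic operation, the way
     * __page_frag_cache_drain() does for real pages. */
    atomic_fetch_sub(&c->refcount, c->bias);
    c->bias = 0;
}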
2018 Feb 14
6
[PATCH RFC 0/2] Packed ring for vhost
...o-docs/blob/master/virtio-v1.1-packed-wd07.pdf . The code was tested with the pmd implementation by Jens at http://dpdk.org/ml/archives/dev/2018-January/089417.html. A minor change was needed in the pmd code to kick the virtqueue, since it assumes a busy-polling backend. Tests were done between localhost and guest. Testpmd (rxonly) in the guest reports 2.4Mpps. Testpmd (txonly) reports about 2.1Mpps. It's not a complete implementation; here's what is missing: - Device Area - Driver Area - Descriptor indirection - Zerocopy may not be functional - Migration path is not tested - Vhost devices except for net - vIOMMU...
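(For readers who don't have the spec at hand: in the virtio 1.1 packed layout each ring slot is a single 16-byte descriptor that driver and device update in place, instead of the split ring's separate descriptor table, avail ring and used ring. The slot layout below follows the spec; field names mirror the spec's.)

#include <stdint.h>

/* One slot of a virtio 1.1 packed ring (16 bytes, little-endian fields). */
struct vring_packed_desc {
    uint64_t addr;   /* buffer guest-physical address */
    uint32_t len;    /* buffer length in bytes */
    uint16_t id;     /* buffer id, echoed back by the device on use */
    uint16_t flags;  /* VRING_DESC_F_* plus the AVAIL/USED wrap-state bits */
};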
2018 Sep 11
2
[virtio-dev] Re: [PATCH net-next v2 0/5] virtio: support packed ring
...o test the performance with vhost pmd? > > > > > vhost doesn't seem to show a performance gain ... > > > > > > > > > I tried some performance tests with vhost PMD. In the guest, the > > > > XDP program will return XDP_DROP directly. And in the host, testpmd > > > > will do txonly fwd. > > > > > > > > When burst size is 1 and packet size is 64 in testpmd and > > > > testpmd needs to iterate 5 Tx queues (but only the first two > > > > queues are enabled) to prepare and inject packets, I got ~1...
2019 Sep 09
2
[RFC PATCH untested] vhost: block speculation of translated descriptors
...> + (node->userspace_addr + > > + array_index_nospec(addr - node->start, > > + node->size)); > > s += size; > > addr += size; > > ++ret; > > > I've tried this on Kaby Lake with SMAP off and metadata acceleration off, using > testpmd (virtio-user) + vhost_net. I don't see an obvious performance > difference in TX PPS. > > Thanks Should I push this to Linus right now then? It's a security thing so maybe we'd better do it ASAP ... what's your opinion? -- MST
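(The array_index_nospec() in the hunk above clamps the translated offset so that it cannot be speculated beyond node->size. A self-contained sketch of the underlying mask trick, modelled on the generic fallback in include/linux/nospec.h and simplified for user space:)

/* Returns all-ones when idx < size and 0 otherwise, without a branch
 * that the CPU could mispredict and speculate past. */
static inline unsigned long index_mask_nospec(unsigned long idx,
                                              unsigned long size)
{
    return ~(long)(idx | (size - 1UL - idx)) >> (sizeof(long) * 8 - 1);
}

/* Clamp idx to [0, size), even under speculative execution. */
static inline unsigned long index_nospec(unsigned long idx, unsigned long size)
{
    return idx & index_mask_nospec(idx, size);
}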
2018 Sep 07
0
[virtio-dev] Re: [PATCH net-next v2 0/5] virtio: support packed ring
...el S. Tsirkin wrote: > > Are there still plans to test the performance with vhost pmd? > > vhost doesn't seem to show a performance gain ... > > > > I tried some performance tests with vhost PMD. In the guest, the > XDP program will return XDP_DROP directly. And in the host, testpmd > will do txonly fwd. > > When burst size is 1 and packet size is 64 in testpmd and > testpmd needs to iterate 5 Tx queues (but only the first two > queues are enabled) to prepare and inject packets, I got ~12% > performance boost (5.7Mpps -> 6.4Mpps). And if the vhost PMD ...
2018 Feb 14
0
[PATCH RFC 0/2] Packed ring for vhost
...-packed-wd07.pdf > . The code was tested with the pmd implementation by Jens at > http://dpdk.org/ml/archives/dev/2018-January/089417.html. A minor > change was needed in the pmd code to kick the virtqueue, since it assumes a > busy-polling backend. > > Tests were done between localhost and guest. Testpmd (rxonly) in the guest > reports 2.4Mpps. Testpmd (txonly) reports about 2.1Mpps. How does this compare with the split ring design? > It's not a complete implementation; here's what is missing: > > - Device Area > - Driver Area > - Descriptor indirection > - Zerocopy may no...