search for: txon

Displaying 20 results from an estimated 41 matches for "txon".

2017 Mar 08
2
[virtio-dev] packed ring layout proposal - todo list
...already done that last Nov. I made a very rough (yet hacky) > > > version (only with the Tx path) in one day while accompanying my wife in > > > hospital. > > > > Any performance data? > > A straightforward implementation only brings a 10% performance boost in a > txonly micro benchmark. But I'm sure there is still plenty of room > for improvement. > > > > If someone is interested, I could share the code soon. I could > > > even clean up the code a bit if necessary. > > > > Especially if you don't have time to...
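For context on the threads above: the packed ring descriptor format this proposal converged on was later standardized in VIRTIO 1.1 and is carried in Linux's include/uapi/linux/virtio_ring.h. A minimal sketch of the settled layout follows (using stdint types for portability; the prototype discussed here predates the final spec and may have differed):

    /* Packed ring descriptor (VIRTIO 1.1). Descriptors are read and
     * written in place; per-descriptor AVAIL/USED flag bits, interpreted
     * against wrap counters, replace the split ring's separate avail
     * and used rings. */
    #include <stdint.h>

    #define VRING_PACKED_DESC_F_AVAIL 7   /* toggled by the driver per wrap */
    #define VRING_PACKED_DESC_F_USED  15  /* toggled by the device per wrap */

    struct vring_packed_desc {
            uint64_t addr;  /* buffer address (guest-physical) */
            uint32_t len;   /* buffer length */
            uint16_t id;    /* buffer id, echoed back by the device */
            uint16_t flags; /* NEXT/WRITE/INDIRECT plus AVAIL/USED bits */
    };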
2020 Jul 20
0
[PATCH RFC v8 02/11] vhost: use batched get_vq_desc version
...values I've obtained, in pps, from pktgen and testpmd. This way it is easy to plot them. > > Maybe it is easier as tables, if mail readers/gmail do not misalign them. > > > > # Tx > > > # === > > Base: With the previous code, not integrating any patch. testpmd is in txonly mode, the tap interface XDP_DROPs everything. > We vary VHOST_NET_BATCH (1, 16, 32, ...). As Jason put it in a previous mail: > > TX: testpmd(txonly) -> virtio-user -> vhost_net -> XDP_DROP on TAP > > > 1 | 16 | 32 | 64 | 128 | 2...
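The XDP_DROP sink on the TAP device used in these runs can be as small as the program below; a minimal sketch (assuming clang with BPF target support and libbpf's bpf_helpers.h), not necessarily the exact program from the thread:

    /* drop_all.c: drop every packet at the XDP hook, so the benchmark
     * measures only the TX path testpmd(txonly) -> virtio-user -> vhost_net.
     * Build:  clang -O2 -g -target bpf -c drop_all.c -o drop_all.o
     * Attach: ip link set dev tap0 xdp obj drop_all.o sec xdp
     */
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    SEC("xdp")
    int xdp_drop_all(struct xdp_md *ctx)
    {
            return XDP_DROP; /* sink everything */
    }

    char _license[] SEC("license") = "GPL";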
2020 Jul 09
0
[PATCH RFC v8 02/11] vhost: use batched get_vq_desc version
...> > > > Yes, both testpmds are using the AF_PACKET driver. > > > > > > I see, using AF_PACKET means extra layers of issues need to be analyzed, > > which is probably not good. > > > > > > > > > >>> with a > > >>> testpmd in txonly and another in rxonly forward mode, and using the > > >>> receiving side's packets/bytes data. The guest's rps, xps and interrupts, > > >>> and the host's vhost thread affinity were also tuned in each test to > > >>> schedule both testpmd and vhost in d...
2020 Jul 21
0
[PATCH RFC v8 02/11] vhost: use batched get_vq_desc version
...SV with the values I've obtained, in pps, from pktgen and testpmd. This way it is easy to plot them. > > Maybe it is easier as tables, if mail readers/gmail do not misalign them. > >>> # Tx >>> # === > Base: With the previous code, not integrating any patch. testpmd is in txonly mode, the tap interface XDP_DROPs everything. > We vary VHOST_NET_BATCH (1, 16, 32, ...). As Jason put it in a previous mail: > > TX: testpmd(txonly) -> virtio-user -> vhost_net -> XDP_DROP on TAP > > > 1 | 16 | 32 | 64 | 128 | 256...
2017 Mar 01
2
[virtio-dev] packed ring layout proposal - todo list
On Tue, Feb 28, 2017 at 12:29:43PM +0800, Yuanhan Liu wrote: > Hi Michael, > > Again, as usual, sorry for being late :/ > > On Wed, Feb 22, 2017 at 06:27:11AM +0200, Michael S. Tsirkin wrote: > > Stage 2: prototype guest/host drivers > > > > At this stage we need real guest and host drivers > > to be able to test real life performance. > > I suggest
2020 Jul 01
0
[PATCH RFC v8 02/11] vhost: use batched get_vq_desc version
...atching+buf api perform similarly in > both directions. > > All of the testpmd tests were performed with no linux bridge, just a > host tap interface (<interface type='ethernet'> in xml), What DPDK driver did you use in the test (AF_PACKET?). > with a > testpmd in txonly and another in rxonly forward mode, and using the > receiving side's packets/bytes data. The guest's rps, xps and interrupts, > and the host's vhost thread affinity were also tuned in each test to > schedule both testpmd and vhost in different processors. My feeling is that if we start...
2020 Jul 01
0
[PATCH RFC v8 02/11] vhost: use batched get_vq_desc version
...'> in xml), >> >> What DPDK driver did you use in the test (AF_PACKET?). >> > Yes, both testpmds are using the AF_PACKET driver. I see, using AF_PACKET means extra layers of issues need to be analyzed, which is probably not good. > >>> with a >>> testpmd in txonly and another in rxonly forward mode, and using the >>> receiving side's packets/bytes data. The guest's rps, xps and interrupts, >>> and the host's vhost thread affinity were also tuned in each test to >>> schedule both testpmd and vhost in different processors. >>...
2018 Feb 14
6
[PATCH RFC 0/2] Packed ring for vhost
...code was tested with the pmd implemented by Jens at http://dpdk.org/ml/archives/dev/2018-January/089417.html. A minor change was needed in the pmd code to kick the virtqueue, since it assumes a busy-polling backend. Tests were done between localhost and guest. Testpmd (rxonly) in the guest reports 2.4Mpps. Testpmd (txonly) reports about 2.1Mpps. It's not a complete implementation; here's what is missing: - Device Area - Driver Area - Descriptor indirection - Zerocopy may not be functional - Migration path is not tested - Vhost devices except for net - vIOMMU cannot work (mainly because the metadata prefet...
2018 Sep 07
2
[virtio-dev] Re: [PATCH net-next v2 0/5] virtio: support packed ring
...M +0300, Michael S. Tsirkin wrote: > Are there still plans to test the performance with vhost pmd? > vhost doesn't seem to show a performance gain ... > I tried some performance tests with vhost PMD. In the guest, the XDP program returns XDP_DROP directly. And in the host, testpmd does txonly fwd. When the burst size is 1 and the packet size is 64 in testpmd, and testpmd needs to iterate 5 Tx queues (but only the first two queues are enabled) to prepare and inject packets, I got a ~12% performance boost (5.7Mpps -> 6.4Mpps). And if the vhost PMD is faster (e.g. just need to iterate the first...
2017 Mar 08
0
[virtio-dev] packed ring layout proposal - todo list
...> > > > > I have already done that last Nov. I made a very rough (yet hacky) > > version (only with the Tx path) in one day while accompanying my wife in > > hospital. > > Any performance data? A straightforward implementation only brings a 10% performance boost in a txonly micro benchmark. But I'm sure there is still plenty of room for improvement. > > If someone is interested, I could share the code soon. I could > > even clean up the code a bit if necessary. > > Especially if you don't have time to benchmark, I think sharing it...
2017 Mar 29
0
[virtio-dev] packed ring layout proposal - todo list
...de a very rough (yet hacky) > > > > version (only with the Tx path) in one day while accompanying my wife in > > > > hospital. > > > > > > Any performance data? > > > > A straightforward implementation only brings a 10% performance boost in a > > txonly micro benchmark. But I'm sure there is still plenty of room > > for improvement. > > > > > > If someone is interested, I could share the code soon. I could > > > > even clean up the code a bit if necessary. > > > > > > Especially...
2018 Feb 14
0
[PATCH RFC 0/2] Packed ring for vhost
...lement by Jens at > http://dpdk.org/ml/archives/dev/2018-January/089417.html. A minor > change was needed in the pmd code to kick the virtqueue, since it assumes a > busy-polling backend. > > Tests were done between localhost and guest. Testpmd (rxonly) in the guest > reports 2.4Mpps. Testpmd (txonly) reports about 2.1Mpps. How does this compare with the split ring design? > It's not a complete implementation; here's what is missing: > > - Device Area > - Driver Area > - Descriptor indirection > - Zerocopy may not be functional > - Migration path is not tested > ...
2018 Mar 26
0
[RFC PATCH V2 0/8] Packed ring for vhost
...lement by Jens at > http://dpdk.org/ml/archives/dev/2018-January/089417.html. A minor change > was needed in the pmd code to kick the virtqueue, since it assumes a > busy-polling backend. > > Tests were done between localhost and guest. Testpmd (rxonly) in the guest > reports 2.4Mpps. Testpmd (txonly) reports about 2.1Mpps. And how does it compare to the older ring layout? > > Notes: The event suppression / indirect descriptor support is compile-tested > only because of lacking driver support. > > Changes from V1: > > - Refactor vhost used elem code to avoid open codi...
2018 Dec 29
0
[RFC PATCH V3 1/5] vhost: generalize adding used elem
Use one generic vhost_copy_to_user() instead of two dedicated accessors. This will simplify the conversion to fine-grained accessors. About a 2% improvement in PPS was seen during a virtio-user txonly test. Signed-off-by: Jason Wang <jasowang at redhat.com> --- drivers/vhost/vhost.c | 11 +---------- 1 file changed, 1 insertion(+), 10 deletions(-) diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c index 55e5aa662ad5..f179b5ee14c4 100644 --- a/drivers/vhost/vhost.c +++ b/driver...
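For orientation (the diff itself is truncated above), here is a hedged sketch of the idea rather than the actual patch: every used-ring store goes through the single generic accessor, which already dispatches between the plain copy_to_user() path and the IOTLB-mediated path, so callers no longer need two dedicated helpers. The wrapper name below is hypothetical:

    /* Kernel context assumed (drivers/vhost/vhost.c); put_used_elems()
     * is an illustrative name, not from the patch. */
    static inline int put_used_elems(struct vhost_virtqueue *vq,
                                     struct vring_used_elem __user *dst,
                                     struct vring_used_elem *src,
                                     unsigned int count)
    {
            /* vhost_copy_to_user() handles both the direct-userspace
             * and the IOTLB/meta-access cases internally. */
            return vhost_copy_to_user(vq, dst, src, count * sizeof(*src));
    }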
2019 Jan 07
0
[RFC PATCH V3 1/5] vhost: generalize adding used elem
...On Sat, Dec 29, 2018 at 08:46:52PM +0800, Jason Wang wrote: >>> Use one generic vhost_copy_to_user() instead of two dedicated >>> accessors. This will simplify the conversion to fine-grained >>> accessors. About a 2% improvement in PPS was seen during a virtio-user >>> txonly test. >>> >>> Signed-off-by: Jason Wang <jasowang at redhat.com> >> I don't have a problem with this patch but do you have >> any idea how come removing what's supposed to be >> an optimization speeds things up? > With SMAP, the 2x vhost_put_use...
2018 Sep 10
3
[virtio-dev] Re: [PATCH net-next v2 0/5] virtio: support packed ring
...s to test the performance with vhost pmd? > > > vhost doesn't seem to show a performance gain ... > > > > > > I tried some performance tests with vhost PMD. In the guest, the > > XDP program returns XDP_DROP directly. And in the host, testpmd > > does txonly fwd. > > > > When the burst size is 1 and the packet size is 64 in testpmd, and > > testpmd needs to iterate 5 Tx queues (but only the first two > > queues are enabled) to prepare and inject packets, I got a ~12% > > performance boost (5.7Mpps -> 6.4Mpps). And if the vhost...
2020 Jun 11
27
[PATCH RFC v8 00/11] vhost: ring format independence
This still causes corruption issues for people, so please don't try to use it in production. Posting to expedite debugging. This adds the infrastructure required for supporting multiple ring formats. The idea is as follows: we convert descriptors to an independent format first, and convert that to an iov later. The used ring is similar: we fetch into an independent struct first, convert that to
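To make the idea concrete, a hypothetical illustration of such an independent descriptor format (the field set is an assumption for illustration, not the struct the series actually introduces):

    /* Format-independent view of one fetched descriptor: both split and
     * packed ring descriptors would be normalized into this shape first,
     * and only later translated into an iovec for actual I/O. */
    #include <stdint.h>

    struct fetched_desc {
            uint64_t addr;  /* buffer address as given by the guest */
            uint32_t len;   /* buffer length */
            uint16_t flags; /* normalized write/next/indirect flags */
            uint16_t id;    /* buffer id (packed) or head index (split) */
    };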