similar to: [PATCH 0/6] vhost code cleanup and minor enhancement

Displaying results from an estimated 6000 matches similar to: "[PATCH 0/6] vhost code cleanup and minor enhancement"

2013 Aug 30
12
[PATCH V2 0/6] vhost code cleanup and minor enhancement
Hi all: This series tries to unify and simplify the vhost code, especially for zerocopy. With this series, a 5% - 10% improvement in per-CPU throughput was seen during a netperf guest-sending test. Please review. Changes from V1: - Fix the zerocopy enabling check by changing the check of upend_idx != done_idx to (upend_idx + 1) % UIO_MAXIOV == done_idx. - Switch to use put_user() in
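For readers skimming the changelog, the corrected check is easy to misread. Below is a minimal compilable sketch of the condition, assuming the vhost-net convention that upend_idx/done_idx are producer/consumer indices into a UIO_MAXIOV-sized pending-DMA ring; the helper name is invented for illustration.

#include <stdbool.h>

#define UIO_MAXIOV 1024   /* size of the pending-DMA ring in vhost */

/*
 * Hypothetical helper illustrating the V1 -> V2 change described above.
 * upend_idx is where the next in-flight zerocopy DMA is recorded;
 * done_idx is the oldest entry whose DMA has completed. The old test
 * (upend_idx != done_idx) fires as soon as a single DMA is pending,
 * while the corrected test only fires when advancing upend_idx would
 * wrap into done_idx, i.e. when the ring is genuinely full.
 */
static bool zerocopy_ring_full(unsigned int upend_idx, unsigned int done_idx)
{
    return (upend_idx + 1) % UIO_MAXIOV == done_idx;
}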
2013 Sep 02
8
[PATCH V3 0/6] vhost code cleanup and minor enhancement
This series tries to unify and simplify the vhost code, especially for zerocopy. With this series, a 5% - 10% improvement in per-CPU throughput was seen during a netperf guest-sending test. Please review. Changes from V2: - Typo and code-style fixes - Add the performance gain to the commit log of patch 2/6 - Retest and update the result in patch 6/6 Changes from V1: - Fix the zerocopy enabling check
2013 Sep 04
2
[PATCH V3 4/6] vhost_net: determine whether or not to use zerocopy at one time
On Mon, Sep 02, 2013 at 04:40:59PM +0800, Jason Wang wrote: > Currently, even if the packet length is smaller than VHOST_GOODCOPY_LEN, if > upend_idx != done_idx we still set zcopy_used to true and roll back this choice > later. This could be avoided by determining zerocopy once by checking all > conditions at one time before. > > Signed-off-by: Jason Wang <jasowang at
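The "decide once" structure the commit message describes reduces to a single boolean expression. A sketch under stated assumptions: the field names mirror drivers/vhost/net.c of that era, but the struct and helper here are simplified stand-ins, not the actual driver code.

#include <stdbool.h>
#include <stddef.h>

#define UIO_MAXIOV         1024
#define VHOST_GOODCOPY_LEN 256   /* below this, copying beats pinning pages */

struct zc_ring {                  /* stand-in for struct vhost_net_virtqueue */
    unsigned int upend_idx;       /* next slot for a pending zerocopy DMA   */
    unsigned int done_idx;        /* oldest slot whose DMA has completed    */
};

/*
 * Evaluate every zerocopy precondition up front, before the descriptor
 * is built, so short packets never take the zerocopy path only to be
 * rolled back later: the packet must be long enough, the pending ring
 * must have a free slot, and the success-rate heuristic must allow it.
 */
static bool use_zerocopy(const struct zc_ring *zc, size_t len, bool select_ok)
{
    return len >= VHOST_GOODCOPY_LEN
        && (zc->upend_idx + 1) % UIO_MAXIOV != zc->done_idx
        && select_ok;
}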
2013 Aug 30
1
[PATCH V2 4/6] vhost_net: determine whether or not to use zerocopy at one time
Hello. On 08/30/2013 08:29 AM, Jason Wang wrote: > Currently, even if the packet length is smaller than VHOST_GOODCOPY_LEN, if > upend_idx != done_idx we still set zcopy_used to true and roll back this choice > later. This could be avoided by determining zerocopy once by checking all > conditions at one time before. > Signed-off-by: Jason Wang <jasowang at redhat.com> > ---
2017 Sep 22
17
[PATCH net-next RFC 0/5] batched tx processing in vhost_net
Hi: This series tries to implement basic batched tx processing. This is done by prefetching descriptor indices and updating the used ring in a batch, which is intended to speed up used-ring updates and improve cache utilization. Tests show about a ~22% improvement in tx pps. Please review. Jason Wang (5): vhost: split out ring head fetching logic vhost: introduce helper to prefetch desc index
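A toy model of the batching idea: gather a run of available heads first, then publish the used entries with one index update. Everything here (ring sizes, names, the ring struct itself) is invented for illustration; it follows the split-ring convention rather than the series' real helpers.

#include <stdint.h>

#define RING_SIZE 256
#define BATCH      64

struct used_elem { uint32_t id; uint32_t len; };

struct tx_ring {
    uint16_t avail_idx;              /* producer index (guest side)   */
    uint16_t last_avail;             /* consumer index (device side)  */
    uint16_t used_idx;               /* published used index          */
    uint16_t avail[RING_SIZE];
    struct used_elem used[RING_SIZE];
};

/*
 * Prefetch up to BATCH descriptor heads in one pass, process them, then
 * bump the used index once. One store (plus, in kernel code, one write
 * barrier) covers the whole batch instead of one per packet, which is
 * where the cache-utilization win comes from.
 */
static void tx_batch(struct tx_ring *r)
{
    uint16_t heads[BATCH];
    int n = 0;

    while (n < BATCH && r->last_avail != r->avail_idx)
        heads[n++] = r->avail[r->last_avail++ % RING_SIZE];

    for (int i = 0; i < n; i++) {
        /* ... transmit the buffer chain rooted at heads[i] ... */
        r->used[(r->used_idx + i) % RING_SIZE].id  = heads[i];
        r->used[(r->used_idx + i) % RING_SIZE].len = 0;
    }

    /* smp_wmb() would sit here in the real driver */
    r->used_idx += n;                /* single batched publish */
}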
2020 Jun 03
1
[PATCH RFC 08/13] vhost/net: convert to new API: heads->bufs
On 2020/6/2 9:06 PM, Michael S. Tsirkin wrote: > Convert vhost net to use the new format-agnostic API. > In particular, don't poke at vq internals such as the > heads array. > > Signed-off-by: Michael S. Tsirkin <mst at redhat.com> > --- > drivers/vhost/net.c | 153 +++++++++++++++++++++++--------------------- > 1 file changed, 81 insertions(+), 72 deletions(-)
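To make the direction of the conversion concrete: instead of the driver indexing vq->heads[] directly, the core would hand it opaque buffer handles. The struct below is only a guess at the RFC's shape; every field and name here is an assumption for illustration, not the posted API.

#include <stdint.h>

/*
 * Hypothetical format-agnostic buffer handle. The net driver would get
 * one of these from the vhost core per descriptor chain and pass it
 * back on completion, never touching the vq's internal heads array, so
 * the core is free to back it with either a split or a packed ring.
 */
struct vhost_buf {
    uint32_t id;        /* descriptor id to return to the guest        */
    uint32_t in_len;    /* bytes written to device-writable buffers    */
    uint16_t descs;     /* descriptors consumed (packed-ring tracking) */
};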
2012 Nov 01
9
[PATCHv3 net-next 0/8] enable/disable zero copy tx dynamically
tun has supported zero copy transmit since 0690899b4d4501b3505be069b9a687e68ccbe15b; however, you can only enable this mode if you know your workload does not trigger heavy guest-to-host/host-to-guest traffic - otherwise you get a (minor) performance regression. This patchset addresses this problem by notifying the owner device when the callback is invoked because of a data copy. This makes it possible to
2012 Oct 31
8
[PATCHv2 net-next 0/8] enable/disable zero copy tx dynamically
tun has supported zero copy transmit since 0690899b4d4501b3505be069b9a687e68ccbe15b; however, you can only enable this mode if you know your workload does not trigger heavy guest-to-host/host-to-guest traffic - otherwise you get a (minor) performance regression. This patchset addresses this problem by notifying the owner device when the callback is invoked because of a data copy. This makes it possible to
2012 Oct 29
9
[PATCH net-next 0/8] enable/disable zero copy tx dynamically
tun has supported zero copy transmit since 0690899b4d4501b3505be069b9a687e68ccbe15b; however, you can only enable this mode if you know your workload does not trigger heavy guest-to-host/host-to-guest traffic - otherwise you get a (minor) performance regression. This patchset addresses this problem by notifying the owner device when the callback is invoked because of a data copy. This makes it possible to
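The feedback loop these three revisions describe - the completion callback reporting whether the data really went out zerocopy, and the sender disabling the mode when copies dominate - eventually surfaced in vhost-net in roughly this form. A compilable sketch with simplified names; the real driver keeps the counters in struct vhost_net and the 1/64 window in vhost_net_tx_select_zcopy().

#include <stdbool.h>

struct zcopy_stats {
    unsigned long tx_packets;    /* packets sent in the current window  */
    unsigned long tx_zcopy_err;  /* completions that fell back to copy  */
};

/*
 * Completion callback: the lower device (e.g. tun) reports whether the
 * pages were transmitted zerocopy (success) or had to be copied first.
 */
static void zcopy_complete(struct zcopy_stats *s, bool success)
{
    if (!success)
        s->tx_zcopy_err++;
}

/*
 * Decide whether the next packet should even try zerocopy: keep the
 * mode on only while copy fallbacks stay under ~1/64 of recent traffic,
 * so copy-heavy workloads dynamically drop back to plain data copy.
 */
static bool select_zcopy(const struct zcopy_stats *s)
{
    return s->tx_packets / 64 >= s->tx_zcopy_err;
}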
2013 Sep 23
2
[PATCH V3 4/6] vhost_net: determine whether or not to use zerocopy at one time
On Thu, Sep 05, 2013 at 10:54:44AM +0800, Jason Wang wrote: > On 09/04/2013 07:59 PM, Michael S. Tsirkin wrote: > > On Mon, Sep 02, 2013 at 04:40:59PM +0800, Jason Wang wrote: > >> Currently, even if the packet length is smaller than VHOST_GOODCOPY_LEN, if > >> upend_idx != done_idx we still set zcopy_used to true and rollback this choice > >> later. This could
2014 Feb 25
2
[PATCH net] vhost: net: switch to use data copy if pending DMAs exceed the limit
We used to stop handling tx when the number of pending DMAs exceeded VHOST_MAX_PEND. This was done to reduce the memory occupation of both host and guest. But it was too aggressive in some cases, since any delay or blocking of a single packet may delay or block the whole guest transmission. Consider the following setup: [flattened ASCII diagram of two VMs, VM1 and VM2; excerpt truncated]
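The fix the subject line points at is to keep the tx loop running and fall back to ordinary data copy once too many DMAs are outstanding, instead of stopping tx handling altogether. A sketch under the same ring-index assumptions as the earlier entries; VHOST_MAX_PEND mirrors the driver's constant, but the helpers are invented.

#include <stdbool.h>

#define UIO_MAXIOV     1024
#define VHOST_MAX_PEND 128    /* cap on outstanding zerocopy DMAs */

/* Number of zerocopy DMAs submitted but not yet completed. */
static unsigned int dmas_pending(unsigned int upend_idx, unsigned int done_idx)
{
    return (upend_idx + UIO_MAXIOV - done_idx) % UIO_MAXIOV;
}

/*
 * Old behaviour: stop handling tx entirely above the limit, so one slow
 * consumer (VM2 in the setup above) could stall VM1's whole queue.
 * Sketched new behaviour: stay in the tx loop and merely route the next
 * packet through the data-copy path while the backlog is high.
 */
static bool zerocopy_allowed(unsigned int upend_idx, unsigned int done_idx)
{
    return dmas_pending(upend_idx, done_idx) < VHOST_MAX_PEND;
}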