Threads similar to: [PATCH V2 0/2] vhost_net polling optimization


2016 May 30
1
[PATCH V2 1/2] vhost_net: stop polling socket during rx processing
On Mon, May 30, 2016 at 02:47:53AM -0400, Jason Wang wrote: > We don't stop polling the socket during rx processing, which leads to > unnecessary wakeups from the underlying net devices (e.g. > sock_def_readable() from tun). Rx is slowed down this way. This patch > avoids that by stopping socket polling during rx > processing. A small drawback is that this introduces some
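
(For orientation, the shape of the change is roughly the following. This is a simplified sketch of the handle_rx() path in drivers/vhost/net.c, not the actual patch; the receive loop and its exact exit conditions are elided.)

static void handle_rx(struct vhost_net *net)
{
	struct vhost_virtqueue *vq = &net->vqs[VHOST_NET_VQ_RX].vq;

	mutex_lock(&vq->mutex);
	/* Stop watching the socket's waitqueue: we are processing already,
	 * so further sock_def_readable() wakeups would be redundant. */
	vhost_net_disable_vq(net, vq);

	/* ... receive loop: move packets from the socket into guest
	 * buffers until the socket drains or we run out of buffers ... */

	/* Resume socket polling only when we stop processing, so the
	 * next incoming packet wakes us up again. */
	vhost_net_enable_vq(net, vq);
	mutex_unlock(&vq->mutex);
}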
2016 Jun 01
7
[PATCH V3 0/2] vhost_net polling optimization
Hi: This series tries to optimize vhost_net polling at two points: - Stop rx polling to reduce unnecessary wakeups during handle_rx(). - Conditionally enable tx polling to reduce unnecessary waitqueue traversal and spinlock touching. Tests show about a 17% improvement in rx pps. Please review. Changes from V2: - Don't enable the rx vq if we meet an error or the rx vq is empty Changes from V1:
2018 Jul 03
11
[PATCH v2 net-next 0/4] vhost_net: Avoid vq kicks during busyloop
Under heavy load vhost tx busypoll tends not to suppress vq kicks, which causes poor guest tx performance. The detailed scenario is described in the commit log of patch 2. Rx does not seem to have as serious a problem, but for consistency I made a similar change on rx to avoid rx wakeups (patch 3). An additional patch 4 avoids rx kicks under heavy load during busypoll. Tx performance is greatly improved
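
(The kick-avoidance idea reduces to not re-enabling notification while the busyloop still has, or has just found, work. A rough sketch using generic vhost helpers; the variable names and the surrounding busyloop are assumptions, not the patched code:)

	/* After busylooping on the tx avail ring from the rx side: */
	if (!vhost_vq_avail_empty(&net->dev, tvq)) {
		/* New tx work arrived while busylooping: queue it directly
		 * and leave guest notifications (kicks) disabled. */
		vhost_poll_queue(&tvq->poll);
	} else if (unlikely(vhost_enable_notify(&net->dev, tvq))) {
		/* A buffer raced with re-enabling notification: disable
		 * again and queue the work instead of waiting for a kick. */
		vhost_disable_notify(&net->dev, tvq);
		vhost_poll_queue(&tvq->poll);
	}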
2016 May 30
1
[PATCH V2 2/2] vhost_net: conditionally enable tx polling
On Mon, May 30, 2016 at 02:47:54AM -0400, Jason Wang wrote: > We always poll the socket for tx; this is suboptimal since: > > - it will only be used when we exceed the sndbuf of the socket. > - since we use two independent polls for tx and vq, this will slightly > increase the waitqueue traversal time and, more important, vhost > could not benefit from commit >
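
(Mechanically, tx socket polling is left off by default and armed only when sendmsg() hits the socket's sndbuf limit. A simplified sketch of the handle_tx() change, with descriptor setup and most error handling elided:)

	vhost_net_disable_vq(net, vq);	/* polling off unless needed */

	for (;;) {
		/* ... fetch a descriptor and build the message ... */
		err = sock->ops->sendmsg(sock, &msg, len);
		if (unlikely(err < 0)) {
			/* sndbuf full: put the descriptor back and arm
			 * socket polling so EPOLLOUT restarts handle_tx(). */
			vhost_discard_vq_desc(vq, 1);
			vhost_net_enable_vq(net, vq);
			break;
		}
		/* ... add the used descriptor and signal the guest ... */
	}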
2018 Jul 20
12
[PATCH net-next 0/9] TX used ring batched updating for vhost
Hi: This series implements batched updating of the used ring for TX. This helps to reduce cache contention on the used ring. The idea is to first split the datacopy path from zerocopy, and do batching only for datacopy, since zerocopy already supports its own batching. TX PPS increased by 25.8% and Netperf TCP does not show obvious differences. The split of the datapath will also be helpful for
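
(The batching idea: accumulate used heads and flush them with one vhost_add_used_and_signal_n() call instead of signalling per packet. A sketch; VHOST_NET_BATCH, done[] and done_idx are illustrative names, not necessarily those used in the series:)

#define VHOST_NET_BATCH 64

	struct vring_used_elem done[VHOST_NET_BATCH];
	int done_idx = 0;

	/* In the datacopy tx loop, after a successful transmit: */
	done[done_idx].id = cpu_to_vhost32(vq, head);
	done[done_idx].len = 0;
	if (++done_idx == VHOST_NET_BATCH) {
		vhost_add_used_and_signal_n(&net->dev, vq, done, done_idx);
		done_idx = 0;
	}

	/* When the loop exits, flush whatever remains in the batch: */
	if (done_idx)
		vhost_add_used_and_signal_n(&net->dev, vq, done, done_idx);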
2017 Oct 31
2
[PATCH net-next] vhost_net: conditionally enable tx polling
We always poll the socket for tx; this is suboptimal since: - we only want to be notified when sndbuf is available - this will slightly increase the waitqueue traversal time and, more importantly, vhost could not benefit from commit 9e641bdcfa4e ("net-tun: restructure tun_do_read for better sleep/wakeup efficiency") even if we've stopped rx polling during handle_rx(), since
2018 Jun 29
5
[PATCH vhost] vhost_net: Fix too many vring kick on busypoll
Under heavy load vhost busypoll may run without suppressing notification. For example, the tx zerocopy callback can push tx work while handle_tx() is running; the busyloop then exits due to the vhost_has_work() condition and enables notification, but immediately re-enters handle_tx() because the pushed work was tx. In this case handle_tx() tries to disable notification again, but when using event_idx it by
2017 Nov 01
2
[PATCH net-next] vhost_net: conditionally enable tx polling
On 2017/11/01 00:36, Michael S. Tsirkin wrote: > On Tue, Oct 31, 2017 at 06:27:20PM +0800, Jason Wang wrote: >> We always poll the socket for tx; this is suboptimal since: >> >> - we only want to be notified when sndbuf is available >> - this will slightly increase the waitqueue traversal time and, more >> important, vhost could not benefit from commit >>
2014 Aug 15
2
[PATCH net-next] vhost_net: stop rx net polling when possible
After the rx vq is enabled, we never stop polling its socket. This is suboptimal, as it may lead to unnecessary wake-ups after the rx net work has already been queued. This can be optimized by stopping polling of the rx net sock while processing both rx and tx, and restarting it afterward. This saves unnecessary wake-ups and even unnecessary spinlock acquisitions, with the help of commit
2012 Dec 27
3
[PATCH 1/2] vhost_net: correct error handling in vhost_net_set_backend()
Fix leaking oldubufs and the fd refcount when we fail to initialize the used ring. Signed-off-by: Jason Wang <jasowang at redhat.com> --- drivers/vhost/net.c | 14 +++++++++++--- 1 file changed, 11 insertions(+), 3 deletions(-) diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c index ebd08b2..629d6b5 100644 --- a/drivers/vhost/net.c +++ b/drivers/vhost/net.c @@ -834,8 +834,10 @@ static
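
(The fix follows the usual kernel goto-unwind pattern: on used-ring init failure, restore the old backend and drop the references that were previously leaked. A condensed paraphrase of the diff under the vhost API names of that era, not a verbatim quote:)

	r = vhost_init_used(vq);
	if (r)
		goto err_used;
	/* ... rest of the success path; returns before the labels ... */

err_used:
	rcu_assign_pointer(vq->private_data, oldsock);
	vhost_net_enable_vq(n, vq);
	if (ubufs)
		vhost_ubuf_put_and_wait(ubufs);	/* drop the leaked ubuf state */
	fput(sock->file);			/* drop the leaked fd refcount */
	/* ... remaining unwind labels elided ... */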
2013 May 07
5
[PATCH 0/4] vhost private_data rcu removal
Asias He (4): vhost-net: Always access vq->private_data under vq mutex vhost-test: Always access vq->private_data under vq mutex vhost-scsi: Always access vq->private_data under vq mutex vhost: Remove custom vhost rcu usage drivers/vhost/net.c | 37 ++++++++++++++++--------------------- drivers/vhost/scsi.c | 17 ++++++----------- drivers/vhost/test.c | 20
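
(The per-site change is small; an illustrative before/after based on the series description:)

	/* Before: private_data was an RCU pointer, checked against the
	 * vq mutex that writers held anyway. */
	sock = rcu_dereference_check(vq->private_data,
				     lockdep_is_held(&vq->mutex));

	/* After: every reader already holds vq->mutex, so the custom RCU
	 * usage is unnecessary and a plain load suffices. */
	sock = vq->private_data;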