Results similar to: "[PATCH V3 0/3] basic busy polling support for vhost_net"
2018 Jun 29
5
[PATCH vhost] vhost_net: Fix too many vring kick on busypoll
Under heavy load vhost busypoll may run without suppressing
notification. For example, the tx zerocopy callback can push tx work while
handle_tx() is running; the busyloop then exits due to the vhost_has_work()
condition and enables notification, but immediately reenters handle_tx()
because the pushed work was tx. In this case handle_tx() tries to
disable notification again, but when using event_idx it by
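
The scenario above can be modeled outside the kernel. Below is a minimal
user-space C sketch, not the actual vhost code: vhost_has_work() and the
notification toggles are stand-in stubs here, and a spin bound replaces the
real timeout. It only illustrates how an interruption by pending tx work
makes the pre-fix loop enable notification and then disable it again
immediately, opening a window for a needless guest kick.

  #include <stdbool.h>
  #include <stdio.h>

  static int pending_tx_work;       /* work pushed e.g. by the zerocopy callback */

  static bool vhost_has_work(void) { return pending_tx_work > 0; }
  static void enable_notification(void)  { puts("  notification enabled -> guest may kick"); }
  static void disable_notification(void) { puts("  notification disabled"); }

  static void handle_tx(void)
  {
      puts("handle_tx()");
      disable_notification();
      for (int spins = 0; spins < 1000; spins++)  /* bounded busyloop */
          if (vhost_has_work())
              break;                /* interrupted, not out of work to poll for */
      enable_notification();        /* pre-fix: unconditional on any exit */
  }

  int main(void)
  {
      handle_tx();                  /* runs the full "timeout", then enables notification */
      pending_tx_work = 1;          /* model: zerocopy callback queues tx work */
      handle_tx();                  /* reentered at once; toggles notification again */
      return 0;
  }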
2018 Jul 03
11
[PATCH v2 net-next 0/4] vhost_net: Avoid vq kicks during busyloop
Under heavy load vhost tx busypoll tends not to suppress vq kicks, which
causes poor guest tx performance. The detailed scenario is described in
the commitlog of patch 2.
Rx seems not to have such a serious problem, but for consistency I made a
similar change on rx to avoid rx wakeups (patch 3).
Additionally, patch 4 avoids rx kicks under heavy load during
busypoll.
Tx performance is greatly improved
2015 Dec 01
5
[PATCH V2 0/3] basic busy polling support for vhost_net
Hi all:
This series tries to add basic busy polling for vhost net. The idea is
simple: at the end of tx/rx processing, busy poll for newly added tx
descriptors and on the rx receive socket for a while. The maximum amount
of time (in us) that may be spent busy polling is specified via an ioctl
(see the sketch after this entry).
Test A was done with:
- 50 us as the busy loop timeout
- Netperf 2.6
- Two machines connected back to back
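
For reference, the knob described here was eventually merged upstream as
VHOST_SET_VRING_BUSYLOOP_TIMEOUT. A minimal user-space sketch of setting the
50 us timeout used in Test A (it assumes /dev/vhost-net is available, the
caller has the needed privileges, and queue index 0 is the vring of
interest):

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <unistd.h>
  #include <linux/vhost.h>

  int main(void)
  {
      int fd = open("/dev/vhost-net", O_RDWR);
      if (fd < 0) { perror("open /dev/vhost-net"); return 1; }

      /* The device needs an owner before per-vring ioctls are accepted. */
      if (ioctl(fd, VHOST_SET_OWNER) < 0) { perror("VHOST_SET_OWNER"); return 1; }

      struct vhost_vring_state s = {
          .index = 0,   /* assumed: first virtqueue of the device */
          .num   = 50,  /* busy poll for at most 50 us, as in Test A */
      };
      if (ioctl(fd, VHOST_SET_VRING_BUSYLOOP_TIMEOUT, &s) < 0)
          perror("VHOST_SET_VRING_BUSYLOOP_TIMEOUT");

      close(fd);
      return 0;
  }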
2016 Jan 20
3
[PATCH V2 3/3] vhost_net: basic polling support
On Tue, Dec 01, 2015 at 02:39:45PM +0800, Jason Wang wrote:
> This patch tries to poll for newly added tx buffers or the socket receive
> queue for a while at the end of tx/rx processing. The maximum time
> spent on polling is specified through a new kind of vring ioctl.
>
> Signed-off-by: Jason Wang <jasowang at redhat.com>
> ---
> drivers/vhost/net.c | 72
2016 Mar 04
6
[PATCH V4 0/3] basic busy polling support for vhost_net
This series tries to add basic busy polling for vhost net. The idea is
simple: at the end of tx/rx processing, busy poll for newly added tx
descriptors and on the rx receive socket for a while. The maximum amount
of time (in us) that may be spent busy polling is specified via an ioctl.
Test A was done with:
- 50 us as the busy loop timeout
- Netperf 2.6
- Two machines connected back to back with mlx4
- Guest
2018 Aug 01
5
[PATCH net-next v7 0/4] net: vhost: improve performance when enable busyloop
From: Tonghao Zhang <xiangxia.m.yue at gmail.com>
These patches improve the guest receive performance.
On the handle_tx side, we poll the sock receive queue
at the same time; handle_rx does that in the same way (the combined
loop is sketched after this entry).
For a more detailed performance report, see patch 4.
v6->v7:
fix issues and rebase the code:
1. on tx, busypoll will vhost_net_disable/enable_vq the rx vq.
[This is suggested by Toshiaki Makita
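
The shape of the combined busyloop these patches describe can be sketched in
user space. Everything below is a hypothetical stand-in for the vhost
internals (tx_ring_has_desc(), sock_has_data() and the one-shot handlers do
not exist in the kernel); the point is only that the tx-side loop also peeks
at the rx socket, so rx traffic is serviced without waiting for a wakeup:

  #include <stdbool.h>
  #include <stdio.h>

  /* Hypothetical stand-ins for the vhost-side state and handlers. */
  static bool tx_ring_has_desc(void) { static int n = 3; return n-- > 0; }
  static bool sock_has_data(void)    { static int n = 2; return n-- > 0; }
  static void handle_tx_once(void)   { puts("tx: send one batch"); }
  static void handle_rx_once(void)   { puts("rx: receive one batch"); }

  static void busyloop(void)
  {
      /* The spin bound stands in for the real microsecond timeout. */
      for (int spins = 0; spins < 1000; spins++) {
          if (tx_ring_has_desc())
              handle_tx_once();
          /* Key idea of the series: poll the rx socket from the same
             loop instead of sleeping until its wakeup callback fires. */
          if (sock_has_data())
              handle_rx_once();
      }
  }

  int main(void) { busyloop(); return 0; }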
2018 Jul 02
5
[PATCH net-next v4 0/4] net: vhost: improve performance when enable busyloop
From: Tonghao Zhang <xiangxia.m.yue at gmail.com>
These patches improve the guest receive and transmit performance.
On the handle_tx side, we poll the sock receive queue at the same time.
handle_rx does that in the same way.
For a more detailed performance report, see patch 4.
v3 -> v4:
fix some issues
v2 -> v3:
These patches are split from a previous big patch:
2018 Jun 30
9
[PATCH net-next v3 0/4] net: vhost: improve performance when enable busyloop
From: Tonghao Zhang <xiangxia.m.yue at gmail.com>
These patches improve the guest receive and transmit performance.
On the handle_tx side, we poll the sock receive queue at the same time.
handle_rx does that in the same way.
These patches are split from a previous big patch:
http://patchwork.ozlabs.org/patch/934673/
For a more detailed performance report, see patch 4.
Tonghao Zhang (4):
net: vhost:
2018 Jul 21
7
[PATCH net-next v6 0/4] net: vhost: improve performance when enable busyloop
From: Tonghao Zhang <xiangxia.m.yue at gmail.com>
These patches improve the guest receive performance.
On the handle_tx side, we poll the sock receive queue
at the same time. handle_rx does that in the same way.
For a more detailed performance report, see patch 4.
v5->v6:
rebase the code.
Tonghao Zhang (4):
net: vhost: lock the vqs one by one
net: vhost: replace magic number of lock annotation
2018 Sep 25
6
[REBASE PATCH net-next v9 0/4] net: vhost: improve performance when enable busyloop
From: Tonghao Zhang <xiangxia.m.yue at gmail.com>
These patches improve the guest receive performance.
On the handle_tx side, we poll the sock receive queue
at the same time. handle_rx does that in the same way.
For a more detailed performance report, see patch 4.
Tonghao Zhang (4):
net: vhost: lock the vqs one by one
net: vhost: replace magic number of lock annotation
net: vhost: factor out busy
2018 Sep 09
7
[PATCH net-next v9 0/6] net: vhost: improve performance when enable busyloop
From: Tonghao Zhang <xiangxia.m.yue at gmail.com>
These patches improve the guest receive performance.
On the handle_tx side, we poll the sock receive queue
at the same time. handle_rx does that in the same way.
For a more detailed performance report, see patches 4, 5, and 6.
Tonghao Zhang (6):
net: vhost: lock the vqs one by one
net: vhost: replace magic number of lock annotation
net: vhost: factor out
2018 Jul 04
8
[PATCH net-next v5 0/4] net: vhost: improve performance when enable busyloop
From: Tonghao Zhang <xiangxia.m.yue at gmail.com>
These patches improve the guest receive and transmit performance.
On the handle_tx side, we poll the sock receive queue at the same time.
handle_rx does that in the same way.
For a more detailed performance report, see patch 4.
v4 -> v5:
fix some issues
v3 -> v4:
fix some issues
v2 -> v3:
These patches are split from a previous big patch:
2018 Jul 02
2
[PATCH vhost] vhost_net: Fix too many vring kick on busypoll
Hi Jason,
On 2018/06/29 18:30, Jason Wang wrote:
> On 2018/06/29 16:09, Toshiaki Makita wrote:
...
>> To fix this, poll the work instead of enabling notification when
>> busypoll is interrupted by something. IMHO signal_pending() and
>> vhost_has_work() are a kind of interruption rather than a signal to
>> completely cancel the busypoll, so let's run busypoll after
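
Continuing the user-space model from the first entry above (same stand-in
stubs, not the kernel code), the fix described in this quote changes the
exit path: pending vhost work is processed and polling resumes, and
notification is enabled only when the loop genuinely runs out of time or
work:

  #include <stdbool.h>
  #include <stdio.h>

  static int pending_tx_work = 1;   /* e.g. queued by the zerocopy callback */

  static bool vhost_has_work(void)  { return pending_tx_work > 0; }
  static void process_work(void)    { pending_tx_work--; puts("  processed vhost work"); }
  static void enable_notification(void)  { puts("  notification enabled"); }
  static void disable_notification(void) { puts("  notification disabled"); }

  static void handle_tx(void)
  {
      disable_notification();
      for (int spins = 0; spins < 1000; spins++) {
          if (vhost_has_work()) {
              /* Fixed path: treat pending work as an interruption, not a
                 cancel; handle it and keep busypolling rather than enabling
                 notification and re-entering handle_tx(). */
              process_work();
              continue;
          }
          /* ... poll the tx ring for new descriptors ... */
      }
      enable_notification();        /* only once the busyloop really ends */
  }

  int main(void)
  {
      handle_tx();
      return 0;
  }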