Threads similar to: [PATCH V4 0/3] basic busy polling support for vhost_net


2015 Nov 12 (5 messages)
[PATCH net-next RFC V3 0/3] basic busy polling support for vhost_net
Hi all: This series tries to add basic busy polling for vhost net. The idea is simple: at the end of tx/rx processing, busy poll for newly added tx descriptors and for data on the rx socket for a while. The maximum amount of time (in us) that may be spent busy polling is specified via ioctl. Tests were done with: - 50 us as the busy loop timeout - Netperf 2.6 - Two machines with back-to-back connected ixgbe
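
The mechanism is simple enough to sketch. Below is a minimal illustration of the loop described above, not the actual patch; the helpers busy_clock_us(), tx_has_work() and rx_sock_has_data() are placeholders for the real vhost internals:

    /*
     * Minimal sketch of the busy-polling idea described above. The helpers
     * busy_clock_us(), tx_has_work() and rx_sock_has_data() are illustrative
     * placeholders, not the identifiers used by the actual patches.
     */
    static void vhost_busy_poll_sketch(struct vhost_virtqueue *vq,
                                       unsigned long timeout_us)
    {
            unsigned long start = busy_clock_us();

            /*
             * After the normal tx/rx pass, spin for up to the configured
             * timeout waiting for new work instead of sleeping and paying
             * the wakeup/notification cost for the next packet.
             */
            while (busy_clock_us() - start < timeout_us) {
                    if (tx_has_work(vq) || rx_sock_has_data(vq))
                            return; /* new work found: process it right away */
                    if (need_resched() || signal_pending(current))
                            break;  /* never hog the cpu */
                    cpu_relax();
            }
            /* nothing arrived: fall back to the normal notification path */
    }
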
2015 Nov 12 (2 messages)
[PATCH net-next RFC V3 0/3] basic busy polling support for vhost_net
Hi Jason, I understand your busy loop timeout is quite conservative at 50us. Did you try any other values? Also, did you measure how polling affects many VMs talking to each other (e.g. 20 VMs on each host, perhaps with several vNICs each, transmitting to a corresponding VM/vNIC pair on another host)? In a completely separate experiment (busy waiting on storage I/O rings on Xen), I have observed
2016 Feb 26 (7 messages)
[PATCH V3 0/3] basic busy polling support for vhost_net
This series tries to add basic busy polling for vhost net. The idea is simple: at the end of tx/rx processing, busy poll for newly added tx descriptors and for data on the rx socket for a while. The maximum amount of time (in us) that may be spent busy polling is specified via ioctl. Test A was done with: - 50 us as the busy loop timeout - Netperf 2.6 - Two machines with back-to-back connected mlx4 - Guest
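
The knob is configured per virtqueue. A minimal userspace sketch, assuming the VHOST_SET_VRING_BUSYLOOP_TIMEOUT ioctl that this series eventually added to linux/vhost.h (the earlier RFC versions configured the timeout differently, e.g. via a module parameter):

    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/vhost.h>

    /* Set the busy-loop timeout (in us) for one vring; 0 disables polling. */
    static int set_busyloop_timeout(int vhost_fd, unsigned int vring_idx,
                                    unsigned int timeout_us)
    {
            struct vhost_vring_state state = {
                    .index = vring_idx,
                    .num = timeout_us,
            };

            if (ioctl(vhost_fd, VHOST_SET_VRING_BUSYLOOP_TIMEOUT, &state) < 0) {
                    perror("VHOST_SET_VRING_BUSYLOOP_TIMEOUT");
                    return -1;
            }
            return 0;
    }

For the tests quoted above, this would be called with timeout_us = 50 on each virtqueue.
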
2015 Dec 01 (5 messages)
[PATCH V2 0/3] basic busy polling support for vhost_net
Hi all: This series tries to add basic busy polling for vhost net. The idea is simple: at the end of tx/rx processing, busy poll for newly added tx descriptors and for data on the rx socket for a while. The maximum amount of time (in us) that may be spent busy polling is specified via ioctl. Test A was done with: - 50 us as the busy loop timeout - Netperf 2.6 - Two machines with back-to-back connected
2018 Jun 29 (5 messages)
[PATCH vhost] vhost_net: Fix too many vring kick on busypoll
Under heavy load, vhost busypoll may run without suppressing notification. For example, the tx zerocopy callback can push tx work while handle_tx() is running; the busyloop then exits due to the vhost_has_work() condition and enables notification, but immediately reenters handle_tx() because the pushed work was tx. In this case handle_tx() tries to disable notification again, but when using event_idx it by
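
The kick suppression at issue is the virtio event_idx mechanism: the host publishes the avail index at which it next wants to be kicked, and the guest only notifies when an update crosses that index, so re-running the disable path without moving the index does nothing. The small standalone program below illustrates this; vring_need_event() is the real check from include/uapi/linux/virtio_ring.h, while the index values are made up for the example:

    #include <stdio.h>
    #include <stdint.h>

    /*
     * The check from include/uapi/linux/virtio_ring.h: kick only if
     * event_idx lies in the window (old_idx, new_idx] this update covers.
     */
    static int vring_need_event(uint16_t event_idx, uint16_t new_idx,
                                uint16_t old_idx)
    {
            return (uint16_t)(new_idx - event_idx - 1) <
                   (uint16_t)(new_idx - old_idx);
    }

    int main(void)
    {
            /* Host asked to be kicked once avail passes index 10. */
            printf("kick? %d\n", vring_need_event(10, 11, 9)); /* 1: 10 in (9, 11] */

            /* Host moved the event index ahead to 20: same update, no kick. */
            printf("kick? %d\n", vring_need_event(20, 11, 9)); /* 0: suppressed */
            return 0;
    }
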
2018 Aug 01 (5 messages)
[PATCH net-next v7 0/4] net: vhost: improve performance when enable busyloop
From: Tonghao Zhang <xiangxia.m.yue at gmail.com> These patches improve guest receive performance. On the handle_tx side, we poll the sock receive queue at the same time; handle_rx does the same. For more performance reports, see patch 4. v6->v7: fix issues and rebase the code: 1. on tx, busypoll will vhost_net_disable/enable_vq the rx vq. [This was suggested by Toshiaki Makita
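
The shape of that tx-side change can be sketched as follows; all helper names here (busyloop_expired(), tx_ring_has_desc(), sock_rx_pending(), handle_*_work()) are placeholders, not the functions in the series:

    /*
     * Illustrative sketch only: while busy-polling on the tx path, also
     * watch the socket receive queue, so rx traffic arriving during the
     * spin is handled without a separate wakeup.
     */
    static void busyloop_tx_rx_sketch(struct vhost_net *net)
    {
            /*
             * Per the v7 note above, socket polling on the rx vq is disabled
             * (vhost_net_disable_vq) for the duration of the tx busyloop so
             * the spin and the normal rx wakeup path do not race; it is
             * re-enabled when the loop ends.
             */
            while (!busyloop_expired(net)) {
                    if (tx_ring_has_desc(net))
                            handle_tx_work(net);    /* normal tx processing */
                    if (sock_rx_pending(net))
                            handle_rx_work(net);    /* rx found while polling tx */
                    cpu_relax();
            }
    }
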
2018 Jul 21 (7 messages)
[PATCH net-next v6 0/4] net: vhost: improve performance when enable busyloop
From: Tonghao Zhang <xiangxia.m.yue at gmail.com> These patches improve guest receive performance. On the handle_tx side, we poll the sock receive queue at the same time; handle_rx does the same. For more performance reports, see patch 4. v5->v6: rebase the code. Tonghao Zhang (4): net: vhost: lock the vqs one by one net: vhost: replace magic number of lock annotation
2018 Sep 09 (7 messages)
[PATCH net-next v9 0/6] net: vhost: improve performance when enable busyloop
From: Tonghao Zhang <xiangxia.m.yue at gmail.com> These patches improve guest receive performance. On the handle_tx side, we poll the sock receive queue at the same time; handle_rx does the same. For more performance reports, see patches 4, 5 and 6. Tonghao Zhang (6): net: vhost: lock the vqs one by one net: vhost: replace magic number of lock annotation net: vhost: factor out
2018 Sep 25 (6 messages)
[REBASE PATCH net-next v9 0/4] net: vhost: improve performance when enable busyloop
From: Tonghao Zhang <xiangxia.m.yue at gmail.com> These patches improve guest receive performance. On the handle_tx side, we poll the sock receive queue at the same time; handle_rx does the same. For more performance reports, see patch 4. Tonghao Zhang (4): net: vhost: lock the vqs one by one net: vhost: replace magic number of lock annotation net: vhost: factor out busy
2016 Jan 20 (3 messages)
[PATCH V2 3/3] vhost_net: basic polling support
On Tue, Dec 01, 2015 at 02:39:45PM +0800, Jason Wang wrote: > This patch tries to poll for newly added tx buffers or the socket receive > queue for a while at the end of tx/rx processing. The maximum time > spent on polling is specified through a new kind of vring ioctl. > > Signed-off-by: Jason Wang <jasowang at redhat.com> > --- > drivers/vhost/net.c | 72
2018 Jul 03 (11 messages)
[PATCH v2 net-next 0/4] vhost_net: Avoid vq kicks during busyloop
Under heavy load, vhost tx busypoll tends not to suppress vq kicks, which causes poor guest tx performance. The detailed scenario is described in the commitlog of patch 2. Rx does not seem to have as serious a problem, but for consistency I made a similar change on rx to avoid rx wakeups (patch 3). Additionally, patch 4 avoids rx kicks under heavy load during busypoll. Tx performance is greatly improved
2015 Oct 29 (4 messages)
[PATCH net-next rfc V2 0/2] basic busy polling support for vhost_net
Hi all: This series tries to add basic busy polling for vhost net. The idea is simple: at the end of tx processing, busy poll for newly added tx descriptors and for data on the rx socket for a while. The maximum amount of time (in us) that may be spent busy polling is specified through a module parameter. Tests were done with: - 50 us as the busy loop timeout - Netperf 2.6 - Two machines with back-to-back
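
Unlike the later versions quoted above, this RFC exposes the timeout as a module-wide parameter rather than a per-vring ioctl. A sketch of what that looks like; the parameter name is illustrative, not the one in the RFC:

    #include <linux/module.h>
    #include <linux/moduleparam.h>

    /*
     * Illustrative name: one global busy-loop timeout for the whole module,
     * set at load time or via /sys/module/.../parameters, as opposed to the
     * per-vring ioctl the later versions switched to. 0 disables polling.
     */
    static unsigned int busyloop_timeout = 50;
    module_param(busyloop_timeout, uint, 0644);
    MODULE_PARM_DESC(busyloop_timeout, "Max busy-poll time in us (0 = off)");

At load time this would be set with, e.g., modprobe vhost_net busyloop_timeout=50 (again, a hypothetical parameter name).
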