2018 Jul 03
2
[PATCH v2 net-next 4/4] vhost_net: Avoid rx vring kicks during busyloop
...er it was empty in fact?
>
> Signed-off-by: Toshiaki Makita <makita.toshiaki at lab.ntt.co.jp>
> ---
> drivers/vhost/net.c | 10 +++++++---
> 1 file changed, 7 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> index 791bc8b..b224036 100644
> --- a/drivers/vhost/net.c
> +++ b/drivers/vhost/net.c
> @@ -658,6 +658,7 @@ static int vhost_net_rx_peek_head_len(struct vhost_net *net, struct sock *sk,
> {
> struct vhost_net_virtqueue *rnvq = &net->vqs[VHOST_NET_VQ_RX];
> struct vhost_net_virtqueue *tnvq...
2018 Jul 03
0
[PATCH v2 net-next 4/4] vhost_net: Avoid rx vring kicks during busyloop
...prematurely. Avoid this by
checking whether the rx ring is full during busypoll.
Signed-off-by: Toshiaki Makita <makita.toshiaki at lab.ntt.co.jp>
---
drivers/vhost/net.c | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 791bc8b..b224036 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -658,6 +658,7 @@ static int vhost_net_rx_peek_head_len(struct vhost_net *net, struct sock *sk,
{
struct vhost_net_virtqueue *rnvq = &net->vqs[VHOST_NET_VQ_RX];
struct vhost_net_virtqueue *tnvq = &net->vqs[VHOST_NET_VQ_...
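The diff excerpt above is truncated, but the gist of the change is to remember why the busy-poll loop stopped and to skip notification handling for the rx vring when the loop stopped because work arrived. Below is a minimal, self-contained C sketch of that idea; the helper names (has_rx_work, requeue_work, enable_notify) are illustrative assumptions, and this is a model of the pattern rather than the vhost code itself.

#include <stdbool.h>
#include <stdio.h>

static int pending;              /* pretend rx work counter */

static bool has_rx_work(void)   { return pending > 0; }
static void requeue_work(void)  { printf("requeue handler, no kick\n"); }
static void enable_notify(void) { printf("re-enable notification (guest may kick)\n"); }

/* Busy-poll for up to 'budget' iterations; report whether we stopped early. */
static void busy_poll_rx(int budget, bool *busyloop_intr)
{
    *busyloop_intr = false;
    while (budget-- > 0) {
        if (has_rx_work()) {
            *busyloop_intr = true;  /* interrupted: work showed up */
            return;
        }
    }
}

int main(void)
{
    bool busyloop_intr;

    pending = 1;                    /* simulate traffic arriving mid-poll */
    busy_poll_rx(1000, &busyloop_intr);

    if (busyloop_intr)
        requeue_work();             /* avoids an extra rx vring kick */
    else
        enable_notify();            /* truly idle: fall back to notifications */
    return 0;
}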
2018 Jul 04
0
[PATCH v2 net-next 4/4] vhost_net: Avoid rx vring kicks during busyloop
....
>> Signed-off-by: Toshiaki Makita <makita.toshiaki at lab.ntt.co.jp>
>> ---
>>   drivers/vhost/net.c | 10 +++++++---
>>   1 file changed, 7 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
>> index 791bc8b..b224036 100644
>> --- a/drivers/vhost/net.c
>> +++ b/drivers/vhost/net.c
>> @@ -658,6 +658,7 @@ static int vhost_net_rx_peek_head_len(struct vhost_net *net, struct sock *sk,
>>   {
>>       struct vhost_net_virtqueue *rnvq = &net->vqs[VHOST_NET_VQ_RX];
>>...
2018 Jul 03
11
[PATCH v2 net-next 0/4] vhost_net: Avoid vq kicks during busyloop
Under heavy load, vhost tx busypoll tends not to suppress vq kicks, which
causes poor guest tx performance. The detailed scenario is described in
the commit log of patch 2.
Rx does not seem to have such a serious problem, but for consistency I made a
similar change on rx to avoid rx wakeups (patch 3).
Additionally, patch 4 avoids rx kicks under heavy load during
busypoll.
Tx performance is greatly improved
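As a rough illustration of the kick-suppression pattern this series is about, here is a self-contained sketch with assumed toy helpers (disable_notify, enable_notify, have_work): notifications stay off while the worker is busy and are re-enabled only right before idling, with a final re-check to close the race with late work. It is a generic model, not the vhost implementation.

#include <stdbool.h>
#include <stdio.h>

static bool notify_enabled = true;
static int  queued = 0;                      /* pretend avail-ring depth */

static void disable_notify(void) { notify_enabled = false; }
static void enable_notify(void)  { notify_enabled = true; }
static bool have_work(void)      { return queued > 0; }
static void process_one(void)    { queued--; printf("processed one buffer\n"); }

static void worker(void)
{
    disable_notify();                        /* busy: no kicks wanted */
    for (;;) {
        while (have_work())
            process_one();

        enable_notify();                     /* about to go idle */
        if (!have_work())                    /* re-check to close the race */
            break;
        disable_notify();                    /* late arrival: keep going */
    }
}

int main(void)
{
    queued = 3;
    worker();
    printf("idle, notifications %s\n", notify_enabled ? "enabled" : "disabled");
    return 0;
}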
2018 Jul 21
7
[PATCH net-next v6 0/4] net: vhost: improve performance when enable busyloop
From: Tonghao Zhang <xiangxia.m.yue at gmail.com>
These patches improve guest receive performance.
On the handle_tx side, we poll the sock receive queue
at the same time; handle_rx does the same.
For a detailed performance report, see patch 4.
v5->v6:
rebase the code.
Tonghao Zhang (4):
net: vhost: lock the vqs one by one
net: vhost: replace magic number of lock annotation
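A minimal sketch of that cross-polling idea, using assumed names (tx_pending, rx_pending, schedule_rx): while the tx handler busy-polls for its own work, it also checks the rx side and schedules the rx handler directly, so rx traffic seen during the poll does not have to wait for a wakeup. This models the approach only, not the code in this series.

#include <stdbool.h>
#include <stdio.h>

static int tx_work;
static int rx_work;

static bool tx_pending(void)  { return tx_work > 0; }
static bool rx_pending(void)  { return rx_work > 0; }
static void schedule_rx(void) { printf("schedule rx handler (no wakeup needed)\n"); }

/* Busy-poll on the tx side; returns true once tx work is found. */
static bool tx_busy_poll(int budget)
{
    bool rx_scheduled = false;

    while (budget-- > 0) {
        if (rx_pending() && !rx_scheduled) {
            schedule_rx();          /* service the other direction too */
            rx_scheduled = true;
        }
        if (tx_pending())
            return true;
    }
    return false;
}

int main(void)
{
    rx_work = 1;                    /* simulate rx traffic during the tx poll */
    tx_work = 1;
    printf("tx work found: %s\n", tx_busy_poll(100) ? "yes" : "no");
    return 0;
}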
2018 Jul 20
12
[PATCH net-next 0/9] TX used ring batched updating for vhost
Hi:
This series implements batched updating of the used ring for TX. This helps
reduce cache contention on the used ring. The idea is to first split the
datacopy path from zerocopy, and do batching only for datacopy, since
zerocopy already supports its own batching.
TX PPS increased by 25.8%, and Netperf TCP does not show obvious
differences.
The split of the datapath will also be helpful for
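To illustrate what batching the used-ring updates buys, here is a self-contained sketch with assumed structures (used_ring, used_elem): completions are collected locally and published with a single store to the shared index, so the contended cache line is written once per batch rather than once per packet. This is a simplified model, not the vhost code.

#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 256
#define BATCH     64

struct used_elem { uint32_t id; uint32_t len; };

struct used_ring {
    volatile uint16_t idx;               /* index shared with the guest */
    struct used_elem ring[RING_SIZE];
};

static struct used_ring used;
static struct used_elem batch[BATCH];
static int batch_cnt;

/* Record one completion locally; nothing is shared yet. */
static void add_used_batched(uint32_t id, uint32_t len)
{
    batch[batch_cnt].id = id;
    batch[batch_cnt].len = len;
    batch_cnt++;
}

/* Copy the batch into the ring, then publish it with one index update.
 * A real implementation would need a memory barrier before the store. */
static void flush_used(void)
{
    uint16_t idx = used.idx;
    for (int i = 0; i < batch_cnt; i++)
        used.ring[(idx + i) % RING_SIZE] = batch[i];
    used.idx = idx + batch_cnt;
    batch_cnt = 0;
}

int main(void)
{
    for (uint32_t id = 0; id < 64; id++) {
        add_used_batched(id, 1500);
        if (batch_cnt == BATCH)
            flush_used();
    }
    flush_used();
    printf("used idx after batch: %u\n", (unsigned)used.idx);
    return 0;
}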
2018 Jul 16
11
[PATCH net-next V2 0/8] Packed virtqueue support for vhost
Hi all:
This series implements packed virtqueues. The code was tested with
Tiwei's guest driver series at https://patchwork.ozlabs.org/cover/942297/
Pktgen tests for both RX and TX do not show an obvious difference from
split virtqueues. The main bottleneck is the guest Linux driver, since
it cannot stress vhost to 100% CPU utilization. A full TCP
benchmark is ongoing. Will test
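For context, a packed ring keeps availability state in the descriptors themselves rather than in separate avail/used rings. The sketch below shows the descriptor layout from the virtio 1.1 specification and an availability check based on the wrap-counter flag bits; the struct and constant names are simplified here, not the kernel's definitions.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Packed-ring descriptor layout per the virtio 1.1 spec (names simplified). */
struct packed_desc {
    uint64_t addr;      /* guest-physical buffer address */
    uint32_t len;       /* buffer length */
    uint16_t id;        /* buffer id, returned on completion */
    uint16_t flags;     /* includes AVAIL/USED wrap-counter bits */
};

#define DESC_F_AVAIL (1u << 7)
#define DESC_F_USED  (1u << 15)

/* A descriptor is available to the device when its AVAIL bit matches the
 * current wrap counter and its USED bit does not. */
static bool desc_is_avail(const struct packed_desc *d, bool wrap)
{
    bool avail = d->flags & DESC_F_AVAIL;
    bool used  = d->flags & DESC_F_USED;

    return avail == wrap && used != wrap;
}

int main(void)
{
    struct packed_desc d = { .addr = 0x1000, .len = 1500,
                             .id = 0, .flags = DESC_F_AVAIL };

    printf("available (wrap=1): %s\n", desc_is_avail(&d, true) ? "yes" : "no");
    return 0;
}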