search for: vhost_exceeds_maxpend

Displaying 20 results from an estimated 48 matches for "vhost_exceeds_maxpend".

2017 Sep 28
9
[PATCH net-next] vhost_net: do not stall on zerocopy depletion
...gress redirect dev ifb0 Before the delay, both flows process around 80K pps. With the delay, before this patch, both process around 400. After this patch, the large flow is still rate limited, while the small reverts to its original rate. See also discussion in the first link, below. The limit in vhost_exceeds_maxpend must be carefully chosen. When vq->num >> 1, the flows remain correlated. This value happens to correspond to VHOST_MAX_PENDING for vq->num == 256. Allow smaller fractions and ensure correctness also for much smaller values of vq->num, by testing the min() of both explicitly. See als...
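For reference, the check this commit message discusses can be pieced together from code quoted in the other results below (the min_t() tail in the 01/12 excerpts, the pending-count arithmetic in the Oct 01 kbuild report). A sketch of the post-patch form under that reading, not necessarily the verbatim merged code:

static bool vhost_exceeds_maxpend(struct vhost_net *net)
{
	struct vhost_net_virtqueue *nvq = &net->vqs[VHOST_NET_VQ_TX];
	struct vhost_virtqueue *vq = &nvq->vq;

	/* Zerocopy buffers still in flight, counted on the
	 * UIO_MAXIOV-sized ring of ubuf slots. */
	return (nvq->upend_idx + UIO_MAXIOV - nvq->done_idx) % UIO_MAXIOV >
	       min_t(unsigned int, VHOST_MAX_PEND, vq->num >> 2);
}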
2017 Oct 06
1
[PATCH net-next v2] vhost_net: do not stall on zerocopy depletion
...y, before this patch, both process around 400. After this patch, the large flow is still rate limited, while the small reverts to its original rate. See also discussion in the first link, below. Without rate limiting, {1, 10, 100}x TCP_STREAM tests continued to send at 100% zerocopy. The limit in vhost_exceeds_maxpend must be carefully chosen. With vq->num >> 1, the flows remain correlated. This value happens to correspond to VHOST_MAX_PENDING for vq->num == 256. Allow smaller fractions and ensure correctness also for much smaller values of vq->num, by testing the min() of both explicitly. See als...
2017 Sep 28
0
[PATCH net-next] vhost_net: do not stall on zerocopy depletion
...Before the delay, both flows process around 80K pps. With the delay, > before this patch, both process around 400. After this patch, the > large flow is still rate limited, while the small reverts to its > original rate. See also discussion in the first link, below. > > The limit in vhost_exceeds_maxpend must be carefully chosen. When > vq->num >> 1, the flows remain correlated. This value happens to > correspond to VHOST_MAX_PENDING for vq->num == 256. Have you tested e.g. vq->num = 512 or 1024? > Allow smaller > fractions and ensure correctness also for much smaller values of > vq->num, by testing the min() of both explicitly. See als...
2017 Sep 29
0
[PATCH net-next] vhost_net: do not stall on zerocopy depletion
...Before the delay, both flows process around 80K pps. With the delay, > before this patch, both process around 400. After this patch, the > large flow is still rate limited, while the small reverts to its > original rate. See also discussion in the first link, below. > > The limit in vhost_exceeds_maxpend must be carefully chosen. When > vq->num >> 1, the flows remain correlated. This value happens to > correspond to VHOST_MAX_PENDING for vq->num == 256. Allow smaller > fractions and ensure correctness also for much smaller values of > vq->num, by testing the min() of both...
2018 May 21
1
[RFC PATCH net-next 03/12] vhost_net: introduce vhost_has_more_pkts()
...ight(int pkts, int total_len) > unlikely(pkts >= VHOST_NET_PKT_WEIGHT); > } > > +static bool vhost_has_more_pkts(struct vhost_net *net, > + struct vhost_virtqueue *vq) > +{ > + return !vhost_vq_avail_empty(&net->dev, vq) && > + likely(!vhost_exceeds_maxpend(net)); This really seems like mis-use of likely/unlikely, in the middle of a sequence of operations that will always be run when this function is called. I think you should remove the likely from this helper, especially, and control the branch from the branch point. > +} > + > /* Expe...
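Stripped of the branch annotation, as the review suggests, the helper would read (a sketch assuming the signatures quoted above):

static bool vhost_has_more_pkts(struct vhost_net *net,
				struct vhost_virtqueue *vq)
{
	/* More avail descriptors queued, and the zerocopy pending
	 * limit not yet hit; let the call site decide likelihood. */
	return !vhost_vq_avail_empty(&net->dev, vq) &&
	       !vhost_exceeds_maxpend(net);
}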
2018 May 21
1
[RFC PATCH net-next 01/12] vhost_net: introduce helper to initialize tx iov iter
....c | 34 +++++++++++++++++++++++----------- > 1 file changed, 23 insertions(+), 11 deletions(-) > > diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c > index c4b49fc..15d191a 100644 > --- a/drivers/vhost/net.c > +++ b/drivers/vhost/net.c > @@ -459,6 +459,26 @@ static bool vhost_exceeds_maxpend(struct vhost_net *net) > min_t(unsigned int, VHOST_MAX_PEND, vq->num >> 2); > } > > +static size_t init_iov_iter(struct vhost_virtqueue *vq, struct iov_iter *iter, > + size_t hdr_size, int out) > +{ > + /* Skip header. TODO: support TSO. */ > + siz...
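The excerpt truncates the new helper; from the quoted prologue it plausibly continues along these lines (a sketch under that assumption, not the patch verbatim):

static size_t init_iov_iter(struct vhost_virtqueue *vq, struct iov_iter *iter,
			    size_t hdr_size, int out)
{
	/* Skip header. TODO: support TSO. */
	size_t len = iov_length(vq->iov, out);

	iov_iter_init(iter, WRITE, vq->iov, out, len);
	iov_iter_advance(iter, hdr_size);

	/* Payload length remaining after the skipped header. */
	return iov_iter_count(iter);
}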
2017 Sep 28
1
[PATCH net-next RFC 5/5] vhost_net: basic tx virtqueue batched processing
...te_data; > @@ -475,6 +475,12 @@ static void handle_tx(struct vhost_net *net) > hdr_size = nvq->vhost_hlen; > zcopy = nvq->ubufs; > > + /* Disable zerocopy batched fetching for simplicity */ This special case can perhaps be avoided if we no longer block on vhost_exceeds_maxpend, but revert to copying. > + if (zcopy) { > + heads = &used; Can this special case of batchsize 1 not use vq->heads? > + batched = 1; > + } > + > for (;;) { > /* Release DMAs done buffers first */ >...
2016 Dec 28
0
[PATCH net-next V2 2/3] vhost_net: tx batching
...- 1 file changed, 20 insertions(+), 3 deletions(-) diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c index 5dc3465..c42e9c3 100644 --- a/drivers/vhost/net.c +++ b/drivers/vhost/net.c @@ -351,6 +351,15 @@ static int vhost_net_tx_get_vq_desc(struct vhost_net *net, return r; } +static bool vhost_exceeds_maxpend(struct vhost_net *net) +{ + struct vhost_net_virtqueue *nvq = &net->vqs[VHOST_NET_VQ_TX]; + struct vhost_virtqueue *vq = &nvq->vq; + + return (nvq->upend_idx + vq->num - VHOST_MAX_PEND) % UIO_MAXIOV + == nvq->done_idx; +} + /* Expects to be always run from workqueue - which...
2017 Jan 18
0
[PATCH net-next V5 2/3] vhost_net: tx batching
...- 1 file changed, 20 insertions(+), 3 deletions(-) diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c index 5dc3465..c42e9c3 100644 --- a/drivers/vhost/net.c +++ b/drivers/vhost/net.c @@ -351,6 +351,15 @@ static int vhost_net_tx_get_vq_desc(struct vhost_net *net, return r; } +static bool vhost_exceeds_maxpend(struct vhost_net *net) +{ + struct vhost_net_virtqueue *nvq = &net->vqs[VHOST_NET_VQ_TX]; + struct vhost_virtqueue *vq = &nvq->vq; + + return (nvq->upend_idx + vq->num - VHOST_MAX_PEND) % UIO_MAXIOV + == nvq->done_idx; +} + /* Expects to be always run from workqueue - which...
2018 May 21
0
[RFC PATCH net-next 04/12] vhost_net: split out datacopy logic
...++++++++++++----- 1 file changed, 102 insertions(+), 9 deletions(-) diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c index 4ebac76..4682fcc 100644 --- a/drivers/vhost/net.c +++ b/drivers/vhost/net.c @@ -492,9 +492,95 @@ static bool vhost_has_more_pkts(struct vhost_net *net, likely(!vhost_exceeds_maxpend(net)); } +static void handle_tx_copy(struct vhost_net *net) +{ + struct vhost_net_virtqueue *nvq = &net->vqs[VHOST_NET_VQ_TX]; + struct vhost_virtqueue *vq = &nvq->vq; + unsigned out, in; + int head; + struct msghdr msg = { + .msg_name = NULL, + .msg_namelen = 0, + .msg_control...
2017 Sep 22
0
[PATCH net-next RFC 5/5] vhost_net: basic tx virtqueue batched processing
...)) + return true; + + preempt_disable(); + endtime = busy_clock() + vq->busyloop_timeout; + while (vhost_can_busy_poll(vq->dev, endtime) && + vhost_vq_avail_empty(vq->dev, vq)) + cpu_relax(); + preempt_enable(); + + return !vhost_vq_avail_empty(vq->dev, vq); } static bool vhost_exceeds_maxpend(struct vhost_net *net) @@ -446,8 +444,9 @@ static void handle_tx(struct vhost_net *net) { struct vhost_net_virtqueue *nvq = &net->vqs[VHOST_NET_VQ_TX]; struct vhost_virtqueue *vq = &nvq->vq; + struct vring_used_elem used, *heads = vq->heads; unsigned out, in; - int head; + i...
2017 Jan 18
7
[PATCH net-next V5 0/3] vhost_net tx batching
Hi: This series tries to implement tx batching support for vhost. This was done by using MSG_MORE as a hint for the underlying socket. The backend (e.g. tap) can then batch the packets temporarily in a list and submit them all once the number of batched packets exceeds a limit. Tests show obvious improvement on guest pktgen over mlx4(noqueue) on host: Mpps -+%
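A minimal sketch of how that MSG_MORE hint could sit in the tx path; the avail-ring test used as the trigger here is an assumption, and the actual series may gate batching differently:

	/* Inside handle_tx(), before the send: hint the backend
	 * (e.g. tap) to keep batching while more descriptors sit
	 * in the avail ring; clear the hint on the last packet. */
	if (!vhost_vq_avail_empty(&net->dev, vq))
		msg.msg_flags |= MSG_MORE;
	else
		msg.msg_flags &= ~MSG_MORE;

	err = sock->ops->sendmsg(sock, &msg, len);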
2018 May 21
20
[RFC PATCH net-next 00/12] XDP batching for TUN/vhost_net
Hi all: We do not support XDP batching for TUN since it can only receive one packet at a time from vhost_net. This series tries to remove this limitation by: - introducing a TUN-specific msg_control that can hold a pointer to an array of XDP buffs - trying to copy and build XDP buffs in vhost_net - storing XDP buffs in an array and submitting them once for every N packets from vhost_net - since TUN can only
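A hypothetical shape for the TUN-specific msg_control the cover letter mentions; all names below are illustrative, not the series' actual definitions:

/* Hypothetical: carries a batch of XDP buffs built in vhost_net so
 * TUN can submit them once per N packets instead of one at a time. */
struct tun_xdp_ctl {
	struct xdp_buff **xdp;	/* XDP buffs built in vhost_net */
	int num;		/* buffs batched so far; flushed every N */
};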
2017 Oct 01
1
[PATCH net-next] vhost_net: do not stall on zerocopy depletion
...=x86_64 allmodconfig > make C=1 CF=-D__CHECK_ENDIAN__ BTW __CHECK_ENDIAN__ is the default now, I think you can drop it from your scripts. > > sparse warnings: (new ones prefixed by >>) > > > vim +440 drivers/vhost/net.c > > 433 > 434 static bool vhost_exceeds_maxpend(struct vhost_net *net) > 435 { > 436 struct vhost_net_virtqueue *nvq = &net->vqs[VHOST_NET_VQ_TX]; > 437 struct vhost_virtqueue *vq = &nvq->vq; > 438 > 439 return (nvq->upend_idx + UIO_MAXIOV - nvq->done_idx) % UIO_MAXIOV > > > 440...
2018 May 21
0
[RFC PATCH net-next 01/12] vhost_net: introduce helper to initialize tx iov iter
...dhat.com> --- drivers/vhost/net.c | 34 +++++++++++++++++++++++----------- 1 file changed, 23 insertions(+), 11 deletions(-) diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c index c4b49fc..15d191a 100644 --- a/drivers/vhost/net.c +++ b/drivers/vhost/net.c @@ -459,6 +459,26 @@ static bool vhost_exceeds_maxpend(struct vhost_net *net) min_t(unsigned int, VHOST_MAX_PEND, vq->num >> 2); } +static size_t init_iov_iter(struct vhost_virtqueue *vq, struct iov_iter *iter, + size_t hdr_size, int out) +{ + /* Skip header. TODO: support TSO. */ + size_t len = iov_length(vq->iov, out); +...
2018 May 21
0
[RFC PATCH net-next 03/12] vhost_net: introduce vhost_has_more_pkts()
...+485,13 @@ static bool vhost_exceeds_weight(int pkts, int total_len) unlikely(pkts >= VHOST_NET_PKT_WEIGHT); } +static bool vhost_has_more_pkts(struct vhost_net *net, + struct vhost_virtqueue *vq) +{ + return !vhost_vq_avail_empty(&net->dev, vq) && + likely(!vhost_exceeds_maxpend(net)); +} + /* Expects to be always run from workqueue - which acts as * read-size critical section for our kind of RCU. */ static void handle_tx(struct vhost_net *net) @@ -578,8 +585,7 @@ static void handle_tx(struct vhost_net *net) } total_len += len; if (total_len < VHOST_NET_WEI...