search for: nvq

Displaying results from an estimated 362 matches for "nvq".

2013 Sep 04
2
[PATCH V3 4/6] vhost_net: determine whether or not to use zerocopy at one time
...ed, 20 insertions(+), 27 deletions(-) > > diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c > index 8a6dd0d..3f89dea 100644 > --- a/drivers/vhost/net.c > +++ b/drivers/vhost/net.c > @@ -404,43 +404,36 @@ static void handle_tx(struct vhost_net *net) > iov_length(nvq->hdr, s), hdr_size); > break; > } > - zcopy_used = zcopy && (len >= VHOST_GOODCOPY_LEN || > - nvq->upend_idx != nvq->done_idx); > + > + zcopy_used = zcopy && len >= VHOST_GOODCOPY_LEN > + && (nvq->upend_idx + 1)...
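The quoted diff above carries the core of this change: the decision to use zerocopy moves to a single up-front test. A minimal C sketch of the before/after condition, reconstructed from the quoted fragments (variable names as in the quoted kernel code; the surrounding handle_tx() context is assumed, since the excerpt is truncated):

/* Before: use zerocopy if the packet is large enough, OR any
 * completion is still outstanding (upend_idx != done_idx). */
zcopy_used = zcopy && (len >= VHOST_GOODCOPY_LEN ||
                       nvq->upend_idx != nvq->done_idx);

/* After: decide once; require a large packet AND a free slot in the
 * UIO_MAXIOV-entry completion ring (advancing upend_idx must not
 * collide with done_idx). */
zcopy_used = zcopy && len >= VHOST_GOODCOPY_LEN &&
             (nvq->upend_idx + 1) % UIO_MAXIOV != nvq->done_idx;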
2020 Jun 03
1
[PATCH RFC 08/13] vhost/net: convert to new API: heads->bufs
...NEL); > + if (!n->vqs[i].bufs) > + goto err; > + > zcopy = vhost_net_zcopy_mask & (0x1 << i); > if (!zcopy) > continue; > @@ -364,18 +375,18 @@ static void vhost_zerocopy_signal_used(struct vhost_net *net, > int j = 0; > > for (i = nvq->done_idx; i != nvq->upend_idx; i = (i + 1) % UIO_MAXIOV) { > - if (vq->heads[i].len == VHOST_DMA_FAILED_LEN) > + if (nvq->bufs[i].in_len == VHOST_DMA_FAILED_LEN) > vhost_net_tx_err(net); > - if (VHOST_DMA_IS_DONE(vq->heads[i].len)) { > - vq->heads[i].len...
2020 Jun 02
0
[PATCH RFC 08/13] vhost/net: convert to new API: heads->bufs
...sizeof(*n->vqs[i].bufs), + GFP_KERNEL); + if (!n->vqs[i].bufs) + goto err; + zcopy = vhost_net_zcopy_mask & (0x1 << i); if (!zcopy) continue; @@ -364,18 +375,18 @@ static void vhost_zerocopy_signal_used(struct vhost_net *net, int j = 0; for (i = nvq->done_idx; i != nvq->upend_idx; i = (i + 1) % UIO_MAXIOV) { - if (vq->heads[i].len == VHOST_DMA_FAILED_LEN) + if (nvq->bufs[i].in_len == VHOST_DMA_FAILED_LEN) vhost_net_tx_err(net); - if (VHOST_DMA_IS_DONE(vq->heads[i].len)) { - vq->heads[i].len = VHOST_DMA_CLEAR_LEN; +...
2013 Sep 02
8
[PATCH V3 0/6] vhost code cleanup and minor enhancement
This series tries to unify and simplify the vhost code, especially for zerocopy. With this series, a 5% - 10% improvement in per-cpu throughput was seen during a netperf guest sending test. Please review. Changes from V2: - Typo fixes and code style fixes - Add the performance gain to the commit log of patch 2/6 - Retest and update the result in patch 6/6 Changes from V1: - Fix the zerocopy enabling check
2018 Sep 06
2
[PATCH net-next 11/11] vhost_net: batch submitting XDP buffers to underlayer sockets
...rder in lower device driver for some reason. > * upend_idx is used to track end of used idx, done_idx is used to track head > * of used idx. Once lower device DMA done contiguously, we will signal KVM > @@ -444,10 +451,36 @@ static void vhost_net_signal_used(struct vhost_net_virtqueue *nvq) > nvq->done_idx = 0; > } > > +static void vhost_tx_batch(struct vhost_net *net, > + struct vhost_net_virtqueue *nvq, > + struct socket *sock, > + struct msghdr *msghdr) > +{ > + struct tun_msg_ctl ctl = { > + .type = nvq->batched_xdp <<...
2013 Sep 23
2
[PATCH V3 4/6] vhost_net: determine whether or not to use zerocopy at one time
...> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c > >> index 8a6dd0d..3f89dea 100644 > >> --- a/drivers/vhost/net.c > >> +++ b/drivers/vhost/net.c > >> @@ -404,43 +404,36 @@ static void handle_tx(struct vhost_net *net) > >> iov_length(nvq->hdr, s), hdr_size); > >> break; > >> } > >> - zcopy_used = zcopy && (len >= VHOST_GOODCOPY_LEN || > >> - nvq->upend_idx != nvq->done_idx); > >> + > >> + zcopy_used = zcopy && len >= VHOST_GOODCO...
2018 Sep 06
0
[PATCH net-next 11/11] vhost_net: batch submitting XDP buffers to underlayer sockets
...A done not in order in lower device driver for some reason. * upend_idx is used to track end of used idx, done_idx is used to track head * of used idx. Once lower device DMA done contiguously, we will signal KVM @@ -444,10 +451,36 @@ static void vhost_net_signal_used(struct vhost_net_virtqueue *nvq) nvq->done_idx = 0; } +static void vhost_tx_batch(struct vhost_net *net, + struct vhost_net_virtqueue *nvq, + struct socket *sock, + struct msghdr *msghdr) +{ + struct tun_msg_ctl ctl = { + .type = nvq->batched_xdp << 16 | TUN_MSG_PTR, + .ptr = nvq->xdp, + }; +...
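The fragment above gives the clearest view of the new helper: completed XDP buffers accumulate in nvq->xdp and are flushed to the underlying tun/tap socket in one call. A hedged sketch of vhost_tx_batch() as suggested by this snippet (only the ctl initialization is quoted; the sendmsg() hand-off and counter reset are assumptions):

static void vhost_tx_batch(struct vhost_net *net,
                           struct vhost_net_virtqueue *nvq,
                           struct socket *sock,
                           struct msghdr *msghdr)
{
    struct tun_msg_ctl ctl = {
        .type = nvq->batched_xdp << 16 | TUN_MSG_PTR, /* batch count packed into type */
        .ptr  = nvq->xdp,                             /* array of batched XDP buffers */
    };
    /* Assumed remainder, not shown in the excerpt: attach ctl to msghdr,
     * submit the whole batch via the socket's sendmsg(), then reset
     * nvq->batched_xdp (error handling elided in this sketch). */
}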
2013 Sep 05
0
[PATCH V3 4/6] vhost_net: determine whether or not to use zerocopy at one time
...letions(-) >> >> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c >> index 8a6dd0d..3f89dea 100644 >> --- a/drivers/vhost/net.c >> +++ b/drivers/vhost/net.c >> @@ -404,43 +404,36 @@ static void handle_tx(struct vhost_net *net) >> iov_length(nvq->hdr, s), hdr_size); >> break; >> } >> - zcopy_used = zcopy && (len >= VHOST_GOODCOPY_LEN || >> - nvq->upend_idx != nvq->done_idx); >> + >> + zcopy_used = zcopy && len >= VHOST_GOODCOPY_LEN >> + &&...
2013 Aug 30
0
[PATCH V2 4/6] vhost_net: determine whether or not to use zerocopy at one time
...-------------------- 1 files changed, 19 insertions(+), 27 deletions(-) diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c index 8a6dd0d..ff60c2a 100644 --- a/drivers/vhost/net.c +++ b/drivers/vhost/net.c @@ -404,43 +404,35 @@ static void handle_tx(struct vhost_net *net) iov_length(nvq->hdr, s), hdr_size); break; } - zcopy_used = zcopy && (len >= VHOST_GOODCOPY_LEN || - nvq->upend_idx != nvq->done_idx); + + zcopy_used = zcopy && len >= VHOST_GOODCOPY_LEN + && (nvq->upend_idx + 1) % UIO_MAXIOV != nvq->done_idx + &...
2013 Sep 02
0
[PATCH V3 4/6] vhost_net: determine whether or not to use zerocopy at one time
...-------------------- 1 files changed, 20 insertions(+), 27 deletions(-) diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c index 8a6dd0d..3f89dea 100644 --- a/drivers/vhost/net.c +++ b/drivers/vhost/net.c @@ -404,43 +404,36 @@ static void handle_tx(struct vhost_net *net) iov_length(nvq->hdr, s), hdr_size); break; } - zcopy_used = zcopy && (len >= VHOST_GOODCOPY_LEN || - nvq->upend_idx != nvq->done_idx); + + zcopy_used = zcopy && len >= VHOST_GOODCOPY_LEN + && (nvq->upend_idx + 1) % UIO_MAXIOV != + nvq->...
2013 Aug 30
12
[PATCH V2 0/6] vhost code cleanup and minor enhancement
Hi all: This series tries to unify and simplify the vhost code, especially for zerocopy. With this series, a 5% - 10% improvement in per-cpu throughput was seen during a netperf guest sending test. Please review. Changes from V1: - Fix the zerocopy enabling check by changing the check of upend_idx != done_idx to (upend_idx + 1) % UIO_MAXIOV == done_idx. - Switch to use put_user() in
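The V1-to-V2 change called out in the changelog above is worth spelling out: with a UIO_MAXIOV-slot completion ring, upend_idx != done_idx only means "some completion outstanding", not "ring full". A small illustration of the corrected full-ring test (the helper name is hypothetical, invented for this sketch):

/* Hypothetical helper illustrating the fixed check from the changelog:
 * the ring is full when advancing upend_idx would land on done_idx. */
static bool zerocopy_ring_full(unsigned int upend_idx, unsigned int done_idx)
{
    return (upend_idx + 1) % UIO_MAXIOV == done_idx;
}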
2018 Sep 12
0
[PATCH net-next V2 11/11] vhost_net: batch submitting XDP buffers to underlayer sockets
...A done not in order in lower device driver for some reason. * upend_idx is used to track end of used idx, done_idx is used to track head * of used idx. Once lower device DMA done contiguously, we will signal KVM @@ -444,10 +453,37 @@ static void vhost_net_signal_used(struct vhost_net_virtqueue *nvq) nvq->done_idx = 0; } +static void vhost_tx_batch(struct vhost_net *net, + struct vhost_net_virtqueue *nvq, + struct socket *sock, + struct msghdr *msghdr) +{ + struct tun_msg_ctl ctl = { + .type = TUN_MSG_PTR, + .num = nvq->batched_xdp, + .ptr = nvq->xdp, + }; + int...
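Note how this V2 snippet differs from the earlier posting: the batch count moves out of the packed .type field into a dedicated .num member. A sketch of the V2 control block following the quoted fields (an illustration of the excerpt, not the merged code):

struct tun_msg_ctl ctl = {
    .type = TUN_MSG_PTR,        /* plain message-type tag */
    .num  = nvq->batched_xdp,   /* batch count now carried separately */
    .ptr  = nvq->xdp,           /* batched XDP buffer array */
};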
2018 Sep 07
0
[PATCH net-next 11/11] vhost_net: batch submitting XDP buffers to underlayer sockets
...device driver for some reason. >> * upend_idx is used to track end of used idx, done_idx is used to track head >> * of used idx. Once lower device DMA done contiguously, we will signal KVM >> @@ -444,10 +451,36 @@ static void vhost_net_signal_used(struct vhost_net_virtqueue *nvq) >> nvq->done_idx = 0; >> } >> >> +static void vhost_tx_batch(struct vhost_net *net, >> + struct vhost_net_virtqueue *nvq, >> + struct socket *sock, >> + struct msghdr *msghdr) >> +{ >> + struct tun_msg_ctl ctl = { >...
2013 Aug 16
10
[PATCH 0/6] vhost code cleanup and minor enhancement
Hi all: This series tries to unify and simplify the vhost code, especially for zerocopy. Please review. Thanks Jason Wang (6): vhost_net: make vhost_zerocopy_signal_used() returns void vhost_net: use vhost_add_used_and_signal_n() in vhost_zerocopy_signal_used() vhost: switch to use vhost_add_used_n() vhost_net: determine whether or not to use zerocopy at one time vhost_net: poll vhost