Willem de Bruijn
2017-Sep-28 00:25 UTC
[PATCH net-next] vhost_net: do not stall on zerocopy depletion
From: Willem de Bruijn <willemb at google.com>

Vhost-net has a hard limit on the number of zerocopy skbs in flight.
When reached, transmission stalls. Stalls cause latency, as well as
head-of-line blocking of other flows that do not use zerocopy.

Instead of stalling, revert to copy-based transmission.

Tested by sending two udp flows from guest to host, one with payload
of VHOST_GOODCOPY_LEN, the other too small for zerocopy (1B). The
large flow is redirected to a netem instance with 1MBps rate limit
and deep 1000 entry queue.

  modprobe ifb
  ip link set dev ifb0 up
  tc qdisc add dev ifb0 root netem limit 1000 rate 1MBit

  tc qdisc add dev tap0 ingress
  tc filter add dev tap0 parent ffff: protocol ip \
      u32 match ip dport 8000 0xffff \
      action mirred egress redirect dev ifb0

Before the delay, both flows process around 80K pps. With the delay,
before this patch, both process around 400. After this patch, the
large flow is still rate limited, while the small one reverts to its
original rate. See also the discussion in the first link below.

The limit in vhost_exceeds_maxpend must be carefully chosen. With
vq->num >> 1, the flows remain correlated. This value happens to
correspond to VHOST_MAX_PEND for vq->num == 256. Allow smaller
fractions and ensure correctness also for much smaller values of
vq->num, by testing the min() of both explicitly. See also the
discussion in the second link below.

Link: http://lkml.kernel.org/r/CAF=yD-+Wk9sc9dXMUq1+x_hh=3ThTXa6BnZkygP3tgVpjbp93g at mail.gmail.com
Link: http://lkml.kernel.org/r/20170819064129.27272-1-den at klaipeden.com
Signed-off-by: Willem de Bruijn <willemb at google.com>
---
 drivers/vhost/net.c | 14 ++++----------
 1 file changed, 4 insertions(+), 10 deletions(-)

diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 58585ec8699e..50758602ae9d 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -436,8 +436,8 @@ static bool vhost_exceeds_maxpend(struct vhost_net *net)
 	struct vhost_net_virtqueue *nvq = &net->vqs[VHOST_NET_VQ_TX];
 	struct vhost_virtqueue *vq = &nvq->vq;
 
-	return (nvq->upend_idx + vq->num - VHOST_MAX_PEND) % UIO_MAXIOV
-		== nvq->done_idx;
+	return (nvq->upend_idx + UIO_MAXIOV - nvq->done_idx) % UIO_MAXIOV >
+	       min(VHOST_MAX_PEND, vq->num >> 2);
 }
 
 /* Expects to be always run from workqueue - which acts as
@@ -480,12 +480,6 @@ static void handle_tx(struct vhost_net *net)
 		if (zcopy)
 			vhost_zerocopy_signal_used(net, vq);
 
-		/* If more outstanding DMAs, queue the work.
-		 * Handle upend_idx wrap around
-		 */
-		if (unlikely(vhost_exceeds_maxpend(net)))
-			break;
-
 		head = vhost_net_tx_get_vq_desc(net, vq, vq->iov,
 						ARRAY_SIZE(vq->iov),
 						&out, &in);
@@ -509,6 +503,7 @@ static void handle_tx(struct vhost_net *net)
 		len = iov_length(vq->iov, out);
 		iov_iter_init(&msg.msg_iter, WRITE, vq->iov, out, len);
 		iov_iter_advance(&msg.msg_iter, hdr_size);
+
 		/* Sanity check */
 		if (!msg_data_left(&msg)) {
 			vq_err(vq, "Unexpected header len for TX: "
@@ -519,8 +514,7 @@ static void handle_tx(struct vhost_net *net)
 		len = msg_data_left(&msg);
 
 		zcopy_used = zcopy && len >= VHOST_GOODCOPY_LEN
-			     && (nvq->upend_idx + 1) % UIO_MAXIOV !=
-				nvq->done_idx
+			     && !vhost_exceeds_maxpend(net)
 			     && vhost_net_tx_select_zcopy(net);
 
 		/* use msg_control to pass vhost zerocopy ubuf info to skb */
-- 
2.14.2.822.g60be5d43e6-goog
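For readers not steeped in vhost's bookkeeping, the new check counts in-flight
zerocopy buffers as the circular distance from done_idx (oldest outstanding
completion slot) to upend_idx (next slot to be filled) in a UIO_MAXIOV-sized
index space, and compares it against the smaller of VHOST_MAX_PEND and a
quarter of the ring size. Below is a minimal standalone sketch of that
arithmetic in plain C; the helper names and free-standing parameters are
illustrative only, not the driver's actual interfaces.

  #include <stdbool.h>

  #define UIO_MAXIOV      1024    /* size of the circular index space */
  #define VHOST_MAX_PEND   128    /* hard cap on in-flight zerocopy buffers */

  /* Circular distance from done_idx (oldest outstanding) to upend_idx
   * (next free slot): the number of zerocopy buffers still in flight.
   */
  static unsigned int zc_in_flight(unsigned int upend_idx, unsigned int done_idx)
  {
          return (upend_idx + UIO_MAXIOV - done_idx) % UIO_MAXIOV;
  }

  /* Mirror of the patched vhost_exceeds_maxpend() logic: once in-flight
   * buffers exceed min(VHOST_MAX_PEND, vq_num / 4), later packets fall
   * back to copy-based transmission instead of stalling.
   */
  static bool exceeds_maxpend(unsigned int upend_idx, unsigned int done_idx,
                              unsigned int vq_num)
  {
          unsigned int limit = VHOST_MAX_PEND;

          if (vq_num / 4 < limit)
                  limit = vq_num / 4;

          return zc_in_flight(upend_idx, done_idx) > limit;
  }

For example, with vq_num == 256 the limit is min(128, 64) == 64 outstanding
zerocopy buffers; smaller rings scale the limit down proportionally.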
Willem de Bruijn
2017-Sep-28 00:33 UTC
[PATCH net-next] vhost_net: do not stall on zerocopy depletion
On Wed, Sep 27, 2017 at 8:25 PM, Willem de Bruijn
<willemdebruijn.kernel at gmail.com> wrote:
> From: Willem de Bruijn <willemb at google.com>
>
> Vhost-net has a hard limit on the number of zerocopy skbs in flight.
> When reached, transmission stalls. Stalls cause latency, as well as
> head-of-line blocking of other flows that do not use zerocopy.
>
> Instead of stalling, revert to copy-based transmission.
>
> [...]
>
> Link: http://lkml.kernel.org/r/CAF=yD-+Wk9sc9dXMUq1+x_hh=3ThTXa6BnZkygP3tgVpjbp93g at mail.gmail.com

From the same discussion thread: it would be good to expose stats on
the number of zerocopy skbs sent and the number completed without copy.

To test this patch, I also added ethtool stats to tun and extended
them with two zerocopy counters, then had tun override the
uarg->callback with its own and update the counters before calling
the original callback.

The one useful datapoint I did not get out of that is why skbs would
revert to non-zerocopy: because of size, vhost_exceeds_maxpend or
vhost_net_tx_select_zcopy.

The simplistic implementation, with an extra indirect function call
and without percpu counters, is also not suitable for submission as is.
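For concreteness, here is a rough sketch of the callback-wrapping idea
described above. The struct and function names (tun_zc_stats, tun_zc_track,
tun_zc_callback) are invented for illustration, the counters are plain globals
rather than the percpu/ethtool plumbing an upstreamable version would need,
and this is not the code that was actually tested.

  #include <linux/skbuff.h>

  /* Illustrative only: one global counter block and a single saved
   * original callback.  This assumes all tracked uargs share the same
   * completion callback, as they do when vhost_net is the only
   * zerocopy sender feeding this tun device.
   */
  struct tun_zc_stats {
          unsigned long zc_sent;          /* skbs submitted as zerocopy */
          unsigned long zc_completed;     /* completions that stayed zerocopy */
          void (*orig_callback)(struct ubuf_info *uarg, bool success);
  };

  static struct tun_zc_stats tun_zc;

  /* Wrapper installed in place of uarg->callback: count the completion,
   * then hand off to the original (vhost) completion handler.
   */
  static void tun_zc_callback(struct ubuf_info *uarg, bool success)
  {
          if (success)
                  tun_zc.zc_completed++;

          tun_zc.orig_callback(uarg, success);
  }

  /* Hook where tun receives an skb carrying a zerocopy uarg. */
  static void tun_zc_track(struct ubuf_info *uarg)
  {
          tun_zc.zc_sent++;
          tun_zc.orig_callback = uarg->callback;
          uarg->callback = tun_zc_callback;
  }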
Jason Wang
2017-Sep-28 07:41 UTC
[PATCH net-next] vhost_net: do not stall on zerocopy depletion
On 2017年09月28日 08:25, Willem de Bruijn wrote:
> From: Willem de Bruijn <willemb at google.com>
>
> Vhost-net has a hard limit on the number of zerocopy skbs in flight.
> When reached, transmission stalls. Stalls cause latency, as well as
> head-of-line blocking of other flows that do not use zerocopy.
>
> Instead of stalling, revert to copy-based transmission.
>
> [...]
>
> The limit in vhost_exceeds_maxpend must be carefully chosen. With
> vq->num >> 1, the flows remain correlated. This value happens to
> correspond to VHOST_MAX_PEND for vq->num == 256.

Have you tested e.g. vq->num = 512 or 1024?

> Allow smaller
> fractions and ensure correctness also for much smaller values of
> vq->num, by testing the min() of both explicitly. See also the
> discussion in the second link below.
>
> [...]
>
> @@ -509,6 +503,7 @@ static void handle_tx(struct vhost_net *net)
>  		len = iov_length(vq->iov, out);
>  		iov_iter_init(&msg.msg_iter, WRITE, vq->iov, out, len);
>  		iov_iter_advance(&msg.msg_iter, hdr_size);
> +

Looks unnecessary. Other looks good.

>  		/* Sanity check */
>  		if (!msg_data_left(&msg)) {
>  			vq_err(vq, "Unexpected header len for TX: "
> [...]
Willem de Bruijn
2017-Sep-28 16:05 UTC
[PATCH net-next] vhost_net: do not stall on zerocopy depletion
On Thu, Sep 28, 2017 at 3:41 AM, Jason Wang <jasowang at redhat.com> wrote:
>
> On 2017年09月28日 08:25, Willem de Bruijn wrote:
>>
>> From: Willem de Bruijn <willemb at google.com>
>>
>> Vhost-net has a hard limit on the number of zerocopy skbs in flight.
>> When reached, transmission stalls. Stalls cause latency, as well as
>> head-of-line blocking of other flows that do not use zerocopy.
>>
>> Instead of stalling, revert to copy-based transmission.
>>
>> [...]
>>
>> The limit in vhost_exceeds_maxpend must be carefully chosen. With
>> vq->num >> 1, the flows remain correlated. This value happens to
>> correspond to VHOST_MAX_PEND for vq->num == 256.
>
> Have you tested e.g. vq->num = 512 or 1024?

I did test with 1024 previously, but let me run that again with this
patch applied.

>> [...]
>>
>> @@ -509,6 +503,7 @@ static void handle_tx(struct vhost_net *net)
>>  		len = iov_length(vq->iov, out);
>>  		iov_iter_init(&msg.msg_iter, WRITE, vq->iov, out, len);
>>  		iov_iter_advance(&msg.msg_iter, hdr_size);
>> +
>
> Looks unnecessary. Other looks good.

Oops, indeed. Thanks.
Michael S. Tsirkin
2017-Sep-29 19:38 UTC
[PATCH net-next] vhost_net: do not stall on zerocopy depletion
On Wed, Sep 27, 2017 at 08:25:56PM -0400, Willem de Bruijn wrote:
> From: Willem de Bruijn <willemb at google.com>
>
> Vhost-net has a hard limit on the number of zerocopy skbs in flight.
> When reached, transmission stalls. Stalls cause latency, as well as
> head-of-line blocking of other flows that do not use zerocopy.
>
> Instead of stalling, revert to copy-based transmission.
>
> [...]
>
> Before the delay, both flows process around 80K pps. With the delay,
> before this patch, both process around 400. After this patch, the
> large flow is still rate limited, while the small one reverts to its
> original rate. See also the discussion in the first link below.
>
> [...]
>
> Signed-off-by: Willem de Bruijn <willemb at google.com>

I'd like to see the effect on the non rate limited case though.
If the guest is quick, won't we have lots of copies then?

> ---
>  drivers/vhost/net.c | 14 ++++----------
>  1 file changed, 4 insertions(+), 10 deletions(-)
>
> [...]
> --
> 2.14.2.822.g60be5d43e6-goog
Willem de Bruijn
2017-Sep-30 01:25 UTC
[PATCH net-next] vhost_net: do not stall on zerocopy depletion
On Fri, Sep 29, 2017 at 3:38 PM, Michael S. Tsirkin <mst at redhat.com> wrote:
> On Wed, Sep 27, 2017 at 08:25:56PM -0400, Willem de Bruijn wrote:
>> From: Willem de Bruijn <willemb at google.com>
>>
>> Vhost-net has a hard limit on the number of zerocopy skbs in flight.
>> When reached, transmission stalls. Stalls cause latency, as well as
>> head-of-line blocking of other flows that do not use zerocopy.
>>
>> Instead of stalling, revert to copy-based transmission.
>>
>> [...]
>>
>> Signed-off-by: Willem de Bruijn <willemb at google.com>
>
> I'd like to see the effect on the non rate limited case though.
> If the guest is quick, won't we have lots of copies then?

Yes, but not significantly more than without this patch. I ran 1, 10
and 100 flow tcp_stream throughput tests from a sender in the guest
to a receiver in the host.

To answer the other benchmark question first: I did not see anything
noteworthy when increasing vq->num from 256 to 1024.

With 1 and 10 flows, without this patch all packets use zerocopy.
With the patch, less than 1% eschews zerocopy.

With 100 flows, even without this patch, 90+% of packets are copied.
Some zerocopy packets from vhost_net fail this test in tun.c

  if (iov_iter_npages(&i, INT_MAX) <= MAX_SKB_FRAGS)

as they are generated with up to 21 frags. I'm not sure yet why, or
what the fraction of such packets is. But this in turn can disable
zcopy_used in vhost_net_tx_select_zcopy for a larger share of packets:

  return !net->tx_flush &&
         net->tx_packets / 64 >= net->tx_zcopy_err;

Because the numbers of copied and zerocopy packets are the same before
and after the patch, so are the overall throughput numbers.
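As a point of reference, MAX_SKB_FRAGS is typically 17 with 4 KB pages, so a
21-frag packet fails the quoted iov_iter_npages() test and gets copied. The
quoted selection heuristic then compounds this: zerocopy stays selected only
while completions that had to be copied remain at or below roughly 1/64th of
packets sent. A standalone restatement of that heuristic, using free-standing
counters instead of the vhost_net fields, purely for illustration:

  #include <stdbool.h>

  /* Mirrors the quoted vhost_net_tx_select_zcopy() logic on plain
   * counters: keep using zerocopy only while zerocopy sends that
   * degraded to copies (tx_zcopy_err) stay at or below tx_packets / 64,
   * and no TX flush is in progress.
   */
  static bool select_zcopy(unsigned long tx_packets,
                           unsigned long tx_zcopy_err, bool tx_flush)
  {
          return !tx_flush && tx_packets / 64 >= tx_zcopy_err;
  }

For example, after 6400 packets, more than 100 degraded completions would
disable zerocopy for subsequent packets until the ratio recovers.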
kbuild test robot
2017-Sep-30 22:12 UTC
[PATCH net-next] vhost_net: do not stall on zerocopy depletion
Hi Willem,

[auto build test WARNING on net-next/master]

url:    https://github.com/0day-ci/linux/commits/Willem-de-Bruijn/vhost_net-do-not-stall-on-zerocopy-depletion/20171001-054709
config: x86_64-randconfig-x002-201740 (attached as .config)
compiler: gcc-6 (Debian 6.2.0-3) 6.2.0 20160901
reproduce:
        # save the attached .config to linux build tree
        make ARCH=x86_64

All warnings (new ones prefixed by >>):

   In file included from include/linux/list.h:8:0,
                    from include/linux/wait.h:6,
                    from include/linux/eventfd.h:12,
                    from drivers/vhost/net.c:10:
   drivers/vhost/net.c: In function 'vhost_exceeds_maxpend':
   include/linux/kernel.h:772:16: warning: comparison of distinct pointer types lacks a cast
      (void) (&min1 == &min2);   \
                   ^
   include/linux/kernel.h:775:2: note: in expansion of macro '__min'
     __min(typeof(x), typeof(y),   \
     ^~~~~
>> drivers/vhost/net.c:440:9: note: in expansion of macro 'min'
          min(VHOST_MAX_PEND, vq->num >> 2);
          ^~~

vim +/min +440 drivers/vhost/net.c

   433	
   434	static bool vhost_exceeds_maxpend(struct vhost_net *net)
   435	{
   436		struct vhost_net_virtqueue *nvq = &net->vqs[VHOST_NET_VQ_TX];
   437		struct vhost_virtqueue *vq = &nvq->vq;
   438	
   439		return (nvq->upend_idx + UIO_MAXIOV - nvq->done_idx) % UIO_MAXIOV >
 > 440		       min(VHOST_MAX_PEND, vq->num >> 2);
   441	}
   442	

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation
-------------- next part --------------
A non-text attachment was scrubbed...
Name: .config.gz
Type: application/gzip
Size: 25413 bytes
Desc: not available
URL: <http://lists.linuxfoundation.org/pipermail/virtualization/attachments/20171001/8b920379/attachment-0001.bin>
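The warning arises because VHOST_MAX_PEND expands to a plain int constant
while vq->num >> 2 is unsigned int, and the kernel's type-checking min()
macro refuses to compare operands of different types. One conventional way
to address this class of warning is min_t(), which casts both operands to a
named type; whether this is the exact fix that was ultimately applied is not
shown in this thread.

  /* Sketch of a min_t()-based variant of the flagged line; assumes the
   * surrounding function body shown in the report above.
   */
  static bool vhost_exceeds_maxpend(struct vhost_net *net)
  {
          struct vhost_net_virtqueue *nvq = &net->vqs[VHOST_NET_VQ_TX];
          struct vhost_virtqueue *vq = &nvq->vq;

          /* min_t() forces both operands to unsigned int, avoiding the
           * "comparison of distinct pointer types" warning from min().
           */
          return (nvq->upend_idx + UIO_MAXIOV - nvq->done_idx) % UIO_MAXIOV >
                 min_t(unsigned int, VHOST_MAX_PEND, vq->num >> 2);
  }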
kbuild test robot
2017-Sep-30 22:20 UTC
[PATCH net-next] vhost_net: do not stall on zerocopy depletion
Hi Willem,

[auto build test WARNING on net-next/master]

url:    https://github.com/0day-ci/linux/commits/Willem-de-Bruijn/vhost_net-do-not-stall-on-zerocopy-depletion/20171001-054709
config: tile-allyesconfig (attached as .config)
compiler: tilegx-linux-gcc (GCC) 4.6.2
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        make.cross ARCH=tile

All warnings (new ones prefixed by >>):

   drivers/vhost/net.c: In function 'vhost_exceeds_maxpend':
>> drivers/vhost/net.c:440:9: warning: comparison of distinct pointer types lacks a cast [enabled by default]

vim +440 drivers/vhost/net.c

   433	
   434	static bool vhost_exceeds_maxpend(struct vhost_net *net)
   435	{
   436		struct vhost_net_virtqueue *nvq = &net->vqs[VHOST_NET_VQ_TX];
   437		struct vhost_virtqueue *vq = &nvq->vq;
   438	
   439		return (nvq->upend_idx + UIO_MAXIOV - nvq->done_idx) % UIO_MAXIOV >
 > 440		       min(VHOST_MAX_PEND, vq->num >> 2);
   441	}
   442	

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation
-------------- next part --------------
A non-text attachment was scrubbed...
Name: .config.gz
Type: application/gzip
Size: 50505 bytes
Desc: not available
URL: <http://lists.linuxfoundation.org/pipermail/virtualization/attachments/20171001/66a70af7/attachment-0001.bin>
kbuild test robot
2017-Oct-01 00:09 UTC
[PATCH net-next] vhost_net: do not stall on zerocopy depletion
Hi Willem,

[auto build test WARNING on net-next/master]

url:    https://github.com/0day-ci/linux/commits/Willem-de-Bruijn/vhost_net-do-not-stall-on-zerocopy-depletion/20171001-054709
reproduce:
        # apt-get install sparse
        make ARCH=x86_64 allmodconfig
        make C=1 CF=-D__CHECK_ENDIAN__

sparse warnings: (new ones prefixed by >>)

vim +440 drivers/vhost/net.c

   433	
   434	static bool vhost_exceeds_maxpend(struct vhost_net *net)
   435	{
   436		struct vhost_net_virtqueue *nvq = &net->vqs[VHOST_NET_VQ_TX];
   437		struct vhost_virtqueue *vq = &nvq->vq;
   438	
   439		return (nvq->upend_idx + UIO_MAXIOV - nvq->done_idx) % UIO_MAXIOV >
 > 440		       min(VHOST_MAX_PEND, vq->num >> 2);
   441	}
   442	

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation
Michael S. Tsirkin
2017-Oct-01 03:20 UTC
[PATCH net-next] vhost_net: do not stall on zerocopy depletion
On Sun, Oct 01, 2017 at 08:09:30AM +0800, kbuild test robot wrote:
> Hi Willem,
>
> [auto build test WARNING on net-next/master]
>
> url:    https://github.com/0day-ci/linux/commits/Willem-de-Bruijn/vhost_net-do-not-stall-on-zerocopy-depletion/20171001-054709
> reproduce:
>         # apt-get install sparse
>         make ARCH=x86_64 allmodconfig
>         make C=1 CF=-D__CHECK_ENDIAN__

BTW __CHECK_ENDIAN__ is the default now, I think you can drop it from
your scripts.

>
> sparse warnings: (new ones prefixed by >>)
>
> vim +440 drivers/vhost/net.c
>
>    433	
>    434	static bool vhost_exceeds_maxpend(struct vhost_net *net)
>    435	{
>    436		struct vhost_net_virtqueue *nvq = &net->vqs[VHOST_NET_VQ_TX];
>    437		struct vhost_virtqueue *vq = &nvq->vq;
>    438	
>    439		return (nvq->upend_idx + UIO_MAXIOV - nvq->done_idx) % UIO_MAXIOV >
>  > 440		       min(VHOST_MAX_PEND, vq->num >> 2);
>    441	}
>    442	
>
> ---
> 0-DAY kernel test infrastructure                Open Source Technology Center
> https://lists.01.org/pipermail/kbuild-all                   Intel Corporation