search for: busyloop_intr

Displaying 15 results from an estimated 74 matches for "busyloop_intr".

2019 Apr 25
2
[PATCH net] vhost_net: fix possible infinite loop
When the rx buffer is too small for a packet, we will discard the vq descriptor and retry it for the next packet: while ((sock_len = vhost_net_rx_peek_head_len(net, sock->sk, &busyloop_intr))) { ... /* On overrun, truncate and discard */ if (unlikely(headcount > UIO_MAXIOV)) { iov_iter_init(&msg.msg_iter, READ, vq->iov, 1, 1); err = sock->ops->recvmsg(sock, &msg, 1, MSG_DONTWAIT | MSG_TRUNC); pr_debug("Discarded rx packet: len %zd\n", sock_l...
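The failure mode described above: if the guest keeps posting rx buffers too small for the incoming packets, every iteration takes the discard path and nothing ever counts against the quota that would make handle_rx() yield. A minimal kernel-style sketch of the bounding idea, assuming the existing vhost_exceeds_weight() fairness helper; this is a reading of the proposed fix, not the literal diff:

    /* Count every packet, including discarded ones, against the rx weight
     * so a stream of oversized packets cannot pin the vhost worker. */
    do {
            sock_len = vhost_net_rx_peek_head_len(net, sock->sk, &busyloop_intr);
            if (!sock_len)
                    break;
            total_len += sock_len;
            /* ... fill or truncate-and-discard the packet as before ... */
    } while (likely(!vhost_exceeds_weight(vq, ++recv_pkts, total_len)));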
2019 Apr 26
2
[PATCH net] vhost_net: fix possible infinite loop
...On Thu, Apr 25, 2019 at 03:33:19AM -0400, Jason Wang wrote: >> When the rx buffer is too small for a packet, we will discard the vq >> descriptor and retry it for the next packet: >> >> while ((sock_len = vhost_net_rx_peek_head_len(net, sock->sk, >> &busyloop_intr))) { >> ... >> /* On overrun, truncate and discard */ >> if (unlikely(headcount > UIO_MAXIOV)) { >> iov_iter_init(&msg.msg_iter, READ, vq->iov, 1, 1); >> err = sock->ops->recvmsg(sock, &msg, >> 1, MSG_DONTWAIT | MSG_TRUNC); >>...
2019 May 12
2
[PATCH net] vhost_net: fix possible infinite loop
...> > When the rx buffer is too small for a packet, we will discard the vq > > > > descriptor and retry it for the next packet: > > > > > > > > while ((sock_len = vhost_net_rx_peek_head_len(net, sock->sk, > > > > &busyloop_intr))) { > > > > ... > > > > /* On overrun, truncate and discard */ > > > > if (unlikely(headcount > UIO_MAXIOV)) { > > > > iov_iter_init(&msg.msg_iter, READ, vq->iov, 1, 1); > > > > err = sock->ops->rec...
2019 Apr 25
0
[PATCH net] vhost_net: fix possible infinite loop
On Thu, Apr 25, 2019 at 03:33:19AM -0400, Jason Wang wrote: > When the rx buffer is too small for a packet, we will discard the vq > descriptor and retry it for the next packet: > > while ((sock_len = vhost_net_rx_peek_head_len(net, sock->sk, > &busyloop_intr))) { > ... > /* On overrun, truncate and discard */ > if (unlikely(headcount > UIO_MAXIOV)) { > iov_iter_init(&msg.msg_iter, READ, vq->iov, 1, 1); > err = sock->ops->recvmsg(sock, &msg, > 1, MSG_DONTWAIT | MSG_TRUNC); > pr_debug("Discarded...
2018 Jul 03
11
[PATCH v2 net-next 0/4] vhost_net: Avoid vq kicks during busyloop
Under heavy load, vhost tx busypoll tends not to suppress vq kicks, which causes poor guest tx performance. The detailed scenario is described in the commit log of patch 2. Rx does not seem to have as serious a problem, but for consistency I made a similar change on rx to avoid rx wakeups (patch 3). Additionally, patch 4 avoids rx kicks under heavy load during busypoll. Tx performance is greatly improved
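The suppression technique the series describes, sketched below: keep guest notifications disabled for the whole busy-poll window and re-arm them only when going back to sleep, so the guest has no reason to kick while the host is already polling. Function names follow drivers/vhost/net.c; this illustrates the approach rather than reproducing the patch:

    /* Busy-poll window with kicks suppressed (illustrative, not the patch). */
    vhost_disable_notify(&net->dev, vq);    /* guest need not kick us now */
    endtime = busy_clock() + busyloop_timeout;
    while (vhost_can_busy_poll(endtime)) {
            if (!vhost_vq_avail_empty(&net->dev, vq))
                    break;                  /* found work without any kick */
            cpu_relax();
    }
    /* Re-arm before sleeping; vhost_enable_notify() returns true if a new
     * buffer raced in, in which case we keep processing instead of idling. */
    if (vhost_enable_notify(&net->dev, vq))
            vhost_poll_queue(&vq->poll);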
2019 May 05
0
[PATCH net] vhost_net: fix possible infinite loop
...-0400, Jason Wang wrote: >>> When the rx buffer is too small for a packet, we will discard the vq >>> descriptor and retry it for the next packet: >>> >>> while ((sock_len = vhost_net_rx_peek_head_len(net, sock->sk, >>> &busyloop_intr))) { >>> ... >>> /* On overrun, truncate and discard */ >>> if (unlikely(headcount > UIO_MAXIOV)) { >>> iov_iter_init(&msg.msg_iter, READ, vq->iov, 1, 1); >>> err = sock->ops->recvmsg(sock, &msg, >>> ...
2018 Jul 03
2
[PATCH v2 net-next 4/4] vhost_net: Avoid rx vring kicks during busyloop
..._virtqueue *rvq = &rnvq->vq; > struct vhost_virtqueue *tvq = &tnvq->vq; > unsigned long uninitialized_var(endtime); > int len = peek_head_len(rnvq, sk); > @@ -677,7 +678,8 @@ static int vhost_net_rx_peek_head_len(struct vhost_net *net, struct sock *sk, > *busyloop_intr = true; > break; > } > - if (sk_has_rx_data(sk) || > + if ((sk_has_rx_data(sk) && > + !vhost_vq_avail_empty(&net->dev, rvq)) || > !vhost_vq_avail_empty(&net->dev, tvq)) > break; > cpu_relax(); > @@ -827,7 +82...
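Pieced together, the quoted hunk reads as below: socket data alone no longer ends the busy loop; the rx avail ring must also be non-empty, otherwise breaking out only leads to a wakeup with no buffers to fill. A reconstruction of the loop body from the diff above:

    while (vhost_can_busy_poll(endtime)) {
            if (vhost_has_work(&net->dev)) {
                    *busyloop_intr = true;  /* interrupted by other vhost work */
                    break;
            }
            if ((sk_has_rx_data(sk) &&
                 !vhost_vq_avail_empty(&net->dev, rvq)) ||  /* rx can progress */
                !vhost_vq_avail_empty(&net->dev, tvq))      /* or tx has buffers */
                    break;
            cpu_relax();
    }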
2019 May 13
0
[PATCH net] vhost_net: fix possible infinite loop
...>>>>> When the rx buffer is too small for a packet, we will discard the vq >>>>> descriptor and retry it for the next packet: >>>>> >>>>> while ((sock_len = vhost_net_rx_peek_head_len(net, sock->sk, >>>>> &busyloop_intr))) { >>>>> ... >>>>> /* On overrun, truncate and discard */ >>>>> if (unlikely(headcount > UIO_MAXIOV)) { >>>>> iov_iter_init(&msg.msg_iter, READ, vq->iov, 1, 1); >>>>> err = sock->op...
2020 Jun 03
1
[PATCH RFC 08/13] vhost/net: convert to new API: heads->bufs
...et_busy_poll(struct vhost_net *net, > > static int vhost_net_tx_get_vq_desc(struct vhost_net *net, > struct vhost_net_virtqueue *tnvq, > + struct vhost_buf *buf, > unsigned int *out_num, unsigned int *in_num, > struct msghdr *msghdr, bool *busyloop_intr) > { > @@ -565,10 +578,10 @@ static int vhost_net_tx_get_vq_desc(struct vhost_net *net, > struct vhost_virtqueue *rvq = &rnvq->vq; > struct vhost_virtqueue *tvq = &tnvq->vq; > > - int r = vhost_get_vq_desc(tvq, tvq->iov, ARRAY_SIZE(tvq->iov), > -...
2019 May 14
1
[PATCH net] vhost_net: fix possible infinite loop
...for a packet, we will discard the vq > > > > > > descriptor and retry it for the next packet: > > > > > > > > > > > > while ((sock_len = vhost_net_rx_peek_head_len(net, sock->sk, > > > > > > &busyloop_intr))) { > > > > > > ... > > > > > > /* On overrun, truncate and discard */ > > > > > > if (unlikely(headcount > UIO_MAXIOV)) { > > > > > > iov_iter_init(&msg.msg_iter, READ, vq->iov, 1, 1); > > ...
2020 Jun 02
0
[PATCH RFC 08/13] vhost/net: convert to new API: heads->bufs
...558,6 +570,7 @@ static void vhost_net_busy_poll(struct vhost_net *net, static int vhost_net_tx_get_vq_desc(struct vhost_net *net, struct vhost_net_virtqueue *tnvq, + struct vhost_buf *buf, unsigned int *out_num, unsigned int *in_num, struct msghdr *msghdr, bool *busyloop_intr) { @@ -565,10 +578,10 @@ static int vhost_net_tx_get_vq_desc(struct vhost_net *net, struct vhost_virtqueue *rvq = &rnvq->vq; struct vhost_virtqueue *tvq = &tnvq->vq; - int r = vhost_get_vq_desc(tvq, tvq->iov, ARRAY_SIZE(tvq->iov), - out_num, in_num, NULL, NULL); + in...
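The visible change in this hunk: vhost_net_tx_get_vq_desc() grows a struct vhost_buf *buf argument, and the vhost_get_vq_desc() call that returned a head index is replaced by a variant that fills the buf. The replacement's name is cut off at "+ in..." in the snippet; vhost_get_avail_buf below is an assumption based on the series title, not a confirmed identifier:

    /* Sketch of the converted fetch; vhost_get_avail_buf() is assumed. */
    struct vhost_virtqueue *tvq = &tnvq->vq;
    int r = vhost_get_avail_buf(tvq, buf, tvq->iov, ARRAY_SIZE(tvq->iov),
                                out_num, in_num, NULL, NULL);
    /* ... on failure, busy poll and retry as the old code did ... */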
2020 Jun 01
0
[PATCH net-next v8 7/7] net: vhost: make busyloop_intr more accurate
...+487,8 @@ static void vhost_net_busy_poll(struct > > > vhost_net *net, > > > endtime = busy_clock() + busyloop_timeout; > > > while (vhost_can_busy_poll(endtime)) { > > > - if (vhost_has_work(&net->dev)) { > > > - *busyloop_intr = true; > > > + if (vhost_has_work(&net->dev)) > > > break; > > > - } > > > if ((sock_has_rx_data(sock) && > > > !vhost_vq_avail_empty(&net->dev, rvq)) || > > > @@ -513...
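Cleaned up, the hunk says: vhost_has_work() still terminates the busy loop, but it no longer sets *busyloop_intr at that point, so the flag only reports interruptions the caller should act on. A reconstruction of the resulting loop from the quoted diff:

    endtime = busy_clock() + busyloop_timeout;
    while (vhost_can_busy_poll(endtime)) {
            if (vhost_has_work(&net->dev))
                    break;  /* stop polling, but not flagged as interrupted */
            if ((sock_has_rx_data(sock) &&
                 !vhost_vq_avail_empty(&net->dev, rvq)) ||
                !vhost_vq_avail_empty(&net->dev, tvq))
                    break;
            cpu_relax();
    }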
2018 Jul 03
0
[PATCH v2 net-next 3/4] vhost_net: Avoid rx queue wake-ups during busypoll
...ost/net.c @@ -653,7 +653,8 @@ static void vhost_rx_signal_used(struct vhost_net_virtqueue *nvq) nvq->done_idx = 0; } -static int vhost_net_rx_peek_head_len(struct vhost_net *net, struct sock *sk) +static int vhost_net_rx_peek_head_len(struct vhost_net *net, struct sock *sk, + bool *busyloop_intr) { struct vhost_net_virtqueue *rnvq = &net->vqs[VHOST_NET_VQ_RX]; struct vhost_net_virtqueue *tnvq = &net->vqs[VHOST_NET_VQ_TX]; @@ -671,11 +672,16 @@ static int vhost_net_rx_peek_head_len(struct vhost_net *net, struct sock *sk) preempt_disable(); endtime = busy_clock() + t...
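The signature change threads a bool *busyloop_intr out of the rx peek helper so the caller can tell "busy loop cut short by other work" apart from "genuinely idle". A caller-side sketch of the intended use; the requeue-vs-enable split is an assumption drawn from the series description, with vhost_poll_queue() and vhost_net_enable_vq() being existing vhost primitives:

    bool busyloop_intr = false;
    int sock_len = vhost_net_rx_peek_head_len(net, sock->sk, &busyloop_intr);

    if (!sock_len) {
            if (busyloop_intr)
                    vhost_poll_queue(&vq->poll);    /* other work pending: come back soon */
            else
                    vhost_net_enable_vq(net, vq);   /* truly idle: wait for a wakeup */
    }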
2018 Jul 20
12
[PATCH net-next 0/9] TX used ring batched updating for vhost
Hi: This series implements batched updating of the used ring for TX. This helps to reduce cache contention on the used ring. The idea is to first split the datacopy path from zerocopy, and do batching only for datacopy. This is because zerocopy already has its own batching. TX PPS was increased 25.8% and Netperf TCP does not show obvious differences. The split of the datapath will also be helpful for
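The batching the cover letter describes, in sketch form: used entries accumulate in vq->heads up to nvq->done_idx and are published with a single vhost_add_used_and_signal_n() call, instead of one add-and-signal per packet. Modeled on the rx-side helper already in drivers/vhost/net.c; the tx conversion in this series follows the same shape:

    static void vhost_net_signal_used(struct vhost_net *net,
                                      struct vhost_net_virtqueue *nvq)
    {
            struct vhost_virtqueue *vq = &nvq->vq;

            if (!nvq->done_idx)
                    return;         /* nothing batched since the last flush */
            vhost_add_used_and_signal_n(&net->dev, vq, vq->heads, nvq->done_idx);
            nvq->done_idx = 0;
    }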
2018 Aug 01
5
[PATCH net-next v7 0/4] net: vhost: improve performance when enable busyloop
From: Tonghao Zhang <xiangxia.m.yue at gmail.com> These patches improve guest receive performance. On the handle_tx side, we poll the sock receive queue at the same time; handle_rx does the same. For more performance numbers, see patch 4. v6->v7: fix issues and rebase the code: 1. on tx, busypoll will vhost_net_disable/enable_vq the rx vq. [This is suggested by Toshiaki Makita
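The tx-side rx polling the cover letter describes, sketched from hunks quoted elsewhere in these results: while busy-polling for tx, also watch the socket receive queue and the rx avail ring so rx work is picked up without a wakeup. sock_has_rx_data() and the disable/enable of the rx vq appear in the series; how they fit together here is an assumption:

    /* Inside the tx busy loop (assembled sketch, not a literal patch): */
    if (sock_has_rx_data(sock) &&
        !vhost_vq_avail_empty(&net->dev, rvq)) {
            vhost_net_disable_vq(net, rvq); /* per changelog item 1 above */
            /* ... let handle_rx drain the socket, then re-enable ... */
            vhost_net_enable_vq(net, rvq);
    }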