search for: virtnet_napi_enable

Displaying 20 results from an estimated 187 matches for "virtnet_napi_enable".

2013 Dec 27
1
[PATCH net-next 2/3] virtio-net: use per-receive queue page frag alloc for mergeable bufs
..._queue_pairs) > > /* Make sure we have some buffers: if oom use wq. */ > > if (!try_fill_recv(&vi->rq[i], GFP_KERNEL)) > > schedule_delayed_work(&vi->refill, 0); > > virtnet_napi_enable(&vi->rq[i]); > > > > > > What if the workqueue is scheduled _before_ the call to virtnet_napi_enable(&vi->rq[i]) ? > > Then napi_disable() in refill_work() will busy wait until napi is > enabled by virtnet_napi_enable() which looks safe. Looks like the real...
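
The safety argument in this reply hinges on the order of operations in refill_work(). A minimal sketch of that function as it looked around this series (simplified, not a verbatim copy of the driver): the napi_disable() at the top of the per-queue loop busy-waits until the matching virtnet_napi_enable() in the open path has run, so an early-scheduled refill cannot race with virtnet_open() on the same queue.

    static void refill_work(struct work_struct *work)
    {
        struct virtnet_info *vi =
            container_of(work, struct virtnet_info, refill.work);
        bool still_empty;
        int i;

        for (i = 0; i < vi->curr_queue_pairs; i++) {
            struct receive_queue *rq = &vi->rq[i];

            /* If this work was scheduled before virtnet_open() reached
             * virtnet_napi_enable(), napi_disable() spins until NAPI is
             * enabled, serializing the two refill attempts. */
            napi_disable(&rq->napi);
            still_empty = !try_fill_recv(rq, GFP_KERNEL);
            virtnet_napi_enable(rq);

            /* We may still be out of buffers; retry later. */
            if (still_empty)
                schedule_delayed_work(&vi->refill, HZ / 2);
        }
    }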
2013 Dec 26
2
[PATCH net-next 2/3] virtio-net: use per-receive queue page frag alloc for mergeable bufs
...{ if (i < vi->curr_queue_pairs) /* Make sure we have some buffers: if oom use wq. */ if (!try_fill_recv(&vi->rq[i], GFP_KERNEL)) schedule_delayed_work(&vi->refill, 0); virtnet_napi_enable(&vi->rq[i]); What if the workqueue is scheduled _before_ the call to virtnet_napi_enable(&vi->rq[i]) ? refill_work() will happily conflict with another cpu, two cpus could call try_fill_recv() at the same time, or worse napi_enable() would crash. I do not have time to make a full...
2017 Apr 02
5
[PATCH net-next 0/3] virtio-net tx napi
From: Willem de Bruijn <willemb at google.com> Add napi for virtio-net transmit completion processing. Based on previous patchsets by Jason Wang: [RFC V7 PATCH 0/7] enable tx interrupts for virtio-net http://lkml.iu.edu/hypermail/linux/kernel/1505.3/00245.html Changes: RFC -> v1: - dropped vhost interrupt moderation patch: not needed and likely expensive at light
2011 Feb 10
2
[PATCH] virtio_net: Add schedule check to napi_enable call
...c | 27 ++++++++++++++++----------- 1 file changed, 16 insertions(+), 11 deletions(-) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -446,6 +446,20 @@ static void skb_recv_done(struct virtque } } +static void virtnet_napi_enable(struct virtnet_info *vi) +{ + napi_enable(&vi->napi); + + /* If all buffers were filled by other side before we napi_enabled, we + * won't get another interrupt, so process any outstanding packets + * now. virtnet_poll wants re-enable the queue, so we disable here. + * We synchronize...
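
The snippet above is cut off inside the new helper's comment; reconstructed from the excerpt, the function added by this patch looks roughly like the following (a sketch, with field names such as rvq taken from the pre-multiqueue driver). The point is that napi_enable() is followed by an explicit schedule so that buffers the host filled while NAPI was off still get processed.

    static void virtnet_napi_enable(struct virtnet_info *vi)
    {
        napi_enable(&vi->napi);

        /* If all buffers were filled by the other side before NAPI was
         * enabled, no further interrupt arrives, so process any
         * outstanding packets now.  napi_schedule_prep() synchronizes
         * against the interrupt path via NAPI_STATE_SCHED. */
        if (napi_schedule_prep(&vi->napi)) {
            virtqueue_disable_cb(vi->rvq);
            __napi_schedule(&vi->napi);
        }
    }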
2017 Apr 18
8
[PATCH net-next v2 0/5] virtio-net tx napi
From: Willem de Bruijn <willemb at google.com> Add napi for virtio-net transmit completion processing. Changes: v1 -> v2: - disable by default - disable unless affinity_hint_set because cache misses add up to a third higher cycle cost, e.g., in TCP_RR tests. This is not limited to the patch that enables tx completion cleaning in rx napi. - use trylock to
2013 Dec 27
0
[PATCH net-next 2/3] virtio-net: use per-receive queue page frag alloc for mergeable bufs
...(i < vi->curr_queue_pairs) > /* Make sure we have some buffers: if oom use wq. */ > if (!try_fill_recv(&vi->rq[i], GFP_KERNEL)) > schedule_delayed_work(&vi->refill, 0); > virtnet_napi_enable(&vi->rq[i]); > > > What if the workqueue is scheduled _before_ the call to virtnet_napi_enable(&vi->rq[i]) ? Then napi_disable() in refill_work() will busy wait until napi is enabled by virtnet_napi_enable() which looks safe. Looks like the real issue is in virtnet_restore()...
2014 Jul 15
3
[PATCH net-next] virtio-net: rx busy polling support
...s = 0; } + skb_mark_napi_id(skb, &rq->napi); + netif_receive_skb(skb); return; @@ -714,7 +853,12 @@ static void refill_work(struct work_struct *work) struct receive_queue *rq = &vi->rq[i]; napi_disable(&rq->napi); + if (!virtnet_rq_lock_napi_refill(rq)) { + virtnet_napi_enable(rq); + continue; + } still_empty = !try_fill_recv(rq, GFP_KERNEL); + virtnet_rq_unlock_napi_refill(rq); virtnet_napi_enable(rq); /* In theory, this can happen: if we don't get any buffers in @@ -725,16 +869,13 @@ static void refill_work(struct work_struct *work) } } -static...
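
The diff above changes refill_work() so it cannot steal a receive queue from a busy-polling socket: after napi_disable(), it must also win the queue's refill/NAPI lock before touching the ring. A sketch of the per-queue loop body after this patch; virtnet_rq_lock_napi_refill() and virtnet_rq_unlock_napi_refill() are helpers introduced by the same patch, and their definitions are not part of this excerpt.

        napi_disable(&rq->napi);

        /* A busy-polling socket may own this queue; if we cannot take it
         * for refilling, re-enable NAPI and move on to the next queue. */
        if (!virtnet_rq_lock_napi_refill(rq)) {
            virtnet_napi_enable(rq);
            continue;
        }

        still_empty = !try_fill_recv(rq, GFP_KERNEL);
        virtnet_rq_unlock_napi_refill(rq);
        virtnet_napi_enable(rq);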
2017 Apr 03
0
[PATCH net-next 2/3] virtio-net: transmit napi
...We were probably waiting for more output buffers. */ > + netif_wake_subqueue(vi->dev, vq2txq(vq)); > } > > static unsigned int mergeable_ctx_to_buf_truesize(unsigned long mrg_ctx) > @@ -961,6 +968,9 @@ static void skb_recv_done(struct virtqueue *rvq) > > static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi) > { > + if (!napi->weight) > + return; > + > napi_enable(napi); > > /* If all buffers were filled by other side before we napi_enabled, we > @@ -1046,6 +1056,7 @@ static int virtnet_open(struct net_device *dev) >...
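
In this series virtnet_napi_enable() becomes a no-op for NAPI instances with a weight of zero, which is how the tx NAPI is kept disabled by default. A sketch of the helper with that check; the body after napi_enable() is assumed to match the older helper quoted elsewhere in these results.

    static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
    {
        /* weight == 0 marks a NAPI instance (the tx one, when tx napi is
         * disabled) that must never be enabled or scheduled. */
        if (!napi->weight)
            return;

        napi_enable(napi);

        /* Process buffers the other side filled before NAPI was enabled,
         * since no further interrupt will arrive for them. */
        if (napi_schedule_prep(napi)) {
            virtqueue_disable_cb(vq);
            __napi_schedule(napi);
        }
    }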
2011 Dec 07
1
[PATCH RFC] virtio_net: fix refill related races
...rk, protected by a refill_lock. */ + bool refill_enable; + + /* Whether napi is enabled, protected by a refill_lock. */ + bool napi_enable; + + /* Lock to protect refill and napi enable/disable operations. */ + struct mutex refill_lock; }; struct skb_vnet_hdr { @@ -477,20 +487,35 @@ static void virtnet_napi_enable(struct virtnet_info *vi) } } +static void virtnet_refill_enable(struct virtnet_info *vi, bool enable) +{ + mutex_lock(&vi->refill_lock); + vi->refill_enable = enable; + mutex_unlock(&vi->refill_lock); +} + static void refill_work(struct work_struct *work) { struct virtnet_...
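
This RFC introduces refill_enable/napi_enable flags guarded by a mutex so that refill scheduling and NAPI state changes cannot race during open/close and restore. The setter is visible in the excerpt; how refill_work() would consult the flag is sketched below as an assumption, since that part is not shown.

    static void virtnet_refill_enable(struct virtnet_info *vi, bool enable)
    {
        mutex_lock(&vi->refill_lock);
        vi->refill_enable = enable;
        mutex_unlock(&vi->refill_lock);
    }

    static void refill_work(struct work_struct *work)
    {
        struct virtnet_info *vi =
            container_of(work, struct virtnet_info, refill.work);

        mutex_lock(&vi->refill_lock);
        if (!vi->refill_enable) {
            /* Device is going down or suspending; leave the rings alone. */
            mutex_unlock(&vi->refill_lock);
            return;
        }
        /* ... napi_disable(), try_fill_recv(), virtnet_napi_enable() ... */
        mutex_unlock(&vi->refill_lock);
    }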
2023 May 12
4
[PATCH net v6] virtio_net: Fix error unwinding of XDP initialization
...xq_info_reg(&vi->rq[qp_index].xdp_rxq, dev, qp_index, + vi->rq[qp_index].napi.napi_id); + if (err < 0) + return err; + + err = xdp_rxq_info_reg_mem_model(&vi->rq[qp_index].xdp_rxq, + MEM_TYPE_PAGE_SHARED, NULL); + if (err < 0) + goto err_xdp_reg_mem_model; + + virtnet_napi_enable(vi->rq[qp_index].vq, &vi->rq[qp_index].napi); + virtnet_napi_tx_enable(vi, vi->sq[qp_index].vq, &vi->sq[qp_index].napi); + + return 0; + +err_xdp_reg_mem_model: + xdp_rxq_info_unreg(&vi->rq[qp_index].xdp_rxq); + return err; +} + static int virtnet_open(struct net_device...
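
The fix moves the per-queue-pair bring-up into one helper so that a failure registering the XDP memory model unwinds the earlier xdp_rxq_info_reg(). Cleaned up from the excerpt as a sketch; the merged patch may differ in minor details such as how the net_device pointer is obtained.

    static int virtnet_enable_queue_pair(struct virtnet_info *vi, int qp_index)
    {
        int err;

        err = xdp_rxq_info_reg(&vi->rq[qp_index].xdp_rxq, vi->dev, qp_index,
                               vi->rq[qp_index].napi.napi_id);
        if (err < 0)
            return err;

        err = xdp_rxq_info_reg_mem_model(&vi->rq[qp_index].xdp_rxq,
                                         MEM_TYPE_PAGE_SHARED, NULL);
        if (err < 0)
            goto err_xdp_reg_mem_model;

        virtnet_napi_enable(vi->rq[qp_index].vq, &vi->rq[qp_index].napi);
        virtnet_napi_tx_enable(vi, vi->sq[qp_index].vq, &vi->sq[qp_index].napi);

        return 0;

    err_xdp_reg_mem_model:
        xdp_rxq_info_unreg(&vi->rq[qp_index].xdp_rxq);
        return err;
    }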
2017 Apr 24
8
[PATCH net-next v3 0/5] virtio-net tx napi
From: Willem de Bruijn <willemb at google.com> Add napi for virtio-net transmit completion processing. Changes: v2 -> v3: - convert __netif_tx_trylock to __netif_tx_lock on tx napi poll ensure that the handler always cleans, to avoid deadlock - unconditionally clean in start_xmit avoid adding an unnecessary "if (use_napi)" branch - remove
2018 Feb 28
3
[PATCH net] virtio-net: disable NAPI only when enabled during XDP set
...+ napi_disable(&vi->rq[i].napi); netif_set_real_num_rx_queues(dev, curr_qp + xdp_qp); err = _virtnet_set_queues(vi, curr_qp + xdp_qp); @@ -2205,7 +2206,8 @@ static int virtnet_xdp_set(struct net_device *dev, struct bpf_prog *prog, } if (old_prog) bpf_prog_put(old_prog); - virtnet_napi_enable(vi->rq[i].vq, &vi->rq[i].napi); + if (netif_running(dev)) + virtnet_napi_enable(vi->rq[i].vq, &vi->rq[i].napi); } return 0; -- 2.7.4
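
The bug fixed here is enabling NAPI that was never disabled: when the device is down, the XDP-set path must not touch NAPI state at all, because virtnet_open() will enable it later. A heavily abridged sketch of virtnet_xdp_set() after the fix, showing only the lines relevant to this change:

    /* Quiesce receive NAPI only if the interface is actually running. */
    if (netif_running(dev))
        for (i = 0; i < vi->max_queue_pairs; i++)
            napi_disable(&vi->rq[i].napi);

    /* ... adjust the queue counts for XDP tx queues ... */

    for (i = 0; i < vi->max_queue_pairs; i++) {
        old_prog = rtnl_dereference(vi->rq[i].xdp_prog);
        rcu_assign_pointer(vi->rq[i].xdp_prog, prog);
        if (old_prog)
            bpf_prog_put(old_prog);
        /* Re-enable only what was disabled above. */
        if (netif_running(dev))
            virtnet_napi_enable(vi->rq[i].vq, &vi->rq[i].napi);
    }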