search for: napi_disable

Displaying 20 results from an estimated 207 matches for "napi_disable".

2017 Apr 25
3
[PATCH net-next] virtio-net: on tx, only call napi_disable if tx napi is on
From: Willem de Bruijn <willemb at google.com> As of tx napi, device down (`ip link set dev $dev down`) hangs unless tx napi is enabled: otherwise napi_enable is never called, so napi_disable will spin on test_and_set_bit NAPI_STATE_SCHED. Only call napi_disable if tx napi is enabled. Fixes: 5a719c2552ca ("virtio-net: transmit napi") Reported-by: Jason Wang <jasowang at redhat.com> Signed-off-by: Willem de Bruijn <willemb at google.com> --- drivers/net/virtio_ne...
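The fix boils down to guarding the tx-side napi_disable. A minimal sketch of the idea, assuming (as the driver does for tx napi) that a zero napi->weight means tx napi mode is off; the helper name here is illustrative:

    /* Disabling a NAPI instance that was never enabled hangs:
     * netif_napi_add() leaves NAPI_STATE_SCHED set, and only
     * napi_enable() clears it, so napi_disable() would spin on
     * test_and_set_bit() forever. Guard on napi->weight.
     */
    static void virtnet_napi_tx_disable(struct napi_struct *napi)
    {
            if (napi->weight)
                    napi_disable(napi);
    }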
2012 Apr 04
2
question about napi_disable (was Re: [PATCH] virtio_net: set/cancel work on ndo_open/ndo_stop)
...net_probe() could simply fail if it couldn't > allocate a receive buffer, but that's less polite in virtnet_open() so > we schedule a refill as we do in the normal receive path if we run out > of memory. > > Signed-off-by: Rusty Russell <rusty at rustcorp.com.au> Doh. napi_disable does not prevent a subsequent napi_schedule, does it? Can someone confirm that I am not seeing things, please? And this means this hack does not work: try_fill_recv can still run in parallel with napi, corrupting the vq. I suspect we need to resurrect a patch that used a dedicated flag to avoid...
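The dedicated-flag idea mentioned at the end could look roughly like the sketch below. All names here are hypothetical (this is not the patch being referenced): the point is that an explicit flag, taken under a lock in both the refill and teardown paths, serializes try_fill_recv against NAPI teardown in a way that napi_disable alone does not:

    struct virtnet_info {
            /* ... */
            spinlock_t refill_lock;
            bool refill_enabled;    /* written only under refill_lock */
    };

    static void disable_delayed_refill(struct virtnet_info *vi)
    {
            spin_lock_bh(&vi->refill_lock);
            vi->refill_enabled = false;
            spin_unlock_bh(&vi->refill_lock);
    }

    /* Receive path: schedule a refill only while refills are allowed,
     * so refill work cannot touch the vq during teardown. */
    static bool try_schedule_refill(struct virtnet_info *vi)
    {
            bool scheduled = false;

            spin_lock_bh(&vi->refill_lock);
            if (vi->refill_enabled)
                    scheduled = schedule_delayed_work(&vi->refill, 0);
            spin_unlock_bh(&vi->refill_lock);
            return scheduled;
    }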
2013 Dec 27
1
[PATCH net-next 2/3] virtio-net: use per-receive queue page frag alloc for mergeable bufs
...the per-receive page_frags. For future > >> consideration, Eric noted that disabling NAPI before GFP_KERNEL > >> allocs can potentially inhibit virtio-net network processing for some > >> time (e.g., during a blocking memory allocation or preemption). > > Yep, using napi_disable() in the refill process looks quite inefficient > > to me, if not buggy. > > > > napi_disable() is a big hammer, while the whole idea of having a process to > > block on GFP_KERNEL allocations is to allow some asynchronous behavior. > > > > I have a hard time to convin...
2018 Feb 28
3
[PATCH net] virtio-net: disable NAPI only when enabled during XDP set
We try to disable NAPI to prevent a single XDP TX queue being used by multiple cpus. But we don't check whether the device is up (NAPI enabled), which could result in a stall because of an infinite wait in napi_disable(). Fix this by checking the device state through netif_running() first. Fixes: 4941d472bf95b ("virtio-net: do not reset during XDP set") Signed-off-by: Jason Wang <jasowang at redhat.com> --- drivers/net/virtio_net.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) d...
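Condensed, the guard the patch describes is symmetric around the XDP program swap (a sketch only; the helper names follow the driver's conventions, and surrounding error handling is elided):

    /* Only quiesce NAPI if the device is up; if it is down, NAPI was
     * never enabled and napi_disable() would wait forever. */
    if (netif_running(dev)) {
            for (i = 0; i < vi->max_queue_pairs; i++)
                    napi_disable(&vi->rq[i].napi);
    }

    /* ... install the new XDP program ... */

    if (netif_running(dev)) {
            for (i = 0; i < vi->max_queue_pairs; i++)
                    virtnet_napi_enable(vi->rq[i].vq, &vi->rq[i].napi);
    }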
2013 Dec 26
2
[PATCH net-next 2/3] virtio-net: use per-receive queue page frag alloc for mergeable bufs
...a > followup patchset using just the per-receive page_frags. For future > consideration, Eric noted that disabling NAPI before GFP_KERNEL > allocs can potentially inhibit virtio-net network processing for some > time (e.g., during a blocking memory allocation or preemption). Yep, using napi_disable() in the refill process looks quite inefficient to me, if not buggy. napi_disable() is a big hammer, while the whole idea of having a process to block on GFP_KERNEL allocations is to allow some asynchronous behavior. I have a hard time convincing myself virtio_net is safe anyway with this work queue t...
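The refill pattern being criticized, paraphrased from the driver of that era (single-queue form, simplified): a work item performs the blocking GFP_KERNEL allocation, but brackets it with the napi_disable()/napi_enable() "big hammer", stalling receive processing for the duration of the allocation:

    static void refill_work(struct work_struct *work)
    {
            struct virtnet_info *vi =
                    container_of(work, struct virtnet_info, refill.work);
            bool still_empty;

            napi_disable(&vi->napi);        /* rx stalls from here ... */
            still_empty = !try_fill_recv(vi, GFP_KERNEL);
            napi_enable(&vi->napi);         /* ... until here */

            /* If memory is still tight, retry later rather than never. */
            if (still_empty)
                    schedule_delayed_work(&vi->refill, HZ / 2);
    }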
2017 Apr 25
1
[PATCH net-next v3 2/5] virtio-net: transmit napi
...@@ static int virtnet_close(struct net_device *dev) > /* Make sure refill_work doesn't re-enable napi! */ > cancel_delayed_work_sync(&vi->refill); > > - for (i = 0; i < vi->max_queue_pairs; i++) > + for (i = 0; i < vi->max_queue_pairs; i++) { > napi_disable(&vi->rq[i].napi); > + napi_disable(&vi->sq[i].napi); > + } Looks like this will wait forever if napi_tx is false, because we never enable the NAPI and so will wait for NAPI_STATE_SCHED to be cleared. Thanks
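For context on why this hangs: napi_disable() in kernels of that era looked essentially like the following (paraphrased from net/core/dev.c). netif_napi_add() leaves NAPI_STATE_SCHED set and only napi_enable() clears it, so disabling a NAPI instance that was never enabled loops forever:

    void napi_disable(struct napi_struct *n)
    {
            might_sleep();
            set_bit(NAPI_STATE_DISABLE, &n->state);

            /* Sleeps until it wins the SCHED bit, which a never-enabled
             * NAPI instance never releases. */
            while (test_and_set_bit(NAPI_STATE_SCHED, &n->state))
                    msleep(1);

            clear_bit(NAPI_STATE_DISABLE, &n->state);
    }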
2014 Oct 14
4
[PATCH RFC] virtio_net: enable tx interrupt
...NETDEV_TX_OK; @@ -1124,8 +1161,10 @@ static int virtnet_close(struct net_device *dev) /* Make sure refill_work doesn't re-enable napi! */ cancel_delayed_work_sync(&vi->refill); - for (i = 0; i < vi->max_queue_pairs; i++) + for (i = 0; i < vi->max_queue_pairs; i++) { napi_disable(&vi->rq[i].napi); + napi_disable(&vi->sq[i].napi); + } return 0; } @@ -1438,8 +1477,10 @@ static void virtnet_free_queues(struct virtnet_info *vi) { int i; - for (i = 0; i < vi->max_queue_pairs; i++) + for (i = 0; i < vi->max_queue_pairs; i++) { netif_napi_de...
2014 Oct 15
1
[PATCH RFC v2 1/3] virtio_net: enable tx interrupt
...EV_TX_OK; } @@ -1137,8 +1178,10 @@ static int virtnet_close(struct net_device *dev) /* Make sure refill_work doesn't re-enable napi! */ cancel_delayed_work_sync(&vi->refill); - for (i = 0; i < vi->max_queue_pairs; i++) + for (i = 0; i < vi->max_queue_pairs; i++) { napi_disable(&vi->rq[i].napi); + napi_disable(&vi->sq[i].napi); + } return 0; } @@ -1457,8 +1500,10 @@ static void virtnet_free_queues(struct virtnet_info *vi) { int i; - for (i = 0; i < vi->max_queue_pairs; i++) + for (i = 0; i < vi->max_queue_pairs; i++) { netif_napi_de...
2023 May 12
4
[PATCH net v6] virtio_net: Fix error unwinding of XDP initialization
.../net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -1868,6 +1868,38 @@ static int virtnet_poll(struct napi_struct *napi, int budget) return received; } +static void virtnet_disable_queue_pair(struct virtnet_info *vi, int qp_index) +{ + virtnet_napi_tx_disable(&vi->sq[qp_index].napi); + napi_disable(&vi->rq[qp_index].napi); + xdp_rxq_info_unreg(&vi->rq[qp_index].xdp_rxq); +} + +static int virtnet_enable_queue_pair(struct virtnet_info *vi, int qp_index) +{ + struct net_device *dev = vi->dev; + int err; + + err = xdp_rxq_info_reg(&vi->rq[qp_index].xdp_rxq, dev, qp_index,...
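The enable/disable helpers above exist so virtnet_open() can unwind partially initialized queue pairs. A condensed sketch of the open path built on them (simplified to the patch's intent; steps outside the queue-pair loop are omitted):

    static int virtnet_open(struct net_device *dev)
    {
            struct virtnet_info *vi = netdev_priv(dev);
            int i, err;

            for (i = 0; i < vi->max_queue_pairs; i++) {
                    err = virtnet_enable_queue_pair(vi, i);
                    if (err < 0)
                            goto err_enable_qp;
            }
            return 0;

    err_enable_qp:
            /* Tear down only the pairs that were fully enabled, in reverse. */
            while (--i >= 0)
                    virtnet_disable_queue_pair(vi, i);
            return err;
    }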
2014 Dec 01
1
[PATCH RFC v4 net-next 1/5] virtio_net: enable tx interrupt
...0,10 @@ static int virtnet_close(struct net_device *dev) > /* Make sure refill_work doesn't re-enable napi! */ > cancel_delayed_work_sync(&vi->refill); > > - for (i = 0; i < vi->max_queue_pairs; i++) > + for (i = 0; i < vi->max_queue_pairs; i++) { > napi_disable(&vi->rq[i].napi); > + napi_disable(&vi->sq[i].napi); > + } > > return 0; > } > @@ -1452,8 +1486,10 @@ static void virtnet_free_queues(struct virtnet_info *vi) > { > int i; > > - for (i = 0; i < vi->max_queue_pairs; i++) > + for (i = 0;...
2023 May 02
1
[PATCH net v2] virtio_net: Fix error unwinding of XDP initialization
.../drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -1868,6 +1868,13 @@ static int virtnet_poll(struct napi_struct *napi, int budget) return received; } +static void virtnet_disable_qp(struct virtnet_info *vi, int qp_index) +{ + virtnet_napi_tx_disable(&vi->sq[qp_index].napi); + napi_disable(&vi->rq[qp_index].napi); + xdp_rxq_info_unreg(&vi->rq[qp_index].xdp_rxq); +} + static int virtnet_open(struct net_device *dev) { struct virtnet_info *vi = netdev_priv(dev); @@ -1883,20 +1890,27 @@ static int virtnet_open(struct net_device *dev) err = xdp_rxq_info_reg(&vi-...