search for: napi_enabled

Displaying 10 results from an estimated 110 matches for "napi_enabled".

2011 Feb 10
2
[PATCH] virtio_net: Add schedule check to napi_enable call
...t/virtio_net.c --- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -446,6 +446,20 @@ static void skb_recv_done(struct virtque } } +static void virtnet_napi_enable(struct virtnet_info *vi) +{ + napi_enable(&vi->napi); + + /* If all buffers were filled by other side before we napi_enabled, we + * won't get another interrupt, so process any outstanding packets + * now. virtnet_poll wants re-enable the queue, so we disable here. + * We synchronize against interrupts via NAPI_STATE_SCHED */ + if (napi_schedule_prep(&vi->napi)) { + virtqueue_disable_cb(vi->rvq); + __...
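The hunk is cut off at the end; below is a reconstruction of the complete helper, a sketch based on the visible lines and assuming the 2011-era single-queue driver layout (vi->napi, vi->rvq) shown in the snippet:

    static void virtnet_napi_enable(struct virtnet_info *vi)
    {
            napi_enable(&vi->napi);

            /* If all buffers were filled by the other side before napi was
             * enabled, we won't get another interrupt, so process any
             * outstanding packets now. virtnet_poll wants to re-enable the
             * queue, so we disable it here; NAPI_STATE_SCHED synchronizes
             * us against the interrupt handler. */
            if (napi_schedule_prep(&vi->napi)) {
                    virtqueue_disable_cb(vi->rvq);
                    __napi_schedule(&vi->napi);
            }
    }

napi_schedule_prep() atomically claims NAPI_STATE_SCHED, so either the interrupt handler schedules the poll or this path does - never both.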
2010 Jun 03
0
[PATCH 3/3][STABLE] KVM: add schedule check to napi_enable call
...- a/drivers/net/virtio_net.c +++ b/drivers/net/virtio_net.c @@ -388,6 +388,20 @@ static void skb_recv_done(struct virtque } } +static void virtnet_napi_enable(struct virtnet_info *vi) +{ + napi_enable(&vi->napi); + + /* If all buffers were filled by other side before we napi_enabled, we + * won't get another interrupt, so process any outstanding packets + * now. virtnet_poll wants re-enable the queue, so we disable here. + * We synchronize against interrupts via NAPI_STATE_SCHED */ + if (napi_schedule_prep(&vi->napi)) { + vi...
2011 Feb 09
1
[PATCH] virtio-net: add schedule check to napi_enable call in refill_work
...irtio_net.c.orig 2011-02-08 14:34:51.444099190 -0500 +++ drivers/net/virtio_net.c 2011-02-08 14:18:00.484400134 -0500 @@ -446,6 +446,20 @@ } } +static void virtnet_napi_enable(struct virtnet_info *vi) +{ + napi_enable(&vi->napi); + + /* If all buffers were filled by other side before we napi_enabled, we + * won't get another interrupt, so process any outstanding packets + * now. virtnet_poll wants re-enable the queue, so we disable here. + * We synchronize against interrupts via NAPI_STATE_SCHED */ + if (napi_schedule_prep(&vi->napi)) { + virtqueue_disable_cb(vi->rvq); + __...
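This variant wires the helper into the refill path as well; a sketch of the call site, assuming the refill_work() shape of that era (the still_empty handling is reconstructed, not visible in the snippet):

    static void refill_work(struct work_struct *work)
    {
            struct virtnet_info *vi =
                    container_of(work, struct virtnet_info, refill.work);
            bool still_empty;

            napi_disable(&vi->napi);
            still_empty = !try_fill_recv(vi, GFP_KERNEL);
            virtnet_napi_enable(vi);        /* was: napi_enable(&vi->napi) */

            /* Couldn't allocate any buffers (e.g. under memory pressure)?
             * Try again later rather than waiting for an interrupt that
             * may never come. */
            if (still_empty)
                    schedule_delayed_work(&vi->refill, HZ / 2);
    }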
2007 Dec 11
1
[PATCH resent] virtio_net: Fix stalled inbound traffic on early packets
Hello Rusty, while implementing and testing virtio on s390 I found a problem in virtio_net: The current virtio_net driver has a startup race, which prevents any incoming traffic: If try_fill_recv submits buffers to the host system, data might be filled in and an interrupt sent before napi_enable finishes. In that case the interrupt will kick skb_recv_done, which will then call
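The window being described, as an annotated timeline (a sketch assembled from this report, not from the patch itself):

    /*  guest driver                        host / interrupt
     *  ------------                        ----------------
     *  try_fill_recv()   -- buffers -->    consumes all buffers,
     *                                      fills in data, sends irq
     *                                      skb_recv_done()
     *                                        disables further callbacks,
     *                                        netif_rx_schedule() fails its
     *                                        NAPI_STATE_SCHED check because
     *                                        napi_enable() has not run yet
     *  napi_enable() completes
     *  ... no further interrupt will arrive: inbound traffic stalls
     */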
2011 Dec 20
0
[PATCH] virtio_net: fix refill related races
Fix theoretical races related to refill work: 1. After napi is disabled by ndo_stop, refill work can run and re-enable it. 2. Refill can get scheduled on many cpus in parallel; if this happens it will corrupt the vq linked list, as there's no locking. 3. Refill work is cancelled after unregister netdev. For small bufs this means it can alloc an skb for a device which is
2011 Dec 07
1
[PATCH RFC] virtio_net: fix refill related races
Fix theoretical races related to refill work: 1. After napi is disabled by ndo_stop, refill work can run and re-enable it. 2. Refill can reschedule itself; if this happens it can run after cancel_delayed_work_sync and will access the device after it is destroyed. As a solution, add flags to track napi state and to disable refill, and toggle them on start, stop and remove; check these flags
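A sketch of the flag approach the changelog outlines; the field names (napi_enabled, refill_enabled) and the helper are illustrative, not taken from the patch:

    /* In struct virtnet_info (sketch):
     *         bool napi_enabled;      set in open, cleared in stop
     *         bool refill_enabled;    set in open, cleared in stop/remove
     */
    static void virtnet_schedule_refill(struct virtnet_info *vi)
    {
            /* Refill work and its callers check this flag before (re)arming
             * the work, so once stop/remove clears it no new work can
             * appear and a following cancel_delayed_work_sync() is final. */
            if (vi->refill_enabled)
                    schedule_delayed_work(&vi->refill, HZ / 2);
    }

Ordering the flag update against an already-running work item still needs care (e.g. a lock around the flag), which this sketch leaves out.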
2007 Dec 06
0
[PATCH] virtio_net: Fix stalled inbound traffic on early packets
The current virtio_net driver has a startup race, which prevents any incoming traffic: If try_fill_recv submits buffers to the host system, data might be filled in and an interrupt sent before napi_enable finishes. In that case the interrupt will kick skb_recv_done, which will then call netif_rx_schedule. netif_rx_schedule checks if NAPI_STATE_SCHED is set - which is not the case, as we did not run
2017 Apr 25
3
[PATCH net-next] virtio-net: on tx, only call napi_disable if tx napi is on
From: Willem de Bruijn <willemb at google.com> As of tx napi, device down (`ip link set dev $dev down`) hangs unless tx napi is enabled: otherwise napi_enable is not called, so napi_disable will spin on test_and_set_bit NAPI_STATE_SCHED. Only call napi_disable if tx napi is enabled. Fixes: 5a719c2552ca ("virtio-net: transmit napi") Reported-by: Jason Wang <jasowang at
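The guard amounts to a tiny helper; a sketch of the idea (the helper name follows what later appeared upstream, and is an assumption for this particular patch):

    static void virtnet_napi_tx_disable(struct napi_struct *napi)
    {
            /* When tx napi is off, the napi weight is 0 and napi_enable()
             * was never called; napi_disable() would then spin forever on
             * NAPI_STATE_SCHED. Only disable what was actually enabled. */
            if (napi->weight)
                    napi_disable(napi);
    }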
2017 Apr 02
5
[PATCH net-next 0/3] virtio-net tx napi
From: Willem de Bruijn <willemb at google.com> Add napi for virtio-net transmit completion processing. Based on previous patchsets by Jason Wang: [RFC V7 PATCH 0/7] enable tx interrupts for virtio-net http://lkml.iu.edu/hypermail/linux/kernel/1505.3/00245.html Changes: RFC -> v1: - dropped vhost interrupt moderation patch: not needed and likely expensive at light
2013 Dec 26
2
[PATCH net-next 2/3] virtio-net: use per-receive queue page frag alloc for mergeable bufs
On Thu, 2013-12-26 at 13:28 -0800, Michael Dalton wrote: > On Mon, Dec 23, 2013 at 11:37 AM, Michael S. Tsirkin <mst at redhat.com> wrote: > > So there isn't a conflict with respect to locking. > > > > Is it problematic to use same page_frag with both GFP_ATOMIC and with > > GFP_KERNEL? If yes why? > > I believe it is safe to use the same page_frag and I
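For context, "the same page_frag" here is the per-receive-queue allocator that both the GFP_ATOMIC (napi) and GFP_KERNEL (refill worker) paths feed through skb_page_frag_refill(); a sketch of that shape, with the buffer length constant illustrative rather than taken from the patch:

    static int add_recvbuf_mergeable(struct receive_queue *rq, gfp_t gfp)
    {
            struct page_frag *alloc_frag = &rq->alloc_frag;
            char *buf;
            int err;

            /* Called with GFP_ATOMIC from napi and GFP_KERNEL from the
             * refill worker; the two contexts never run concurrently for
             * one queue, so the shared frag sees no parallel users -
             * which is the "no conflict with respect to locking" point
             * made above. */
            if (unlikely(!skb_page_frag_refill(MERGE_BUF_LEN, alloc_frag, gfp)))
                    return -ENOMEM;

            buf = (char *)page_address(alloc_frag->page) + alloc_frag->offset;
            get_page(alloc_frag->page);
            alloc_frag->offset += MERGE_BUF_LEN;

            sg_init_one(rq->sg, buf, MERGE_BUF_LEN);
            err = virtqueue_add_inbuf(rq->vq, rq->sg, 1, buf, gfp);
            if (err < 0)
                    put_page(virt_to_head_page(buf));
            return err;
    }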