Displaying 20 results from an estimated 63 matches for "napi_state_sch".
2013 Jul 08
4
[PATCH 2/2] virtio_net: fix race in RX VQ processing
virtio net called virtqueue_enable_cb on RX path after napi_complete, so
with NAPI_STATE_SCHED clear - outside the implicit napi lock.
This violates the requirement to synchronize virtqueue_enable_cb wrt
virtqueue_add_buf. In particular, used event can move backwards,
causing us to lose interrupts.
In a debug build, this can trigger panic within START_USE.
Jason Wang reports that he can...
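The fix that was eventually merged here (commit cbdadbbf0c79, cited again in a 2017 thread below) splits callback enabling into a prepare step taken while NAPI_STATE_SCHED is still held, plus a re-check after napi_complete. A rough sketch of the resulting virtnet_poll tail, details elided, not the verbatim patch:

if (received < budget) {
        /* Publish the used event index while we still hold the napi lock. */
        unsigned int r = virtqueue_enable_cb_prepare(rq->vq);
        napi_complete(napi);
        /* Did more buffers arrive during the window? Then reschedule. */
        if (unlikely(virtqueue_poll(rq->vq, r)) &&
            napi_schedule_prep(napi)) {
                virtqueue_disable_cb(rq->vq);
                __napi_schedule(napi);
        }
}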
2013 Jul 08
0
[PATCH 2/2] virtio_net: fix race in RX VQ processing
Hello.
On 08-07-2013 13:04, Michael S. Tsirkin wrote:
> virtio net called virtqueue_enable_cb on RX path after napi_complete, so
> with NAPI_STATE_SCHED clear - outside the implicit napi lock.
> This violates the requirement to synchronize virtqueue_enable_cb wrt
> virtqueue_add_buf. In particular, used event can move backwards,
> causing us to lose interrupts.
> In a debug build, this can trigger panic within START_USE.
> Jason...
2013 Jul 09
0
[PATCH v2 2/2] virtio_net: fix race in RX VQ processing
virtio net called virtqueue_enable_cb on RX path after napi_complete, so
with NAPI_STATE_SCHED clear - outside the implicit napi lock.
This violates the requirement to synchronize virtqueue_enable_cb wrt
virtqueue_add_buf. In particular, used event can move backwards,
causing us to lose interrupts.
In a debug build, this can trigger panic within START_USE.
Jason Wang reports that he can...
2013 Jul 09
0
[PATCH 2/2] virtio_net: fix race in RX VQ processing
On 07/08/2013 05:04 PM, Michael S. Tsirkin wrote:
> virtio net called virtqueue_enable_cb on RX path after napi_complete, so
> with NAPI_STATE_SCHED clear - outside the implicit napi lock.
> This violates the requirement to synchronize virtqueue_enable_cb wrt
> virtqueue_add_buf. In particular, used event can move backwards,
> causing us to lose interrupts.
> In a debug build, this can trigger panic within START_USE.
>
> Ja...
2013 Dec 27
1
[PATCH net-next 2/3] virtio-net: use per-receive queue page frag alloc for mergeable bufs
...e(&rq->napi);
try_fill_recv(rq, GFP_KERNEL);
virtnet_napi_enable(&vi->rq[i]);
...
try_fill_recv(rq, GFP_ATOMIC);
napi_enable(); // crash on:
BUG_ON(!test_bit(NAPI_STATE_SCHED, &n->state));
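The BUG_ON here is napi_enable()'s sanity check: netif_napi_add() and napi_disable() both leave NAPI_STATE_SCHED set, and napi_enable() asserts the bit before clearing it, so enabling an already-enabled NAPI trips it. Roughly, from include/linux/netdevice.h of kernels of that era:

static inline void napi_enable(struct napi_struct *n)
{
        /* Only legal on a disabled NAPI, where SCHED is still held. */
        BUG_ON(!test_bit(NAPI_STATE_SCHED, &n->state));
        smp_mb__before_clear_bit();
        clear_bit(NAPI_STATE_SCHED, &n->state);
}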
2011 Feb 10
2
[PATCH] virtio_net: Add schedule check to napi_enable call
...)
+{
+ napi_enable(&vi->napi);
+
+ /* If all buffers were filled by other side before we napi_enabled, we
+ * won't get another interrupt, so process any outstanding packets
+ * now. virtnet_poll wants to re-enable the queue, so we disable here.
+ * We synchronize against interrupts via NAPI_STATE_SCHED */
+ if (napi_schedule_prep(&vi->napi)) {
+ virtqueue_disable_cb(vi->rvq);
+ __napi_schedule(&vi->napi);
+ }
+}
+
static void refill_work(struct work_struct *work)
{
struct virtnet_info *vi;
@@ -454,7 +468,7 @@ static void refill_work(struct work_stru
vi = container_of(wo...
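The helper leans on napi_schedule_prep() atomically claiming NAPI_STATE_SCHED: it returns nonzero only when no disable is pending and the bit was previously clear, in which case the caller owns the poll and must hand it on via __napi_schedule(). Roughly, from include/linux/netdevice.h of that era:

static inline int napi_schedule_prep(struct napi_struct *n)
{
        return !napi_disable_pending(n) &&
                !test_and_set_bit(NAPI_STATE_SCHED, &n->state);
}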
2017 Apr 25
3
[PATCH net-next] virtio-net: on tx, only call napi_disable if tx napi is on
From: Willem de Bruijn <willemb at google.com>
As of tx napi, device down (`ip link set dev $dev down`) hangs unless
tx napi is enabled. Else napi_enable is not called, so napi_disable
will spin on test_and_set_bit NAPI_STATE_SCHED.
Only call napi_disable if tx napi is enabled.
Fixes: 5a719c2552ca ("virtio-net: transmit napi")
Reported-by: Jason Wang <jasowang at redhat.com>
Signed-off-by: Willem de Bruijn <willemb at google.com>
---
drivers/net/virtio_net.c | 10 ++++++++--
1 file changed, 8 insert...
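A sketch of the guard the patch adds, assuming the merged helper kept the virtnet_napi_tx_disable name: the tx napi weight doubles as the "tx napi enabled" flag, so the disable is simply skipped when it is zero:

static void virtnet_napi_tx_disable(struct napi_struct *napi)
{
        /* weight stays 0 when tx napi was never enabled. */
        if (napi->weight)
                napi_disable(napi);
}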
2017 Apr 25
1
[PATCH net-next v3 2/5] virtio-net: transmit napi
...; vi->max_queue_pairs; i++)
> + for (i = 0; i < vi->max_queue_pairs; i++) {
> napi_disable(&vi->rq[i].napi);
> + napi_disable(&vi->sq[i].napi);
> + }
Looks like this will wait forever if napi_tx is false, because we never
enable the NAPI, so we will wait for NAPI_STATE_SCHED to be cleared.
Thanks
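The hang mechanism: netif_napi_add() initializes the NAPI with NAPI_STATE_SCHED set and only napi_enable() clears it, so on a never-enabled NAPI the claim loop in napi_disable() can never win the bit. Stripped to that loop (timer and NPSVC handling of contemporary kernels elided):

void napi_disable(struct napi_struct *n)
{
        might_sleep();
        set_bit(NAPI_STATE_DISABLE, &n->state);
        /* Spins forever if SCHED is set and no poller will ever clear it. */
        while (test_and_set_bit(NAPI_STATE_SCHED, &n->state))
                msleep(1);
        clear_bit(NAPI_STATE_DISABLE, &n->state);
}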
2010 Jun 03
0
[PATCH 3/3][STABLE] KVM: add schedule check to napi_enable call
...vi->napi);
+
+ /* If all buffers were filled by other side before we napi_enabled, we
+ * won't get another interrupt, so process any outstanding packets
+ * now. virtnet_poll wants to re-enable the queue, so we disable here.
+ * We synchronize against interrupts via NAPI_STATE_SCHED */
+ if (napi_schedule_prep(&vi->napi)) {
+ vi->rvq->vq_ops->disable_cb(vi->rvq);
+ __napi_schedule(&vi->napi);
+ }
+}
+
static void refill_work(struct work_struct *work)
{
struct virtnet_info *vi;
@@ -397,7 +411,7 @@ sta...
2017 Dec 07
2
[PATCH net-next] virtio_net: Disable interrupts if napi_complete_done rescheduled napi
...ore complete, and when
napi was rescheduled within napi_complete_done() it did not disable
interrupts.
This caused more interrupts when event idx is disabled.
According to commit cbdadbbf0c79 ("virtio_net: fix race in RX VQ
processing") we cannot place virtqueue_enable_cb_prepare() after
NAPI_STATE_SCHED is cleared, so disable interrupts again if
napi_complete_done() returned false.
Tested with vhost-user of OVS 2.7 on host, which does not have the event
idx feature.
* Before patch:
$ netperf -t UDP_STREAM -H 192.168.150.253 -l 60 -- -m 1472
MIGRATED UDP STREAM TEST from 0.0.0.0 (0.0.0.0) port...
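A sketch of the resulting completion helper in drivers/net/virtio_net.c (roughly; virtqueue_napi_schedule is assumed to be the driver's existing schedule-with-callbacks-off helper): when napi_complete_done() returns false the NAPI was rescheduled and will poll again, so callbacks are switched back off rather than left armed:

static void virtqueue_napi_complete(struct napi_struct *napi,
                                    struct virtqueue *vq, int processed)
{
        int opaque = virtqueue_enable_cb_prepare(vq);

        if (napi_complete_done(napi, processed)) {
                /* Truly done: catch anything that raced the window. */
                if (unlikely(virtqueue_poll(vq, opaque)))
                        virtqueue_napi_schedule(napi, vq);
        } else {
                /* Rescheduled: keep interrupts disabled until the next poll. */
                virtqueue_disable_cb(vq);
        }
}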
2011 Feb 09
1
[PATCH] virtio-net: add schedule check to napi_enable call in refill_work
...)
+{
+ napi_enable(&vi->napi);
+
+ /* If all buffers were filled by other side before we napi_enabled, we
+ * won't get another interrupt, so process any outstanding packets
+ * now. virtnet_poll wants to re-enable the queue, so we disable here.
+ * We synchronize against interrupts via NAPI_STATE_SCHED */
+ if (napi_schedule_prep(&vi->napi)) {
+ virtqueue_disable_cb(vi->rvq);
+ __napi_schedule(&vi->napi);
+ }
+}
+
static void refill_work(struct work_struct *work)
{
struct virtnet_info *vi;
@@ -454,7 +468,7 @@
vi = container_of(work, struct virtnet_info, refill.work);...