Displaying 20 results from an estimated 81 matches for "napi_complet".
2013 Jul 08
4
[PATCH 2/2] virtio_net: fix race in RX VQ processing
virtio net called virtqueue_enable_cq on RX path after napi_complete, so
with NAPI_STATE_SCHED clear - outside the implicit napi lock.
This violates the requirement to synchronize virtqueue_enable_cq wrt
virtqueue_add_buf. In particular, used event can move backwards,
causing us to lose interrupts.
In a debug build, this can trigger panic within START_USE.
Jason...
2013 Jul 09
0
[PATCH v2 2/2] virtio_net: fix race in RX VQ processing
virtio net called virtqueue_enable_cq on RX path after napi_complete, so
with NAPI_STATE_SCHED clear - outside the implicit napi lock.
This violates the requirement to synchronize virtqueue_enable_cq wrt
virtqueue_add_buf. In particular, used event can move backwards,
causing us to lose interrupts.
In a debug build, this can trigger panic within START_USE.
Jason...
2013 Jul 09
0
[PATCH 2/2] virtio_net: fix race in RX VQ processing
On 07/08/2013 05:04 PM, Michael S. Tsirkin wrote:
> virtio net called virtqueue_enable_cq on RX path after napi_complete, so
> with NAPI_STATE_SCHED clear - outside the implicit napi lock.
> This violates the requirement to synchronize virtqueue_enable_cq wrt
> virtqueue_add_buf. In particular, used event can move backwards,
> causing us to lose interrupts.
> In a debug build, this can trigger panic...
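A pattern that addresses this race (and that later hits in this list use on the TX side as well) is to snapshot the callback-enable point, complete NAPI, and then re-check the ring, re-scheduling if anything was used in between. A minimal sketch for the RX poll routine using the virtqueue_enable_cb_prepare()/virtqueue_poll() pair; the multiqueue receive_queue layout and the virtnet_receive() helper are assumptions here, not part of the quoted patch:

static int virtnet_poll(struct napi_struct *napi, int budget)
{
	struct receive_queue *rq =
		container_of(napi, struct receive_queue, napi);
	unsigned int received, r;

	received = virtnet_receive(rq, budget);	/* assumed helper: drain used buffers */

	/* Out of packets? */
	if (received < budget) {
		/* Snapshot the enable point, then drop out of polling. */
		r = virtqueue_enable_cb_prepare(rq->vq);
		napi_complete(napi);
		/* Buffers may have been used between the snapshot and
		 * napi_complete(); if so, take the NAPI lock back and poll
		 * again rather than losing the interrupt.
		 */
		if (unlikely(virtqueue_poll(rq->vq, r)) &&
		    napi_schedule_prep(napi)) {
			virtqueue_disable_cb(rq->vq);
			__napi_schedule(napi);
		}
	}

	return received;
}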
2014 Dec 01
1
[PATCH RFC v4 net-next 1/5] virtio_net: enable tx interrupt
...v;
> + struct netdev_queue *txq = netdev_get_tx_queue(vi->dev, vq2txq(sq->vq));
> + u32 limit = vi->tx_work_limit;
> + unsigned int sent;
> +
> + __netif_tx_lock(txq, smp_processor_id());
> + sent = free_old_xmit_skbs(txq, sq, limit);
> + if (sent < limit) {
> + napi_complete(napi);
> + /* Note: we must enable cb *after* napi_complete, because
> + * napi_schedule calls from callbacks that trigger before
> + * napi_complete are ignored.
> + */
> + if (unlikely(!virtqueue_enable_cb_delayed(sq->vq))) {
> + virtqueue_disable_cb(sq->vq);
> ...
2014 Oct 20
0
[PATCH RFC v3 1/3] virtio_net: enable tx interrupt
...send_queue, napi);
+ struct virtnet_info *vi = sq->vq->vdev->priv;
+ struct netdev_queue *txq = netdev_get_tx_queue(vi->dev, vq2txq(sq->vq));
+ unsigned int sent;
+
+ __netif_tx_lock(txq, smp_processor_id());
+ sent = free_old_xmit_skbs(txq, sq, budget);
+ if (sent < budget) {
+ napi_complete(napi);
+ /* Note: we must enable cb *after* napi_complete, because
+ * napi_schedule calls from callbacks that trigger before
+ * napi_complete are ignored.
+ */
+ if (unlikely(!virtqueue_enable_cb_delayed(sq->vq))) {
+ virtqueue_disable_cb(sq->vq);
+ napi_schedule(&sq->na...
2014 Dec 01
0
[PATCH RFC v4 net-next 1/5] virtio_net: enable tx interrupt
...et_info *vi = sq->vq->vdev->priv;
+ struct netdev_queue *txq = netdev_get_tx_queue(vi->dev, vq2txq(sq->vq));
+ u32 limit = vi->tx_work_limit;
+ unsigned int sent;
+
+ __netif_tx_lock(txq, smp_processor_id());
+ sent = free_old_xmit_skbs(txq, sq, limit);
+ if (sent < limit) {
+ napi_complete(napi);
+ /* Note: we must enable cb *after* napi_complete, because
+ * napi_schedule calls from callbacks that trigger before
+ * napi_complete are ignored.
+ */
+ if (unlikely(!virtqueue_enable_cb_delayed(sq->vq))) {
+ virtqueue_disable_cb(sq->vq);
+ napi_schedule(&sq->na...
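These excerpts break off inside the re-arm branch. Filled in as an assumption from the pattern they establish, the rest of that branch re-schedules the queue's own NAPI context whenever virtqueue_enable_cb_delayed() reports that buffers were used while callbacks were still off, and then releases the tx lock:

	if (sent < limit) {
		napi_complete(napi);
		/* Note: we must enable cb *after* napi_complete, because
		 * napi_schedule calls from callbacks that trigger before
		 * napi_complete are ignored.
		 */
		if (unlikely(!virtqueue_enable_cb_delayed(sq->vq))) {
			/* Used buffers accumulated while callbacks were
			 * disabled, so no interrupt will announce them:
			 * poll once more instead of waiting for one.
			 */
			virtqueue_disable_cb(sq->vq);
			napi_schedule(&sq->napi);
		}
	}
	__netif_tx_unlock(txq);

Using virtqueue_enable_cb_delayed() rather than virtqueue_enable_cb() is what keeps this from interrupting on every completed skb: it pushes the used-event index out to roughly three quarters of the buffers still in flight, so completions are batched into far fewer interrupts.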
2013 Jul 08
0
[PATCH 2/2] virtio_net: fix race in RX VQ processing
Hello.
On 08-07-2013 13:04, Michael S. Tsirkin wrote:
> virtio net called virtqueue_enable_cq on RX path after napi_complete, so
> with NAPI_STATE_SCHED clear - outside the implicit napi lock.
> This violates the requirement to synchronize virtqueue_enable_cq wrt
> virtqueue_add_buf. In particular, used event can move backwards,
> causing us to lose interrupts.
> In a debug build, this can trigger panic...
2011 Jul 29
1
[PATCH RFC net-next] virtio_net: refill buffer right after being used
...- if (vi->num < vi->max / 2) {
- if (!try_fill_recv(vi, GFP_ATOMIC))
+ if (fill_one(vi, GFP_ATOMIC) < 0)
schedule_delayed_work(&vi->refill, 0);
}
+ /* notify buffers are refilled */
+ virtqueue_kick(vi->rvq);
+
/* Out of packets? */
if (received < budget) {
napi_complete(napi);
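Read in context, the hunk replaces the low-watermark batch refill (try_fill_recv() once vi->num drops below half of vi->max) with a one-for-one refill per consumed buffer, plus a single kick per poll round to publish the new buffers. Roughly, the resulting receive loop looks like the sketch below; fill_one() is the helper the patch introduces, and the receive-path details around it are assumed from the driver of that era:

	while (received < budget &&
	       (buf = virtqueue_get_buf(vi->rvq, &len)) != NULL) {
		receive_buf(vi->dev, buf, len);
		--vi->num;
		received++;

		/* Refill one buffer for the one just consumed; defer to the
		 * workqueue if the atomic allocation fails.
		 */
		if (fill_one(vi, GFP_ATOMIC) < 0)
			schedule_delayed_work(&vi->refill, 0);
	}

	/* notify buffers are refilled */
	virtqueue_kick(vi->rvq);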
2014 Oct 15
2
[RFC PATCH net-next 5/6] virtio-net: enable tx interrupt
...> > + __netif_tx_lock(txq, smp_processor_id());
>> > + virtqueue_disable_cb(sq->vq);
>> > + sent += free_old_xmit_skbs(sq, budget - sent);
>> > +
>> > + if (sent < budget) {
>> > + r = virtqueue_enable_cb_prepare(sq->vq);
>> > + napi_complete(napi);
>> > + __netif_tx_unlock(txq);
>> > + if (unlikely(virtqueue_poll(sq->vq, r)) &&
> So you are enabling callback on the next packet,
> which is almost sure to cause an interrupt storm
> on the guest.
>
>
> I think it's a bad idea, this is...
2015 Jul 31
5
[PATCH net-next] virtio_net: add gro capability
From: Eric Dumazet <edumazet at google.com>
Straightforward patch to add GRO processing to virtio_net.
napi_complete_done() usage allows more aggressive aggregation,
opted-in by setting /sys/class/net/xxx/gro_flush_timeout
Tested:
Setting /sys/class/net/xxx/gro_flush_timeout to 1000 nsec,
Rick Jones reported following results.
One VM of each on a pair of OpenStack compute nodes with E5-2650Lv3 CPUs
and Intel...
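The napi_complete_done() mention is the heart of the patch: instead of napi_complete(), the poll routine reports how much work it did, which lets the core hold partially aggregated GRO packets until gro_flush_timeout (nanoseconds, via /sys/class/net/<dev>/gro_flush_timeout) expires rather than flushing them at the end of every poll. A sketch of the two call sites this implies in virtio_net; their exact placement in receive_buf()/virtnet_poll() is assumed:

	/* RX completion path: hand the skb to the GRO engine instead of the
	 * plain netif_receive_skb() path so it can be merged with neighbours.
	 */
	napi_gro_receive(&rq->napi, skb);

	/* End of the poll routine: report work done; with a non-zero
	 * gro_flush_timeout the stack defers the GRO flush accordingly.
	 */
	if (received < budget)
		napi_complete_done(napi, received);

Opting in afterwards is just writing a timeout, e.g. 1000, to /sys/class/net/<dev>/gro_flush_timeout, as in the test setup quoted above.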
2015 Aug 03
0
[PATCH net-next] virtio_net: add gro capability
On Fri, Jul 31, 2015 at 06:25:17PM +0200, Eric Dumazet wrote:
> From: Eric Dumazet <edumazet at google.com>
>
> Straightforward patch to add GRO processing to virtio_net.
>
> napi_complete_done() usage allows more aggressive aggregation,
> opted-in by setting /sys/class/net/xxx/gro_flush_timeout
>
> Tested:
>
> Setting /sys/class/net/xxx/gro_flush_timeout to 1000 nsec,
> Rick Jones reported following results.
>
> One VM of each on a pair of OpenStack compu...
2013 Mar 22
2
[PATCH virtio-next 1/2] caif_virtio: Use vringh_notify_enable correctly
...1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/caif/caif_virtio.c b/drivers/net/caif/caif_virtio.c
index f6caa1e..fb80765 100644
--- a/drivers/net/caif/caif_virtio.c
+++ b/drivers/net/caif/caif_virtio.c
@@ -318,7 +318,7 @@ exit:
/* Really out of patckets? (stolen from virtio_net)*/
napi_complete(napi);
- if (unlikely(vringh_notify_enable_kern(cfv->vr_rx)) &&
+ if (unlikely(!vringh_notify_enable_kern(cfv->vr_rx)) &&
napi_schedule_prep(napi)) {
vringh_notify_disable_kern(cfv->vr_rx);
__napi_schedule(napi);
--
1.7.9.5
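The single '!' is the whole bug: vringh_notify_enable_kern(), like virtqueue_enable_cb(), re-enables notifications but returns false when more buffers are already available, which is exactly the case where the poll loop must run again instead of sleeping. With the negation in place, the exit path re-checks correctly (a sketch of the patched block, context as in the diff above):

	/* Really out of packets? */
	napi_complete(napi);
	/* Re-enable host notifications; a false return means buffers arrived
	 * while they were off, so grab the NAPI lock back and poll again.
	 */
	if (unlikely(!vringh_notify_enable_kern(cfv->vr_rx)) &&
	    napi_schedule_prep(napi)) {
		vringh_notify_disable_kern(cfv->vr_rx);
		__napi_schedule(napi);
	}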