search for: 882,12

Displaying 7 results from an estimated 7 matches for "882,12".

2014 Dec 01
1
[PATCH RFC v4 net-next 1/5] virtio_net: enable tx interrupt
...patch I sent, this seems to ignore the budget, and always poll the full napi_weight. Seems strange. What is the reason for this? > #ifdef CONFIG_NET_RX_BUSY_POLL > /* must be called with local_bh_disable()d */ > static int virtnet_busy_poll(struct napi_struct *napi) > @@ -825,30 +882,12 @@ static int virtnet_open(struct net_device *dev) > if (!try_fill_recv(&vi->rq[i], GFP_KERNEL)) > schedule_delayed_work(&vi->refill, 0); > virtnet_napi_enable(&vi->rq[i]); > + napi_enable(&vi->sq[i].napi); > } > > return 0; >...
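For context, the hunk quoted in this result adds per-queue tx NAPI activation to virtnet_open() next to the existing rx setup. A minimal sketch of that shape, assuming the vi->max_queue_pairs loop bound and the sq[i].napi field introduced elsewhere in the series, with error handling omitted:

/* Sketch based on the quoted virtnet_open() hunk: enable rx and tx NAPI
 * for every queue pair when the device is opened. Field and helper names
 * follow the quoted diff; vi->max_queue_pairs is assumed. */
static int virtnet_open(struct net_device *dev)
{
	struct virtnet_info *vi = netdev_priv(dev);
	int i;

	for (i = 0; i < vi->max_queue_pairs; i++) {
		/* Post receive buffers before polling starts; retry later
		 * from the refill work if allocation fails. */
		if (!try_fill_recv(&vi->rq[i], GFP_KERNEL))
			schedule_delayed_work(&vi->refill, 0);
		virtnet_napi_enable(&vi->rq[i]);
		/* New in this patch: the send queue gets its own NAPI context
		 * so tx completions can be reaped from softirq as well. */
		napi_enable(&vi->sq[i].napi);
	}

	return 0;
}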
2012 Jan 05
3
[PATCH 0 of 2] xenpaging:speed up page-in
The following two patches are about how to speed up page-in in xenpaging. On suse11-64 with 4G memory, if we page out 2G of pages it costs about 15.5 seconds, but it takes 2088 seconds to finish paging them back in. If page-in takes too much time, it causes unmeasurable problems when the VM or dom0 accesses a paged_out page, such as a BSOD or crash. What's more, dom0 is kept under high I/O pressure the whole time.
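For scale, the figures above work out to roughly 2048 MiB / 15.5 s ≈ 132 MiB/s for page-out versus 2048 MiB / 2088 s ≈ 1 MiB/s for page-in, i.e. paging back in is on the order of 130x slower in this test, which is the gap the two patches target.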
2014 Dec 01
0
[PATCH RFC v4 net-next 1/5] virtio_net: enable tx interrupt
...+ virtqueue_disable_cb(sq->vq); + napi_schedule(&sq->napi); + } + } + __netif_tx_unlock(txq); + return sent < limit ? 0 : budget; +} + #ifdef CONFIG_NET_RX_BUSY_POLL /* must be called with local_bh_disable()d */ static int virtnet_busy_poll(struct napi_struct *napi) @@ -825,30 +882,12 @@ static int virtnet_open(struct net_device *dev) if (!try_fill_recv(&vi->rq[i], GFP_KERNEL)) schedule_delayed_work(&vi->refill, 0); virtnet_napi_enable(&vi->rq[i]); + napi_enable(&vi->sq[i].napi); } return 0; } -static void free_old_xmit_skbs(s...
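The tail quoted here returns 0 when fewer than limit buffers were reclaimed and budget otherwise, which is how NAPI decides whether to keep the send queue on the poll list. A rough sketch of a budget-respecting tx poll callback in that style, assuming free_old_xmit_skbs() is changed to take a limit and return how many buffers it freed (a simplification of the actual patch), with the callback name illustrative and the tx-queue locking from the hunk omitted for brevity:

/* Sketch only: tx NAPI poll in the style of the quoted hunk.
 * free_old_xmit_skbs() is assumed to reclaim at most 'budget' completed
 * buffers and return the number it actually freed. */
static int virtnet_poll_tx(struct napi_struct *napi, int budget)
{
	struct send_queue *sq = container_of(napi, struct send_queue, napi);
	unsigned int sent;

	sent = free_old_xmit_skbs(sq, budget);

	if (sent < budget) {
		/* Queue drained within budget: stop polling and re-arm the
		 * tx completion interrupt. If new completions raced in,
		 * fall back to polling instead of losing them. */
		napi_complete(napi);
		if (unlikely(!virtqueue_enable_cb(sq->vq))) {
			virtqueue_disable_cb(sq->vq);
			napi_schedule(napi);
		}
		return 0;
	}

	/* Budget exhausted: ask NAPI to call us again. */
	return budget;
}

Keying the reclaim limit off budget rather than a fixed napi_weight is exactly the distinction the reviewer raises in the first result above.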
2014 Dec 01
9
[PATCH RFC v4 net-next 0/5] virtio_net: enabling tx interrupts
Hello: We used to orphan packets before transmission for virtio-net. This breaks socket accounting and can leave several functions not working, e.g.: - Byte Queue Limit depends on tx completion notification to work. - Packet Generator depends on tx completion notification for the last transmitted packet to complete. - TCP Small Queue depends on proper accounting of sk_wmem_alloc to work. This
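The cover letter's Byte Queue Limit point is that BQL needs the driver to report both the bytes it queues at transmit time and the bytes that later complete; if packets are orphaned and no tx completion notification arrives, the second half never happens. A minimal sketch of that pairing using the generic netdev BQL helpers, with my_hw_queue_skb() as a hypothetical stand-in for the device-specific transmit path:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Hypothetical device-specific enqueue; stands in for the real hardware path. */
static void my_hw_queue_skb(struct sk_buff *skb);

static netdev_tx_t my_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct netdev_queue *txq =
		netdev_get_tx_queue(dev, skb_get_queue_mapping(skb));
	unsigned int len = skb->len;	/* record before the skb may be consumed */

	my_hw_queue_skb(skb);
	/* Tell BQL how many bytes are now in flight on this tx queue. */
	netdev_tx_sent_queue(txq, len);
	return NETDEV_TX_OK;
}

/* Runs from the tx completion path (interrupt or NAPI poll). Without a
 * completion notification this is never called and BQL never releases
 * the bytes it is tracking. */
static void my_tx_complete(struct net_device *dev, unsigned int queue,
			   unsigned int pkts, unsigned int bytes)
{
	struct netdev_queue *txq = netdev_get_tx_queue(dev, queue);

	netdev_tx_completed_queue(txq, pkts, bytes);
}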