Results from an estimated 11000 matches similar to: "[PATCHv2 RFC 0/4] virtio and vhost-net capacity handling"
2011 Jun 01 (5 messages): [PATCH RFC 0/3] virtio and vhost-net capacity handling
OK, here's a new attempt to use the new capacity api. I also added more
comments to clarify the logic. Hope this is more readable. Let me know
pls.
This is on top of the patches applied by Rusty.
Note: there are now actually 2 calls to free_old_xmit_skbs on the
data path, so instead of passing flag '2' to the
second one I thought we can just make each call free up
at least 1 skb. This
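For context, a minimal sketch of what a "free completed TX skbs" helper along these lines might look like, assuming the virtqueue_get_buf() API and a cut-down, illustrative view of the driver's private state (the real driver uses struct virtnet_info); this is a sketch of the idea, not the patch itself.

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/virtio.h>

/* Illustrative private state; the real driver keeps much more than this. */
struct virtnet_tx_sketch {
	struct virtqueue *svq;		/* send virtqueue */
	struct net_device *dev;
};

/* Reap completed TX skbs and report how many were freed, so a caller
 * can insist on freeing at least one before queueing a new packet. */
static unsigned int free_old_xmit_skbs_sketch(struct virtnet_tx_sketch *vi)
{
	struct sk_buff *skb;
	unsigned int len, freed = 0;

	while ((skb = virtqueue_get_buf(vi->svq, &len)) != NULL) {
		vi->dev->stats.tx_bytes += skb->len;
		vi->dev->stats.tx_packets++;
		dev_kfree_skb_any(skb);
		freed++;
	}
	return freed;
}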
2011 May 04 (27 messages): [PATCH 00/18] virtio and vhost-net performance enhancements
OK, here's a large patchset that implements the virtio spec update that I
sent earlier. It supersedes the PUBLISH_USED_IDX patches
I sent out earlier.
I know it's a lot to ask but please test, and please consider for 2.6.40 :)
I see nice performance improvements: one run showed going from 12
to 18 Gbit/s host to guest with netperf, but I did not spend a lot
of time testing performance,
2011 May 19 (22 messages): [PATCHv2 00/14] virtio and vhost-net performance enhancements
OK, here is the large patchset that implements the virtio spec update
that I sent earlier (the spec itself needs a minor update, will send
that out too next week, but I think we are on the same page here
already). It supersedes the PUBLISH_USED_IDX patches I sent
out earlier.
What follows is a patchset that actually includes 4 sets of
patches. I note below their status. Please consider
2008 Jun 08 (2 messages): [PATCH 1/4] virtio_net: Fix skb->csum_start computation
From: Mark McLoughlin <markmc at redhat.com>
hdr->csum_start is the offset from the start of the ethernet
header to the transport layer checksum field. skb->csum_start
is the offset from skb->head.
skb_partial_csum_set() assumes that skb->data points to the
ethernet header - i.e. it computes skb->csum_start by adding
the headroom to hdr->csum_start.
Since
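For illustration, a hedged sketch of the offset relationship described above; skb_partial_csum_set() is the real helper, while the wrapper name here is made up and endianness conversion is omitted.

#include <linux/skbuff.h>
#include <linux/virtio_net.h>

/* hdr->csum_start is an offset from the start of the Ethernet header;
 * skb->csum_start is an offset from skb->head.  skb_partial_csum_set()
 * bridges the two by adding the headroom, which is only correct while
 * skb->data actually points at the Ethernet header. */
static bool set_partial_csum_sketch(struct sk_buff *skb,
				    const struct virtio_net_hdr *hdr)
{
	return skb_partial_csum_set(skb, hdr->csum_start, hdr->csum_offset);
}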
2008 May 02 (1 message): [PATCH] virtio_net: free transmit skbs in a timer
On Thursday 01 May 2008 00:31:46 Mark McLoughlin wrote:
> virtio_net currently only frees old transmit skbs just
> before queueing new ones. If the queue is full, it then
> enables interrupts and waits for notification that more
> work has been performed.
Hi Mark,
This patch is fine, but it's better to do it from skb_xmit_done(). Of
course, this is usually called from an
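A hedged sketch of the alternative being suggested, written with today's virtqueue callback/disable-callback names and a hypothetical driver-owned work item; the thread itself predates these exact APIs.

#include <linux/virtio.h>
#include <linux/workqueue.h>

/* Illustrative private state: just the deferred-cleanup work item. */
struct virtnet_done_sketch {
	struct work_struct free_old_skbs_work;	/* reaps completed TX skbs */
};

/* TX "done" callback, invoked from the virtqueue interrupt: rather than
 * freeing skbs only just before the next xmit, defer the reaping. */
static void skb_xmit_done_sketch(struct virtqueue *svq)
{
	struct virtnet_done_sketch *vi = svq->vdev->priv;

	virtqueue_disable_cb(svq);		/* quiet further TX interrupts */
	schedule_work(&vi->free_old_skbs_work);	/* free skbs outside the irq */
}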
2009 Oct 25 (1 message): [PATCH] virtio-net: fix data corruption with OOM
virtio net used to unlink skbs from send queues on error,
but ever since 48925e372f04f5e35fec6269127c62b2c71ab794
we do not do this. This causes guest data corruption and crashes
with vhost since net core can requeue the skb or free it without
it being taken off the list.
This patch fixes this by queueing the skb after successful
transmit.
Signed-off-by: Michael S. Tsirkin <mst at
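A hedged sketch of the ordering the fix describes, using the current virtqueue_add_outbuf() name in place of the add_buf call of that era; the helper and list names are illustrative.

#include <linux/scatterlist.h>
#include <linux/skbuff.h>
#include <linux/virtio.h>

/* Only track the skb for later freeing after it is safely on the ring;
 * on failure, return the error and leave the skb entirely to the caller,
 * so the core stack can requeue or free it without a stale pointer
 * remaining on our list. */
static int xmit_skb_sketch(struct virtqueue *svq, struct sk_buff *skb,
			   struct sk_buff_head *sent)
{
	struct scatterlist sg;
	int err;

	sg_init_one(&sg, skb->data, skb->len);
	err = virtqueue_add_outbuf(svq, &sg, 1, skb, GFP_ATOMIC);
	if (err < 0)
		return err;		/* do NOT queue the skb on failure */

	__skb_queue_head(sent, skb);	/* track only after success */
	virtqueue_kick(svq);
	return 0;
}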
2008 May 26 (7 messages): [PATCH 1/3] virtio: fix virtio_net xmit of freed skb bug
If we fail to transmit a packet, we assume the queue is full and put
the skb into last_xmit_skb. However, if more space frees up before we
xmit it, we loop, and the result can be transmitting the same skb twice.
Fix is simple: set skb to NULL if we've used it in some way, and check
before sending.
Signed-off-by: Rusty Russell <rusty at rustcorp.com.au>
---
drivers/net/virtio_net.c |
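A hedged sketch of the rule the fix describes, with the cached-skb pointer passed in explicitly; virtqueue_add_outbuf() stands in for the era's add_buf call and the helper name is made up.

#include <linux/scatterlist.h>
#include <linux/skbuff.h>
#include <linux/virtio.h>

/* Once the cached skb has been consumed in any way, clear the pointer,
 * and always check it before sending, so the same skb can never be
 * handed to the ring twice. */
static bool flush_last_xmit_skb_sketch(struct virtqueue *svq,
				       struct sk_buff **last_xmit_skb)
{
	struct scatterlist sg;

	if (!*last_xmit_skb)
		return true;			/* nothing cached to send */

	sg_init_one(&sg, (*last_xmit_skb)->data, (*last_xmit_skb)->len);
	if (virtqueue_add_outbuf(svq, &sg, 1, *last_xmit_skb, GFP_ATOMIC) < 0)
		return false;			/* ring full: keep it cached */

	*last_xmit_skb = NULL;			/* consumed: forget it */
	return true;
}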
2010 Jun 10 (1 message): [PATCH for-2.6.35] virtio_net: fix oom handling on tx
virtio net will never try to overflow the TX ring, so the only reason
add_buf may fail is out of memory. Thus, we can not stop the
device until some request completes - there's no guarantee anything
at all is outstanding.
Make the error message clearer as well: error here does not
indicate queue full.
Signed-off-by: Michael S. Tsirkin <mst at redhat.com>
---
drivers/net/virtio_net.c |
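A hedged sketch of the policy the commit message describes (drop on add failure, warn about memory rather than a full queue, and do not stop the queue); names and exact wording are illustrative, not the patch itself.

#include <linux/netdevice.h>
#include <linux/scatterlist.h>
#include <linux/skbuff.h>
#include <linux/virtio.h>

static netdev_tx_t start_xmit_sketch(struct sk_buff *skb,
				     struct net_device *dev,
				     struct virtqueue *svq)
{
	struct scatterlist sg;

	sg_init_one(&sg, skb->data, skb->len);
	if (virtqueue_add_outbuf(svq, &sg, 1, skb, GFP_ATOMIC) < 0) {
		/* The ring itself is never overcommitted, so failure here
		 * means out of memory, not "queue full": drop and carry on
		 * rather than stopping a queue nothing may ever wake. */
		dev_warn(&dev->dev, "TX add failed, likely OOM; dropping\n");
		dev->stats.tx_dropped++;
		dev_kfree_skb_any(skb);
		return NETDEV_TX_OK;
	}

	virtqueue_kick(svq);
	return NETDEV_TX_OK;
}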
2010 Apr 12 (10 messages): [PATCH 0/6] virtio: virtqueue ops cleanup
virtqueue ops were introduced in the hope that we'll
have multiple implementations besides virtio_ring,
but none have surfaced so far, and given that
existing virtio ring is deployed in production
we are likely stuck with it now, so this layer just
adds complexity and overhead.
Further, the need to pass vq twice to each call
(as in dev->vq->vq_ops->kick(dev->vq)) adds potential
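A hedged sketch of the shape of the cleanup: the ops table goes away and callers use direct virtqueue_* helpers, naming the vq once per call; the wrapper below is illustrative.

#include <linux/virtio.h>

static void notify_host_sketch(struct virtqueue *vq)
{
	/* old style, roughly: vq->vq_ops->kick(vq); */
	virtqueue_kick(vq);	/* direct call, vq named once */
}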
2012 Sep 28 (6 messages): [PATCH 0/3] virtio-net: inline header support
Thinking about Sasha's patches, we can reduce ring usage
for virtio net small packets dramatically if we put
virtio net header inline with the data.
This can be done for free in case guest net stack allocated
extra head room for the packet, and I don't see
why this would have any downsides.
Even though with my recent patches qemu
no longer requires header to be the first s/g element,
we
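A hedged sketch of the headroom trick being proposed: when the stack left room, push the virtio-net header directly in front of the packet so header and data share one buffer; the helper name is made up and the fallback path is left out.

#include <linux/skbuff.h>
#include <linux/virtio_net.h>

static struct virtio_net_hdr *inline_hdr_sketch(struct sk_buff *skb)
{
	struct virtio_net_hdr *hdr;

	if (skb_headroom(skb) < sizeof(*hdr))
		return NULL;		/* no free headroom: use a separate header buffer */

	hdr = (struct virtio_net_hdr *)skb_push(skb, sizeof(*hdr));
	memset(hdr, 0, sizeof(*hdr));	/* caller fills in flags/gso fields */
	return hdr;			/* header now contiguous with the data */
}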
2007 Nov 18 (3 messages): [PATCH 1/2] virtio: fix net driver loop case where we fail to restart
skb is only NULL the first time around: it's more correct to test for
being under-budget.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
diff -r 2a94425ac7d5 drivers/net/virtio_net.c
--- a/drivers/net/virtio_net.c Thu Nov 15 13:47:28 2007 +1100
+++ b/drivers/net/virtio_net.c Thu Nov 15 23:10:44 2007 +1100
@@ -198,8 +198,8 @@ again:
if (vi->num < vi->max / 2)
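A hedged sketch of the test being described, framed in today's NAPI terms; receive_one_sketch() is a hypothetical stand-in for the receive path, and the original 2007 code structure differs.

#include <linux/netdevice.h>

static bool receive_one_sketch(struct napi_struct *napi);	/* hypothetical */

static int poll_sketch(struct napi_struct *napi, int budget)
{
	int received = 0;

	while (received < budget && receive_one_sketch(napi))
		received++;

	/* Restart/re-enable only when we ran out of work, i.e. we are
	 * under budget, rather than testing a possibly stale skb pointer. */
	if (received < budget) {
		napi_complete_done(napi, received);
		/* re-enable virtqueue interrupts and refill buffers here */
	}
	return received;
}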