Displaying 4 results from an estimated 4 matches for "virtqueue_get_capac".
2011 May 18
1
[PATCH RFC] virtio_net: fix patch: virtio_net: limit xmit polling
...static bool free_old_xmit_skbs(struct virtnet_info *vi, int capacity)
struct sk_buff *skb;
unsigned int len;
bool c;
+ int n;
+
/* We try to free up at least 2 skbs per one sent, so that we'll get
* all of the memory back if they are used fast enough. */
- int n = 2;
-
- while ((c = virtqueue_get_capacity(vi->svq) >= capacity) && --n > 0 &&
- (skb = virtqueue_get_buf(vi->svq, &len)) != NULL) {
+ for (n = 0;
+ ((c = virtqueue_get_capacity(vi->svq)) < capacity || n < 2) &&
+ ((skb = virtqueue_get_buf(vi->svq, &len)));
+ +...
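The archive cuts the diff off above. To show the shape of the loop the fix is aiming for (keep reclaiming completed tx buffers while the ring still lacks the requested capacity, and free at least two per transmit so memory is returned quickly), here is a minimal self-contained C sketch; ring_capacity(), reclaim_buf(), and the toy ring state are hypothetical stand-ins for virtqueue_get_capacity(), virtqueue_get_buf() and vi->svq, not the kernel API:

#include <stdbool.h>
#include <stdio.h>

/* Toy tx-ring state: hypothetical stand-ins for vi->svq bookkeeping. */
static int used_bufs = 5;   /* completed transmits not yet reclaimed */
static int free_slots = 1;  /* current ring capacity */

static int ring_capacity(void)   /* ~ virtqueue_get_capacity(vi->svq) */
{
	return free_slots;
}

static bool reclaim_buf(void)    /* ~ virtqueue_get_buf(vi->svq, &len) */
{
	if (used_bufs == 0)
		return false;
	used_bufs--;
	free_slots++;
	return true;
}

/* Shape of the fixed loop: keep going while the ring lacks the requested
 * capacity OR fewer than 2 buffers have been freed, and stop as soon as
 * no completed buffer is available. */
static bool free_old_bufs(int capacity)
{
	int c = 0, n;

	for (n = 0;
	     ((c = ring_capacity()) < capacity || n < 2) && reclaim_buf();
	     n++)
		; /* skb accounting elided */
	return c >= capacity;
}

int main(void)
{
	printf("capacity ok: %d, %d buffers still pending\n",
	       free_old_bufs(3), used_bufs);
	return 0;
}

Note the parenthesization the patch changes: = binds looser than >=, so c = virtqueue_get_capacity(vi->svq) >= capacity stores the result of the comparison in c, while (c = virtqueue_get_capacity(vi->svq)) < capacity stores the raw capacity first (which is why the sketch declares c as int).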
2012 Sep 28
6
[PATCH 0/3] virtio-net: inline header support
Thinking about Sasha's patches, we can reduce ring usage
for virtio net small packets dramatically if we put
the virtio net header inline with the data.
This can be done for free when the guest net stack has allocated
extra headroom for the packet, and I don't see
why this would have any downsides.
Even though with my recent patches qemu
no longer requires the header to be the first s/g element,
we
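The idea is easiest to see in miniature: if the packet buffer has enough free headroom in front of the data, the virtio-net header can be written there, so header and payload travel as one contiguous s/g element instead of two. A simplified sketch, where vnet_hdr and inline_hdr() are hypothetical stand-ins rather than the kernel's struct virtio_net_hdr handling:

#include <stdint.h>
#include <string.h>

struct vnet_hdr {                 /* stand-in for struct virtio_net_hdr */
	uint8_t  flags, gso_type;
	uint16_t hdr_len, gso_size, csum_start, csum_offset;
};

/* data points at the packet; headroom bytes are free in front of it.
 * Returns the start of the combined header+payload region, or NULL if
 * the headroom is too small and a separate s/g element is still needed. */
static void *inline_hdr(void *data, size_t headroom, const struct vnet_hdr *h)
{
	if (headroom < sizeof(*h))
		return NULL;
	memcpy((char *)data - sizeof(*h), h, sizeof(*h));
	return (char *)data - sizeof(*h);
}

Since a small-packet transmit otherwise consumes one descriptor for the header and one for the data, folding the header into the headroom roughly halves descriptor usage, which is where the ring-usage saving comes from.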
2011 May 19
22
[PATCHv2 00/14] virtio and vhost-net performance enhancements
OK, here is the large patchset that implements the virtio spec update
that I sent earlier (the spec itself needs a minor update, will send
that out too next week, but I think we are on the same page here
already). It supersedes the PUBLISH_USED_IDX patches I sent
out earlier.
What follows is a patchset that actually includes 4 sets of
patches. I note their status below. Please consider
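For context on the mechanism being discussed: the notification-suppression idea these patches pursue, publishing the ring index each side has processed so the peer can skip interrupts and exits the consumer would ignore anyway, hinges on a wraparound-safe index comparison. A minimal version, modeled on the kernel's vring_need_event() helper from include/uapi/linux/virtio_ring.h (stdint types used here to keep the sketch self-contained):

#include <stdint.h>

/* Notify the peer only if the index it published (event_idx) lies in the
 * window (old_idx, new_idx] of entries added since the last notification.
 * Unsigned 16-bit subtraction keeps the comparison correct across index
 * wraparound. Modeled on the kernel's vring_need_event(). */
static inline int need_event(uint16_t event_idx, uint16_t new_idx,
			     uint16_t old_idx)
{
	return (uint16_t)(new_idx - event_idx - 1) <
	       (uint16_t)(new_idx - old_idx);
}

In this scheme the device reads the event index the driver published and runs this check with its old and new used indexes before raising an interrupt, and the driver does the symmetric check before kicking the device.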
2011 May 04
27
[PATCH 00/18] virtio and vhost-net performance enhancements
OK, here's a large patchset that implements the virtio spec update that I
sent earlier. It supersedes the PUBLISH_USED_IDX patches
I sent out earlier.
I know it's a lot to ask but please test, and please consider for 2.6.40 :)
I see nice performance improvements: one netperf run went from 12
to 18 Gbit/s host to guest, but I did not spend a lot
of time testing performance,