Displaying 20 results from an estimated 6000 matches similar to: "[PATCH RFC] virtio_net: fix patch: virtio_net: limit xmit polling"
2009 May 29
1
[PATCH 3/4] virtio_net: don't free buffers in xmit ring
The virtio_net driver is complicated by the two methods of freeing old
xmit buffers (in addition to freeing old ones at the start of the xmit
path).
The original code used a 1/10 second timer attached to xmit_free(),
reset on every xmit. Before we orphaned skbs on xmit, the
transmitting userspace could block with a full socket until the timer
fired, the skb destructor was called, and they were
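For context, the surviving method (freeing completed buffers at the start of the xmit path) is essentially a reclaim loop over the tx virtqueue. Below is a minimal sketch, not the patch itself: it uses the modern virtqueue_get_buf() API rather than the 2009-era vq_ops indirection, and struct virtnet_info here is a cut-down stand-in for the driver's private data.

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/virtio.h>

/* Simplified stand-in for the driver's private state. */
struct virtnet_info {
	struct net_device *dev;
	struct virtqueue *svq;	/* send (tx) virtqueue */
};

/*
 * Reclaim skbs the host has finished with.  Called at the start of the
 * xmit path; with the timer/tasklet method removed, this is the only
 * place old tx buffers get freed.
 */
static void free_old_xmit_skbs(struct virtnet_info *vi)
{
	struct sk_buff *skb;
	unsigned int len;

	while ((skb = virtqueue_get_buf(vi->svq, &len)) != NULL) {
		vi->dev->stats.tx_bytes += skb->len;
		vi->dev->stats.tx_packets++;
		dev_kfree_skb_any(skb);
	}
}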
2008 May 26
7
[PATCH 1/3] virtio: fix virtio_net xmit of freed skb bug
If we fail to transmit a packet, we assume the queue is full and put
the skb into last_xmit_skb. However, if more space frees up before we
xmit it, we loop, and the result can be transmitting the same skb twice.
The fix is simple: set skb to NULL once we've used it in any way, and check
for NULL before sending.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
---
drivers/net/virtio_net.c |
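The shape of the bug and of the fix is easier to see in code. The fragment below is a hedged reconstruction rather than the actual patch: last_xmit_skb and xmit_skb() mirror the 2008 driver's names, while flush_cached_skb() and the cut-down struct are invented for illustration.

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Simplified stand-ins mirroring the 2008 driver's names. */
struct virtnet_info {
	struct net_device *dev;
	struct sk_buff *last_xmit_skb;	/* packet we could not send earlier */
};
int xmit_skb(struct virtnet_info *vi, struct sk_buff *skb);	/* assumed: 0 on success */

/*
 * Hedged reconstruction of the fix (not the actual patch): once the
 * cached skb has been consumed in any way, clear the pointer so a
 * retry of the xmit loop cannot queue the same skb twice.
 */
static bool flush_cached_skb(struct virtnet_info *vi)
{
	if (!vi->last_xmit_skb)
		return true;			/* nothing cached */

	if (xmit_skb(vi, vi->last_xmit_skb) != 0)
		return false;			/* still no room: keep it */

	vi->last_xmit_skb = NULL;		/* consumed: never resend */
	return true;
}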
2011 Jun 02
6
[PATCHv2 RFC 0/4] virtio and vhost-net capacity handling
OK, here's a new attempt to use the new capacity API. I also added more
comments to clarify the logic. Hope this is more readable; please let me know.
This is on top of the patches applied by Rusty.
Warning: untested. Posting now to give people a chance to
comment on the API.
Changes from v1:
- fix comment in patch 2 to correct confusion noted by Rusty
- rewrite patch 3 along the lines
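For background, the "capacity API" referred to here is the idea of having the add-buffer path report how much ring space remains, so the driver can decide when to stop the tx queue without probing separately. A hedged sketch of how a driver might use such a value follows; xmit_skb_capacity() and the exact threshold are illustrative, not the patchset's code.

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/virtio.h>

/* Simplified stand-in for the driver's private state. */
struct virtnet_info {
	struct net_device *dev;
	struct virtqueue *svq;
};

/* Assumed helper: queues the skb and returns the remaining free
 * descriptor count, or a negative errno if the ring is full. */
int xmit_skb_capacity(struct virtnet_info *vi, struct sk_buff *skb);

static netdev_tx_t start_xmit_sketch(struct sk_buff *skb, struct net_device *dev)
{
	struct virtnet_info *vi = netdev_priv(dev);
	int capacity = xmit_skb_capacity(vi, skb);

	if (capacity < 0) {
		/* Should be rare: the queue ought to be stopped already. */
		dev->stats.tx_dropped++;
		dev_kfree_skb_any(skb);
		return NETDEV_TX_OK;
	}

	virtqueue_kick(vi->svq);

	/* A worst-case skb needs a header plus MAX_SKB_FRAGS descriptors;
	 * if that might not fit, stop the queue until tx completions free
	 * more space. */
	if (capacity < 2 + MAX_SKB_FRAGS)
		netif_stop_queue(dev);

	return NETDEV_TX_OK;
}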
2011 May 19
22
[PATCHv2 00/14] virtio and vhost-net performance enhancements
OK, here is the large patchset that implements the virtio spec update
that I sent earlier (the spec itself needs a minor update, will send
that out too next week, but I think we are on the same page here
already). It supersedes the PUBLISH_USED_IDX patches I sent
out earlier.
What will follow is a patchset that actually includes 4 sets of
patches. I note their status below. Please consider
2009 May 29
2
[PATCH 2/4] virtio_net: return NETDEV_TX_BUSY instead of queueing an extra skb.
This effectively reverts 99ffc696d10b28580fe93441d627cf290ac4484c
"virtio: wean net driver off NETDEV_TX_BUSY".
The complexity of queuing an skb (setting a tasklet to re-xmit) is
questionable, especially once we get rid of the other reason for the
tasklet in the next patch.
If the skb won't fit in the tx queue, just return NETDEV_TX_BUSY. It
might be frowned upon, but it's
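Roughly, the approach being reinstated looks like the sketch below: if the packet will not fit, stop the queue and return NETDEV_TX_BUSY so the core requeues the skb and retries once the queue is woken. This is a simplification, with xmit_skb() and free_old_xmit_skbs() as assumed helpers along the lines of the earlier sketches.

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/virtio.h>

/* Stand-ins as in the earlier sketches. */
struct virtnet_info {
	struct net_device *dev;
	struct virtqueue *svq;
};
int xmit_skb(struct virtnet_info *vi, struct sk_buff *skb);	/* assumed: 0 on success */
void free_old_xmit_skbs(struct virtnet_info *vi);		/* reclaim completed tx skbs */

static netdev_tx_t start_xmit_busy_sketch(struct sk_buff *skb,
					  struct net_device *dev)
{
	struct virtnet_info *vi = netdev_priv(dev);

	free_old_xmit_skbs(vi);

	if (xmit_skb(vi, skb) != 0) {
		/* No room: stop the queue and let the core keep the skb. */
		netif_stop_queue(dev);
		return NETDEV_TX_BUSY;
	}

	virtqueue_kick(vi->svq);
	return NETDEV_TX_OK;
}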
2011 Jun 19
2
RFT: virtio_net: limit xmit polling
OK, different people seem to test different trees. In the hope of getting
everyone on the same page, I created several variants of this patch so
they can be compared. Whoever's interested, please check out the
following and tell me how these compare:
kernel:
git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git
virtio-net-limit-xmit-polling/base - this is the net-next baseline to test
2011 Jun 01
5
[PATCH RFC 0/3] virtio and vhost-net capacity handling
OK, here's a new attempt to use the new capacity API. I also added more
comments to clarify the logic. Hope this is more readable; please let me know.
This is on top of the patches applied by Rusty.
Note: there are now actually 2 calls to free_old_xmit_skbs on
the data path, so instead of passing flag '2' to the
second one I thought we can just make each call free up
at least 1 skb. This
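One plausible reading of the "free up at least 1 skb" note, sketched below purely for illustration and not taken from the RFC's code: have the reclaim helper report how many tx skbs it actually freed, guaranteeing at least one is reclaimed whenever one is available, so the two data-path call sites need no distinguishing flag. The struct here is the same cut-down stand-in used in the earlier sketches.

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/virtio.h>

struct virtnet_info {
	struct net_device *dev;
	struct virtqueue *svq;
};

/*
 * Hedged sketch: reclaim completed tx skbs and return how many were
 * freed; every call frees at least one skb whenever one is available.
 */
static unsigned int free_old_xmit_skbs(struct virtnet_info *vi)
{
	struct sk_buff *skb;
	unsigned int len, freed = 0;

	while ((skb = virtqueue_get_buf(vi->svq, &len)) != NULL) {
		vi->dev->stats.tx_bytes += skb->len;
		vi->dev->stats.tx_packets++;
		dev_kfree_skb_any(skb);
		freed++;
	}
	return freed;
}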
2011 May 04
27
[PATCH 00/18] virtio and vhost-net performance enhancements
OK, here's a large patchset that implements the virtio spec update that I
sent earlier. It supersedes the PUBLISH_USED_IDX patches
I sent out earlier.
I know it's a lot to ask, but please test, and please consider for 2.6.40 :)
I see nice performance improvements: one run showed host-to-guest netperf
going from 12 to 18 Gbit/s, but I did not spend a lot
of time testing performance,
2008 Apr 18
0
virtio: wean net driver off NETDEV_TX_BUSY
Herbert tells me that returning NETDEV_TX_BUSY from hard_start_xmit is
seen as a poor thing to do; we should cache the packet and stop the queue.
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
---
drivers/net/virtio_net.c | 57 +++++++++++++++++++++++++++++++----------------
1 file changed, 38 insertions(+), 19 deletions(-)
diff -r a936d320f6d5 drivers/net/virtio_net.c
---
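The "cache the packet and stop the queue" alternative looks roughly like this hedged sketch: last_xmit_skb mirrors the field the patch introduces, while xmit_skb() and the cut-down struct virtnet_info are stand-ins for illustration.

#include <linux/netdevice.h>
#include <linux/skbuff.h>
#include <linux/virtio.h>

/* Simplified stand-ins for the driver's state and helpers. */
struct virtnet_info {
	struct net_device *dev;
	struct virtqueue *svq;
	struct sk_buff *last_xmit_skb;	/* packet that did not fit */
};
int xmit_skb(struct virtnet_info *vi, struct sk_buff *skb);	/* assumed: 0 on success */

static netdev_tx_t start_xmit_cache_sketch(struct sk_buff *skb,
					   struct net_device *dev)
{
	struct virtnet_info *vi = netdev_priv(dev);

	if (xmit_skb(vi, skb) != 0) {
		/* No room: keep the skb ourselves and stop the queue
		 * instead of returning NETDEV_TX_BUSY to the core. */
		vi->last_xmit_skb = skb;
		netif_stop_queue(dev);
		return NETDEV_TX_OK;
	}

	virtqueue_kick(vi->svq);
	return NETDEV_TX_OK;
}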
2011 Nov 02
3
[PATCH RFC 0/2] virtio-pci: polling mode support
The MSI-X spec requires that a device can be operated with
all vectors masked, by polling.
So the following patchset (lightly tested) adds this
ability: when the driver reads the ISR, the device
recalls a pending notification and returns
the pending status in the ISR register.
The polling driver can operate as follows (see the sketch below):
- map all VQs and config to the same vector
- poll the ISR to get status - this also flushes VQ
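As an illustration of that polling scheme, a hedged sketch follows. The legacy VIRTIO_PCI_ISR register, VIRTIO_PCI_ISR_CONFIG bit and vring_interrupt() are real kernel interfaces, but vp_poll_dev, handle_config_change() and vp_poll_once() are invented names, and the loop is a simplification of what the cover letter describes.

#include <linux/io.h>
#include <linux/virtio.h>
#include <linux/virtio_ring.h>
#include <linux/virtio_pci.h>	/* VIRTIO_PCI_ISR, VIRTIO_PCI_ISR_CONFIG */

/* Simplified stand-in for the legacy virtio-pci device state. */
struct vp_poll_dev {
	void __iomem *ioaddr;		/* legacy I/O window */
	struct virtqueue *vq;		/* all VQs mapped to one (masked) vector */
};

/* Assumed config-change handler supplied by the driver. */
void handle_config_change(struct vp_poll_dev *vp);

/*
 * One pass of the polling loop: per the cover letter, reading the ISR
 * register returns (and clears) the pending status even with all MSI-X
 * vectors masked, so the driver services the queues and config itself.
 */
static void vp_poll_once(struct vp_poll_dev *vp)
{
	u8 isr = ioread8(vp->ioaddr + VIRTIO_PCI_ISR);

	if (!isr)
		return;			/* nothing pending */

	if (isr & VIRTIO_PCI_ISR_CONFIG)
		handle_config_change(vp);

	/* Queue status: vring_interrupt() is the core helper that runs
	 * the virtqueue callback to process used buffers. */
	vring_interrupt(0 /* irq, unused */, vp->vq);
}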