Michael S. Tsirkin
2011-Jun-02 15:42 UTC
[PATCHv2 RFC 0/4] virtio and vhost-net capacity handling
OK, here's a new attempt to use the new capacity api. I also added more
comments to clarify the logic. Hope this is more readable. Let me know
pls.

This is on top of the patches applied by Rusty.

Warning: untested. Posting now to give people a chance to comment on the
API.

Changes from v1:
- fix comment in patch 2 to correct confusion noted by Rusty
- rewrite patch 3 along the lines suggested by Rusty
  note: it's not exactly the same but I hope it's close
  enough; the main difference is that mine does limited
  polling even in the unlikely xmit failure case.
- added a patch to not return capacity from add_buf:
  it always looked like a weird hack

Michael S. Tsirkin (4):
  virtio_ring: add capacity check API
  virtio_net: fix tx capacity checks using new API
  virtio_net: limit xmit polling
  Revert "virtio: make add_buf return capacity remaining"

 drivers/net/virtio_net.c     |  111 ++++++++++++++++++++++++++----------------
 drivers/virtio/virtio_ring.c |   10 +++-
 include/linux/virtio.h       |    7 ++-
 3 files changed, 84 insertions(+), 44 deletions(-)

-- 
1.7.5.53.gc233e
Michael S. Tsirkin
2011-Jun-02 15:42 UTC
[PATCHv2 RFC 1/4] virtio_ring: add capacity check API
Add API to check ring capacity. Because of the option to use indirect
buffers, this returns the worst case, not the normal case capacity.

Signed-off-by: Michael S. Tsirkin <mst at redhat.com>
---
 drivers/virtio/virtio_ring.c |    8 ++++++++
 include/linux/virtio.h       |    5 +++++
 2 files changed, 13 insertions(+), 0 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 68b9136..23422f1 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -344,6 +344,14 @@ void *virtqueue_get_buf(struct virtqueue *_vq, unsigned int *len)
 }
 EXPORT_SYMBOL_GPL(virtqueue_get_buf);
 
+int virtqueue_min_capacity(struct virtqueue *_vq)
+{
+	struct vring_virtqueue *vq = to_vvq(_vq);
+
+	return vq->num_free;
+}
+EXPORT_SYMBOL_GPL(virtqueue_min_capacity);
+
 void virtqueue_disable_cb(struct virtqueue *_vq)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
diff --git a/include/linux/virtio.h b/include/linux/virtio.h
index 7108857..209220d 100644
--- a/include/linux/virtio.h
+++ b/include/linux/virtio.h
@@ -42,6 +42,9 @@ struct virtqueue {
  *	vq: the struct virtqueue we're talking about.
  *	len: the length written into the buffer
  *	Returns NULL or the "data" token handed to add_buf.
+ * virtqueue_min_capacity: get the current capacity of the queue
+ *	vq: the struct virtqueue we're talking about.
+ *	Returns the current worst case capacity of the queue.
  * virtqueue_disable_cb: disable callbacks
  *	vq: the struct virtqueue we're talking about.
  *	Note that this is not necessarily synchronous, hence unreliable and only
@@ -89,6 +92,8 @@ void virtqueue_kick(struct virtqueue *vq);
 
 void *virtqueue_get_buf(struct virtqueue *vq, unsigned int *len);
 
+int virtqueue_min_capacity(struct virtqueue *vq);
+
 void virtqueue_disable_cb(struct virtqueue *vq);
 
 bool virtqueue_enable_cb(struct virtqueue *vq);
-- 
1.7.5.53.gc233e
Michael S. Tsirkin
2011-Jun-02 15:43 UTC
[PATCHv2 RFC 2/4] virtio_net: fix tx capacity checks using new API
In the (rare) case where new descriptors are used while virtio_net
enables the vq callback for the TX vq, virtio_net uses the number of sg
entries in the skb it frees to calculate how many descriptors in the
ring have just been made available.  But this value is an overestimate:
with indirect buffers each skb only uses one descriptor entry, meaning
we may wake the queue only to find we are still out of space and need
to stop the ring almost at once.

Using the new virtqueue_min_capacity() call, we can determine the
remaining capacity, so we should use that instead.  This estimate is
worst-case, which is consistent with the value returned by
virtqueue_add_buf.

Signed-off-by: Michael S. Tsirkin <mst at redhat.com>
---
 drivers/net/virtio_net.c |    9 ++++-----
 1 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index f685324..a0ee78d 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -509,19 +509,17 @@ again:
 	return received;
 }
 
-static unsigned int free_old_xmit_skbs(struct virtnet_info *vi)
+static void free_old_xmit_skbs(struct virtnet_info *vi)
 {
 	struct sk_buff *skb;
-	unsigned int len, tot_sgs = 0;
+	unsigned int len;
 
 	while ((skb = virtqueue_get_buf(vi->svq, &len)) != NULL) {
 		pr_debug("Sent skb %p\n", skb);
 		vi->dev->stats.tx_bytes += skb->len;
 		vi->dev->stats.tx_packets++;
-		tot_sgs += skb_vnet_hdr(skb)->num_sg;
 		dev_kfree_skb_any(skb);
 	}
-	return tot_sgs;
 }
 
 static int xmit_skb(struct virtnet_info *vi, struct sk_buff *skb)
@@ -611,7 +609,8 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
 		netif_stop_queue(dev);
 		if (unlikely(!virtqueue_enable_cb_delayed(vi->svq))) {
 			/* More just got used, free them then recheck. */
-			capacity += free_old_xmit_skbs(vi);
+			free_old_xmit_skbs(vi);
+			capacity = virtqueue_min_capacity(vi->svq);
 			if (capacity >= 2+MAX_SKB_FRAGS) {
 				netif_start_queue(dev);
 				virtqueue_disable_cb(vi->svq);
-- 
1.7.5.53.gc233e
Michael S. Tsirkin
2011-Jun-02 15:43 UTC
[PATCHv2 RFC 3/4] virtio_net: limit xmit polling
Current code might introduce a lot of latency variation if there are
many pending bufs at the time we attempt to transmit a new one.  This
is bad for real-time applications and can't be good for TCP either.
Free up just enough to both clean up all buffers eventually and to be
able to xmit the next packet.

Signed-off-by: Michael S. Tsirkin <mst at redhat.com>
---
 drivers/net/virtio_net.c |  106 +++++++++++++++++++++++++++++-----------------
 1 files changed, 67 insertions(+), 39 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index a0ee78d..b25db1c 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -509,17 +509,30 @@ again:
 	return received;
 }
 
-static void free_old_xmit_skbs(struct virtnet_info *vi)
+static bool free_old_xmit_skb(struct virtnet_info *vi)
 {
 	struct sk_buff *skb;
 	unsigned int len;
 
-	while ((skb = virtqueue_get_buf(vi->svq, &len)) != NULL) {
-		pr_debug("Sent skb %p\n", skb);
-		vi->dev->stats.tx_bytes += skb->len;
-		vi->dev->stats.tx_packets++;
-		dev_kfree_skb_any(skb);
-	}
+	skb = virtqueue_get_buf(vi->svq, &len);
+	if (unlikely(!skb))
+		return false;
+	pr_debug("Sent skb %p\n", skb);
+	vi->dev->stats.tx_bytes += skb->len;
+	vi->dev->stats.tx_packets++;
+	dev_kfree_skb_any(skb);
+	return true;
+}
+
+/* Check capacity and try to free enough pending old buffers to enable queueing
+ * new ones.  Return true if we can guarantee that a following
+ * virtqueue_add_buf will succeed. */
+static bool free_xmit_capacity(struct virtnet_info *vi)
+{
+	while (virtqueue_min_capacity(vi->svq) < MAX_SKB_FRAGS + 2)
+		if (unlikely(!free_old_xmit_skb(vi)))
+			return false;
+	return true;
 }
 
 static int xmit_skb(struct virtnet_info *vi, struct sk_buff *skb)
@@ -572,30 +588,34 @@ static int xmit_skb(struct virtnet_info *vi, struct sk_buff *skb)
 
 static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct virtnet_info *vi = netdev_priv(dev);
-	int capacity;
-
-	/* Free up any pending old buffers before queueing new ones. */
-	free_old_xmit_skbs(vi);
-
-	/* Try to transmit */
-	capacity = xmit_skb(vi, skb);
-
-	/* This can happen with OOM and indirect buffers. */
-	if (unlikely(capacity < 0)) {
-		if (net_ratelimit()) {
-			if (likely(capacity == -ENOMEM)) {
-				dev_warn(&dev->dev,
-					 "TX queue failure: out of memory\n");
-			} else {
-				dev->stats.tx_fifo_errors++;
+	int ret, i;
+
+	/* We normally do have space in the ring, so try to queue the skb as
+	 * fast as possible. */
+	ret = xmit_skb(vi, skb);
+	if (unlikely(ret < 0)) {
+		/* This triggers on the first xmit after ring full condition.
+		 * We need to free up some skbs first. */
+		if (likely(free_xmit_capacity(vi))) {
+			ret = xmit_skb(vi, skb);
+			/* This should never fail.  Check, just in case. */
+			if (unlikely(ret < 0)) {
 				dev_warn(&dev->dev,
 					 "Unexpected TX queue failure: %d\n",
-					 capacity);
+					 ret);
+				dev->stats.tx_fifo_errors++;
+				dev->stats.tx_dropped++;
+				kfree_skb(skb);
+				return NETDEV_TX_OK;
 			}
+		} else {
+			/* Ring full: it might happen if we get a callback while
+			 * the queue is still mostly full.  This should be
+			 * extremely rare. */
+			dev->stats.tx_dropped++;
+			kfree_skb(skb);
+			goto stop;
 		}
-		dev->stats.tx_dropped++;
-		kfree_skb(skb);
-		return NETDEV_TX_OK;
 	}
 	virtqueue_kick(vi->svq);
@@ -603,18 +623,26 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
 	skb_orphan(skb);
 	nf_reset(skb);
 
-	/* Apparently nice girls don't return TX_BUSY; stop the queue
-	 * before it gets out of hand.  Naturally, this wastes entries. */
-	if (capacity < 2+MAX_SKB_FRAGS) {
-		netif_stop_queue(dev);
-		if (unlikely(!virtqueue_enable_cb_delayed(vi->svq))) {
-			/* More just got used, free them then recheck. */
-			free_old_xmit_skbs(vi);
-			capacity = virtqueue_min_capacity(vi->svq);
-			if (capacity >= 2+MAX_SKB_FRAGS) {
-				netif_start_queue(dev);
-				virtqueue_disable_cb(vi->svq);
-			}
+	/* We transmit one skb, so try to free at least two pending skbs.
+	 * This is so that we don't hog the skb memory unnecessarily.
+	 *
+	 * Doing this after kick means there's a chance we'll free
+	 * the skb we have just sent, which is hot in cache. */
+	for (i = 0; i < 2; i++)
+		free_old_xmit_skb(vi);
+
+	if (likely(free_xmit_capacity(vi)))
+		return NETDEV_TX_OK;
+
+stop:
+	/* Apparently nice girls don't return TX_BUSY; check capacity and stop
+	 * the queue before it gets out of hand.
+	 * Naturally, this wastes entries. */
+	netif_stop_queue(dev);
+	if (unlikely(!virtqueue_enable_cb_delayed(vi->svq))) {
+		/* More just got used, free them and recheck. */
+		if (free_xmit_capacity(vi)) {
+			netif_start_queue(dev);
+			virtqueue_disable_cb(vi->svq);
 		}
 	}
-- 
1.7.5.53.gc233e
Michael S. Tsirkin
2011-Jun-02 15:43 UTC
[PATCHv2 RFC 4/4] Revert "virtio: make add_buf return capacity remaining"
This reverts commit 3c1b27d5043086a485f8526353ae9fe37bfa1065.  The only
user was virtio_net, and it switched to min_capacity instead.

Signed-off-by: Michael S. Tsirkin <mst at redhat.com>
---
 drivers/virtio/virtio_ring.c |    2 +-
 include/linux/virtio.h       |    2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 23422f1..a6c21eb 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -233,7 +233,7 @@ add_head:
 	pr_debug("Added buffer head %i to %p\n", head, vq);
 	END_USE(vq);
 
-	return vq->num_free;
+	return 0;
 }
 EXPORT_SYMBOL_GPL(virtqueue_add_buf_gfp);
 
diff --git a/include/linux/virtio.h b/include/linux/virtio.h
index 209220d..63c4908 100644
--- a/include/linux/virtio.h
+++ b/include/linux/virtio.h
@@ -34,7 +34,7 @@ struct virtqueue {
  *	in_num: the number of sg which are writable (after readable ones)
  *	data: the token identifying the buffer.
  *	gfp: how to do memory allocations (if necessary).
- *	Returns remaining capacity of queue (sg segments) or a negative error.
+ *	Returns 0 on success or a negative error.
  * virtqueue_kick: update after add_buf
  *	vq: the struct virtqueue
  *	After one or more add_buf calls, invoke this to kick the other side.
-- 
1.7.5.53.gc233e
Michael S. Tsirkin
2011-Jun-02 17:17 UTC
[PATCHv2 RFC 0/4] virtio and vhost-net capacity handling
On Thu, Jun 02, 2011 at 06:42:35PM +0300, Michael S. Tsirkin wrote:
> OK, here's a new attempt to use the new capacity api. I also added more
> comments to clarify the logic. Hope this is more readable. Let me know
> pls.
> 
> This is on top of the patches applied by Rusty.
> 
> Warning: untested. Posting now to give people chance to
> comment on the API.
> 
> Changes from v1:
> - fix comment in patch 2 to correct confusion noted by Rusty
> - rewrite patch 3 along the lines suggested by Rusty
>   note: it's not exactly the same but I hope it's close
>   enough, the main difference is that mine does limited
>   polling even in the unlikely xmit failure case.
> - added a patch to not return capacity from add_buf
>   it always looked like a weird hack
> 
> Michael S. Tsirkin (4):
>   virtio_ring: add capacity check API
>   virtio_net: fix tx capacity checks using new API
>   virtio_net: limit xmit polling
>   Revert "virtio: make add_buf return capacity remaining"
> 
>  drivers/net/virtio_net.c     |  111 ++++++++++++++++++++++++++----------------
>  drivers/virtio/virtio_ring.c |   10 +++-
>  include/linux/virtio.h       |    7 ++-
>  3 files changed, 84 insertions(+), 44 deletions(-)

And just FYI, here's a patch (on top) that I considered but decided
against.  This does a single get_buf before xmit.  I thought it's not
really needed, as the capacity check in add_buf is relatively cheap and
we removed the kick in xmit_skb.  But the point is that the loop will
now be easy to move around if someone manages to show this benefits
speed (which I doubt).

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index b25db1c..75af5be 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -590,6 +590,9 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
 	struct virtnet_info *vi = netdev_priv(dev);
 	int ret, i;
 
+	/* Try to pop an skb, to increase the chance we can add this one. */
+	free_old_xmit_skb(vi);
+
 	/* We normally do have space in the ring, so try to queue the skb as
 	 * fast as possible. */
 	ret = xmit_skb(vi, skb);
@@ -627,8 +630,7 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
 	 * This is so that we don't hog the skb memory unnecessarily.
 	 *
 	 * Doing this after kick means there's a chance we'll free
 	 * the skb we have just sent, which is hot in cache. */
-	for (i = 0; i < 2; i++)
-		free_old_xmit_skb(vi);
+	free_old_xmit_skb(vi);
 
 	if (likely(free_xmit_capacity(vi)))
 		return NETDEV_TX_OK;
Michael S. Tsirkin
2011-Jun-07 16:08 UTC
[PATCHv2 RFC 0/4] virtio and vhost-net capacity handling
On Thu, Jun 02, 2011 at 06:42:35PM +0300, Michael S. Tsirkin wrote:
> OK, here's a new attempt to use the new capacity api. I also added more
> comments to clarify the logic. Hope this is more readable. Let me know
> pls.
> 
> This is on top of the patches applied by Rusty.
> 
> Warning: untested. Posting now to give people chance to
> comment on the API.

OK, this seems to have survived some testing so far, after I dropped
patch 4 and fixed the build for patch 3 (build fixup patch sent in
reply to the original).  I'll be mostly offline until Sunday; I would
appreciate testing reports.

git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git virtio-net-xmit-polling-v2
git://git.kernel.org/pub/scm/linux/kernel/git/mst/qemu-kvm.git virtio-net-event-idx-v3

Thanks!

> Changes from v1:
> - fix comment in patch 2 to correct confusion noted by Rusty
> - rewrite patch 3 along the lines suggested by Rusty
>   note: it's not exactly the same but I hope it's close
>   enough, the main difference is that mine does limited
>   polling even in the unlikely xmit failure case.
> - added a patch to not return capacity from add_buf
>   it always looked like a weird hack
> 
> Michael S. Tsirkin (4):
>   virtio_ring: add capacity check API
>   virtio_net: fix tx capacity checks using new API
>   virtio_net: limit xmit polling
>   Revert "virtio: make add_buf return capacity remaining"
> 
>  drivers/net/virtio_net.c     |  111 ++++++++++++++++++++++++++----------------
>  drivers/virtio/virtio_ring.c |   10 +++-
>  include/linux/virtio.h       |    7 ++-
>  3 files changed, 84 insertions(+), 44 deletions(-)