similar to: [PATCH] virtio-net: put virtio net header inline with data

Displaying 20 results from an estimated 6000 matches similar to: "[PATCH] virtio-net: put virtio net header inline with data"

2012 Sep 28
6
[PATCH 0/3] virtio-net: inline header support
Thinking about Sasha's patches, we can reduce ring usage for virtio net small packets dramatically if we put the virtio net header inline with the data. This can be done for free in case the guest net stack allocated extra head room for the packet, and I don't see why this would have any downsides. Even though with my recent patches qemu no longer requires the header to be the first s/g element, we
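A minimal sketch of the idea (illustrative only, not the actual patch; the function name and fallback are assumptions): if the skb has enough headroom, the virtio net header can be pushed directly in front of the packet data, so header and data form a single s/g element.

#include <linux/skbuff.h>
#include <linux/virtio_net.h>

/* Illustrative sketch only, not the patch itself.  If the net stack left
 * enough headroom, place the virtio net header directly in front of the
 * packet data so header plus data can be sent as one s/g element. */
static int xmit_push_hdr_inline(struct sk_buff *skb, unsigned int hdr_len)
{
	struct virtio_net_hdr *hdr;

	if (skb_headroom(skb) < hdr_len)
		return -ENOSPC;	/* caller falls back to a separate header buffer */

	hdr = (struct virtio_net_hdr *)skb_push(skb, hdr_len);
	memset(hdr, 0, hdr_len);
	/* ... fill in csum/gso fields, then queue skb->data as a single buffer ... */
	return 0;
}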
2013 Jul 10
2
[PATCH] virtio-net: put virtio net header inline with data
From: Rusty Russell <rusty at rustcorp.com.au> Date: Tue, 09 Jul 2013 17:38:51 +0930 > If you convince DaveM, I won't object :) Simplifications are great, but not when the merge window opens up. Sorry, this isn't appropriate now.
2014 Nov 27
1
[PATCH v6 24/46] virtio_net: get rid of virtio_net_hdr/skb_vnet_hdr
virtio 1.0 doesn't use virtio_net_hdr anymore, and in fact, it's not really useful since virtio_net_hdr_mrg_rxbuf includes it as the first field anyway. Let's drop it, precalculate the header len and store it within vi instead. This way we can also remove struct skb_vnet_hdr. Signed-off-by: Michael S. Tsirkin <mst at redhat.com> Reviewed-by: Cornelia Huck <cornelia.huck at
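For context, the layout the commit message relies on, abridged from include/uapi/linux/virtio_net.h (field types as in current kernels):

struct virtio_net_hdr {
	__u8 flags;
	__u8 gso_type;
	__virtio16 hdr_len;
	__virtio16 gso_size;
	__virtio16 csum_start;
	__virtio16 csum_offset;
};

/* The mergeable-rx-buffer header starts with a plain virtio_net_hdr,
 * so code that always uses this struct (with a precalculated header
 * length) covers both layouts. */
struct virtio_net_hdr_mrg_rxbuf {
	struct virtio_net_hdr hdr;
	__virtio16 num_buffers;	/* number of merged rx buffers */
};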
2014 Oct 23
6
[PATCH RFC 1/4] virtio_net: pass vi around
Too many places poke at [rs]q->vq->vdev->priv just to get the vi structure. Let's just pass the pointer around: seems cleaner, and might even be faster. Signed-off-by: Michael S. Tsirkin <mst at redhat.com> --- drivers/net/virtio_net.c | 36 +++++++++++++++++++----------------- 1 file changed, 19 insertions(+), 17 deletions(-) diff --git a/drivers/net/virtio_net.c
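A hedged sketch of the kind of change described (function names are illustrative, not taken from the actual diff):

/* Before: each helper re-derives the virtnet_info from the queue's
 * backpointer chain. */
static void helper_before(struct receive_queue *rq)
{
	struct virtnet_info *vi = rq->vq->vdev->priv;
	/* ... use vi ... */
}

/* After: the caller already holds vi, so it is simply passed down. */
static void helper_after(struct virtnet_info *vi, struct receive_queue *rq)
{
	/* ... use vi directly, no pointer chasing ... */
}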
2013 Jun 07
0
[PATCH] virtio-net: put virtio net header inline with data
On 06/06/2013 05:55 PM, Michael S. Tsirkin wrote: > For small packets we can simplify xmit processing by linearizing buffers > with the header: most packets seem to have enough head room we can use > for this purpose. > > Since some older hypervisors (e.g. qemu before version 1.5) > required that the header be the first s/g element, > we need a feature bit for this. > >
2017 Nov 02
2
Possible unsafe usage of skb->cb in virtio-net
On Thu, Nov 02, 2017 at 11:40:36AM +0000, Ilya Lesokhin wrote: > Hi, > I've noticed that virtio-net uses skb->cb. > > I don't know all the details, but my understanding is it caused a problem with the mlx5 driver > and was fixed here: > https://github.com/torvalds/linux/commit/34802a42b3528b0e18ea4517c8b23e1214a09332 > > Thanks, > Ilya Thanks a lot for the
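For background on why this can bite (a simplified illustration, not the virtio-net code itself): skb->cb is a 48-byte scratch area that any layer handling the skb may reuse, so driver-private state stored there is only safe while the skb stays under that driver's control.

#include <linux/skbuff.h>
#include <linux/build_bug.h>

/* Hypothetical driver-private control block overlaid on skb->cb. */
struct my_drv_cb {
	void *hdr;	/* e.g. a pointer to a per-packet header */
};

static inline struct my_drv_cb *my_drv_cb_of(struct sk_buff *skb)
{
	BUILD_BUG_ON(sizeof(struct my_drv_cb) > sizeof(skb->cb));
	return (struct my_drv_cb *)skb->cb;
}

/* Once the skb is handed to another subsystem, that subsystem is free
 * to overwrite skb->cb, clobbering whatever was stashed here. */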
2014 Nov 25
2
[PATCH v4 21/42] virtio_net: get rid of virtio_net_hdr/skb_vnet_hdr
virtio 1.0 doesn't use virtio_net_hdr anymore, and in fact, it's not really useful since virtio_net_hdr_mrg_rxbuf includes it as the first field anyway. Let's drop it, precalculate the header len and store it within vi instead. This way we can also remove struct skb_vnet_hdr. Signed-off-by: Michael S. Tsirkin <mst at redhat.com> --- drivers/net/virtio_net.c | 90
2013 Jul 15
0
[PATCH] virtio-net: put virtio net header inline with data
From: Michael S. Tsirkin <mst at redhat.com> For small packets we can simplify xmit processing by linearizing buffers with the header: most packets seem to have enough head room we can use for this purpose. Since existing hypervisors require that the header be the first s/g element, we need a feature bit for this. Signed-off-by: Michael S. Tsirkin <mst at redhat.com> Signed-off-by: Rusty
2013 Jul 08
3
[PATCH] virtio-net: put virtio net header inline with data
For small packets we can simplify xmit processing by linearizing buffers with the header: most packets seem to have enough head room we can use for this purpose. Since existing hypervisors require that the header be the first s/g element, we need a feature bit for this. Signed-off-by: Michael S. Tsirkin <mst at redhat.com> --- Note: this needs to be applied on top of the patch defining
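A hedged sketch of what such a feature gate looks like on the driver side; it assumes the bit in question is VIRTIO_F_ANY_LAYOUT as found in current kernel headers, and the function name is illustrative:

#include <linux/virtio_config.h>

/* Sketch, not the patch: only put the header in the same s/g element as
 * the data if the device advertises that any descriptor layout is OK;
 * otherwise keep the header as the first, separate element. */
static bool can_inline_header(struct virtio_device *vdev)
{
	return virtio_has_feature(vdev, VIRTIO_F_ANY_LAYOUT);
}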
2011 Dec 05
8
[net-next RFC PATCH 0/5] Series short description
multiple queue virtio-net: flow steering through host/guest cooperation Hello all: This is a rough series that adds guest/host cooperation for flow steering support, based on Krish Kumar's multiple queue virtio-net driver patch 3/3 (http://lwn.net/Articles/467283/). The idea is simple: the backend passes the rxhash to the guest, and the guest tells the backend the hash-to-queue mapping when
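The series itself is only summarized here; a tiny, purely hypothetical sketch of the hash-to-queue table such cooperation implies (all names invented for illustration):

#include <linux/types.h>

#define STEERING_TABLE_SIZE 256

/* Hypothetical: the guest fills this in, the backend consults it when
 * steering incoming flows to rx queues. */
struct flow_steering_table {
	u16 queue_for_hash[STEERING_TABLE_SIZE];	/* indexed by rxhash % size */
};

static u16 steer_to_queue(const struct flow_steering_table *t, u32 rxhash)
{
	return t->queue_for_hash[rxhash % STEERING_TABLE_SIZE];
}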
2013 Oct 28
8
[PATCH net-next] virtio_net: migrate mergeable rx buffers to page frag allocators
The virtio_net driver's mergeable receive buffer allocator uses 4KB packet buffers. For MTU-sized traffic, SKB truesize is > 4KB but only ~1500 bytes of the buffer are used to store packet data, reducing the effective TCP window size substantially. This patch addresses the performance concerns with mergeable receive buffers by allocating MTU-sized packet buffers using page frag allocators.
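A hedged sketch of the allocation pattern described (netdev_alloc_frag() is a real kernel helper; the surrounding function and sizing are illustrative):

#include <linux/skbuff.h>
#include <linux/if_ether.h>

/* Sketch: take a roughly MTU-sized receive buffer from the per-CPU
 * page-frag allocator instead of a whole 4KB page, so skb truesize
 * tracks the packet size more closely. */
static void *alloc_mergeable_rx_buf(unsigned int mtu, unsigned int hdr_len)
{
	unsigned int len = hdr_len + ETH_HLEN + mtu;	/* header + ethernet + payload */

	return netdev_alloc_frag(SKB_DATA_ALIGN(len));
}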