similar to: [PATCH 5/6] virtio_net: rework mergeable buffer handling

Displaying results from an estimated 5000 matches similar to: "[PATCH 5/6] virtio_net: rework mergeable buffer handling"

2014 Jan 16
0
[PATCH net-next v4 3/6] virtio-net: auto-tune mergeable rx buffer size for improved performance
Commit 2613af0ed18a ("virtio_net: migrate mergeable rx buffers to page frag allocators") changed the mergeable receive buffer size from PAGE_SIZE to MTU-size, introducing a single-stream regression for benchmarks with large average packet size. There is no single optimal buffer size for all workloads. For workloads with packet size <= MTU bytes, MTU + virtio-net header-sized buffers
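As a rough illustration of the auto-tuning idea described in this patch (not the driver code itself), the sketch below keeps an exponentially weighted moving average of recent packet lengths and clamps the result between an MTU-plus-header floor and PAGE_SIZE when choosing the next buffer size. The constants and helper names (GOOD_PACKET_LEN, EWMA_WEIGHT, next_buf_len) are invented for the example.

/* Illustrative userspace sketch: auto-tune the mergeable rx buffer size
 * from an EWMA of observed packet lengths, clamped to
 * [GOOD_PACKET_LEN, PAGE_SIZE]. Names and constants are assumptions. */
#include <stdio.h>

#define PAGE_SIZE        4096
#define GOOD_PACKET_LEN  1536          /* ~MTU plus virtio-net header */
#define EWMA_WEIGHT      64            /* each new sample gets a 1/64 share */

static double ewma_len;                /* running average of packet length */

static void ewma_update(unsigned int pkt_len)
{
    if (ewma_len == 0.0)
        ewma_len = pkt_len;
    else
        ewma_len += ((double)pkt_len - ewma_len) / EWMA_WEIGHT;
}

static unsigned int next_buf_len(void)
{
    unsigned int len = (unsigned int)ewma_len;

    if (len < GOOD_PACKET_LEN)
        len = GOOD_PACKET_LEN;         /* never smaller than MTU + header */
    if (len > PAGE_SIZE)
        len = PAGE_SIZE;               /* never larger than one page      */
    return len;
}

int main(void)
{
    unsigned int samples[] = { 1500, 1500, 64000, 64000, 64000, 1500 };

    for (unsigned i = 0; i < sizeof(samples) / sizeof(samples[0]); i++) {
        ewma_update(samples[i]);
        printf("pkt %5u -> next buffer %u bytes\n", samples[i], next_buf_len());
    }
    return 0;
}

With MTU-sized traffic the estimate stays near the floor; a run of large merged packets pulls the chosen buffer size up toward a full page.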
2016 Feb 21
1
[PATCH] virtio_net: switch to build_skb for mrg_rxbuf
For small packets data copy was observed to take up about 15% CPU time. Switch to build_skb and avoid the copy when using mergeable rx buffers. As a bonus, medium-size skbs that fit in a page will be completely linear. Of course, we now need to lower the lower bound on packet size, to make sure a sane number of skbs fits in rx socket buffer. By how much? I don't know yet. It might also be
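To make the trade-off concrete, here is a minimal standalone sketch contrasting the copy path for small packets with a build_skb-style path that wraps the receive buffer in place and skips the data copy. The struct and helpers are invented for the example; this is not the kernel's sk_buff/build_skb API.

/* Illustrative sketch of the copy vs. build_skb trade-off for mergeable
 * rx buffers; all types and helpers below are made up. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct fake_skb {
    unsigned char *head;   /* underlying buffer            */
    unsigned char *data;   /* start of packet payload      */
    unsigned int   len;    /* payload length               */
    int copied;            /* 1 if the payload was memcpy'd */
};

/* Copy path: the payload is copied into a fresh buffer, which keeps
 * truesize small but costs a memcpy per packet. */
static struct fake_skb *receive_by_copy(const unsigned char *buf, unsigned int len)
{
    struct fake_skb *skb = calloc(1, sizeof(*skb));
    skb->head = malloc(len);
    memcpy(skb->head, buf, len);
    skb->data = skb->head;
    skb->len = len;
    skb->copied = 1;
    return skb;
}

/* build_skb-style path: wrap the rx buffer in place; no payload copy,
 * and a packet that fits in the buffer stays fully linear. */
static struct fake_skb *receive_by_wrapping(unsigned char *buf,
                                            unsigned int headroom,
                                            unsigned int len)
{
    struct fake_skb *skb = calloc(1, sizeof(*skb));
    skb->head = buf;
    skb->data = buf + headroom;
    skb->len = len;
    skb->copied = 0;
    return skb;
}

int main(void)
{
    unsigned char rxbuf[2048] = { 0 };      /* one mergeable rx buffer */
    struct fake_skb *a = receive_by_copy(rxbuf + 128, 256);
    struct fake_skb *b = receive_by_wrapping(rxbuf, 128, 256);

    printf("copy path: copied=%d, wrap path: copied=%d\n", a->copied, b->copied);
    free(a->head); free(a); free(b);
    return 0;
}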
2013 Nov 12
0
[PATCH net-next 4/4] virtio-net: auto-tune mergeable rx buffer size for improved performance
Commit 2613af0ed18a ("virtio_net: migrate mergeable rx buffers to page frag allocators") changed the mergeable receive buffer size from PAGE_SIZE to MTU-size, introducing a single-stream regression for benchmarks with large average packet size. There is no single optimal buffer size for all workloads. For workloads with packet size <= MTU bytes, MTU + virtio-net header-sized buffers
2013 Nov 12
0
[PATCH net-next 3/4] virtio-net: use per-receive queue page frag alloc for mergeable bufs
The virtio-net driver currently uses netdev_alloc_frag() for GFP_ATOMIC mergeable rx buffer allocations. This commit migrates virtio-net to use per-receive queue page frags for GFP_ATOMIC allocation. This change unifies mergeable rx buffer memory allocation, which will now use skb_page_frag_refill() for both atomic and GFP-WAIT buffer allocations. To address fragmentation concerns, if after buffer
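The per-receive-queue page frag idea can be sketched as follows: each queue carves successive buffers out of its own page and switches to a fresh page when the current one cannot satisfy the next request. This is a simplified userspace sketch with invented names; malloc() stands in for page allocation, and a real implementation would reference-count pages rather than leak the old one.

/* Illustrative sketch of per-queue page-frag allocation. */
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE 4096

struct rx_frag_pool {
    unsigned char *page;      /* current backing page           */
    unsigned int   offset;    /* next free byte within the page */
};

static void *frag_alloc(struct rx_frag_pool *pool, unsigned int len)
{
    void *buf;

    if (!pool->page || pool->offset + len > PAGE_SIZE) {
        /* old page is simply dropped here; real code holds page refs */
        pool->page = malloc(PAGE_SIZE);
        if (!pool->page)
            return NULL;
        pool->offset = 0;
    }
    buf = pool->page + pool->offset;
    pool->offset += len;
    return buf;
}

int main(void)
{
    struct rx_frag_pool rq = { 0 };   /* one pool per receive queue */

    for (int i = 0; i < 4; i++) {
        unsigned char *buf = frag_alloc(&rq, 1536);
        printf("buffer %d at page offset %u\n", i, (unsigned)(buf - rq.page));
    }
    return 0;
}

With 1536-byte buffers, two fit per 4 KB page before the pool moves on to a new page, which is the fragmentation behavior the rest of the commit message goes on to address.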
2013 Nov 13
0
[PATCH net-next 4/4] virtio-net: auto-tune mergeable rx buffer size for improved performance
On Wed, Nov 13, 2013 at 03:10:20PM +0800, Jason Wang wrote: > On 11/13/2013 06:21 AM, Michael Dalton wrote: > > Commit 2613af0ed18a ("virtio_net: migrate mergeable rx buffers to page frag > > allocators") changed the mergeable receive buffer size from PAGE_SIZE to > > MTU-size, introducing a single-stream regression for benchmarks with large > > average packet
2013 Oct 29
0
[PATCH net-next] virtio_net: migrate mergeable rx buffers to page frag allocators
On Mon, Oct 28, 2013 at 10:44 PM, Michael Dalton <mwdalton at google.com> wrote: > The virtio_net driver's mergeable receive buffer allocator > uses 4KB packet buffers. For MTU-sized traffic, SKB truesize > is > 4KB but only ~1500 bytes of the buffer is used to store > packet data, reducing the effective TCP window size > substantially. This patch addresses the
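The window effect described above is simple arithmetic: the socket receive budget is charged per buffer truesize rather than per byte of payload, so ~1500 bytes of data in a ~4 KB buffer means only roughly a third of the accounted memory carries payload. The tiny illustration below uses made-up but typical numbers.

/* Back-of-the-envelope illustration of the truesize effect; the rcvbuf
 * value is a typical default and the truesize is approximate. */
#include <stdio.h>

int main(void)
{
    double rcvbuf   = 212992.0;  /* a common default socket rcvbuf, bytes */
    double payload  = 1500.0;    /* MTU-sized packet data                 */
    double truesize = 4096.0;    /* approximate charge with 4 KB buffers  */

    printf("buffer efficiency: %.0f%%\n", 100.0 * payload / truesize);
    printf("packets the rcvbuf can account for: %.0f (vs %.0f if truesize ~= payload)\n",
           rcvbuf / truesize, rcvbuf / payload);
    return 0;
}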
2014 Jan 07
0
[PATCH net-next v2 3/4] virtio-net: auto-tune mergeable rx buffer size for improved performance
Commit 2613af0ed18a ("virtio_net: migrate mergeable rx buffers to page frag allocators") changed the mergeable receive buffer size from PAGE_SIZE to MTU-size, introducing a single-stream regression for benchmarks with large average packet size. There is no single optimal buffer size for all workloads. For workloads with packet size <= MTU bytes, MTU + virtio-net header-sized buffers
2013 Nov 13
2
[PATCH net-next 4/4] virtio-net: auto-tune mergeable rx buffer size for improved performance
On 11/13/2013 12:21 AM, Michael Dalton wrote: > Commit 2613af0ed18a ("virtio_net: migrate mergeable rx buffers to page frag > allocators") changed the mergeable receive buffer size from PAGE_SIZE to > MTU-size, introducing a single-stream regression for benchmarks with large > average packet size. There is no single optimal buffer size for all workloads. > For workloads
2017 Jul 17
0
[PATCH net-next 2/5] virtio-net: pack headroom into ctx for mergeable buffer
Pack headroom into ctx, then during XDP set, we could know the size of headroom and copy if needed. This is required for avoiding reset on XDP. Signed-off-by: Jason Wang <jasowang at redhat.com> --- drivers/net/virtio_net.c | 29 ++++++++++++++++++++++++----- 1 file changed, 24 insertions(+), 5 deletions(-) diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c index
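A rough sketch of the idea of packing the headroom into the opaque per-buffer ctx value, so it can be recovered when the buffer comes back from the device. The bit layout and helper names below are invented for the illustration; they are not the driver's actual encoding.

/* Illustrative sketch: encode buffer length and headroom into a single
 * pointer-sized context value. Layout and names are assumptions. */
#include <stdint.h>
#include <stdio.h>
#include <assert.h>

#define CTX_HEADROOM_SHIFT 20
#define CTX_LEN_MASK       ((1u << CTX_HEADROOM_SHIFT) - 1)

static void *len_headroom_to_ctx(uint32_t len, uint32_t headroom)
{
    assert(len <= CTX_LEN_MASK);   /* length must fit in the low bits */
    return (void *)(uintptr_t)((headroom << CTX_HEADROOM_SHIFT) | len);
}

static uint32_t ctx_to_headroom(void *ctx)
{
    return (uint32_t)(uintptr_t)ctx >> CTX_HEADROOM_SHIFT;
}

static uint32_t ctx_to_len(void *ctx)
{
    return (uint32_t)(uintptr_t)ctx & CTX_LEN_MASK;
}

int main(void)
{
    void *ctx = len_headroom_to_ctx(1536, 256);   /* e.g. 256B of XDP headroom */

    printf("len=%u headroom=%u\n", ctx_to_len(ctx), ctx_to_headroom(ctx));
    return 0;
}

Because the context travels with the buffer through the virtqueue, the receive path can recover the headroom without keeping a separate side table, which is what lets the driver avoid a reset when XDP is enabled.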
2017 Jul 18
1
[PATCH net-next 2/5] virtio-net: pack headroom into ctx for mergeable buffer
On Mon, Jul 17, 2017 at 08:43:58PM +0800, Jason Wang wrote: > Pack headroom into ctx, then during XDP set, we could know the size of > headroom and copy if needed. This is required for avoiding reset on > XDP. Not really when XDP is set - it's when buffers are used. virtio-net: pack headroom into ctx for mergeable buffers Pack headroom into ctx - this way when we get a buffer we
2013 Dec 17
0
[PATCH net-next 2/3] virtio-net: use per-receive queue page frag alloc for mergeable bufs
The virtio-net driver currently uses netdev_alloc_frag() for GFP_ATOMIC mergeable rx buffer allocations. This commit migrates virtio-net to use per-receive queue page frags for GFP_ATOMIC allocation. This change unifies mergeable rx buffer memory allocation, which will now use skb_page_frag_refill() for both atomic and GFP-WAIT buffer allocations. To address fragmentation concerns, if after buffer
2014 Jan 08
3
[PATCH net-next v2 3/4] virtio-net: auto-tune mergeable rx buffer size for improved performance
On 01/07/2014 01:25 PM, Michael Dalton wrote: > Commit 2613af0ed18a ("virtio_net: migrate mergeable rx buffers to page frag > allocators") changed the mergeable receive buffer size from PAGE_SIZE to > MTU-size, introducing a single-stream regression for benchmarks with large > average packet size. There is no single optimal buffer size for all > workloads. For workloads
2014 Jan 09
3
[PATCH net-next v2 3/4] virtio-net: auto-tune mergeable rx buffer size for improved performance
On Mon, Jan 06, 2014 at 09:25:54PM -0800, Michael Dalton wrote: > Commit 2613af0ed18a ("virtio_net: migrate mergeable rx buffers to page frag > allocators") changed the mergeable receive buffer size from PAGE_SIZE to > MTU-size, introducing a single-stream regression for benchmarks with large > average packet size. There is no single optimal buffer size for all >
2014 Jan 16
0
[PATCH net-next v4 2/6] virtio-net: use per-receive queue page frag alloc for mergeable bufs
The virtio-net driver currently uses netdev_alloc_frag() for GFP_ATOMIC mergeable rx buffer allocations. This commit migrates virtio-net to use per-receive queue page frags for GFP_ATOMIC allocation. This change unifies mergeable rx buffer memory allocation, which will now use skb_page_frag_refill() for both atomic and GFP-WAIT buffer allocations. To address fragmentation concerns, if after buffer
2013 Nov 13
4
[PATCH net-next 4/4] virtio-net: auto-tune mergeable rx buffer size for improved performance
On 11/13/2013 06:21 AM, Michael Dalton wrote: > Commit 2613af0ed18a ("virtio_net: migrate mergeable rx buffers to page frag > allocators") changed the mergeable receive buffer size from PAGE_SIZE to > MTU-size, introducing a single-stream regression for benchmarks with large > average packet size. There is no single optimal buffer size for all workloads. > For workloads