Displaying 20 results from an estimated 40 matches for "mergeable_buffer_align".
2017 Jan 23
1
[PATCH] virtio_net: fix PAGE_SIZE > 64k
I don't have any guests with PAGE_SIZE > 64k, but the
code seems clearly broken in that case:
PAGE_SIZE / MERGEABLE_BUFFER_ALIGN needs
more than 8 bits, so the code in mergeable_ctx_to_buf_address
does not give us the actual truesize.
Cc: John Fastabend <john.fastabend at gmail.com>
Signed-off-by: Michael S. Tsirkin <mst at redhat.com>
---
Lightly tested on x86 only.
drivers/net/virtio_net.c | 10 +++++...
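A minimal sketch of the overflow described above. The values and helper names here are illustrative stand-ins, not the kernel code: the ctx encoding stores truesize/ALIGN - 1 in the low bits of the (aligned) buffer address, and with a 256-byte alignment that field is 8 bits wide, so truesize can only reach 64k before the encoding silently wraps.

```c
#include <assert.h>

/* Illustrative stand-ins for the ctx encoding discussed above. */
#define MERGEABLE_BUFFER_ALIGN 256

static unsigned long buf_to_ctx(unsigned long addr, unsigned int truesize)
{
	unsigned int size = truesize / MERGEABLE_BUFFER_ALIGN - 1;
	/* size silently loses bits above 8 when truesize > 64k */
	return addr | (size & (MERGEABLE_BUFFER_ALIGN - 1));
}

static unsigned int ctx_to_truesize(unsigned long ctx)
{
	return ((ctx & (MERGEABLE_BUFFER_ALIGN - 1)) + 1) * MERGEABLE_BUFFER_ALIGN;
}
```

A 64k truesize round-trips, but anything larger (possible when PAGE_SIZE > 64k) decodes to the wrong value, which is the breakage the patch addresses.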
2014 Jan 09
3
[PATCH net-next v2 3/4] virtio-net: auto-tune mergeable rx buffer size for improved performance
...y 64, and has a maximum theoretical buffer size of
aligned GOOD_PACKET_LEN + (BUF_ALIGN - 1) * BUF_ALIGN, which is at least
1536 + 63 * 64 = 5568. On x86, we already use a 64-byte alignment, and
this code supports all current buffer sizes, from 1536 to PAGE_SIZE.
#if L1_CACHE_BYTES < 64
#define MERGEABLE_BUFFER_ALIGN 64
#define MERGEABLE_BUFFER_SHIFT 6
#else
#define MERGEABLE_BUFFER_ALIGN L1_CACHE_BYTES
#define MERGEABLE_BUFFER_SHIFT L1_CACHE_SHIFT
#endif
#define MERGEABLE_BUFFER_MIN ALIGN(GOOD_PACKET_LEN +
sizeof(virtio_net_hdr_mrg_rbuf),
ME...
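The sizing arithmetic quoted above can be checked with the values the message gives (GOOD_PACKET_LEN = 1536, 64-byte minimum alignment). ALIGN mirrors the kernel macro for power-of-two alignment; the helper names are illustrative, not from the patch.

```c
#include <assert.h>

/* Kernel-style power-of-two alignment macro. */
#define ALIGN(x, a)     (((x) + (a) - 1) & ~((a) - 1))
#define BUF_ALIGN       64u
#define GOOD_PACKET_LEN 1536u

/* smallest encodable buffer: aligned GOOD_PACKET_LEN */
static unsigned int buf_min(void)
{
	return ALIGN(GOOD_PACKET_LEN, BUF_ALIGN);	/* 1536 is already 64-aligned */
}

/* largest encodable buffer: minimum plus (BUF_ALIGN - 1) alignment steps */
static unsigned int buf_max(void)
{
	return buf_min() + (BUF_ALIGN - 1) * BUF_ALIGN;
}
```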
2014 Jan 09
0
[PATCH net-next v2 3/4] virtio-net: auto-tune mergeable rx buffer size for improved performance
...best to start with 256 alignment.
A bit simpler, better than what we had, and will let us go
above PAGE_SIZE long term. The optimization shrinking
alignment to 64 can be done on top if we see a
work-load that's improved by it, which I doubt.
>
> #if L1_CACHE_BYTES < 64
> #define MERGEABLE_BUFFER_ALIGN 64
> #define MERGEABLE_BUFFER_SHIFT 6
> #else
> #define MERGEABLE_BUFFER_ALIGN L1_CACHE_BYTES
> #define MERGEABLE_BUFFER_SHIFT L1_CACHE_SHIFT
> #endif
> #define MERGEABLE_BUFFER_MIN ALIGN(GOOD_PACKET_LEN +
> sizeof(virtio_net_hdr_mrg_rbuf),
&g...
2014 Jan 09
2
[PATCH net-next v2 3/4] virtio-net: auto-tune mergeable rx buffer size for improved performance
Sorry, forgot to mention - if we want to explore combining the buffer
address and truesize into a single void *, we could also exploit the
fact that our size ranges from aligned GOOD_PACKET_LEN to PAGE_SIZE, and
potentially encode fewer values for truesize (and require a smaller
alignment than 256). The prior e-mail's discussion of 256-byte alignment
with 256 values is just one potential design.
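Quick arithmetic for the idea above: encoding truesize as an offset from the aligned minimum rather than from zero needs fewer code points. The values here (1536 minimum, 64k page, 256-byte granularity) are illustrative, matching the figures used elsewhere in the thread.

```c
#include <assert.h>

#define MIN_LEN 1536u	/* aligned GOOD_PACKET_LEN, illustrative */
#define PAGE_SZ 65536u	/* 64k page, illustrative */
#define GRAN    256u	/* encoding granularity */

/* number of distinct truesize values when encoded as (truesize - MIN_LEN) / GRAN */
static unsigned int truesize_code_points(void)
{
	return (PAGE_SZ - MIN_LEN) / GRAN + 1;	/* offsets 0..250 */
}
```

With these numbers the offset encoding needs 251 code points rather than the full PAGE_SZ / GRAN = 256, which is the slack the message suggests could be traded for a different alignment.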
2017 Jan 23
2
[PATCH v2] virtio_net: fix PAGE_SIZE > 64k
I don't have any guests with PAGE_SIZE > 64k, but the
code seems clearly broken in that case:
PAGE_SIZE / MERGEABLE_BUFFER_ALIGN needs
more than 8 bits, so the code in mergeable_ctx_to_buf_address
does not give us the actual truesize.
Cc: John Fastabend <john.fastabend at gmail.com>
Signed-off-by: Michael S. Tsirkin <mst at redhat.com>
---
changes from v1:
fix build warnings
drivers/net/virtio_net.c |...
2014 Jan 16
0
[PATCH net-next v4 3/6] virtio-net: auto-tune mergeable rx buffer size for improved performance
...uffer size when refilling RX rings. As the entire RX
+ * ring may be refilled at once, the weight is chosen so that the EWMA will be
+ * insensitive to short-term, transient changes in packet size.
+ */
+#define RECEIVE_AVG_WEIGHT 64
+
+/* Minimum alignment for mergeable packet buffers. */
+#define MERGEABLE_BUFFER_ALIGN max(L1_CACHE_BYTES, 256)
+
#define VIRTNET_DRIVER_VERSION "1.0.0"
struct virtnet_stats {
@@ -78,6 +86,9 @@ struct receive_queue {
/* Chain pages by the private ptr. */
struct page *pages;
+ /* Average packet length for mergeable receive buffers. */
+ struct ewma mrg_avg_pkt_len;...
2017 Mar 29
0
[PATCH 5/6] virtio_net: rework mergeable buffer handling
...-- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -250,24 +250,6 @@ static void skb_xmit_done(struct virtqueue *vq)
netif_wake_subqueue(vi->dev, vq2txq(vq));
}
-static unsigned int mergeable_ctx_to_buf_truesize(unsigned long mrg_ctx)
-{
- unsigned int truesize = mrg_ctx & (MERGEABLE_BUFFER_ALIGN - 1);
- return (truesize + 1) * MERGEABLE_BUFFER_ALIGN;
-}
-
-static void *mergeable_ctx_to_buf_address(unsigned long mrg_ctx)
-{
- return (void *)(mrg_ctx & -MERGEABLE_BUFFER_ALIGN);
-
-}
-
-static unsigned long mergeable_buf_to_ctx(void *buf, unsigned int truesize)
-{
- unsigned int size = tr...
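For reference, the three helpers this hunk removes fit together as below. The two decode functions match the quoted diff; the encode side is an illustrative reconstruction from context (the excerpt is truncated), not a claim about the exact deleted code. MERGEABLE_BUFFER_ALIGN is deliberately a plain int so that -MERGEABLE_BUFFER_ALIGN sign-extends to a high-bits mask on 64-bit.

```c
#include <assert.h>

#define MERGEABLE_BUFFER_ALIGN 256

static unsigned int mergeable_ctx_to_buf_truesize(unsigned long mrg_ctx)
{
	unsigned int truesize = mrg_ctx & (MERGEABLE_BUFFER_ALIGN - 1);
	return (truesize + 1) * MERGEABLE_BUFFER_ALIGN;
}

static void *mergeable_ctx_to_buf_address(unsigned long mrg_ctx)
{
	return (void *)(mrg_ctx & -MERGEABLE_BUFFER_ALIGN);
}

/* Reconstructed sketch: pack an aligned address and truesize into one word. */
static unsigned long mergeable_buf_to_ctx(void *buf, unsigned int truesize)
{
	unsigned int size = truesize / MERGEABLE_BUFFER_ALIGN - 1;
	return (unsigned long)buf | size;
}
```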
2016 Feb 21
1
[PATCH] virtio_net: switch to build_skb for mrg_rxbuf
..._len)
{
- const size_t hdr_len = sizeof(struct virtio_net_hdr_mrg_rxbuf);
+ unsigned int hdr;
unsigned int len;
- len = hdr_len + clamp_t(unsigned int, ewma_pkt_len_read(avg_pkt_len),
- GOOD_PACKET_LEN, PAGE_SIZE - hdr_len);
+ hdr = ALIGN(VNET_SKB_PAD + sizeof(struct skb_shared_info),
+ MERGEABLE_BUFFER_ALIGN);
+
+ len = hdr + clamp_t(unsigned int, ewma_pkt_len_read(avg_pkt_len),
+ 500 /* TODO */, PAGE_SIZE - hdr);
return ALIGN(len, MERGEABLE_BUFFER_ALIGN);
}
@@ -626,7 +648,10 @@ static int add_recvbuf_mergeable(struct receive_queue *rq, gfp_t gfp)
if (unlikely(!skb_page_frag_refill(len,...
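The length computation in the hunk above can be sketched self-contained: the headroom is aligned up, the EWMA estimate clamped between a floor and the remaining page space, and the total aligned again. The page size, floor, and helper names here are illustrative stand-ins, not the patch's exact values.

```c
#include <assert.h>

#define ALIGN(x, a)            (((x) + (a) - 1) & ~((a) - 1))
#define MERGEABLE_BUFFER_ALIGN 256u
#define PAGE_SZ                4096u	/* illustrative */
#define LEN_FLOOR              500u	/* stand-in for the patch's 500 TODO */

static unsigned int clamp_uint(unsigned int v, unsigned int lo, unsigned int hi)
{
	return v < lo ? lo : (v > hi ? hi : v);
}

/* hdr_room: raw headroom (pad + shared info); avg: EWMA length estimate */
static unsigned int mergeable_buf_len(unsigned int hdr_room, unsigned int avg)
{
	unsigned int hdr = ALIGN(hdr_room, MERGEABLE_BUFFER_ALIGN);
	unsigned int len = hdr + clamp_uint(avg, LEN_FLOOR, PAGE_SZ - hdr);
	return ALIGN(len, MERGEABLE_BUFFER_ALIGN);
}
```

The clamp guarantees the result never exceeds a page even for a huge average, and the final ALIGN keeps every posted buffer a multiple of MERGEABLE_BUFFER_ALIGN as the encoding requires.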
2014 Jan 16
13
[PATCH net-next v4 1/6] net: allow > 0 order atomic page alloc in skb_page_frag_refill
skb_page_frag_refill currently permits only order-0 page allocs
unless GFP_WAIT is used. Change skb_page_frag_refill to attempt
higher-order page allocations whether or not GFP_WAIT is used. If
memory cannot be allocated, the allocator will fall back to
successively smaller page allocs (down to order-0 page allocs).
This change brings skb_page_frag_refill in line with the existing
page allocation
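The fallback strategy described above amounts to walking orders from the largest down to order-0 until an allocation succeeds. In this sketch, alloc_max_order is a hypothetical knob simulating the highest order the allocator can currently satisfy, so the walk can be demonstrated deterministically; it stands in for a real page-allocator call.

```c
#include <assert.h>

/* Try orders max_order..0; return the order "allocated", or -1 if even
 * order-0 fails. alloc_max_order simulates allocator success/failure. */
static int page_frag_refill_order(int max_order, int alloc_max_order)
{
	int order;

	for (order = max_order; order >= 0; order--)
		if (order <= alloc_max_order)	/* stand-in for a successful alloc */
			return order;
	return -1;
}
```

When memory is fragmented (low alloc_max_order), the refill still succeeds with a smaller compound page; only total exhaustion fails outright.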
2014 Jan 16
6
[PATCH net-next v3 1/5] net: allow > 0 order atomic page alloc in skb_page_frag_refill
skb_page_frag_refill currently permits only order-0 page allocs
unless GFP_WAIT is used. Change skb_page_frag_refill to attempt
higher-order page allocations whether or not GFP_WAIT is used. If
memory cannot be allocated, the allocator will fall back to
successively smaller page allocs (down to order-0 page allocs).
This change brings skb_page_frag_refill in line with the existing
page allocation
2014 Jan 17
7
[PATCH net-next v5 0/6] virtio-net: mergeable rx buffer size auto-tuning
The virtio-net device currently uses aligned MTU-sized mergeable receive
packet buffers. Network throughput for workloads with large average
packet size can be improved by posting larger receive packet buffers.
However, due to SKB truesize effects, posting large (e.g., PAGE_SIZE)
buffers reduces the throughput of workloads that do not benefit from GRO
and have no large inbound packets.
This