Displaying 14 results from an estimated 14 matches for "est_buffer_len".
2013 Nov 13
4
[PATCH net-next 4/4] virtio-net: auto-tune mergeable rx buffer size for improved performance
...uct page *head_page)
> {
> struct skb_vnet_hdr *hdr = skb_vnet_hdr(head_skb);
> struct sk_buff *curr_skb = head_skb;
> + struct page *page = head_page;
> char *buf;
> - struct page *page;
> - int num_buf, len, offset, truesize;
> + int num_buf, len, offset;
> + u32 est_buffer_len;
>
> + len = head_skb->len;
> num_buf = hdr->mhdr.num_buffers;
> while (--num_buf) {
> int num_skb_frags = skb_shinfo(curr_skb)->nr_frags;
> @@ -320,7 +325,6 @@ static int receive_mergeable(struct receive_queue *rq, struct sk_buff *head_skb)
> head_skb->...
2013 Nov 13
0
[PATCH net-next 4/4] virtio-net: auto-tune mergeable rx buffer size for improved performance
...r (which is ok
> since we can do coalescing).
It's hard to predict the future ;)
256 bytes frames consume 2.5 KB anyway on a traditional NIC.
If it was a concern, we would have it already.
If you receive a mix of big and small frames, there is no win.
> > + if (page) {
> > + est_buffer_len = page_private(page);
> > + if (est_buffer_len > len) {
> > + u32 truesize_delta = est_buffer_len - len;
> > +
> > + curr_skb->truesize += truesize_delta;
> > + if (curr_skb != head_skb)
> > + head_skb->truesize += truesize_delta;
> > +...
2013 Nov 13
0
[PATCH net-next 4/4] virtio-net: auto-tune mergeable rx buffer size for improved performance
...struct skb_vnet_hdr *hdr = skb_vnet_hdr(head_skb);
> > struct sk_buff *curr_skb = head_skb;
> > + struct page *page = head_page;
> > char *buf;
> > - struct page *page;
> > - int num_buf, len, offset, truesize;
> > + int num_buf, len, offset;
> > + u32 est_buffer_len;
> >
> > + len = head_skb->len;
> > num_buf = hdr->mhdr.num_buffers;
> > while (--num_buf) {
> > int num_skb_frags = skb_shinfo(curr_skb)->nr_frags;
> > @@ -320,7 +325,6 @@ static int receive_mergeable(struct receive_queue *rq, struct sk_buff *h...
2013 Nov 12
0
[PATCH net-next 4/4] virtio-net: auto-tune mergeable rx buffer size for improved performance
...e *rq, struct sk_buff *head_skb,
+ struct page *head_page)
{
struct skb_vnet_hdr *hdr = skb_vnet_hdr(head_skb);
struct sk_buff *curr_skb = head_skb;
+ struct page *page = head_page;
char *buf;
- struct page *page;
- int num_buf, len, offset, truesize;
+ int num_buf, len, offset;
+ u32 est_buffer_len;

+ len = head_skb->len;
num_buf = hdr->mhdr.num_buffers;
while (--num_buf) {
int num_skb_frags = skb_shinfo(curr_skb)->nr_frags;
@@ -320,7 +325,6 @@ static int receive_mergeable(struct receive_queue *rq, struct sk_buff *head_skb)
head_skb->dev->stats.rx_length_errors++;...
2013 Nov 13
2
[PATCH net-next 4/4] virtio-net: auto-tune mergeable rx buffer size for improved performance
...page *head_page)
> {
> struct skb_vnet_hdr *hdr = skb_vnet_hdr(head_skb);
> struct sk_buff *curr_skb = head_skb;
> + struct page *page = head_page;
> char *buf;
> - struct page *page;
> - int num_buf, len, offset, truesize;
> + int num_buf, len, offset;
> + u32 est_buffer_len;
>
> + len = head_skb->len;
> num_buf = hdr->mhdr.num_buffers;
> while (--num_buf) {
> int num_skb_frags = skb_shinfo(curr_skb)->nr_frags;
> @@ -320,7 +325,6 @@ static int receive_mergeable(struct receive_queue *rq, struct sk_buff *head_skb)
> head_sk...
2013 Nov 12
12
[PATCH net-next 1/4] virtio-net: mergeable buffer size should include virtio-net header
Commit 2613af0ed18a ("virtio_net: migrate mergeable rx buffers to page
frag allocators") changed the mergeable receive buffer size from PAGE_SIZE
to MTU-size. However, the merge buffer size does not take into account the
size of the virtio-net header. Consequently, packets that are MTU-size
will take two buffers instead of one (to store the virtio-net header),
substantially decreasing the
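The commit message above (truncated by the search listing) comes down to simple arithmetic: if the buffer is sized to the MTU frame alone, the virtio-net header pushes an MTU-size packet into a second buffer. A minimal sketch of that accounting; the constants and helper name below are illustrative, not taken from the driver (12 bytes matches the legacy `struct virtio_net_hdr_mrg_rxbuf`):

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative constants; the real driver derives these from the
 * device configuration. */
#define MTU_FRAME_LEN 1514 /* 1500-byte MTU + 14-byte Ethernet header */
#define VNET_HDR_LEN  12   /* sizeof(struct virtio_net_hdr_mrg_rxbuf) */

/* Buffers needed to receive one MTU-size frame plus its virtio-net
 * header, given a per-buffer capacity of buf_len bytes. */
static size_t buffers_needed(size_t buf_len)
{
	size_t total = VNET_HDR_LEN + MTU_FRAME_LEN;
	return (total + buf_len - 1) / buf_len; /* ceiling division */
}
```

With the pre-fix buffer size of `MTU_FRAME_LEN`, `buffers_needed()` returns 2; sizing buffers as `VNET_HDR_LEN + MTU_FRAME_LEN` brings it back to 1, which is the point of the patch.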
2013 Dec 17
0
[PATCH net-next 3/3] net: auto-tune mergeable rx buffer size for improved performance
...last frag's page to estimate the truesize of the last frag.
+ * EWMA with a weight of 64 makes the size adjustments quite small in
+ * the frags allocated on one page (even an order-3 one), and truesize
+ * doesn't need to be 100% accurate.
+ */
+ if (skb_is_nonlinear(head_skb)) {
+ u32 est_buffer_len = page_private(page);
+ if (est_buffer_len > len) {
+ u32 truesize_delta = est_buffer_len - len;
+
+ curr_skb->truesize += truesize_delta;
+ if (curr_skb != head_skb)
+ head_skb->truesize += truesize_delta;
+ }
+ }
+ ewma_add(&rq->mrg_avg_pkt_len, head_skb->len);
ret...
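The comment in the excerpt describes two ideas at work: an EWMA with weight 64 tracks the average packet length, and any over-estimate of a buffer's size (stashed via page_private()) is charged back to the skb's truesize. A standalone sketch of both, using simplified stand-in names rather than the kernel's ewma API:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified integer EWMA with weight 64: each new sample moves the
 * average by 1/64 of the difference. This mimics the behaviour of the
 * kernel's ewma helpers but is not their API. */
struct toy_ewma { uint64_t avg; };

static void toy_ewma_add(struct toy_ewma *e, uint64_t val)
{
	e->avg = e->avg ? (e->avg * 63 + val) / 64 : val;
}

/* Truesize correction from the patch: if the estimated buffer length
 * exceeds the bytes actually used, charge the slack to truesize so
 * socket memory accounting stays honest. */
static uint32_t truesize_slack(uint32_t est_buffer_len, uint32_t len)
{
	return est_buffer_len > len ? est_buffer_len - len : 0;
}
```

With weight 64 a single outlier barely moves the average: starting from 1500 bytes, one sample 640 bytes larger shifts the estimate by only 10 bytes, which is why the comment says the size adjustments stay small.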
2013 Dec 17
15
[PATCH net-next 1/3] net: allow > 0 order atomic page alloc in skb_page_frag_refill
skb_page_frag_refill currently permits only order-0 page allocs
unless __GFP_WAIT is used. Change skb_page_frag_refill to attempt
higher-order page allocations whether or not __GFP_WAIT is used. If
memory cannot be allocated, the allocator will fall back to
successively smaller page allocs (down to order-0 page allocs).
This change brings skb_page_frag_refill in line with the existing
page allocation
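The fallback described in the commit message above can be sketched as a simple descending-order loop. Everything here is a user-space stand-in so the control flow can run outside the kernel: `try_alloc()` is a hypothetical substitute for the page allocator, parameterised so failures can be simulated:

```c
#include <assert.h>
#include <stdbool.h>

/* Highest order attempted first (order-3 = 32 KB with 4 KB pages). */
#define MAX_ORDER_TRIED 3

/* Stand-in for the page allocator: succeeds only up to the
 * caller-supplied limit, so the fallback path can be exercised. */
static bool try_alloc(int order, int highest_order_available)
{
	return order <= highest_order_available;
}

/* Attempt successively smaller allocations, down to order-0.
 * Returns the order that succeeded, or -1 if even order-0 failed. */
static int alloc_frag_order(int highest_order_available)
{
	for (int order = MAX_ORDER_TRIED; order > 0; order--)
		if (try_alloc(order, highest_order_available))
			return order;
	/* Final order-0 attempt; the real code may use a different
	 * gfp mask for this last try. */
	return try_alloc(0, highest_order_available) ? 0 : -1;
}
```

The shape matters more than the constants: the caller always gets the largest frag the allocator can currently satisfy, without ever sleeping for it.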
2013 Dec 23
2
[PATCH net-next 3/3] net: auto-tune mergeable rx buffer size for improved performance
...te the truesize of the last frag.
> + * EWMA with a weight of 64 makes the size adjustments quite small in
> + * the frags allocated on one page (even an order-3 one), and truesize
> + * doesn't need to be 100% accurate.
> + */
> + if (skb_is_nonlinear(head_skb)) {
> + u32 est_buffer_len = page_private(page);
> + if (est_buffer_len > len) {
> + u32 truesize_delta = est_buffer_len - len;
> +
> + curr_skb->truesize += truesize_delta;
> + if (curr_skb != head_skb)
> + head_skb->truesize += truesize_delta;
> + }
> + }
> + ewma_add(&rq-...