Displaying 20 results from an estimated 23 matches for "skb_frag_page_order".
2023 Aug 16
1
[PATCH vhost v13 05/12] virtio_ring: introduce virtqueue_dma_dev()
...mpact
> > > > >
> > > > > Not every page, just the first page of the COMP pages.
> > > > >
> > > > > So I think there is no impact.
> > > >
> > > > Nope, see this:
> > > >
> > > > 	if (SKB_FRAG_PAGE_ORDER &&
> > > > 	    !static_branch_unlikely(&net_high_order_alloc_disable_key)) {
> > > > 		/* Avoid direct reclaim but allow kswapd to wake */
> > > > 		pfrag->page = alloc_pages((gfp & ~__GFP_DIRECT_RECLAIM) |...
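For context, the check being quoted is from skb_page_frag_refill() in net/core/sock.c. A sketch of the surrounding logic as it looks in recent kernels (exact flags and the static key vary by version):

bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t gfp)
{
	if (pfrag->page) {
		if (page_ref_count(pfrag->page) == 1) {
			pfrag->offset = 0;	/* sole owner: recycle in place */
			return true;
		}
		if (pfrag->offset + sz <= pfrag->size)
			return true;
		put_page(pfrag->page);
	}

	pfrag->offset = 0;
	if (SKB_FRAG_PAGE_ORDER &&
	    !static_branch_unlikely(&net_high_order_alloc_disable_key)) {
		/* Avoid direct reclaim but allow kswapd to wake */
		pfrag->page = alloc_pages((gfp & ~__GFP_DIRECT_RECLAIM) |
					  __GFP_COMP | __GFP_NOWARN |
					  __GFP_NORETRY,
					  SKB_FRAG_PAGE_ORDER);
		if (likely(pfrag->page)) {
			pfrag->size = PAGE_SIZE << SKB_FRAG_PAGE_ORDER;
			return true;
		}
	}
	/* Fall back to a single order-0 page */
	pfrag->page = alloc_page(gfp);
	if (likely(pfrag->page)) {
		pfrag->size = PAGE_SIZE;
		return true;
	}
	return false;
}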
2018 Nov 15
3
[PATCH net-next 1/2] vhost_net: mitigate page reference counting during page frag refill
.../
+	struct page_frag page_frag;
+	/* Refcount bias of page frag */
+	int refcnt_bias;
 };
 
 static unsigned vhost_net_zcopy_mask __read_mostly;
@@ -637,14 +641,53 @@ static bool tx_can_batch(struct vhost_virtqueue *vq, size_t total_len)
 	       !vhost_vq_avail_empty(vq->dev, vq);
 }
 
+#define SKB_FRAG_PAGE_ORDER	get_order(32768)
+
+static bool vhost_net_page_frag_refill(struct vhost_net *net, unsigned int sz,
+				       struct page_frag *pfrag, gfp_t gfp)
+{
+	if (pfrag->page) {
+		if (pfrag->offset + sz <= pfrag->size)
+			return true;
+		__page_frag_cache_drain(pfrag->page, net->...
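The excerpt cuts off mid-function. Given the refcnt_bias field added above, the function plausibly continues along these lines: allocate a compound page, pre-charge a large batch of page references once, and pay for each fragment by decrementing the local bias instead of touching the atomic refcount. A sketch, with the bias value (USHRT_MAX) as an assumption:

	pfrag->offset = 0;
	net->refcnt_bias = 0;
	if (SKB_FRAG_PAGE_ORDER) {
		/* Avoid direct reclaim but allow kswapd to wake */
		pfrag->page = alloc_pages((gfp & ~__GFP_DIRECT_RECLAIM) |
					  __GFP_COMP | __GFP_NOWARN |
					  __GFP_NORETRY,
					  SKB_FRAG_PAGE_ORDER);
		if (likely(pfrag->page)) {
			pfrag->size = PAGE_SIZE << SKB_FRAG_PAGE_ORDER;
			goto done;
		}
	}
	pfrag->page = alloc_page(gfp);
	if (likely(pfrag->page)) {
		pfrag->size = PAGE_SIZE;
		goto done;
	}
	return false;

done:
	/* Charge USHRT_MAX references up front; fragments consume the
	 * bias, and the remainder is drained when the page is replaced. */
	net->refcnt_bias = USHRT_MAX;
	page_ref_add(pfrag->page, USHRT_MAX - 1);
	return true;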
2013 Nov 12
0
[PATCH net-next 2/4] net: allow > 0 order atomic page alloc in skb_page_frag_refill
...100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -1865,9 +1865,7 @@ bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t prio)
 		put_page(pfrag->page);
 	}
 
-	/* We restrict high order allocations to users that can afford to wait */
-	order = (prio & __GFP_WAIT) ? SKB_FRAG_PAGE_ORDER : 0;
-
+	order = SKB_FRAG_PAGE_ORDER;
 	do {
 		gfp_t gfp = prio;
--
1.8.4.1
2013 Dec 23
0
[PATCH net-next 1/3] net: allow > 0 order atomic page alloc in skb_page_frag_refill
...++ b/net/core/sock.c
> @@ -1865,9 +1865,7 @@ bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t prio)
>  		put_page(pfrag->page);
>  	}
> 
> -	/* We restrict high order allocations to users that can afford to wait */
> -	order = (prio & __GFP_WAIT) ? SKB_FRAG_PAGE_ORDER : 0;
> -
> +	order = SKB_FRAG_PAGE_ORDER;
>  	do {
>  		gfp_t gfp = prio;
> 
The original code seems to try to avoid high-order allocations for
atomic allocations. This patch changes that, and it looks like it will
introduce some extra cost when memory is highly fragmented.
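That cost concern is bounded by the retry loop visible in the hunk: allocation starts at SKB_FRAG_PAGE_ORDER and steps down one order at a time, so on a fragmented system it degrades to today's order-0 behavior instead of stalling. Roughly, reconstructed from the hunk above (the flag choice for the high-order attempt is an assumption):

	int order = SKB_FRAG_PAGE_ORDER;

	do {
		gfp_t gfp = prio;

		/* High-order attempts are opportunistic: ask for a compound
		 * page and suppress allocation-failure warnings. */
		if (order)
			gfp |= __GFP_COMP | __GFP_NOWARN;
		pfrag->page = alloc_pages(gfp, order);
		if (likely(pfrag->page)) {
			pfrag->offset = 0;
			pfrag->size = PAGE_SIZE << order;
			return true;
		}
	} while (--order >= 0);

	return false;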
2013 Dec 23
0
[PATCH net-next 1/3] net: allow > 0 order atomic page alloc in skb_page_frag_refill
...++ b/net/core/sock.c
> @@ -1865,9 +1865,7 @@ bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t prio)
>  		put_page(pfrag->page);
>  	}
> 
> -	/* We restrict high order allocations to users that can afford to wait */
> -	order = (prio & __GFP_WAIT) ? SKB_FRAG_PAGE_ORDER : 0;
> -
> +	order = SKB_FRAG_PAGE_ORDER;
>  	do {
>  		gfp_t gfp = prio;
> 
> --
> 1.8.5.1
2014 Jan 08
0
[PATCH net-next v2 1/4] net: allow > 0 order atomic page alloc in skb_page_frag_refill
...++ b/net/core/sock.c
> @@ -1865,9 +1865,7 @@ bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t prio)
>  		put_page(pfrag->page);
>  	}
> 
> -	/* We restrict high order allocations to users that can afford to wait */
> -	order = (prio & __GFP_WAIT) ? SKB_FRAG_PAGE_ORDER : 0;
> -
> +	order = SKB_FRAG_PAGE_ORDER;
>  	do {
>  		gfp_t gfp = prio;
Eric said we also need a patch to add __GFP_NORETRY, right?
Probably before this one in the series.
> --
> 1.8.5.1
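__GFP_NORETRY makes the opportunistic high-order attempt fail fast rather than retry reclaim. A one-line illustration of where it would go in the allocation loop (placement inferred from the discussion, not quoted from a posted patch):

		if (order)
			/* no warnings, no reclaim retries for high orders */
			gfp |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY;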
2013 Dec 17
15
[PATCH net-next 1/3] net: allow > 0 order atomic page alloc in skb_page_frag_refill
...100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -1865,9 +1865,7 @@ bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t prio)
 		put_page(pfrag->page);
 	}
 
-	/* We restrict high order allocations to users that can afford to wait */
-	order = (prio & __GFP_WAIT) ? SKB_FRAG_PAGE_ORDER : 0;
-
+	order = SKB_FRAG_PAGE_ORDER;
 	do {
 		gfp_t gfp = prio;
--
1.8.5.1
2012 Oct 12
13
Dom0 physical networking/swiotlb/something issue in 3.7-rc1
Hi Konrad,
The following patch causes fairly large packet loss when transmitting
from dom0 to the physical network, at least with my tg3 hardware, but I
assume it can impact anything which uses this interface.
I suspect that the issue is that the compound pages allocated in this
way are not backed by contiguous mfns and so things fall apart when the
driver tries to do DMA.
However I
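The suspicion can be stated concretely: under Xen, the guest-physical frames (pfns) of a compound page need not map to adjacent machine frames (mfns), so a driver that DMA-maps the page as one contiguous buffer touches the wrong machine memory. A hypothetical debug check for this condition, assuming Xen's pfn_to_mfn() helper:

/* Hypothetical check: a compound page is only safe to DMA-map as one
 * contiguous buffer under Xen if its machine frames are contiguous. */
static bool page_mfns_contiguous(struct page *page, unsigned int order)
{
	unsigned long pfn = page_to_pfn(page);
	unsigned long mfn = pfn_to_mfn(pfn);
	unsigned int i;

	for (i = 1; i < (1U << order); i++)
		if (pfn_to_mfn(pfn + i) != mfn + i)
			return false;
	return true;
}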
2014 Jan 17
7
[PATCH net-next v5 0/6] virtio-net: mergeable rx buffer size auto-tuning
The virtio-net device currently uses aligned MTU-sized mergeable receive
packet buffers. Network throughput for workloads with large average
packet size can be improved by posting larger receive packet buffers.
However, due to SKB truesize effects, posting large (e.g., PAGE_SIZE)
buffers reduces the throughput of workloads that do not benefit from GRO
and have no large inbound packets.
This
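The tuning in this series tracks a moving average of received packet sizes and sizes newly posted buffers from it. A simplified sketch of the idea using the kernel's EWMA helpers (helper and constant names here are illustrative, not the exact ones in the series):

#include <linux/average.h>

/* Size newly posted mergeable rx buffers from an EWMA of received
 * packet lengths, clamped between an MTU-sized floor (GOOD_PACKET_LEN,
 * assumed here) and PAGE_SIZE, so truesize tracks actual traffic. */
static unsigned int mergeable_buf_len(struct ewma *avg_pkt_len)
{
	unsigned int len = ewma_read(avg_pkt_len);

	return clamp_t(unsigned int, len, GOOD_PACKET_LEN, PAGE_SIZE);
}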
2014 Jan 17
7
[PATCH net-next v6 0/6] virtio-net: mergeable rx buffer size auto-tuning
The virtio-net device currently uses aligned MTU-sized mergeable receive
packet buffers. Network throughput for workloads with large average
packet size can be improved by posting larger receive packet buffers.
However, due to SKB truesize effects, posting large (e.g., PAGE_SIZE)
buffers reduces the throughput of workloads that do not benefit from GRO
and have no large inbound packets.
This
2014 Jan 16
6
[PATCH net-next v3 1/5] net: allow > 0 order atomic page alloc in skb_page_frag_refill
...100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -1836,9 +1836,7 @@ bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t prio)
 		put_page(pfrag->page);
 	}
 
-	/* We restrict high order allocations to users that can afford to wait */
-	order = (prio & __GFP_WAIT) ? SKB_FRAG_PAGE_ORDER : 0;
-
+	order = SKB_FRAG_PAGE_ORDER;
 	do {
 		gfp_t gfp = prio;
--
1.8.5.2
2014 Jan 07
10
[PATCH net-next v2 1/4] net: allow > 0 order atomic page alloc in skb_page_frag_refill
...100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -1865,9 +1865,7 @@ bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t prio)
 		put_page(pfrag->page);
 	}
 
-	/* We restrict high order allocations to users that can afford to wait */
-	order = (prio & __GFP_WAIT) ? SKB_FRAG_PAGE_ORDER : 0;
-
+	order = SKB_FRAG_PAGE_ORDER;
 	do {
 		gfp_t gfp = prio;
--
1.8.5.1
2014 Jan 16
13
[PATCH net-next v4 1/6] net: allow > 0 order atomic page alloc in skb_page_frag_refill
...100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -1836,9 +1836,7 @@ bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t prio)
 		put_page(pfrag->page);
 	}
 
-	/* We restrict high order allocations to users that can afford to wait */
-	order = (prio & __GFP_WAIT) ? SKB_FRAG_PAGE_ORDER : 0;
-
+	order = SKB_FRAG_PAGE_ORDER;
 	do {
 		gfp_t gfp = prio;
--
1.8.5.2
2013 Nov 12
12
[PATCH net-next 1/4] virtio-net: mergeable buffer size should include virtio-net header
Commit 2613af0ed18a ("virtio_net: migrate mergeable rx buffers to page
frag allocators") changed the mergeable receive buffer size from PAGE_SIZE
to MTU-size. However, the merge buffer size does not take into account the
size of the virtio-net header. Consequently, packets that are MTU-size
will take two buffers instead of one (to store the virtio-net header),
substantially decreasing the
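The fix implied by the subject is to fold the header into the buffer size, so that an MTU-sized packet and its virtio-net header fit in a single buffer. Along these lines (treat the exact macro definition as a sketch):

/* An MTU-sized packet plus the mergeable-rx virtio-net header must
 * fit in a single receive buffer, rounded up to a cache line. */
#define GOOD_PACKET_LEN  (ETH_HLEN + VLAN_HLEN + ETH_DATA_LEN)
#define MERGE_BUFFER_LEN (ALIGN(GOOD_PACKET_LEN + \
				sizeof(struct virtio_net_hdr_mrg_rxbuf), \
				L1_CACHE_BYTES))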