Eric Dumazet
2014-Jan-03 01:26 UTC
[PATCH net-next 1/3] net: allow > 0 order atomic page alloc in skb_page_frag_refill
On Thu, 2014-01-02 at 16:56 -0800, Eric Dumazet wrote:
>
> My suggestion is to use a recent kernel, and/or eventually backport the
> mm fixes if any.
>
> order-3 allocations should not reclaim 2GB out of 8GB.
>
> There is a reason PAGE_ALLOC_COSTLY_ORDER exists and is 3

Hmm... it looks like I missed __GFP_NORETRY

diff --git a/net/core/sock.c b/net/core/sock.c
index 5393b4b719d7..5f42a4d70cb2 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -1872,7 +1872,7 @@ bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t prio)
 		gfp_t gfp = prio;
 
 		if (order)
-			gfp |= __GFP_COMP | __GFP_NOWARN;
+			gfp |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY;
 		pfrag->page = alloc_pages(gfp, order);
 		if (likely(pfrag->page)) {
 			pfrag->offset = 0;
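For context on why that single flag matters: in the page allocator of this
era, should_alloc_retry() in mm/page_alloc.c treats any order up to
PAGE_ALLOC_COSTLY_ORDER as worth retrying indefinitely unless __GFP_NORETRY
is set. Below is a minimal stand-alone C model of that decision; the flag
values are stand-ins and several checks (e.g. pm_suspended_storage()) are
omitted, so this is a sketch of the logic, not the kernel source:

#include <stdbool.h>
#include <stdio.h>

typedef unsigned int gfp_t;

#define __GFP_NORETRY           0x1000u
#define __GFP_NOFAIL            0x2000u
#define __GFP_REPEAT            0x4000u
#define PAGE_ALLOC_COSTLY_ORDER 3

/* Models the retry decision made after each direct compact + reclaim
 * pass.  Returning true sends the slowpath around the loop again. */
static bool should_alloc_retry(gfp_t gfp_mask, unsigned int order,
                               unsigned long pages_reclaimed)
{
	if (gfp_mask & __GFP_NORETRY)	/* caller opted out: fail fast */
		return false;
	if (gfp_mask & __GFP_NOFAIL)	/* caller can never see failure */
		return true;
	/* Orders up to PAGE_ALLOC_COSTLY_ORDER (3) retry unconditionally,
	 * which is why an order-3 request without __GFP_NORETRY can keep
	 * reclaiming far more than the 32KB it actually needs. */
	if (order <= PAGE_ALLOC_COSTLY_ORDER)
		return true;
	/* Costlier orders retry only on request, and only while reclaim
	 * has not yet freed enough pages to plausibly satisfy them. */
	if ((gfp_mask & __GFP_REPEAT) && pages_reclaimed < (1UL << order))
		return true;
	return false;
}

int main(void)
{
	printf("order-3, no extra flags: retry=%d\n",
	       should_alloc_retry(0, 3, 0));
	printf("order-3, __GFP_NORETRY:  retry=%d\n",
	       should_alloc_retry(__GFP_NORETRY, 3, 0));
	return 0;
}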
Debabrata Banerjee
2014-Jan-03 01:59 UTC
[PATCH net-next 1/3] net: allow > 0 order atomic page alloc in skb_page_frag_refill
On Thu, Jan 2, 2014 at 8:26 PM, Eric Dumazet <eric.dumazet at gmail.com> wrote:
> On Thu, 2014-01-02 at 16:56 -0800, Eric Dumazet wrote:
>
>>
>> My suggestion is to use a recent kernel, and/or eventually backport the
>> mm fixes if any.
>>
>> order-3 allocations should not reclaim 2GB out of 8GB.
>>
>> There is a reason PAGE_ALLOC_COSTLY_ORDER exists and is 3

Sorry - 2GB of cache out of 8GB phys, of which ~1GB gets reclaimed.
Regardless, the reclamation of cache is minor compared to the compaction
event that precedes it, and I haven't spotted anything addressing that
yet - isolate_migratepages_range()/compact_checklock_irqsave(). If even
more memory were unmovable, the compaction routines would be hit even
harder, since reclamation wouldn't accomplish anything - mm would have
to get very smart about unmovable pages being freed, and simply fail
allocations/OOM-kill if nothing has changed, rather than running through
compaction/reclaim fruitlessly. I guess this is a bit of a tangent,
since what I'm saying shows that Michael's patch doesn't make this
behavior worse.

> Hmm... it looks like I missed __GFP_NORETRY
>
> diff --git a/net/core/sock.c b/net/core/sock.c
> index 5393b4b719d7..5f42a4d70cb2 100644
> --- a/net/core/sock.c
> +++ b/net/core/sock.c
> @@ -1872,7 +1872,7 @@ bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t prio)
>  		gfp_t gfp = prio;
>
>  		if (order)
> -			gfp |= __GFP_COMP | __GFP_NOWARN;
> +			gfp |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY;
>  		pfrag->page = alloc_pages(gfp, order);
>  		if (likely(pfrag->page)) {
>  			pfrag->offset = 0;

Yes, this seems like it will make the situation better, but one send()
may still trigger a direct_compact/direct_reclaim cycle, followed
immediately by another direct_compact if direct_reclaim didn't free an
order-3 block. Now have all CPUs doing a send() and you can still get
heavy spinlock contention in the routines described above. The major
change I see here is that allocations > order-0 used to be rare; now
one happens on every send(). I can try your patch to see how much
things improve.

-Debabrata
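To make the cycle described above concrete, here is a rough stand-alone
sketch of the slowpath ordering in the 3.13-era __alloc_pages_slowpath():
freelist retry, direct compaction, direct reclaim, then the retry
decision. All bodies below are placeholders, and the sequencing is only
an approximation of the kernel's, not a faithful copy:

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

typedef unsigned int gfp_t;

#define __GFP_NORETRY 0x1000u

/* All stand-ins: in this model the freelist never has a high-order
 * block, compaction never builds one, and reclaim always progresses. */
static bool try_freelist(void)            { return false; }
static bool direct_compact(void)          { puts("  direct compaction (isolate_migratepages_range, zone locks)"); return false; }
static unsigned long direct_reclaim(void) { puts("  direct reclaim (may evict page cache)"); return 1; }

static const char *alloc_slowpath(gfp_t gfp, unsigned int order)
{
	printf("slowpath for order-%u:\n", order);
	for (;;) {
		if (try_freelist())
			return "page";
		if (direct_compact())
			return "page";
		unsigned long progress = direct_reclaim();
		if (try_freelist())
			return "page";
		/* The should_alloc_retry() decision: with __GFP_NORETRY we
		 * fail after one compact+reclaim cycle; without it, an
		 * order <= 3 request goes around again, compaction first. */
		if (gfp & __GFP_NORETRY)
			return NULL;
		if (!progress)
			return NULL;
	}
}

int main(void)
{
	if (!alloc_slowpath(__GFP_NORETRY, 3))
		puts("  failed after a single cycle; caller falls back");
	return 0;
}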
Debabrata Banerjee
2014-Jan-03 22:47 UTC
[PATCH net-next 1/3] net: allow > 0 order atomic page alloc in skb_page_frag_refill
>> On Thu, 2014-01-02 at 16:56 -0800, Eric Dumazet wrote:
>>
>> Hmm... it looks like I missed __GFP_NORETRY
>>
>> diff --git a/net/core/sock.c b/net/core/sock.c
>> index 5393b4b719d7..5f42a4d70cb2 100644
>> --- a/net/core/sock.c
>> +++ b/net/core/sock.c
>> @@ -1872,7 +1872,7 @@ bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t prio)
>>  		gfp_t gfp = prio;
>>
>>  		if (order)
>> -			gfp |= __GFP_COMP | __GFP_NOWARN;
>> +			gfp |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY;
>>  		pfrag->page = alloc_pages(gfp, order);
>>  		if (likely(pfrag->page)) {
>>  			pfrag->offset = 0;

There is another patch needed (these look like good stable fixes):

diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index 06e72d3..d42d48c 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -378,7 +378,7 @@ refill:
 			gfp_t gfp = gfp_mask;
 
 			if (order)
-				gfp |= __GFP_COMP | __GFP_NOWARN;
+				gfp |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY;
 			nc->frag.page = alloc_pages(gfp, order);
 			if (likely(nc->frag.page))
 				break;

This reduces the really pathological compact/reclaim behavior, but it
doesn't fix it. It's actually still quite bad, because the whole thing
loops down to order-0, so it's effectively trying the allocation 4
times anyway. Without these two pieces of code I typically see non-zero
order allocations very rarely. I hotpatched a running system to get
results quickly. Even capping the maximum order at order-1 I still see
the bad behavior. If anything, this behavior should be made conditional
until it is ironed out.

Performance data: http://pastebin.ubuntu.com/6687527/

-Debabrata
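The "4 times" point follows from the shape of the refill loops in both
skb_page_frag_refill() and the netdev frag allocator: the order is
walked down until an allocation succeeds. A simplified stand-alone
model of that loop (not the exact kernel code; SKB_FRAG_PAGE_ORDER is
assumed here to be 3, i.e. 32KB frags):

#include <stdio.h>

#define SKB_FRAG_PAGE_ORDER 3	/* assumed: 32KB frags, as in the series */

/* Stand-in for alloc_pages(): pretend every high-order attempt fails
 * (after paying its compact/reclaim cost) and only order-0 succeeds. */
static void *alloc_pages_sim(unsigned int order)
{
	printf("alloc_pages(order=%u)%s\n", order,
	       order ? " -> fails after compact/reclaim" : " -> succeeds");
	return order ? NULL : (void *)"page";
}

int main(void)
{
	int order = SKB_FRAG_PAGE_ORDER;
	void *page = NULL;

	/* The refill loop: walk the order down until something sticks,
	 * so a single refill can hit the allocator slowpath 4 times. */
	do {
		page = alloc_pages_sim((unsigned int)order);
		if (page)
			break;
	} while (--order >= 0);

	return page ? 0 : 1;
}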