Displaying 4 results from an estimated 4 matches for "compact_checklock_irqsav".
2014 Jan 03
0
[PATCH net-next 1/3] net: allow > 0 order atomic page alloc in skb_page_frag_refill
...;> There is a reason PAGE_ALLOC_COSTLY_ORDER exists and is 3
Sorry, 2GB cache out of 8GB phys; ~1GB gets reclaimed. Regardless, the
reclamation of cache is minor compared to the compaction event that
precedes it, and I haven't spotted anything addressing that yet -
isolate_migratepages_range()/compact_checklock_irqsave(). If even
more of memory were unmovable, the compaction routines would be hit
even harder, as reclamation wouldn't do anything - mm would have to
get very, very smart about unmovable pages being freed, and just fail
allocations/OOM kill if nothing has changed, without running through
compaction...
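
(Not from the original message: a minimal sketch for checking whether compaction,
rather than plain reclaim, is where the time goes. It only dumps the relevant
/proc/vmstat counters; the exact names - compact_stall, compact_fail, pgscan_*,
pgsteal_* - depend on the kernel version and on CONFIG_COMPACTION being enabled.)

/* Sketch: print the /proc/vmstat counters that separate compaction
 * activity from ordinary reclaim. Counter names vary by kernel version
 * and the compact_* entries need CONFIG_COMPACTION. */
#include <stdio.h>
#include <string.h>

int main(void)
{
	FILE *f = fopen("/proc/vmstat", "r");
	char line[128];

	if (!f) {
		perror("/proc/vmstat");
		return 1;
	}
	while (fgets(line, sizeof(line), f)) {
		if (!strncmp(line, "compact_", 8) ||
		    !strncmp(line, "pgscan_", 7) ||
		    !strncmp(line, "pgsteal_", 8))
			fputs(line, stdout);
	}
	fclose(f);
	return 0;
}

Sampling these before and after an allocation burst makes it easy to see whether
compaction stalls line up with the order-3 attempts.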
2014 Jan 03
2
[PATCH net-next 1/3] net: allow > 0 order atomic page alloc in skb_page_frag_refill
On Thu, 2014-01-02 at 16:56 -0800, Eric Dumazet wrote:
>
> My suggestion is to use a recent kernel, and/or eventually backport the
> mm fixes if any.
>
> order-3 allocations should not reclaim 2GB out of 8GB.
>
> There is a reason PAGE_ALLOC_COSTLY_ORDER exists and is 3
Hmm... it looks like I missed __GFP_NORETRY
diff --git a/net/core/sock.c b/net/core/sock.c
index
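
(The diff is cut off in this excerpt. The idea behind adding __GFP_NORETRY -
sketched below, not the literal net/core/sock.c patch - is that the opportunistic
high-order attempt fails fast instead of dragging the task through the
reclaim/compaction retry loop, and the code then falls back to an order-0 page
with the caller's original gfp mask. FRAG_PAGE_ORDER and frag_alloc_pages() are
illustrative names, not kernel symbols.)

/* Illustrative sketch only, not the actual patch: try order-3 first with
 * __GFP_NORETRY | __GFP_NOWARN so the page allocator gives up quickly and
 * quietly, then fall back to a single order-0 page. */
#include <linux/gfp.h>
#include <linux/mm.h>

#define FRAG_PAGE_ORDER 3	/* assumed: same as PAGE_ALLOC_COSTLY_ORDER */

static struct page *frag_alloc_pages(gfp_t gfp, unsigned int *order)
{
	struct page *page;

	/* Opportunistic high-order attempt: compound page, no warnings,
	 * no retrying through reclaim/compaction. */
	page = alloc_pages(gfp | __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY,
			   FRAG_PAGE_ORDER);
	if (page) {
		*order = FRAG_PAGE_ORDER;
		return page;
	}

	/* Fall back to a plain order-0 page with the caller's gfp mask. */
	*order = 0;
	return alloc_page(gfp);
}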
2013 Aug 22
13
Lustre buffer cache causes large system overhead.
We have just discovered that a large buffer cache generated from traversing a
Lustre file system will cause a significant system overhead for applications
with high memory demands. We have seen a 50% slowdown or worse for such
applications. Even High Performance Linpack, which has no file I/O whatsoever,
is affected. The only remedy seems to be to empty the buffer cache from memory
by running
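
(The excerpt stops at "by running". The usual way to empty the buffer cache on
Linux is the drop_caches sysctl - an assumption here, not a quote from the post -
which only drops clean pages, hence the sync first.)

/* Assumed completion, not quoted from the post: drop the page cache
 * (and dentry/inode slabs) via /proc/sys/vm/drop_caches. "1" = page
 * cache, "2" = slab objects, "3" = both; only clean pages are dropped,
 * so sync() first. Requires root. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	FILE *f;

	sync();		/* flush dirty pages so they become droppable */
	f = fopen("/proc/sys/vm/drop_caches", "w");
	if (!f) {
		perror("/proc/sys/vm/drop_caches");
		return 1;
	}
	fputs("3\n", f);
	return fclose(f) ? 1 : 0;
}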