search for: __gfp_comp

Displaying 14 unique results from an estimated 20 matches for "__gfp_comp".

2014 Jan 03
2
[PATCH net-next 1/3] net: allow > 0 order atomic page alloc in skb_page_frag_refill
...d __GFP_NORETRY

diff --git a/net/core/sock.c b/net/core/sock.c
index 5393b4b719d7..5f42a4d70cb2 100644
--- a/net/core/sock.c
+++ b/net/core/sock.c
@@ -1872,7 +1872,7 @@ bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t prio)
 		gfp_t gfp = prio;
 
 		if (order)
-			gfp |= __GFP_COMP | __GFP_NOWARN;
+			gfp |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY;
 		pfrag->page = alloc_pages(gfp, order);
 		if (likely(pfrag->page)) {
 			pfrag->offset = 0;
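For readers landing here from the search, a hedged sketch of the whole function as patched (paraphrased and simplified from net/core/sock.c of that era; the real function also tries to recycle pfrag->page via its refcount before allocating):

#include <linux/gfp.h>
#include <linux/mm.h>
#include <net/sock.h>

bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t prio)
{
	int order = SKB_FRAG_PAGE_ORDER;	/* e.g. get_order(32768) */

	do {
		gfp_t gfp = prio;

		/* For the high-order attempt, fail fast and quietly:
		 * __GFP_NORETRY avoids expensive retries/compaction and
		 * __GFP_NOWARN suppresses the allocation-failure splat,
		 * because we can always fall back to order-0 below.
		 */
		if (order)
			gfp |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY;
		pfrag->page = alloc_pages(gfp, order);
		if (likely(pfrag->page)) {
			pfrag->offset = 0;
			pfrag->size = PAGE_SIZE << order;
			return true;
		}
	} while (--order >= 0);

	return false;
}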
2014 Jan 03
2
[PATCH net-next 1/3] net: allow > 0 order atomic page alloc in skb_page_frag_refill
...
>> --- a/net/core/sock.c
>> +++ b/net/core/sock.c
>> @@ -1872,7 +1872,7 @@ bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t prio)
>>  		gfp_t gfp = prio;
>>
>>  		if (order)
>> -			gfp |= __GFP_COMP | __GFP_NOWARN;
>> +			gfp |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY;
>>  		pfrag->page = alloc_pages(gfp, order);
>>  		if (likely(pfrag->page)) {
>>  			pfrag->offset = 0;
>> ...
2014 Jan 03
0
[PATCH net-next 1/3] net: allow > 0 order atomic page alloc in skb_page_frag_refill
...
> >> +++ b/net/core/sock.c
> >> @@ -1872,7 +1872,7 @@ bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t prio)
> >>  		gfp_t gfp = prio;
> >>
> >>  		if (order)
> >> -			gfp |= __GFP_COMP | __GFP_NOWARN;
> >> +			gfp |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY;
> >>  		pfrag->page = alloc_pages(gfp, order);
> >>  		if (likely(pfrag->page)) {
> >>  			pfrag->offset = 0;
> ...
2014 Jan 03
0
[PATCH net-next 1/3] net: allow > 0 order atomic page alloc in skb_page_frag_refill
...719d7..5f42a4d70cb2 100644
> --- a/net/core/sock.c
> +++ b/net/core/sock.c
> @@ -1872,7 +1872,7 @@ bool skb_page_frag_refill(unsigned int sz, struct page_frag *pfrag, gfp_t prio)
>  		gfp_t gfp = prio;
>
>  		if (order)
> -			gfp |= __GFP_COMP | __GFP_NOWARN;
> +			gfp |= __GFP_COMP | __GFP_NOWARN | __GFP_NORETRY;
>  		pfrag->page = alloc_pages(gfp, order);
>  		if (likely(pfrag->page)) {
>  			pfrag->offset = 0;

Yes this seems like it...
2014 Jan 03
2
[PATCH net-next 1/3] net: allow > 0 order atomic page alloc in skb_page_frag_refill
Currently, because of how mm behaves in 3.10.y, the code is a problem even before this patch. I believe a fix may be, instead of just removing the conditional on __GFP_WAIT, to make the initial order > 0 allocation GFP_ATOMIC and then fall back to the original gfp mask for the order-0 allocations. On systems whose main memory is highly fragmented and under pressure, skb_page_frag_refill()
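A hedged sketch of what is being proposed here (hypothetical, never merged in this form; 3.10-era flag names, where __GFP_WAIT was later replaced by __GFP_DIRECT_RECLAIM):

/* Try the order > 0 allocation atomically so it can never stall in
 * direct reclaim/compaction on fragmented memory... */
pfrag->page = NULL;
if (order)
	pfrag->page = alloc_pages(GFP_ATOMIC | __GFP_COMP |
				  __GFP_NOWARN | __GFP_NORETRY, order);

/* ...and keep the caller's original mask (which may include
 * __GFP_WAIT) only for the order-0 fallback. */
if (!pfrag->page)
	pfrag->page = alloc_pages(prio, 0);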
2023 Aug 16
1
[PATCH vhost v13 05/12] virtio_ring: introduce virtqueue_dma_dev()
...likely(&net_high_order_alloc_disable_key)) {
> > > > 		/* Avoid direct reclaim but allow kswapd to wake */
> > > > 		pfrag->page = alloc_pages((gfp & ~__GFP_DIRECT_RECLAIM) |
> > > > 					  __GFP_COMP | __GFP_NOWARN |
> > > > 					  __GFP_NORETRY,
> > > > 					  SKB_FRAG_PAGE_ORDER);
> > > > 		if (likely(pfrag->page)) {
> > > > 			pfrag->s...
2024 Apr 26
1
[PATCH 1/2] drm/nouveau/firmware: Fix SG_DEBUG error with nvkm_firmware_ctor()
On Fri, 2024-04-26 at 11:41 -0400, Lyude Paul wrote: > We hit this because when initializing firmware of type > NVKM_FIRMWARE_IMG_DMA we allocate coherent memory and then attempt to > include that coherent memory in a scatterlist. I'm sure this patch is a good one, and I will try to test it soon, but I am very curious to know why including coherent memory in a scatterlist is bad.
2024 Apr 28
1
[PATCH 1/2] drm/nouveau/firmware: Fix SG_DEBUG error with nvkm_firmware_ctor()
...attrs() (which dma_alloc_coherent() is just a wrapper for):

/*
 * DMA allocations can never be turned back into a page pointer, so
 * requesting compound pages doesn't make sense (and can't even be
 * supported at all by various backends).
 */
if (WARN_ON_ONCE(flag & __GFP_COMP))
	return NULL;

Which explains the check in sg_set_buf() that this patch stops us from hitting:

BUG_ON(!virt_addr_valid(buf));

Scatterlists need page pointers (we use one further down, here:)

sg_set_page(sg, virt_to_page(buf), buflen, offset_in_page(buf));

But we can't get a page poi...
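A hedged illustration of that constraint (a hypothetical driver fragment, not the nouveau fix itself; the helper name and buffer length are made up):

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

static int sg_example(struct device *dev)	/* hypothetical helper */
{
	struct scatterlist sg;
	dma_addr_t dma_handle;
	size_t len = 4096;	/* assumed buffer size */
	void *buf, *cpu_addr;

	/* kmalloc() memory is backed by a struct page, so building a
	 * scatterlist entry from it is fine: virt_addr_valid() holds
	 * and sg_set_buf() can call virt_to_page() safely. */
	buf = kmalloc(len, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;
	sg_init_one(&sg, buf, len);

	/* dma_alloc_coherent() memory may come from a remapped region
	 * with no usable struct page behind its virtual address, so
	 * feeding it to sg_init_one()/sg_set_buf() can trip
	 * BUG_ON(!virt_addr_valid(buf)) -- the SG_DEBUG error above. */
	cpu_addr = dma_alloc_coherent(dev, len, &dma_handle, GFP_KERNEL);
	/* sg_init_one(&sg, cpu_addr, len);   <-- the broken pattern */

	dma_free_coherent(dev, len, cpu_addr, dma_handle);
	kfree(buf);
	return 0;
}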
2013 Mar 27
2
system death under oom - 3.7.9
Hello, My system died last night apparently due to OOM conditions. Note that I don't have any swap set up, but my understanding is that this is not required. The full log is at: http://pastebin.com/YCYUXWvV. It was in my messages, so I guess the system took a bit to die completely. nouveau is somewhat implicated, as it is the first thing that hits an allocation failure in nouveau_vm_create,
2018 Nov 15
3
[PATCH net-next 1/2] vhost_net: mitigate page reference counting during page frag refill
...
+			return true;
+		__page_frag_cache_drain(pfrag->page, net->refcnt_bias);
+	}
+
+	pfrag->offset = 0;
+	net->refcnt_bias = 0;
+	if (SKB_FRAG_PAGE_ORDER) {
+		/* Avoid direct reclaim but allow kswapd to wake */
+		pfrag->page = alloc_pages((gfp & ~__GFP_DIRECT_RECLAIM) |
+					  __GFP_COMP | __GFP_NOWARN |
+					  __GFP_NORETRY,
+					  SKB_FRAG_PAGE_ORDER);
+		if (likely(pfrag->page)) {
+			pfrag->size = PAGE_SIZE << SKB_FRAG_PAGE_ORDER;
+			goto done;
+		}
+	}
+	pfrag->page = alloc_page(gfp);
+	if (likely(pfrag->page)) {
+		pfrag->size = PAGE_SIZE;
+		goto done...
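For orientation, a hedged sketch of the refcount-bias technique the patch introduces (field names follow the excerpt above; this is a simplification of the drivers/vhost/net.c change, not a verbatim copy):

/* One page_ref_add() buys a large batch of references up front... */
page_ref_add(pfrag->page, USHRT_MAX - 1);
net->refcnt_bias = USHRT_MAX;

/* ...the hot path then hands out one pre-taken reference per fragment
 * without touching the page's atomic refcount... */
net->refcnt_bias--;

/* ...and when the page is retired, the unused references are returned
 * in one call, freeing the page once the count reaches zero. */
__page_frag_cache_drain(pfrag->page, net->refcnt_bias);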
2011 Mar 20
6
PATCH: Hugepage support for Domains booting with 4KB pages
We have implemented hugepage support for guests in the following manner. In our implementation we added a parameter, hugepage_num, which is specified in the config file of the DomU. It is the number of hugepages that the guest is guaranteed to receive whenever the kernel asks for hugepages via its boot-time parameter or by reserving them after boot (e.g. using echo XX > /proc/sys/vm/nr_hugepages).
2023 Jan 23
11
[PATCH v3 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first
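The accounting hook the cover letter refers to is just a gfp flag; a minimal sketch (the structure name is made up for illustration):

#include <linux/slab.h>

/* GFP_KERNEL_ACCOUNT is GFP_KERNEL | __GFP_ACCOUNT: pages backing the
 * allocation are charged to the calling task's memory cgroup and
 * uncharged when freed. */
struct iopt_pages_meta *meta;	/* hypothetical structure */

meta = kzalloc(sizeof(*meta), GFP_KERNEL_ACCOUNT);
if (!meta)
	return -ENOMEM;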
2023 Jan 06
3
[PATCH 1/8] iommu: Add a gfp parameter to iommu_map()
The internal mechanisms support this, but instead of exposing the gfp to the caller it is wrapped up inside iommu_map() and iommu_map_atomic(). Fix this instead of adding more variants for GFP_KERNEL_ACCOUNT. Signed-off-by: Jason Gunthorpe <jgg at nvidia.com> ---
 arch/arm/mm/dma-mapping.c                       | 11 +++++++----
 .../gpu/drm/nouveau/nvkm/subdev/instmem/gk20a.c |  3 ++-
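The shape of the API change, sketched from the cover text (the "after" prototype matches what eventually landed upstream, but take it as illustrative):

/* Before: the gfp mask is hidden inside the wrappers. */
int iommu_map(struct iommu_domain *domain, unsigned long iova,
	      phys_addr_t paddr, size_t size, int prot);

/* After: callers choose the mask, making cgroup-accounted allocation
 * of IOPTEs possible. */
int iommu_map(struct iommu_domain *domain, unsigned long iova,
	      phys_addr_t paddr, size_t size, int prot, gfp_t gfp);

/* e.g. a caller charging page-table memory to its cgroup: */
ret = iommu_map(domain, iova, paddr, PAGE_SIZE,
		IOMMU_READ | IOMMU_WRITE, GFP_KERNEL_ACCOUNT);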
2023 Jan 18
10
[PATCH v2 00/10] Let iommufd charge IOPTE allocations to the memory cgroup
iommufd follows the same design as KVM and uses memory cgroups to limit the amount of kernel memory an iommufd file descriptor can pin down. The various internal data structures already use GFP_KERNEL_ACCOUNT to charge their own memory. However, one of the biggest consumers of kernel memory is the IOPTEs stored under the iommu_domain, and these allocations are not tracked. This series is the first