Duoming Zhou
2024-Mar-06 05:01 UTC
[PATCH v3] nouveau/dmem: handle kcalloc() allocation failure
The kcalloc() calls in nouveau_dmem_evict_chunk() return NULL if
physical memory has run out. As a result, dereferencing src_pfns,
dst_pfns or dma_addrs would be a NULL pointer dereference.

Moreover, the GPU is going away; if the kcalloc() fails, we cannot
evict all pages mapping a chunk. So this patch adds the __GFP_NOFAIL
flag to the allocations.

Finally, as there is no need for physically contiguous memory, this
patch switches kcalloc() to kvcalloc(), which can fall back to
vmalloc() and is therefore far less likely to fail.
Fixes: 249881232e14 ("nouveau/dmem: evict device private memory during release")
Suggested-by: Danilo Krummrich <dakr at redhat.com>
Signed-off-by: Duoming Zhou <duoming at zju.edu.cn>
---
Changes in v3:
- Switch kcalloc() to kvcalloc().
drivers/gpu/drm/nouveau/nouveau_dmem.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
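As background for the diff below (not part of the applied change), here is a
minimal sketch of the allocation pattern being replaced and the one being
introduced. The two helper functions are made up for illustration; only
kcalloc(), kvcalloc(), kvfree() and the GFP flags are real kernel API.

#include <linux/mm.h>
#include <linux/slab.h>		/* kcalloc(), kvcalloc(), kvfree() */

/*
 * Before: a plain kcalloc() can return NULL under memory pressure, and
 * the eviction path has no way to report that error to anyone.
 */
static unsigned long *alloc_pfns_fallible(unsigned long npages)
{
	unsigned long *pfns = kcalloc(npages, sizeof(*pfns), GFP_KERNEL);

	if (!pfns)		/* caller would have to skip the eviction */
		return NULL;
	return pfns;
}

/*
 * After: kvcalloc() may fall back to a vmalloc()-backed buffer for large
 * npages, and __GFP_NOFAIL makes the allocator retry instead of returning
 * NULL, so no error path is needed.  Free with kvfree(), not kfree().
 */
static unsigned long *alloc_pfns_nofail(unsigned long npages)
{
	return kvcalloc(npages, sizeof(unsigned long),
			GFP_KERNEL | __GFP_NOFAIL);
}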
diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index 12feecf71e7..6fb65b01d77 100644
--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
@@ -378,9 +378,9 @@ nouveau_dmem_evict_chunk(struct nouveau_dmem_chunk *chunk)
 	dma_addr_t *dma_addrs;
 	struct nouveau_fence *fence;
 
-	src_pfns = kcalloc(npages, sizeof(*src_pfns), GFP_KERNEL);
-	dst_pfns = kcalloc(npages, sizeof(*dst_pfns), GFP_KERNEL);
-	dma_addrs = kcalloc(npages, sizeof(*dma_addrs), GFP_KERNEL);
+	src_pfns = kvcalloc(npages, sizeof(*src_pfns), GFP_KERNEL | __GFP_NOFAIL);
+	dst_pfns = kvcalloc(npages, sizeof(*dst_pfns), GFP_KERNEL | __GFP_NOFAIL);
+	dma_addrs = kvcalloc(npages, sizeof(*dma_addrs), GFP_KERNEL | __GFP_NOFAIL);
 
 	migrate_device_range(src_pfns, chunk->pagemap.range.start >> PAGE_SHIFT,
 			npages);
@@ -406,11 +406,11 @@ nouveau_dmem_evict_chunk(struct nouveau_dmem_chunk *chunk)
 	migrate_device_pages(src_pfns, dst_pfns, npages);
 	nouveau_dmem_fence_done(&fence);
 	migrate_device_finalize(src_pfns, dst_pfns, npages);
-	kfree(src_pfns);
-	kfree(dst_pfns);
+	kvfree(src_pfns);
+	kvfree(dst_pfns);
 	for (i = 0; i < npages; i++)
 		dma_unmap_page(chunk->drm->dev->dev, dma_addrs[i], PAGE_SIZE, DMA_BIDIRECTIONAL);
-	kfree(dma_addrs);
+	kvfree(dma_addrs);
 }
 
 void
--
2.17.1
Danilo Krummrich
2024-Mar-08 16:45 UTC
[PATCH v3] nouveau/dmem: handle kcalloc() allocation failure
On 3/6/24 06:01, Duoming Zhou wrote:
> The kcalloc() calls in nouveau_dmem_evict_chunk() return NULL if
> physical memory has run out. As a result, dereferencing src_pfns,
> dst_pfns or dma_addrs would be a NULL pointer dereference.
>
> Moreover, the GPU is going away; if the kcalloc() fails, we cannot
> evict all pages mapping a chunk. So this patch adds the __GFP_NOFAIL
> flag to the allocations.
>
> Finally, as there is no need for physically contiguous memory, this
> patch switches kcalloc() to kvcalloc(), which can fall back to
> vmalloc() and is therefore far less likely to fail.
>
> Fixes: 249881232e14 ("nouveau/dmem: evict device private memory during release")
> Suggested-by: Danilo Krummrich <dakr at redhat.com>
> Signed-off-by: Duoming Zhou <duoming at zju.edu.cn>

Applied to drm-misc-fixes, thanks!