Displaying 20 results from an estimated 700 matches similar to: "[PATCH] nouveau/hmm: fix nouveau_dmem_chunk allocations"
2020 Apr 23
0
[PATCH] nouveau/hmm: fix nouveau_dmem_chunk allocations
On Tue, Apr 21, 2020 at 04:11:07PM -0700, Ralph Campbell wrote:
> In nouveau_dmem_init(), a number of struct nouveau_dmem_chunk are allocated
> and put on the dmem->chunk_empty list. Then in nouveau_dmem_pages_alloc(),
> a nouveau_dmem_chunk is removed from the list and GPU memory is allocated.
> However, the nouveau_dmem_chunk is never removed from the chunk_empty
> list nor
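A minimal sketch of the list handling being discussed, assuming illustrative types; only dmem->chunk_empty is named in the report above, so every other name and field here is an assumption rather than the actual nouveau_dmem.c code:

/* Hedged sketch, not the actual nouveau_dmem.c code: only chunk_empty is
 * named above, the other names and fields are illustrative assumptions. */
#include <linux/list.h>
#include <linux/mutex.h>

struct nouveau_dmem_chunk {
	struct list_head list;
	struct nouveau_bo *bo;	/* VRAM backing; NULL while the chunk is empty */
};

struct nouveau_dmem {
	struct mutex mutex;
	struct list_head chunk_empty;	/* chunks with no GPU memory yet */
	struct list_head chunk_full;	/* hypothetical list of chunks in use */
};

/* Once GPU memory is allocated for a chunk, move it off chunk_empty so the
 * same chunk cannot be handed out (or have its memory allocated) twice. */
static struct nouveau_dmem_chunk *
nouveau_dmem_chunk_take(struct nouveau_dmem *dmem)
{
	struct nouveau_dmem_chunk *chunk;

	mutex_lock(&dmem->mutex);
	chunk = list_first_entry_or_null(&dmem->chunk_empty,
					 struct nouveau_dmem_chunk, list);
	if (chunk)
		list_move(&chunk->list, &dmem->chunk_full);
	mutex_unlock(&dmem->mutex);
	return chunk;
}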
2019 Mar 21
0
Nouveau dmem NULL Pointer deref (SVM)
On Thu, Mar 21, 2019 at 04:59:14PM +0100, Tobias Klausmann wrote:
> Hi,
>
> Just for your information and maybe for some help: with 5.1-rc1 and SVM
> enabled I see the following backtrace [1] when the nouveau card (reverse
> prime) goes to sleep. For now I have papered over it with [2], which leaves
> me with userspace hangs. Any pointers on where to look for the actual culprit?
>
2019 Jul 29
0
[PATCH 3/9] nouveau: factor out device memory address calculation
Factor out the repeated device memory address calculation into
a helper.
Signed-off-by: Christoph Hellwig <hch at lst.de>
---
drivers/gpu/drm/nouveau/nouveau_dmem.c | 42 +++++++++++---------------
1 file changed, 17 insertions(+), 25 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index e696157f771e..d469bc334438 100644
---
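The diff itself is truncated in this excerpt. Purely as a hedged illustration, a helper of the kind the commit message describes might look like the following, with the chunk layout and field names being assumptions rather than the actual nouveau code:

/* Hedged sketch: compute the device (VRAM) address backing a given struct
 * page from its chunk. Field names are illustrative assumptions. */
#include <linux/mm.h>

struct example_dmem_chunk {
	unsigned long pfn_first;	/* first pfn covered by this chunk */
	u64 vram_addr;			/* GPU address of the chunk's VRAM */
};

static u64 example_dmem_page_addr(struct example_dmem_chunk *chunk,
				  struct page *page)
{
	/* Offset of this page within the chunk's contiguous VRAM range. */
	u64 off = (u64)(page_to_pfn(page) - chunk->pfn_first) << PAGE_SHIFT;

	return chunk->vram_addr + off;
}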
2019 Jul 29
0
[PATCH 6/9] nouveau: simplify nouveau_dmem_migrate_vma
Factor the main copy-page-to-VRAM routine out into a helper that acts
on a single page and doesn't require the nouveau_dmem_migrate
structure for argument passing. As an added benefit, the new version
allocates the dma address array only once and reuses it for each
subsequent chunk of work.
Signed-off-by: Christoph Hellwig <hch at lst.de>
---
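No code from the patch is shown in this excerpt; as a hedged sketch only, the shape described above (a per-page copy helper plus a batch loop that reuses one DMA address array allocated by the caller) could look roughly like this, with all names being assumptions:

/* Hedged sketch of the structure described above, not the nouveau code:
 * a per-page copy helper and a batch loop that reuses one dma_addr array. */
#include <linux/migrate.h>
#include <linux/slab.h>

struct nouveau_drm;			/* opaque here */

/* Copy one system RAM page to device memory; details elided. */
static int example_copy_one(struct nouveau_drm *drm, struct page *spage,
			    unsigned long *dst, dma_addr_t *dma_addr)
{
	/* ... allocate a device page, dma-map spage, start the copy ... */
	return 0;
}

static int example_migrate_chunk(struct nouveau_drm *drm,
				 unsigned long *src, unsigned long *dst,
				 unsigned long npages, dma_addr_t *dma_addrs)
{
	unsigned long i;
	int ret;

	for (i = 0; i < npages; i++) {
		struct page *spage = migrate_pfn_to_page(src[i]);

		if (!spage || !(src[i] & MIGRATE_PFN_MIGRATE))
			continue;
		/* dma_addrs was allocated once by the caller and is simply
		 * reused for every chunk of the range. */
		ret = example_copy_one(drm, spage, &dst[i], &dma_addrs[i]);
		if (ret)
			return ret;
	}
	return 0;
}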
2019 Mar 21
0
[PATCH] gpu/nouveau: empty chunk do not have a buffer object associated with them.
From: Jérôme Glisse <jglisse at redhat.com>
Empty chunks do not have a bo associated with them, so there is no need to
pin/unpin them on suspend/resume.
This fixes suspend/resume on 5.1-rc1 when NOUVEAU_SVM is enabled.
Signed-off-by: Jérôme Glisse <jglisse at redhat.com>
Reviewed-by: Tobias Klausmann <tobias.johannes.klausmann at mni.thm.de>
Tested-by: Tobias Klausmann
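As a hedged sketch of the idea (reusing the illustrative chunk layout from the earlier sketch, not the actual patch), the suspend/resume walks simply skip chunks whose ->bo is still NULL:

/* Hedged sketch: skip chunks without a buffer object when pinning on
 * resume (and likewise when unpinning on suspend). */
static void example_dmem_resume(struct nouveau_dmem *dmem)
{
	struct nouveau_dmem_chunk *chunk;

	mutex_lock(&dmem->mutex);
	list_for_each_entry(chunk, &dmem->chunk_empty, list) {
		if (!chunk->bo)
			continue;	/* empty chunk: nothing to pin */
		/* ... pin chunk->bo back into VRAM here ... */
	}
	mutex_unlock(&dmem->mutex);
}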
2019 Feb 21
1
[PATCH -next] drm/nouveau/dmem: remove set but not used variable 'drm'
Fixes gcc '-Wunused-but-set-variable' warning:
drivers/gpu/drm/nouveau/nouveau_dmem.c: In function 'nouveau_dmem_free':
drivers/gpu/drm/nouveau/nouveau_dmem.c:103:22: warning:
variable 'drm' set but not used [-Wunused-but-set-variable]
struct nouveau_drm *drm;
^
Signed-off-by: YueHaibing <yuehaibing at huawei.com>
---
2020 May 20
2
[PATCH] nouveau/hmm: fix migrate zero page to GPU
When calling OpenCL clEnqueueSVMMigrateMem() on a region of memory that
is backed by pte_none() or zero pages, migrate_vma_setup() will fill the
source PFN array with an entry indicating the source page is zero.
Use this to optimize migration to device private memory by allocating
GPU memory and zero-filling it instead of failing to migrate the page.
Signed-off-by: Ralph Campbell <rcampbell at
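The patch body is truncated in this excerpt. Purely as a hedged sketch of the described behavior (the dmem_* helpers below are hypothetical stand-ins, not nouveau functions), the per-page copy path would treat a missing source page as the zero page and clear the destination instead of bailing out:

/* Hedged sketch, not the actual nouveau copy routine. The dmem_* helpers
 * are hypothetical stand-ins for device memory allocation and GPU fills. */
#include <linux/migrate.h>

struct page *dmem_page_alloc(void);				/* hypothetical */
void dmem_zero_fill(struct page *dpage);			/* hypothetical */
void dmem_copy_from(struct page *dpage, struct page *spage);	/* hypothetical */

static int example_copy_one(unsigned long src, unsigned long *dst)
{
	struct page *spage, *dpage;

	if (!(src & MIGRATE_PFN_MIGRATE))
		return 0;		/* nothing to migrate for this entry */

	spage = migrate_pfn_to_page(src);
	dpage = dmem_page_alloc();
	if (!dpage)
		return -ENOMEM;

	if (!spage)
		/* pte_none()/zero-page source: no data to copy, just clear
		 * the freshly allocated device memory. */
		dmem_zero_fill(dpage);
	else
		dmem_copy_from(dpage, spage);

	*dst = migrate_pfn(page_to_pfn(dpage));
	return 0;
}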
2024 Mar 06
1
[PATCH v3] nouveau/dmem: handle kcalloc() allocation failure
The kcalloc() in nouveau_dmem_evict_chunk() will return NULL if
physical memory has run out. As a result, dereferencing src_pfns,
dst_pfns or dma_addrs would lead to NULL pointer dereference bugs.
Moreover, the GPU is going away, and if the kcalloc() fails we cannot
evict all the pages mapping a chunk. So this patch adds the __GFP_NOFAIL
flag to the kcalloc() calls.
Finally, as there is no need to
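As a hedged sketch of what the change described above amounts to (the array names follow the commit message, the surrounding function is illustrative), the eviction path allocates with __GFP_NOFAIL so the arrays can never come back NULL:

/* Hedged sketch of the allocation change described above; npages and the
 * surrounding function are illustrative. */
#include <linux/slab.h>
#include <linux/types.h>

static void example_evict_chunk(unsigned long npages)
{
	unsigned long *src_pfns, *dst_pfns;
	dma_addr_t *dma_addrs;

	/* __GFP_NOFAIL: block until the allocation succeeds instead of
	 * returning NULL, since chunk eviction must not be skipped while
	 * the GPU is going away. */
	src_pfns  = kcalloc(npages, sizeof(*src_pfns),  GFP_KERNEL | __GFP_NOFAIL);
	dst_pfns  = kcalloc(npages, sizeof(*dst_pfns),  GFP_KERNEL | __GFP_NOFAIL);
	dma_addrs = kcalloc(npages, sizeof(*dma_addrs), GFP_KERNEL | __GFP_NOFAIL);

	/* ... migrate and unmap the chunk's pages here ... */

	kfree(src_pfns);
	kfree(dst_pfns);
	kfree(dma_addrs);
}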
2024 Mar 08
0
[PATCH v3] nouveau/dmem: handle kcalloc() allocation failure
On 3/6/24 06:01, Duoming Zhou wrote:
> The kcalloc() in nouveau_dmem_evict_chunk() will return NULL if
> physical memory has run out. As a result, dereferencing src_pfns,
> dst_pfns or dma_addrs would lead to NULL pointer dereference bugs.
>
> Moreover, the GPU is going away, and if the kcalloc() fails we cannot
> evict all the pages mapping a chunk. So this patch
2024 Mar 03
1
[PATCH] nouveau/dmem: handle kcalloc() allocation failure
The kcalloc() in nouveau_dmem_evict_chunk() will return NULL if
physical memory has run out. As a result, dereferencing src_pfns,
dst_pfns or dma_addrs would lead to NULL pointer dereference bugs.
This patch uses stack variables to replace the kcalloc().
Fixes: 249881232e14 ("nouveau/dmem: evict device private memory during release")
Signed-off-by: Duoming Zhou <duoming
2019 Mar 21
3
Nouveau dmem NULL Pointer deref (SVM)
Hi,
Just for your information and maybe for some help: with 5.1-rc1 and SVM
enabled I see the following backtrace [1] when the nouveau card (reverse
prime) goes to sleep. For now I have papered over it with [2], which leaves
me with userspace hangs. Any pointers on where to look for the actual culprit?
PS: Card is: nouveau 0000:01:00.0: NVIDIA GP106 (136000a1)
Greetings,
Tobias
[1]:
BUG: unable
2019 Jun 17
34
dev_pagemap related cleanups v2
Hi Dan, Jérôme and Jason,
below is a series that cleans up the dev_pagemap interface so that
it is more easily usable; this removes the need to wrap it in hmm
and thus allows killing a lot of code.
Note: this series is on top of the rdma/hmm branch + the dev_pagemap
release fix series from Dan that went into 5.2-rc5.
Git tree:
git://git.infradead.org/users/hch/misc.git
2019 Jun 13
57
dev_pagemap related cleanups
Hi Dan, Jérôme and Jason,
below is a series that cleans up the dev_pagemap interface so that
it is more easily usable; this removes the need to wrap it in hmm
and thus allows killing a lot of code.
Diffstat:
22 files changed, 245 insertions(+), 802 deletions(-)
Git tree:
git://git.infradead.org/users/hch/misc.git hmm-devmem-cleanup
Gitweb:
2023 Aug 29
1
[PATCH drm-misc-next] drm/nouveau: fence: fix undefined fence state after emit
nouveau_fence_emit() can fail both before and after initializing the
dma-fence, and hence before and after initializing the dma-fence's kref.
In order to avoid nouveau_fence_emit() potentially failing before
dma-fence initialization, pass the channel to nouveau_fence_new() already
and perform the required check before even allocating the fence.
While at it, restore the original behavior of
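The rest of the message is cut off above; purely as a hedged sketch of the described ordering (with simplified stand-in types rather than the real nouveau structures and signatures), checking the channel before allocating means emission can assume a fully initialized dma-fence:

/* Hedged sketch with stand-in types; the real nouveau_fence_new()/emit()
 * signatures and checks differ, this only illustrates the ordering. */
#include <linux/dma-fence.h>
#include <linux/slab.h>

struct example_channel { bool alive; };			/* stand-in for nouveau_channel */
struct example_fence   { struct dma_fence base; };	/* stand-in for nouveau_fence */

static int example_fence_new(struct example_channel *chan,
			     struct example_fence **pfence)
{
	/* Reject a dead channel before allocating anything, so the later
	 * emit step can no longer fail before the fence kref exists. */
	if (!chan || !chan->alive)
		return -ENODEV;

	*pfence = kzalloc(sizeof(**pfence), GFP_KERNEL);
	if (!*pfence)
		return -ENOMEM;

	/* ... dma_fence_init() and channel bookkeeping would follow ... */
	return 0;
}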
2024 Oct 15
5
[PATCH v1 0/4] GPU Direct RDMA (P2P DMA) for Device Private Pages
From: Yonatan Maman <Ymaman at Nvidia.com>
This patch series aims to enable Peer-to-Peer (P2P) DMA access in
GPU-centric applications that utilize RDMA and private device pages. This
enhancement is crucial for minimizing data transfer overhead by allowing
the GPU to directly expose device private page data to devices such as
NICs, eliminating the need to traverse system RAM, which is the
2019 Jun 26
41
dev_pagemap related cleanups v3
Hi Dan, Jérôme and Jason,
below is a series that cleans up the dev_pagemap interface so that
it is more easily usable; this removes the need to wrap it in hmm
and thus allows killing a lot of code.
Note: this series is on top of Linux 5.2-rc5 and has some minor
conflicts with the hmm tree that are easy to resolve.
Diffstat summary:
32 files changed, 361 insertions(+), 1012 deletions(-)
Git
2020 Oct 08
2
[PATCH] mm: make device private reference counts zero based
ZONE_DEVICE struct pages have an extra reference count that complicates the
code for put_page() and several places in the kernel that need to check the
reference count to see that a page is not being used (gup, compaction,
migration, etc.). Clean up the code so the reference count doesn't need to
be treated specially for device private pages, leaving DAX as the remaining
special case.
2020 Oct 12
2
[PATCH v2] mm/hmm: make device private reference counts zero based
ZONE_DEVICE struct pages have an extra reference count that complicates the
code for put_page() and several places in the kernel that need to check the
reference count to see that a page is not being used (gup, compaction,
migration, etc.). Clean up the code so the reference count doesn't need to
be treated specially for device private pages, leaving DAX as the remaining
special case.
2020 Nov 06
12
[PATCH v3 0/6] mm/hmm/nouveau: add THP migration to migrate_vma_*
This series adds support for transparent huge page migration to
migrate_vma_*() and adds nouveau SVM and HMM selftests as consumers.
Earlier versions were posted at [1] and [2].
The patches apply cleanly to the linux-mm 5.10.0-rc2 tree. There are a
lot of other THP patches being posted. I don't think there are any
semantic conflicts but there may be some merge conflicts depending on
2019 Aug 08
10
turn hmm migrate_vma upside down v2
Hi Jérôme, Ben and Jason,
below is a series against the hmm tree which starts revamping the
migrate_vma functionality. The prime idea is to export three slightly
lower-level functions and thus avoid the need for migrate_vma_ops
callbacks.
Diffstat:
5 files changed, 281 insertions(+), 607 deletions(-)
A git tree is also available at:
git://git.infradead.org/users/hch/misc.git
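The cover letter is truncated here; the three exported helpers this description refers to are migrate_vma_setup(), migrate_vma_pages() and migrate_vma_finalize(), and a hedged sketch of the resulting driver-side call sequence (with the driver-specific copy step only hinted at) looks like this:

/* Hedged sketch of a driver migrating a range without migrate_vma_ops
 * callbacks; the copy step in the middle is driver specific. */
#include <linux/migrate.h>
#include <linux/mm.h>

static int example_migrate_range(struct vm_area_struct *vma,
				 unsigned long start, unsigned long end,
				 unsigned long *src, unsigned long *dst)
{
	struct migrate_vma args = {
		.vma	= vma,
		.start	= start,
		.end	= end,
		.src	= src,
		.dst	= dst,
	};
	int ret;

	ret = migrate_vma_setup(&args);	/* collect and isolate source pages */
	if (ret)
		return ret;

	/* ... allocate destination pages, fill args.dst and perform the
	 * actual copies here ... */

	migrate_vma_pages(&args);	/* install the migrated pages */
	migrate_vma_finalize(&args);	/* unlock and release source pages */
	return 0;
}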