Displaying 20 results from an estimated 28 matches for "nouveau_dmem_chunk".
2020 Apr 21
2
[PATCH] nouveau/hmm: fix nouveau_dmem_chunk allocations
In nouveau_dmem_init(), a number of struct nouveau_dmem_chunk are allocated
and put on the dmem->chunk_empty list. Then in nouveau_dmem_pages_alloc(),
a nouveau_dmem_chunk is removed from the list and GPU memory is allocated.
However, the nouveau_dmem_chunk is never removed from the chunk_empty
list nor placed on the chunk_free or chunk_full lists. This re...
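The fix here is list bookkeeping: move each chunk off chunk_empty once its GPU memory is allocated, and onto the list matching its new state. A minimal sketch of that flow, assuming a list head field named "list" and an illustrative helper name (neither is taken from the patch):

static struct nouveau_dmem_chunk *
nouveau_dmem_chunk_take(struct nouveau_dmem *dmem)
{
	struct nouveau_dmem_chunk *chunk;

	/* Grab an unused chunk, if any. */
	chunk = list_first_entry_or_null(&dmem->chunk_empty,
					 struct nouveau_dmem_chunk, list);
	if (!chunk)
		return NULL;

	/* The bug: the chunk previously stayed on chunk_empty here. */
	list_del_init(&chunk->list);

	/* ...allocate the chunk's backing GPU memory... */

	/* Park the chunk on the list that reflects its new state. */
	list_add(&chunk->list, &dmem->chunk_free);
	return chunk;
}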
2020 Apr 23
0
[PATCH] nouveau/hmm: fix nouveau_dmem_chunk allocations
On Tue, Apr 21, 2020 at 04:11:07PM -0700, Ralph Campbell wrote:
> In nouveau_dmem_init(), a number of struct nouveau_dmem_chunk are allocated
> and put on the dmem->chunk_empty list. Then in nouveau_dmem_pages_alloc(),
> a nouveau_dmem_chunk is removed from the list and GPU memory is allocated.
> However, the nouveau_dmem_chunk is never removed from the chunk_empty
> list nor placed on the chunk_free or chunk...
2019 Jul 29
0
[PATCH 3/9] nouveau: factor out device memory address calculation
...au/nouveau_dmem.c
index e696157f771e..d469bc334438 100644
--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
@@ -102,6 +102,14 @@ struct nouveau_migrate {
 	unsigned long dma_nr;
 };
 
+static unsigned long nouveau_dmem_page_addr(struct page *page)
+{
+	struct nouveau_dmem_chunk *chunk = page->zone_device_data;
+	unsigned long idx = page_to_pfn(page) - chunk->pfn_first;
+
+	return (idx << PAGE_SHIFT) + chunk->bo->bo.offset;
+}
+
 static void nouveau_dmem_page_free(struct page *page)
 {
 	struct nouveau_dmem_chunk *chunk = page->zone_device_data;
@@ -16...
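For context, nouveau_dmem_page_addr() above recovers a migrated page's address in GPU memory: its page index within the owning chunk, shifted to bytes, plus the chunk's buffer object offset. A worked example with assumed values (none are taken from the patch):

/* Assuming PAGE_SHIFT = 12 and, hypothetically:
 *   chunk->pfn_first     = 0x100000
 *   chunk->bo->bo.offset = 0x20000000
 *   page_to_pfn(page)    = 0x100003   (the chunk's fourth page)
 * then:
 *   idx  = 0x100003 - 0x100000 = 3
 *   addr = (3 << 12) + 0x20000000 = 0x20003000
 */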
2024 Mar 06
1
[PATCH v3] nouveau/dmem: handle kcalloc() allocation failure
...rtions(+), 6 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index 12feecf71e7..6fb65b01d77 100644
--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
@@ -378,9 +378,9 @@ nouveau_dmem_evict_chunk(struct nouveau_dmem_chunk *chunk)
 	dma_addr_t *dma_addrs;
 	struct nouveau_fence *fence;
 
-	src_pfns = kcalloc(npages, sizeof(*src_pfns), GFP_KERNEL);
-	dst_pfns = kcalloc(npages, sizeof(*dst_pfns), GFP_KERNEL);
-	dma_addrs = kcalloc(npages, sizeof(*dma_addrs), GFP_KERNEL);
+	src_pfns = kvcalloc(npages, sizeof(*src_pfns),...
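kvcalloc() tries a kmalloc-based allocation first and falls back to vmalloc() when the request cannot be satisfied contiguously, which suits an eviction path where npages may be large. A minimal sketch of the pattern (the GFP flags are truncated above, so plain GFP_KERNEL with explicit failure handling is assumed here; kvfree() handles either backing allocator and ignores NULL):

	src_pfns = kvcalloc(npages, sizeof(*src_pfns), GFP_KERNEL);
	dst_pfns = kvcalloc(npages, sizeof(*dst_pfns), GFP_KERNEL);
	dma_addrs = kvcalloc(npages, sizeof(*dma_addrs), GFP_KERNEL);
	if (!src_pfns || !dst_pfns || !dma_addrs)
		goto out_free;

	/* ...collect, migrate and copy the chunk's pages... */

out_free:
	kvfree(dma_addrs);
	kvfree(dst_pfns);
	kvfree(src_pfns);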
2020 May 20
2
[PATCH] nouveau/hmm: fix migrate zero page to GPU
...s two
patches I posted earlier. The first is queued in Ben Skeggs' nouveau
tree and the second is still pending review/not queued.
[1] ("nouveau/hmm: map pages after migration")
https://lore.kernel.org/linux-mm/20200304001339.8248-5-rcampbell@nvidia.com/
[2] ("nouveau/hmm: fix nouveau_dmem_chunk allocations")
https://lore.kernel.org/lkml/20200421231107.30958-1-rcampbell@nvidia.com/
drivers/gpu/drm/nouveau/nouveau_dmem.c | 75 ++++++++++++++++++++++----
1 file changed, 66 insertions(+), 9 deletions(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/n...
2019 Feb 21
1
[PATCH -next] drm/nouveau/dmem: remove set but not used variable 'drm'
...au/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index aa9fec80492d..900a302b7ce9 100644
--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
@@ -100,12 +100,10 @@ static void
 nouveau_dmem_free(struct hmm_devmem *devmem, struct page *page)
 {
 	struct nouveau_dmem_chunk *chunk;
-	struct nouveau_drm *drm;
 	unsigned long idx;
 
 	chunk = (void *)hmm_devmem_page_get_drvdata(page);
 	idx = page_to_pfn(page) - chunk->pfn_first;
-	drm = chunk->drm;
 
 	/*
 	 * FIXME:
2024 Mar 08
0
[PATCH v3] nouveau/dmem: handle kcalloc() allocation failure
...> diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
> index 12feecf71e7..6fb65b01d77 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
> @@ -378,9 +378,9 @@ nouveau_dmem_evict_chunk(struct nouveau_dmem_chunk *chunk)
>  	dma_addr_t *dma_addrs;
>  	struct nouveau_fence *fence;
> 
> -	src_pfns = kcalloc(npages, sizeof(*src_pfns), GFP_KERNEL);
> -	dst_pfns = kcalloc(npages, sizeof(*dst_pfns), GFP_KERNEL);
> -	dma_addrs = kcalloc(npages, sizeof(*dma_addrs), GFP_KERNEL);
> +	src_pfns...
2024 Mar 03
1
[PATCH] nouveau/dmem: handle kcalloc() allocation failure
...letions(-)
diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index 12feecf71e7..9a578262c6d 100644
--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
@@ -374,13 +374,13 @@ static void
 nouveau_dmem_evict_chunk(struct nouveau_dmem_chunk *chunk)
 {
 	unsigned long i, npages = range_len(&chunk->pagemap.range) >> PAGE_SHIFT;
-	unsigned long *src_pfns, *dst_pfns;
-	dma_addr_t *dma_addrs;
+	unsigned long src_pfns[npages], dst_pfns[npages];
+	dma_addr_t dma_addrs[npages];
 	struct nouveau_fence *fence;
 
-	src_pfns = kcallo...
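This v1 approach did not survive review: the kernel builds with -Wvla, and arrays of this size do not belong on the kernel stack, which is why v3 switched to kvcalloc(). Rough arithmetic under assumed values (the chunk and page sizes are not shown in this excerpt):

/* Assuming a 2 MiB chunk and 4 KiB pages:
 *   npages      = (2 MiB) >> PAGE_SHIFT        = 512
 *   src_pfns[]  = 512 * sizeof(unsigned long)  = 4096 bytes
 *   dst_pfns[]  = 512 * sizeof(unsigned long)  = 4096 bytes
 *   dma_addrs[] = 512 * sizeof(dma_addr_t)     = 4096 bytes
 * i.e. roughly 12 KiB of stack in a single frame, against a
 * 16 KiB kernel stack on x86-64.
 */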
2020 May 20
1
[PATCH] nouveau/hmm: fix migrate zero page to GPU
...first is queued in Ben Skeggs' nouveau
>> tree and the second is still pending review/not queued.
>> [1] ("nouveau/hmm: map pages after migration")
>> https://lore.kernel.org/linux-mm/20200304001339.8248-5-rcampbell@nvidia.com/
>> [2] ("nouveau/hmm: fix nouveau_dmem_chunk allocations")
>> https://lore.kernel.org/lkml/20200421231107.30958-1-rcampbell@nvidia.com/
>
> It would be best if it goes through Ben's tree if it doesn't have
> conflicts with the hunks I have in the hmm tree. Is that the case?
>
> Jason
I think there might...
2020 May 26
1
[PATCH 0/6] nouveau/hmm: add support for mapping large pages
...e the new flag.
>>
>> Note that this series depends on a patch queued in Ben Skeggs' nouveau
>> tree ("nouveau/hmm: map pages after migration") and the patches queued
>> in Jason's HMM tree.
>> There is also a patch outstanding ("nouveau/hmm: fix nouveau_dmem_chunk
>> allocations") that is independent of the above and could be applied
>> before or after.
>
> Did Christoph and Matt's remarks get addressed here?
Both questioned the need to add the HMM_PFN_COMPOUND flag to the
hmm_range_fault() output array, saying that the PFN can be...
2020 May 20
0
[PATCH] nouveau/hmm: fix migrate zero page to GPU
...ed earlier. The first is queued in Ben Skeggs' nouveau
> tree and the second is still pending review/not queued.
> [1] ("nouveau/hmm: map pages after migration")
> https://lore.kernel.org/linux-mm/20200304001339.8248-5-rcampbell@nvidia.com/
> [2] ("nouveau/hmm: fix nouveau_dmem_chunk allocations")
> https://lore.kernel.org/lkml/20200421231107.30958-1-rcampbell@nvidia.com/
It would be best if it goes through Ben's tree if it doesn't have
conflicts with the hunks I have in the hmm tree. Is that the case?
Jason
2020 May 25
0
[PATCH 0/6] nouveau/hmm: add support for mapping large pages
...s are updated to use the new flag.
>
> Note that this series depends on a patch queued in Ben Skeggs' nouveau
> tree ("nouveau/hmm: map pages after migration") and the patches queued
> in Jason's HMM tree.
> There is also a patch outstanding ("nouveau/hmm: fix nouveau_dmem_chunk
> allocations") that is independent of the above and could be applied
> before or after.
Did Christoph and Matt's remarks get addressed here?
I think ODP could use something like this; currently it checks every
page to get back to the huge page size, and this flag would optimize
that...
2019 Aug 08
10
turn hmm migrate_vma upside down v2
Hi Jérôme, Ben and Jason,
below is a series against the hmm tree which starts revamping the
migrate_vma functionality. The prime idea is to export three slightly
lower level functions and thus avoid the need for migrate_vma_ops
callbacks.
Diffstat:
5 files changed, 281 insertions(+), 607 deletions(-)
A git tree is also available at:
git://git.infradead.org/users/hch/misc.git
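For reference, the three lower-level functions this series exports landed in mainline as migrate_vma_setup(), migrate_vma_pages() and migrate_vma_finalize(). A minimal caller sketch of the resulting flow (the function name is illustrative and the driver-side copy step is elided):

#include <linux/migrate.h>

static int demo_migrate_range(struct vm_area_struct *vma,
			      unsigned long start, unsigned long end,
			      unsigned long *src, unsigned long *dst)
{
	struct migrate_vma args = {
		.vma   = vma,
		.start = start,
		.end   = end,
		.src   = src,
		.dst   = dst,
	};
	int ret;

	ret = migrate_vma_setup(&args);	/* collect and isolate source pages */
	if (ret)
		return ret;

	/* ...allocate destination pages, copy the data, fill args.dst... */

	migrate_vma_pages(&args);	/* install migrated pages in the page table */
	migrate_vma_finalize(&args);	/* unlock and release source/destination pages */
	return 0;
}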
2024 Oct 15
5
[PATCH v1 0/4] GPU Direct RDMA (P2P DMA) for Device Private Pages
From: Yonatan Maman <Ymaman@Nvidia.com>
This patch series aims to enable Peer-to-Peer (P2P) DMA access in
GPU-centric applications that utilize RDMA and private device pages. This
enhancement is crucial for minimizing data transfer overhead by allowing
the GPU to directly expose device private page data to devices such as
NICs, eliminating the need to traverse system RAM, which is the
2019 Jul 29
24
turn the hmm migrate_vma upside down
Hi Jérôme, Ben and Jason,
below is a series against the hmm tree which starts revamping the
migrate_vma functionality. The prime idea is to export three slightly
lower level functions and thus avoid the need for migrate_vma_ops
callbacks.
Diffstat:
4 files changed, 285 insertions(+), 602 deletions(-)
A git tree is also available at:
git://git.infradead.org/users/hch/misc.git
2023 Aug 29
1
[PATCH drm-misc-next] drm/nouveau: fence: fix undefined fence state after emit
...fence_emit(fence, dmem->migrate.chan);
+	nouveau_fence_new(&fence, dmem->migrate.chan);
 	migrate_vma_pages(&args);
 	nouveau_dmem_fence_done(&fence);
 	dma_unmap_page(drm->dev->dev, dma_addr, PAGE_SIZE, DMA_BIDIRECTIONAL);
@@ -403,8 +402,7 @@ nouveau_dmem_evict_chunk(struct nouveau_dmem_chunk *chunk)
 		}
 	}
 
-	if (!nouveau_fence_new(&fence))
-		nouveau_fence_emit(fence, chunk->drm->dmem->migrate.chan);
+	nouveau_fence_new(&fence, chunk->drm->dmem->migrate.chan);
 	migrate_device_pages(src_pfns, dst_pfns, npages);
 	nouveau_dmem_fence_done(&fence);
migr...
2019 Jul 29
0
[PATCH 6/9] nouveau: simplify nouveau_dmem_migrate_vma
...struct nouveau_dmem, pagemap);
 }
 
-struct nouveau_migrate {
-	struct vm_area_struct *vma;
-	struct nouveau_drm *drm;
-	struct nouveau_fence *fence;
-	unsigned long npages;
-	dma_addr_t *dma;
-	unsigned long dma_nr;
-};
-
 static unsigned long nouveau_dmem_page_addr(struct page *page)
 {
 	struct nouveau_dmem_chunk *chunk = page->zone_device_data;
@@ -569,131 +558,67 @@ nouveau_dmem_init(struct nouveau_drm *drm)
 	drm->dmem = NULL;
 }
 
-static void
-nouveau_dmem_migrate_alloc_and_copy(struct vm_area_struct *vma,
-				    const unsigned long *src_pfns,
-				    unsigned long *dst_pfns,
-				    unsigned...
2024 Dec 01
5
[RFC 0/5] GPU Direct RDMA (P2P DMA) for Device Private Pages
From: Yonatan Maman <Ymaman@Nvidia.com>
Based on: Provide a new two step DMA mapping API patchset
https://lore.kernel.org/kvm/20241114170247.GA5813@lst.de/T/#t
This patch series aims to enable Peer-to-Peer (P2P) DMA access in
GPU-centric applications that utilize RDMA and private device pages. This
enhancement reduces data transfer overhead by allowing the GPU to directly
expose
2019 Aug 14
20
turn hmm migrate_vma upside down v3
Hi Jérôme, Ben and Jason,
below is a series against the hmm tree which starts revamping the
migrate_vma functionality. The prime idea is to export three slightly
lower level functions and thus avoid the need for migrate_vma_ops
callbacks.
Diffstat:
7 files changed, 282 insertions(+), 614 deletions(-)
A git tree is also available at:
git://git.infradead.org/users/hch/misc.git
2020 May 08
11
[PATCH 0/6] nouveau/hmm: add support for mapping large pages
...Nouveau and the HMM tests are updated to use the new flag.
Note that this series depends on a patch queued in Ben Skeggs' nouveau
tree ("nouveau/hmm: map pages after migration") and the patches queued
in Jason's HMM tree.
There is also a patch outstanding ("nouveau/hmm: fix nouveau_dmem_chunk
allocations") that is independent of the above and could be applied
before or after.
Ralph Campbell (6):
nouveau/hmm: map pages after migration
nouveau: make nvkm_vmm_ctor() and nvkm_mmu_ptp_get() static
nouveau/hmm: fault one page at a time
mm/hmm: add output flag for compound page...