
Displaying 18 results from an estimated 18 matches for "nvkm_vmm_pfn_map".

2022 Oct 29
3
[PATCH] drm/nouveau/mmu: fix use-after-free bug in nvkm_vmm_pfn_map
...ertion(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
index ae793f400ba1..04befd28f80b 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
@@ -1272,8 +1272,7 @@ nvkm_vmm_pfn_map(struct nvkm_vmm *vmm, u8 shift, u64 addr, u64 size, u64 *pfn)
 					page -
 					vmm->func->page, map);
 		if (WARN_ON(!tmp)) {
-			ret = -ENOMEM;
-			goto next;
+			return -ENOMEM;
 		}
 
 		if ((tmp->mapped = map))
--
2.25.1
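The fix above replaces an error path that jumped to the next loop iteration with an immediate return. A minimal, self-contained C sketch of the pattern (not the nouveau code itself; split_or_merge() is a hypothetical stand-in for the split/merge helper in the hunk):

#include <errno.h>
#include <stddef.h>

struct node {
	struct node *next;
	int mapped;
};

/* hypothetical helper standing in for the split/merge call shown above */
struct node *split_or_merge(struct node *n);

int map_range(struct node *head)
{
	struct node *n, *tmp;

	for (n = head; n; n = n->next) {
		tmp = split_or_merge(n);
		if (!tmp)
			return -ENOMEM;	/* was "goto next"; the quoted fix
					 * returns immediately instead of
					 * continuing the loop */
		tmp->mapped = 1;
	}
	return 0;
}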
2023 Mar 07
0
[PATCH] drm/nouveau/mmu: fix use-after-free bug in nvkm_vmm_pfn_map
...> diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
> index ae793f400ba1..04befd28f80b 100644
> --- a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
> +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
> @@ -1272,8 +1272,7 @@ nvkm_vmm_pfn_map(struct nvkm_vmm *vmm, u8 shift, u64 addr, u64 size, u64 *pfn)
>  					page -
>  					vmm->func->page, map);
>  		if (WARN_ON(!tmp)) {
> -			ret = -ENOMEM;
> -			goto next;
> +			return -ENOMEM;
>  		}
> 
>  		if ((tmp->mapped = map))
-- C...
2023 Mar 07
1
[PATCH] drm/nouveau/mmu: fix use-after-free bug in nvkm_vmm_pfn_map
...> diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
> index ae793f400ba1..04befd28f80b 100644
> --- a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
> +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
> @@ -1272,8 +1272,7 @@ nvkm_vmm_pfn_map(struct nvkm_vmm *vmm, u8 shift, u64 addr, u64 size, u64 *pfn)
>  					page -
>  					vmm->func->page, map);
>  		if (WARN_ON(!tmp)) {
> -			ret = -ENOMEM;
> -			goto next;
> +			return -ENOMEM;
>  		}
> 
>  		if ((tmp->mapped = map))
-- C...
2020 Apr 22
2
[PATCH hmm 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
...l_mthd()
> + *    nvkm_object_mthd()
> + *     struct nvkm_object_func nvkm_uvmm:
> + *      .mthd = nvkm_uvmm_mthd
> + *       nvkm_uvmm_mthd()
> + *        nvkm_uvmm_mthd_pfnmap()
> + *         nvkm_vmm_pfn_map()
> + *          nvkm_vmm_ptes_get_map()
> + *           func == gp100_vmm_pgt_pfn
> + *            struct nvkm_vmm_desc_func gp100_vmm_desc_spt:
> + *             .pfn = gp100_vmm_pgt_pfn
> + *              nvkm_v...
2024 Jan 23
0
[PATCH 48/82] drm/nouveau/mmu: Refactor intentional wrap-around test
...sertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
index 6ca1a82ccbc1..87c0903be9a7 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmm.c
@@ -1291,7 +1291,7 @@ nvkm_vmm_pfn_map(struct nvkm_vmm *vmm, u8 shift, u64 addr, u64 size, u64 *pfn)
 	if (!page->shift || !IS_ALIGNED(addr, 1ULL << shift) ||
 	    !IS_ALIGNED(size, 1ULL << shift) ||
-	    addr + size < addr || addr + size > vmm->limit) {
+	    add_would_overflow(addr, size) || addr + size &...
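For reference, a standalone sketch of the same intent using the compiler's checked-add builtin (add_would_overflow() is the helper named by the quoted series; __builtin_add_overflow() is what GCC/Clang provide, and the kernel wraps it as check_add_overflow() in <linux/overflow.h>):

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical range check mirroring the hunk above: reject the mapping
 * if addr + size wraps around or exceeds the address-space limit. */
static bool range_ok(uint64_t addr, uint64_t size, uint64_t limit)
{
	uint64_t end;

	/* old style relied on intentional wrap-around: addr + size < addr */
	if (__builtin_add_overflow(addr, size, &end))
		return false;
	return end <= limit;
}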
2020 Apr 22
0
[PATCH hmm 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
...nvkm_object_mthd()
> > + *     struct nvkm_object_func nvkm_uvmm:
> > + *      .mthd = nvkm_uvmm_mthd
> > + *       nvkm_uvmm_mthd()
> > + *        nvkm_uvmm_mthd_pfnmap()
> > + *         nvkm_vmm_pfn_map()
> > + *          nvkm_vmm_ptes_get_map()
> > + *           func == gp100_vmm_pgt_pfn
> > + *            struct nvkm_vmm_desc_func gp100_vmm_desc_spt:
> > + *             .pfn = gp100_vmm_pgt_pfn
> > + *...
2020 May 02
1
[PATCH hmm v2 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
...nvkm_ioctl()
>  nvkm_ioctl_path()
>   nvkm_ioctl_v0[type].func(..)
>    nvkm_ioctl_mthd()
>     nvkm_object_mthd()
>      struct nvkm_object_func nvkm_uvmm:
>       .mthd = nvkm_uvmm_mthd
>        nvkm_uvmm_mthd()
>         nvkm_uvmm_mthd_pfnmap()
>          nvkm_vmm_pfn_map()
>           nvkm_vmm_ptes_get_map()
>            func == gp100_vmm_pgt_pfn
>             struct nvkm_vmm_desc_func gp100_vmm_desc_spt:
>              .pfn = gp100_vmm_pgt_pfn
>               nvkm_vmm_iter()
>                REF_PTES == func == gp100_vmm_pgt_pfn()
>                 dma_map_page()
>
> Acked-by: Felix Ku...
2020 Jul 01
0
[PATCH v3 3/5] nouveau: fix mapping 2MB sysmem pages
...TODO:
  * - Avoid PT readback (for dma_unmap etc), this might end up being dealt
  *   with inside HMM, which would be a lot nicer for us to deal with.
- * - Multiple page sizes (particularly for huge page support).
  * - Support for systems without a 4KiB page size.
  */
 int
@@ -1220,8 +1219,8 @@ nvkm_vmm_pfn_map(struct nvkm_vmm *vmm, u8 shift, u64 addr, u64 size, u64 *pfn)
 	/* Only support mapping where the page size of the incoming page
 	 * array matches a page size available for direct mapping.
 	 */
-	while (page->shift && page->shift != shift &&
-	       page->desc->func-&...
2020 Jun 19
0
[PATCH 10/16] nouveau/hmm: support mapping large sysmem pages
...TODO:
  * - Avoid PT readback (for dma_unmap etc), this might end up being dealt
  *   with inside HMM, which would be a lot nicer for us to deal with.
- * - Multiple page sizes (particularly for huge page support).
  * - Support for systems without a 4KiB page size.
  */
 int
@@ -1220,8 +1222,8 @@ nvkm_vmm_pfn_map(struct nvkm_vmm *vmm, u8 shift, u64 addr, u64 size, u64 *pfn)
 	/* Only support mapping where the page size of the incoming page
 	 * array matches a page size available for direct mapping.
 	 */
-	while (page->shift && page->shift != shift &&
-	       page->desc->func-&...
2020 Jul 01
8
[PATCH v3 0/5] mm/hmm/nouveau: add PMD system memory mapping
The goal for this series is to introduce the hmm_pfn_to_map_order() function. This allows a device driver to know that a given 4K PFN is actually mapped by the CPU using a larger sized CPU page table entry and therefore the device driver can safely map system memory using larger device MMU PTEs. The series is based on 5.8.0-rc3 and is intended for Jason Gunthorpe's hmm tree. These were
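A hedged sketch of how a driver might consume hmm_pfn_to_map_order() after a successful hmm_range_fault() call; device_map_pte() is a made-up driver hook, and alignment handling is deliberately simplified:

#include <linux/hmm.h>
#include <linux/mm.h>

/* hypothetical driver hook that writes one device PTE of the given order */
static void device_map_pte(struct page *page, unsigned int order);

static void map_faulted_range(struct hmm_range *range, unsigned long npfns)
{
	unsigned long i = 0;

	while (i < npfns) {
		unsigned long hmm_pfn = range->hmm_pfns[i];
		/* order of the CPU page table entry backing this 4K PFN */
		unsigned int order = hmm_pfn_to_map_order(hmm_pfn);

		if (hmm_pfn & HMM_PFN_VALID)
			device_map_pte(hmm_pfn_to_page(hmm_pfn), order);

		/* simplistic: assumes i is aligned to the reported order */
		i += 1UL << order;
	}
}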
2020 Apr 22
11
[PATCH hmm 0/5] Adjust hmm_range_fault() API
From: Jason Gunthorpe <jgg at mellanox.com>
The API is a bit complicated for the uses we actually have, and discussions for simplifying have come up a number of times. This small series removes the customizable pfn format and simplifies the return code of hmm_range_fault(). All the drivers are adjusted to process in the simplified format. I would appreciate tested-by's for the two
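For context, a minimal sketch of the simplified calling convention the cover letter describes: a single hmm_pfns[] output array and a plain 0 / -errno return value. The mmu_interval_notifier setup and the -EBUSY retry loop a real driver needs are only hinted at here:

#include <linux/hmm.h>
#include <linux/mmu_notifier.h>

static int fault_range(struct mmu_interval_notifier *notifier,
		       unsigned long start, unsigned long npages,
		       unsigned long *hmm_pfns)
{
	struct hmm_range range = {
		.notifier	= notifier,
		.start		= start,
		.end		= start + (npages << PAGE_SHIFT),
		.hmm_pfns	= hmm_pfns,
		.default_flags	= HMM_PFN_REQ_FAULT,
	};
	int ret;

	range.notifier_seq = mmu_interval_read_begin(notifier);
	mmap_read_lock(notifier->mm);
	ret = hmm_range_fault(&range);	/* simplified: 0 on success, -errno on failure */
	mmap_read_unlock(notifier->mm);

	return ret;	/* caller retries on -EBUSY after re-reading the notifier seq */
}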
2020 Jun 30
6
[PATCH v2 0/5] mm/hmm/nouveau: add PMD system memory mapping
The goal for this series is to introduce the hmm_range_fault() output array flags HMM_PFN_PMD and HMM_PFN_PUD. This allows a device driver to know that a given 4K PFN is actually mapped by the CPU using either a PMD sized or PUD sized CPU page table entry and therefore the device driver can safely map system memory using larger device MMU PTEs. The series is based on 5.8.0-rc3 and is intended for
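Against the v2 proposal quoted above, a tiny sketch of turning the proposed output flags into a mapping shift (HMM_PFN_PMD and HMM_PFN_PUD are the flags this series introduces; the v3 of the series replaces them with hmm_pfn_to_map_order()):

#include <linux/hmm.h>
#include <linux/pgtable.h>

/* hypothetical helper: translate the proposed v2 output flags into a shift
 * the driver can use when sizing its own device PTEs */
static unsigned int pfn_map_shift(unsigned long hmm_pfn)
{
	if (hmm_pfn & HMM_PFN_PUD)
		return PUD_SHIFT;	/* e.g. 1GiB on x86-64 */
	if (hmm_pfn & HMM_PFN_PMD)
		return PMD_SHIFT;	/* e.g. 2MiB on x86-64 */
	return PAGE_SHIFT;		/* base page, typically 4KiB */
}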
2020 Apr 22
0
[PATCH hmm 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
...)
+ *   nvkm_ioctl_mthd()
+ *    nvkm_object_mthd()
+ *     struct nvkm_object_func nvkm_uvmm:
+ *      .mthd = nvkm_uvmm_mthd
+ *       nvkm_uvmm_mthd()
+ *        nvkm_uvmm_mthd_pfnmap()
+ *         nvkm_vmm_pfn_map()
+ *          nvkm_vmm_ptes_get_map()
+ *           func == gp100_vmm_pgt_pfn
+ *            struct nvkm_vmm_desc_func gp100_vmm_desc_spt:
+ *             .pfn = gp100_vmm_pgt_pfn
+ *              nvkm_vmm_iter()
+ *...
2020 May 01
0
[PATCH hmm v2 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
...driver_nvkm:
 .ioctl = nvkm_client_ioctl
  nvkm_ioctl()
   nvkm_ioctl_path()
    nvkm_ioctl_v0[type].func(..)
     nvkm_ioctl_mthd()
      nvkm_object_mthd()
       struct nvkm_object_func nvkm_uvmm:
        .mthd = nvkm_uvmm_mthd
         nvkm_uvmm_mthd()
          nvkm_uvmm_mthd_pfnmap()
           nvkm_vmm_pfn_map()
            nvkm_vmm_ptes_get_map()
             func == gp100_vmm_pgt_pfn
              struct nvkm_vmm_desc_func gp100_vmm_desc_spt:
               .pfn = gp100_vmm_pgt_pfn
                nvkm_vmm_iter()
                 REF_PTES == func == gp100_vmm_pgt_pfn()
                  dma_map_page()

Acked-by: Felix Kuehling <Felix.Kuehling at amd.com>
Test...
2020 May 01
13
[PATCH hmm v2 0/5] Adjust hmm_range_fault() API
From: Jason Gunthorpe <jgg at mellanox.com>
The API is a bit complicated for the uses we actually have, and discussions for simplifying have come up a number of times. This small series removes the customizable pfn format and simplifies the return code of hmm_range_fault(). All the drivers are adjusted to process in the simplified format. I would appreciate tested-by's for the two
2020 Apr 22
1
[PATCH hmm 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
...l_mthd()
> + *    nvkm_object_mthd()
> + *     struct nvkm_object_func nvkm_uvmm:
> + *      .mthd = nvkm_uvmm_mthd
> + *       nvkm_uvmm_mthd()
> + *        nvkm_uvmm_mthd_pfnmap()
> + *         nvkm_vmm_pfn_map()
> + *          nvkm_vmm_ptes_get_map()
> + *           func == gp100_vmm_pgt_pfn
> + *            struct nvkm_vmm_desc_func gp100_vmm_desc_spt:
> + *             .pfn = gp100_vmm_pgt_pfn
> + *              nvkm_v...
2020 May 08
11
[PATCH 0/6] nouveau/hmm: add support for mapping large pages
hmm_range_fault() returns an array of page frame numbers and flags for how the pages are mapped in the requested process' page tables. The PFN can be used to get the struct page with hmm_pfn_to_page(), and the page size order can be determined with compound_order(page), but if the page is larger than order 0 (PAGE_SIZE), there is no indication that the page is mapped using a larger page size. To
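A short sketch of the situation this cover letter describes: hmm_pfn_to_page() plus compound_order() reveal how large the backing page is, but not, by themselves, how large the CPU mapping covering it is:

#include <linux/hmm.h>
#include <linux/mm.h>

static void inspect_pfn(unsigned long hmm_pfn)
{
	struct page *page;
	unsigned int order;

	if (!(hmm_pfn & HMM_PFN_VALID))
		return;

	page = hmm_pfn_to_page(hmm_pfn);
	order = compound_order(compound_head(page));

	/* order > 0 means a compound (e.g. THP) backing page, but without
	 * extra output from hmm_range_fault() the driver cannot assume the
	 * CPU entry covering this address is equally large. */
	pr_debug("pfn %#lx backed by an order-%u page\n",
		 page_to_pfn(page), order);
}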
2020 Jun 19
22
[PATCH 00/16] mm/hmm/nouveau: THP mapping and migration
These patches apply to linux-5.8.0-rc1. Patches 1-3 should probably go into 5.8; the others can be queued for 5.9. Patches 4-6 improve the HMM self tests. Patches 7-8 prepare nouveau for the meat of this series, which adds support and testing for compound page mapping of system memory (patches 9-11) and compound page migration to device private memory (patches 12-16). Since these changes are split