Displaying 20 results from an estimated 21 matches for "gp100_vmm_pgt_pfn".
2020 Apr 22
2
[PATCH hmm 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
...> + * .mthd = nvkm_uvmm_mthd
> + * nvkm_uvmm_mthd()
> + * nvkm_uvmm_mthd_pfnmap()
> + * nvkm_vmm_pfn_map()
> + * nvkm_vmm_ptes_get_map()
> + * func == gp100_vmm_pgt_pfn
> + * struct nvkm_vmm_desc_func gp100_vmm_desc_spt:
> + * .pfn = gp100_vmm_pgt_pfn
> + * nvkm_vmm_iter()
> + * REF_PTES == func == gp100_vmm_pgt_pfn()
> + * dma_map_page(...
2020 May 02
1
[PATCH hmm v2 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
...unc(..)
> nvkm_ioctl_mthd()
> nvkm_object_mthd()
> struct nvkm_object_func nvkm_uvmm:
> .mthd = nvkm_uvmm_mthd
> nvkm_uvmm_mthd()
> nvkm_uvmm_mthd_pfnmap()
> nvkm_vmm_pfn_map()
> nvkm_vmm_ptes_get_map()
> func == gp100_vmm_pgt_pfn
> struct nvkm_vmm_desc_func gp100_vmm_desc_spt:
> .pfn = gp100_vmm_pgt_pfn
> nvkm_vmm_iter()
> REF_PTES == func == gp100_vmm_pgt_pfn()
> dma_map_page()
>
> Acked-by: Felix Kuehling <Felix.Kuehling at amd.com>
> Tested-by: Ralph Campbell <rca...
2020 Apr 22
0
[PATCH hmm 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
....mthd = nvkm_uvmm_mthd
> > + * nvkm_uvmm_mthd()
> > + * nvkm_uvmm_mthd_pfnmap()
> > + * nvkm_vmm_pfn_map()
> > + * nvkm_vmm_ptes_get_map()
> > + * func == gp100_vmm_pgt_pfn
> > + * struct nvkm_vmm_desc_func gp100_vmm_desc_spt:
> > + * .pfn = gp100_vmm_pgt_pfn
> > + * nvkm_vmm_iter()
> > + * REF_PTES == func == gp100_vmm_pgt_pfn()
> > +...
2020 Jun 19
0
[PATCH 10/16] nouveau/hmm: support mapping large sysmem pages
...ddr)
+ struct hmm_range *range,
+ struct nouveau_pfnmap_args *args)
{
struct page *page;
/*
- * The ioctl_addr prepared here is passed through nvif_object_ioctl()
+ * The address prepared here is passed through nvif_object_ioctl()
* to an eventual DMA map in something like gp100_vmm_pgt_pfn()
*
* This is all just encoding the internal hmm representation into a
* different nouveau internal representation.
*/
if (!(range->hmm_pfns[0] & HMM_PFN_VALID)) {
- ioctl_addr[0] = 0;
+ args->p.phys[0] = 0;
return;
}
page = hmm_pfn_to_page(range->hmm_pfns[0]);...
2020 Jul 06
0
[PATCH 1/5] nouveau: fix storing invalid ptes
...tions(-)
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c
index ed37fddd063f..7eabe9fe0d2b 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c
@@ -79,8 +79,12 @@ gp100_vmm_pgt_pfn(struct nvkm_vmm *vmm, struct nvkm_mmu_pt *pt,
dma_addr_t addr;
nvkm_kmap(pt->memory);
- while (ptes--) {
+ for (; ptes; ptes--, map->pfn++) {
u64 data = 0;
+
+ if (!(*map->pfn & NVKM_VMM_PFN_V))
+ continue;
+
if (!(*map->pfn & NVKM_VMM_PFN_W))
data |= BIT_ULL(...
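Seen on its own, the hunk above is just a loop restructure, so here is a self-contained sketch of the pattern it implements: entries that hmm_range_fault() left invalid must be skipped rather than encoded, while still advancing through the PFN array one entry per PTE slot. The names (PFN_V, PFN_W, encode_ptes) are illustrative stand-ins, not the nouveau definitions.

#include <stdint.h>

/* Stand-ins for the NVKM_VMM_PFN_V / NVKM_VMM_PFN_W bits in the patch:
 * bit 0 = valid, bit 1 = writable, remaining bits = address payload.
 */
#define PFN_V	(1ull << 0)
#define PFN_W	(1ull << 1)

/* Walk the source array and the PTE slots in lock step; entries whose
 * valid bit is clear are skipped (leaving the PTE untouched) instead of
 * being encoded as if they were real mappings - the bug being fixed.
 */
static void encode_ptes(uint64_t *pte, const uint64_t *pfn, unsigned int ptes)
{
	for (; ptes; ptes--, pfn++, pte++) {
		uint64_t data = 0;

		if (!(*pfn & PFN_V))
			continue;

		if (*pfn & PFN_W)
			data |= PFN_W;			/* keep the write permission */
		data |= *pfn & ~(PFN_V | PFN_W);	/* address payload */
		*pte = data;
	}
}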
2020 Jul 13
0
[PATCH v2 1/5] nouveau: fix storing invalid ptes
...tions(-)
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c
index ed37fddd063f..7eabe9fe0d2b 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c
@@ -79,8 +79,12 @@ gp100_vmm_pgt_pfn(struct nvkm_vmm *vmm, struct nvkm_mmu_pt *pt,
dma_addr_t addr;
nvkm_kmap(pt->memory);
- while (ptes--) {
+ for (; ptes; ptes--, map->pfn++) {
u64 data = 0;
+
+ if (!(*map->pfn & NVKM_VMM_PFN_V))
+ continue;
+
if (!(*map->pfn & NVKM_VMM_PFN_W))
data |= BIT_ULL(...
2020 Jul 23
0
[PATCH v4 1/6] nouveau: fix storing invalid ptes
...tions(-)
diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c
index ed37fddd063f..7eabe9fe0d2b 100644
--- a/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c
+++ b/drivers/gpu/drm/nouveau/nvkm/subdev/mmu/vmmgp100.c
@@ -79,8 +79,12 @@ gp100_vmm_pgt_pfn(struct nvkm_vmm *vmm, struct nvkm_mmu_pt *pt,
dma_addr_t addr;
nvkm_kmap(pt->memory);
- while (ptes--) {
+ for (; ptes; ptes--, map->pfn++) {
u64 data = 0;
+
+ if (!(*map->pfn & NVKM_VMM_PFN_V))
+ continue;
+
if (!(*map->pfn & NVKM_VMM_PFN_W))
data |= BIT_ULL(...
2020 May 01
13
[PATCH hmm v2 0/5] Adjust hmm_range_fault() API
From: Jason Gunthorpe <jgg at mellanox.com>
The API is a bit complicated for the uses we actually have, and
discussions for simplifying have come up a number of times.
This small series removes the customizable pfn format and simplifies the
return code of hmm_range_fault()
All the drivers are adjusted to process in the simplified format.
I would appreciate tested-by's for the two
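For orientation, this is roughly what a consumer looks like after the change: hmm_range_fault() fills a plain array of unsigned long entries with HMM_PFN_* flag bits, instead of a driver-defined pfn format. A minimal sketch assuming the post-series API (hmm_pfns[], HMM_PFN_VALID, HMM_PFN_WRITE, hmm_pfn_to_page()); the device-mapping step is left as a comment.

#include <linux/hmm.h>
#include <linux/mm.h>
#include <linux/types.h>

/* Walk the simplified hmm_range_fault() output: each entry is either
 * absent (no HMM_PFN_VALID) or a PFN plus permission flags that
 * hmm_pfn_to_page() can decode directly.
 */
static void process_range(struct hmm_range *range)
{
	unsigned long npages = (range->end - range->start) >> PAGE_SHIFT;
	unsigned long i;

	for (i = 0; i < npages; i++) {
		struct page *page;
		bool writable;

		if (!(range->hmm_pfns[i] & HMM_PFN_VALID))
			continue;	/* nothing mapped at this offset */

		page = hmm_pfn_to_page(range->hmm_pfns[i]);
		writable = range->hmm_pfns[i] & HMM_PFN_WRITE;

		/* ... program the device page table entry for 'page',
		 * read-only unless 'writable' ... */
		(void)page;
		(void)writable;
	}
}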
2020 May 01
0
[PATCH hmm v2 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
...vkm_ioctl_path()
nvkm_ioctl_v0[type].func(..)
nvkm_ioctl_mthd()
nvkm_object_mthd()
struct nvkm_object_func nvkm_uvmm:
.mthd = nvkm_uvmm_mthd
nvkm_uvmm_mthd()
nvkm_uvmm_mthd_pfnmap()
nvkm_vmm_pfn_map()
nvkm_vmm_ptes_get_map()
func == gp100_vmm_pgt_pfn
struct nvkm_vmm_desc_func gp100_vmm_desc_spt:
.pfn = gp100_vmm_pgt_pfn
nvkm_vmm_iter()
REF_PTES == func == gp100_vmm_pgt_pfn()
dma_map_page()
Acked-by: Felix Kuehling <Felix.Kuehling at amd.com>
Tested-by: Ralph Campbell <rcampbell at nvidia.com>
Signed-off-by:...
2020 Apr 22
11
[PATCH hmm 0/5] Adjust hmm_range_fault() API
From: Jason Gunthorpe <jgg at mellanox.com>
The API is a bit complicated for the uses we actually have, and
discussions for simplifying have come up a number of times.
This small series removes the customizable pfn format and simplifies the
return code of hmm_range_fault()
All the drivers are adjusted to process in the simplified format.
I would appreciate tested-by's for the two
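The other half of the simplification is the return code: after this series hmm_range_fault() returns 0 or a negative errno rather than a count of valid pfns, with -EBUSY meaning the walk collided with an invalidation and should simply be retried. A sketch of the usual retry loop, following the pattern documented for mmu interval notifiers (the driver lock that serializes device page-table updates is left as a comment):

#include <linux/hmm.h>
#include <linux/mmu_notifier.h>
#include <linux/mm.h>

/* Fault a range and retry until the result is known to be consistent
 * with the interval notifier's sequence count. Returns 0 or -errno.
 */
static int fault_range(struct mmu_interval_notifier *notifier,
		       struct hmm_range *range, struct mm_struct *mm)
{
	int ret;

	range->notifier = notifier;
again:
	range->notifier_seq = mmu_interval_read_begin(notifier);
	mmap_read_lock(mm);
	ret = hmm_range_fault(range);
	mmap_read_unlock(mm);
	if (ret) {
		if (ret == -EBUSY)
			goto again;	/* raced with an invalidation */
		return ret;
	}

	/* Take the driver lock used by the invalidate callback, then make
	 * sure no invalidation slipped in while faulting.
	 */
	if (mmu_interval_read_retry(notifier, range->notifier_seq))
		goto again;
	return 0;
}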
2020 May 08
11
[PATCH 0/6] nouveau/hmm: add support for mapping large pages
hmm_range_fault() returns an array of page frame numbers and flags for
how the pages are mapped in the requested process' page tables. The PFN
can be used to get the struct page with hmm_pfn_to_page() and the page size
order can be determined with compound_order(page) but if the page is larger
than order 0 (PAGE_SIZE), there is no indication that the page is mapped
using a larger page size. To
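Concretely, the lookup described above is just this (a sketch of the state before the series, which is exactly why a new flag is needed: compound_order() reports the allocation size of the page, not how the CPU has it mapped):

#include <linux/hmm.h>
#include <linux/mm.h>

/* Recover the struct page behind one hmm_range_fault() entry and its
 * allocation order. Nothing here says whether the CPU mapping is a
 * PTE, PMD or PUD - the gap this series fills.
 */
static unsigned int pfn_entry_order(unsigned long hmm_pfn)
{
	struct page *page;

	if (!(hmm_pfn & HMM_PFN_VALID))
		return 0;

	page = compound_head(hmm_pfn_to_page(hmm_pfn));
	return compound_order(page);
}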
2020 Apr 22
0
[PATCH hmm 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
...t nvkm_object_func nvkm_uvmm:
+ * .mthd = nvkm_uvmm_mthd
+ * nvkm_uvmm_mthd()
+ * nvkm_uvmm_mthd_pfnmap()
+ * nvkm_vmm_pfn_map()
+ * nvkm_vmm_ptes_get_map()
+ * func == gp100_vmm_pgt_pfn
+ * struct nvkm_vmm_desc_func gp100_vmm_desc_spt:
+ * .pfn = gp100_vmm_pgt_pfn
+ * nvkm_vmm_iter()
+ * REF_PTES == func == gp100_vmm_pgt_pfn()
+ * dma_map_page()
+ *
+ * This is all j...
2020 Apr 22
1
[PATCH hmm 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
...> + * .mthd = nvkm_uvmm_mthd
> + * nvkm_uvmm_mthd()
> + * nvkm_uvmm_mthd_pfnmap()
> + * nvkm_vmm_pfn_map()
> + * nvkm_vmm_ptes_get_map()
> + * func == gp100_vmm_pgt_pfn
> + * struct nvkm_vmm_desc_func gp100_vmm_desc_spt:
> + * .pfn = gp100_vmm_pgt_pfn
> + * nvkm_vmm_iter()
> + * REF_PTES == func == gp100_vmm_pgt_pfn()
> + * dma_map_page(...
2020 Jul 06
8
[PATCH 0/5] mm/migrate: avoid device private invalidations
The goal for this series is to avoid device private memory TLB
invalidations when migrating a range of addresses from system
memory to device private memory and some of those pages have already
been migrated. The approach taken is to introduce a new mmu notifier
invalidation event type and use that in the device driver to skip
invalidation callbacks from migrate_vma_setup(). The device driver is
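In driver terms the approach above reduces to one test at the top of the interval-notifier invalidate callback. A sketch, assuming the event ends up as MMU_NOTIFY_MIGRATE and that the range carries an owner pointer set by migrate_vma_setup() (the exact field name changed across kernel versions, so treat it as an assumption):

#include <linux/mmu_notifier.h>
#include <linux/mutex.h>
#include <linux/kernel.h>

struct my_device {
	struct mmu_interval_notifier notifier;
	struct mutex mutex;
};

/* Skip invalidations generated by this device's own migrations: the new
 * event type plus an owner match identify them, and the driver updates
 * its page tables itself as part of the migration.
 */
static bool my_invalidate(struct mmu_interval_notifier *mni,
			  const struct mmu_notifier_range *range,
			  unsigned long cur_seq)
{
	struct my_device *dev = container_of(mni, struct my_device, notifier);

	if (range->event == MMU_NOTIFY_MIGRATE &&
	    range->owner == dev)		/* owner field name is assumed */
		return true;

	if (mmu_notifier_range_blockable(range))
		mutex_lock(&dev->mutex);
	else if (!mutex_trylock(&dev->mutex))
		return false;

	mmu_interval_set_seq(mni, cur_seq);
	/* ... invalidate device page tables for [range->start, range->end) ... */
	mutex_unlock(&dev->mutex);
	return true;
}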
2020 Jul 21
6
[PATCH v3 0/5] mm/migrate: avoid device private invalidations
The goal for this series is to avoid device private memory TLB
invalidations when migrating a range of addresses from system
memory to device private memory and some of those pages have already
been migrated. The approach taken is to introduce a new mmu notifier
invalidation event type and use that in the device driver to skip
invalidation callbacks from migrate_vma_setup(). The device driver is
2020 Jul 13
9
[PATCH v2 0/5] mm/migrate: avoid device private invalidations
The goal for this series is to avoid device private memory TLB
invalidations when migrating a range of addresses from system
memory to device private memory and some of those pages have already
been migrated. The approach taken is to introduce a new mmu notifier
invalidation event type and use that in the device driver to skip
invalidation callbacks from migrate_vma_setup(). The device driver is
2020 Jul 23
9
[PATCH v4 0/6] mm/migrate: avoid device private invalidations
The goal for this series is to avoid device private memory TLB
invalidations when migrating a range of addresses from system
memory to device private memory and some of those pages have already
been migrated. The approach taken is to introduce a new mmu notifier
invalidation event type and use that in the device driver to skip
invalidation callbacks from migrate_vma_setup(). The device driver is
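The producing side of the same mechanism is the migration call itself: the driver tags the operation with an owner pointer so that its own notifier callback (as sketched earlier) can recognize and skip the resulting invalidation. A sketch using the field and flag names from current kernels (pgmap_owner, MIGRATE_VMA_SELECT_SYSTEM), which are assumptions about the merged form rather than quotes from this series:

#include <linux/migrate.h>
#include <linux/mm.h>

struct my_device;	/* same owner cookie the notifier callback compares */

/* Migrate [start, end) of system memory to device private memory,
 * identifying ourselves via pgmap_owner so our notifier can skip the
 * MMU_NOTIFY_MIGRATE invalidation this call generates.
 */
static int my_migrate_to_device(struct my_device *dev,
				struct vm_area_struct *vma,
				unsigned long start, unsigned long end,
				unsigned long *src, unsigned long *dst)
{
	struct migrate_vma args = {
		.vma		= vma,
		.start		= start,
		.end		= end,
		.src		= src,
		.dst		= dst,
		.pgmap_owner	= dev,
		.flags		= MIGRATE_VMA_SELECT_SYSTEM,
	};
	int ret;

	ret = migrate_vma_setup(&args);
	if (ret)
		return ret;

	/* ... allocate device pages, copy the data, fill args.dst ... */

	migrate_vma_pages(&args);
	/* ... wait for copies to complete ... */
	migrate_vma_finalize(&args);
	return 0;
}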
2020 May 05
1
[PATCH hmm v2 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
...atic void nouveau_hmm_convert_pfn(struct nouveau_drm *drm,
> + struct hmm_range *range, u64 *ioctl_addr)
> +{
> + unsigned long i, npages;
> +
> + /*
> + * The ioctl_addr prepared here is passed through nvif_object_ioctl()
> + * to an eventual DMA map in something like gp100_vmm_pgt_pfn()
> + *
> + * This is all just encoding the internal hmm reprensetation into a
"representation"
...
> @@ -542,12 +564,15 @@ static int nouveau_range_fault(struct nouveau_svmm *svmm,
> return -EBUSY;
>
> range.notifier_seq = mmu_interval_read_begin(range....
2020 Jun 19
22
[PATCH 00/16] mm/hmm/nouveau: THP mapping and migration
These patches apply to linux-5.8.0-rc1. Patches 1-3 should probably go
into 5.8, the others can be queued for 5.9. Patches 4-6 improve the HMM
self tests. Patch 7-8 prepare nouveau for the meat of this series which
adds support and testing for compound page mapping of system memory
(patches 9-11) and compound page migration to device private memory
(patches 12-16). Since these changes are split
2020 Jun 30
6
[PATCH v2 0/5] mm/hmm/nouveau: add PMD system memory mapping
The goal for this series is to introduce the hmm_range_fault() output
array flags HMM_PFN_PMD and HMM_PFN_PUD. This allows a device driver to
know that a given 4K PFN is actually mapped by the CPU using either a
PMD sized or PUD sized CPU page table entry and therefore the device
driver can safely map system memory using larger device MMU PTEs.
The series is based on 5.8.0-rc3 and is intended for
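A sketch of how a driver would consume the proposed flags. HMM_PFN_PMD and HMM_PFN_PUD are the names this series proposes (the interface that eventually landed encodes an order via hmm_pfn_to_map_order() instead), so the constants are illustrative:

#include <linux/hmm.h>
#include <linux/pgtable.h>

/* Choose a device mapping size from the CPU mapping size reported for
 * one hmm_range_fault() entry, assuming the device MMU supports the
 * same sizes as the CPU (PMD_SHIFT / PUD_SHIFT).
 */
static unsigned int device_map_shift(unsigned long hmm_pfn)
{
	if (!(hmm_pfn & HMM_PFN_VALID))
		return 0;		/* nothing mapped */
	if (hmm_pfn & HMM_PFN_PUD)
		return PUD_SHIFT;	/* PUD-sized CPU mapping */
	if (hmm_pfn & HMM_PFN_PMD)
		return PMD_SHIFT;	/* PMD-sized CPU mapping */
	return PAGE_SHIFT;		/* base page */
}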