search for: hmm_pfns

Displaying 20 results from an estimated 23 matches for "hmm_pfns".

2020 Apr 22
0
[PATCH hmm 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
...if (!amdgpu_ttm_tt_is_readonly(ttm))
-		range->default_flags |= range->flags[HMM_PFN_WRITE];
+		range->default_flags |= HMM_PFN_REQ_WRITE;

-	range->pfns = kvmalloc_array(ttm->num_pages, sizeof(*range->pfns),
-				     GFP_KERNEL);
-	if (unlikely(!range->pfns)) {
+	range->hmm_pfns = kvmalloc_array(ttm->num_pages,
+					 sizeof(*range->hmm_pfns), GFP_KERNEL);
+	if (unlikely(!range->hmm_pfns)) {
 		r = -ENOMEM;
 		goto out_free_ranges;
 	}
@@ -867,7 +853,7 @@ int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, struct page **pages)
 * the notifier_lock, and mmu_in...
2020 May 01
0
[PATCH hmm v2 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
...if (!amdgpu_ttm_tt_is_readonly(ttm))
-		range->default_flags |= range->flags[HMM_PFN_WRITE];
+		range->default_flags |= HMM_PFN_REQ_WRITE;

-	range->pfns = kvmalloc_array(ttm->num_pages, sizeof(*range->pfns),
-				     GFP_KERNEL);
-	if (unlikely(!range->pfns)) {
+	range->hmm_pfns = kvmalloc_array(ttm->num_pages,
+					 sizeof(*range->hmm_pfns), GFP_KERNEL);
+	if (unlikely(!range->hmm_pfns)) {
 		r = -ENOMEM;
 		goto out_free_ranges;
 	}
@@ -867,7 +853,7 @@ int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, struct page **pages)
 * the notifier_lock, and mmu_in...
2020 Apr 22
1
[PATCH hmm 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
...m))
> -		range->default_flags |= range->flags[HMM_PFN_WRITE];
> +		range->default_flags |= HMM_PFN_REQ_WRITE;
>
> -	range->pfns = kvmalloc_array(ttm->num_pages, sizeof(*range->pfns),
> -				     GFP_KERNEL);
> -	if (unlikely(!range->pfns)) {
> +	range->hmm_pfns = kvmalloc_array(ttm->num_pages,
> +					 sizeof(*range->hmm_pfns), GFP_KERNEL);
> +	if (unlikely(!range->hmm_pfns)) {
> 		r = -ENOMEM;
> 		goto out_free_ranges;
> 	}
> @@ -867,7 +853,7 @@ int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, struct page **pages)
>...
2020 Jun 19
0
[PATCH 08/16] nouveau/hmm: fault one page at a time
...ses and requests hmm_range_fault() to populate and return the page frame
number of system memory mapped by the CPU. In preparation for supporting
large pages to be mapped by the GPU, process faults one page at a time. In
addition, use the hmm_range default_flags to fix a corner case where the
input hmm_pfns array is not reinitialized after hmm_range_fault() returns
-EBUSY and must be called again.

Signed-off-by: Ralph Campbell <rcampbell at nvidia.com>
---
 drivers/gpu/drm/nouveau/nouveau_svm.c | 199 +++++++++-----------------
 1 file changed, 66 insertions(+), 133 deletions(-)

diff --git a/dr...
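For context, the corner case this cover letter describes looks roughly like
the fragment below. This is a hedged sketch, not nouveau's actual code:
"range", "notifier" and "mm" stand in for the driver's real state, and the
API is the post-rework hmm_range_fault() (kernel >= 5.8).

	/* Request fault/write through default_flags instead of writing
	 * per-entry request values into range.hmm_pfns[]: a call that
	 * returns -EBUSY has already overwritten the array with output
	 * values, so per-entry requests would have to be rewritten
	 * before every retry. */
	range.default_flags = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_WRITE;

again:
	range.notifier_seq = mmu_interval_read_begin(notifier);
	mmap_read_lock(mm);
	ret = hmm_range_fault(&range);
	mmap_read_unlock(mm);
	if (ret == -EBUSY)
		goto again;	/* safe: no hmm_pfns[] re-initialization */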
2020 Jul 01
0
[PATCH v3 1/5] nouveau/hmm: fault one page at a time
...ses and requests hmm_range_fault() to populate and return the page frame
number of system memory mapped by the CPU. In preparation for supporting
large pages to be mapped by the GPU, process faults one page at a time. In
addition, use the hmm_range default_flags to fix a corner case where the
input hmm_pfns array is not reinitialized after hmm_range_fault() returns
-EBUSY and must be called again.

Signed-off-by: Ralph Campbell <rcampbell at nvidia.com>
---
 drivers/gpu/drm/nouveau/nouveau_svm.c | 199 +++++++++-----------------
 1 file changed, 66 insertions(+), 133 deletions(-)

diff --git a/dr...
2020 Apr 22
11
[PATCH hmm 0/5] Adjust hmm_range_fault() API
...register for mmu interval
 * notifier updates.
@@ -175,7 +160,7 @@ static inline struct dmirror_device *dmirror_page_to_device(struct page *page)
 static int dmirror_do_fault(struct dmirror *dmirror, struct hmm_range *range)
 {
-	uint64_t *pfns = range->pfns;
+	unsigned long *pfns = range->hmm_pfns;
 	unsigned long pfn;

 	for (pfn = (range->start >> PAGE_SHIFT);
@@ -188,15 +173,16 @@ static int dmirror_do_fault(struct dmirror *dmirror, struct hmm_range *range)
 	 * Since we asked for hmm_range_fault() to populate pages,
 	 * it shouldn't return an error entry on success....
2020 May 01
13
[PATCH hmm v2 0/5] Adjust hmm_range_fault() API
From: Jason Gunthorpe <jgg at mellanox.com>

The API is a bit complicated for the uses we actually have, and discussions
about simplifying it have come up a number of times. This small series
removes the customizable pfn format and simplifies the return code of
hmm_range_fault().

All the drivers are adjusted to process the simplified format. I would
appreciate tested-by's for the two
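As a rough illustration of the simplified calling convention this series
arrives at (a sketch only; my_fault_range() and its arguments are invented
names, and the API is the post-series one in kernels >= 5.8):

#include <linux/hmm.h>
#include <linux/mm.h>
#include <linux/mmu_notifier.h>

static int my_fault_range(struct mmu_interval_notifier *notifier,
			  unsigned long start, unsigned long end)
{
	struct hmm_range range = {
		.notifier = notifier,
		.start = start,
		.end = end,
		.default_flags = HMM_PFN_REQ_FAULT,
	};
	struct mm_struct *mm = notifier->mm;
	int ret;

	/* One unsigned long per page: pfn in the low bits, flags such as
	 * HMM_PFN_VALID/HMM_PFN_WRITE/HMM_PFN_ERROR in the high bits. */
	range.hmm_pfns = kvmalloc_array((end - start) >> PAGE_SHIFT,
					sizeof(*range.hmm_pfns), GFP_KERNEL);
	if (!range.hmm_pfns)
		return -ENOMEM;

	do {
		range.notifier_seq = mmu_interval_read_begin(notifier);
		mmap_read_lock(mm);
		ret = hmm_range_fault(&range);	/* now returns 0 or -errno */
		mmap_read_unlock(mm);
	} while (ret == -EBUSY);

	/* On success: take the driver lock, mmu_interval_read_retry(),
	 * then translate range.hmm_pfns[] with hmm_pfn_to_page(). */
	kvfree(range.hmm_pfns);
	return ret;
}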
2020 Jun 30
6
[PATCH v2 0/5] mm/hmm/nouveau: add PMD system memory mapping
The goal for this series is to introduce the hmm_range_fault() output array flags HMM_PFN_PMD and HMM_PFN_PUD. This allows a device driver to know that a given 4K PFN is actually mapped by the CPU using either a PMD sized or PUD sized CPU page table entry and therefore the device driver can safely map system memory using larger device MMU PTEs. The series is based on 5.8.0-rc3 and is intended for
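Under this v2 proposal a driver would presumably test the new output flags
before choosing a device mapping size; a minimal sketch under that
assumption (HMM_PFN_PMD/HMM_PFN_PUD exist only with this series applied,
and the v3 posting that follows replaces them with hmm_pfn_to_map_order()):

	/* Hypothetical consumer of the proposed v2 output flags. */
	if (pfns[i] & HMM_PFN_PUD)
		;	/* CPU maps this with a PUD; device may match it */
	else if (pfns[i] & HMM_PFN_PMD)
		;	/* CPU maps this with a PMD */
	else
		;	/* plain 4K PTE */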
2020 Jul 01
8
[PATCH v3 0/5] mm/hmm/nouveau: add PMD system memory mapping
The goal for this series is to introduce the hmm_pfn_to_map_order() function. This allows a device driver to know that a given 4K PFN is actually mapped by the CPU using a larger sized CPU page table entry and therefore the device driver can safely map system memory using larger device MMU PTEs. The series is based on 5.8.0-rc3 and is intended for Jason Gunthorpe's hmm tree. These were
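A sketch of how a driver might consume the new helper (assuming a kernel
with this series applied; my_device_pte_size() is an invented name):

#include <linux/hmm.h>

static unsigned long my_device_pte_size(unsigned long hmm_pfn)
{
	if (!(hmm_pfn & HMM_PFN_VALID))
		return 0;
	/* order 0 = 4K PTE; 9 = 2MB PMD on x86-64; 18 = 1GB PUD. The
	 * order describes the aligned region the CPU entry covers, so
	 * the driver can clamp it to whatever its device MMU supports. */
	return PAGE_SIZE << hmm_pfn_to_map_order(hmm_pfn);
}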
2020 Apr 22
2
[PATCH hmm 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
...nsolidating the two different sets of defines (NVIF_VMM_PFNMAP_V0_ vs
NVKM_VMM_PFN_) to make the code a little easier to follow.

> 	npages = (range->end - range->start) >> PAGE_SHIFT;
> 	for (i = 0; i < npages; ++i) {
> 		struct page *page;
>
> +		if (!(range->hmm_pfns[i] & HMM_PFN_VALID)) {
> +			ioctl_addr[i] = 0;
> 			continue;
> +		}

Can't we rely on the caller pre-zeroing the array?

> +		page = hmm_pfn_to_page(range->hmm_pfns[i]);
> +		if (is_device_private_page(page))
> +			ioctl_addr[i] = nouveau_dmem_page_addr(page) |
>...
2020 May 08
11
[PATCH 0/6] nouveau/hmm: add support for mapping large pages
hmm_range_fault() returns an array of page frame numbers and flags for how the pages are mapped in the requested process' page tables. The PFN can be used to get the struct page with hmm_pfn_to_page() and the page size order can be determined with compound_order(page) but if the page is larger than order 0 (PAGE_SIZE), there is no indication that the page is mapped using a larger page size. To
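A sketch of the gap described here (hypothetical helper; the point is that
compound_order() reflects the struct page, not the current CPU mapping --
a THP whose PMD has been split is still a compound page but is mapped with
4K PTEs):

#include <linux/hmm.h>
#include <linux/mm.h>

static bool my_page_is_large(unsigned long hmm_pfn)
{
	struct page *page;

	if (!(hmm_pfn & HMM_PFN_VALID))
		return false;
	page = hmm_pfn_to_page(hmm_pfn);
	/* Not sufficient on its own to pick a large device PTE: */
	return compound_order(compound_head(page)) > 0;
}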
2020 Jun 26
1
[PATCH v2] nouveau: fix page fault on device private memory
...d39874 100644
--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
@@ -562,6 +562,7 @@ static int nouveau_range_fault(struct nouveau_svmm *svmm,
 		.end = notifier->notifier.interval_tree.last + 1,
 		.pfn_flags_mask = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_WRITE,
 		.hmm_pfns = hmm_pfns,
+		.dev_private_owner = drm->dev,
 	};
 	struct mm_struct *mm = notifier->notifier.mm;
 	int ret;
--
2.20.1
2020 Jun 26
2
[PATCH] nouveau: fix page fault on device private memory
...d39874 100644
--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
@@ -562,6 +562,7 @@ static int nouveau_range_fault(struct nouveau_svmm *svmm,
 		.end = notifier->notifier.interval_tree.last + 1,
 		.pfn_flags_mask = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_WRITE,
 		.hmm_pfns = hmm_pfns,
+		.dev_private_owner = drm->dev,
 	};
 	struct mm_struct *mm = notifier->notifier.mm;
 	int ret;
--
2.20.1
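The one-liner works because hmm_range_fault() only hands device-private
entries back to a caller whose dev_private_owner matches the owner cookie
the pages' pgmap was registered with; for any other caller those pages are
faulted back to system memory. A sketch of the pairing (simplified, and
hedged as my reading of how nouveau uses drm->dev as the cookie):

	/* When registering the device-private region: */
	chunk->pagemap.owner = drm->dev;

	/* At fault time, the same cookie tells hmm_range_fault() that
	 * these device-private pages are ours and may be returned as-is
	 * instead of being migrated back to system memory: */
	range.dev_private_owner = drm->dev;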
2020 Apr 22
0
[PATCH hmm 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
...nce the above should be cleaned before we abuse hmm_range_fault.

Put it in the commit message instead?

> > 	npages = (range->end - range->start) >> PAGE_SHIFT;
> > 	for (i = 0; i < npages; ++i) {
> > 		struct page *page;
> >
> > +		if (!(range->hmm_pfns[i] & HMM_PFN_VALID)) {
> > +			ioctl_addr[i] = 0;
> > 			continue;
> > +		}
>
> Can't we rely on the caller pre-zeroing the array?

This ends up as args.phys in nouveau_svm_fault - I didn't see a zeroing?

I think it makes sense that this routine fully sets the...
2020 Jun 19
0
[PATCH 10/16] nouveau/hmm: support mapping large sysmem pages
...if_object_ioctl()
+	 * The address prepared here is passed through nvif_object_ioctl()
 	 * to an eventual DMA map in something like gp100_vmm_pgt_pfn()
 	 *
 	 * This is all just encoding the internal hmm representation into a
 	 * different nouveau internal representation.
 	 */
 	if (!(range->hmm_pfns[0] & HMM_PFN_VALID)) {
-		ioctl_addr[0] = 0;
+		args->p.phys[0] = 0;
 		return;
 	}

 	page = hmm_pfn_to_page(range->hmm_pfns[0]);
+	/*
+	 * Only map compound pages to the GPU if the CPU is also mapping the
+	 * page as a compound page. Otherwise, the PTE protections might not be
+	 * co...
2020 May 08
0
[PATCH 4/6] mm/hmm: add output flag for compound page mapping
...hmm_vma_walk_pud(pud_t *pudp, unsigned long start, unsigned long end,
@@ -484,7 +488,7 @@ static int hmm_vma_walk_hugetlb_entry(pte_t *pte, unsigned long hmask,
 	pfn = pte_pfn(entry) + ((start & ~hmask) >> PAGE_SHIFT);
 	for (; addr < end; addr += PAGE_SIZE, i++, pfn++)
-		range->hmm_pfns[i] = pfn | cpu_flags;
+		range->hmm_pfns[i] = pfn | cpu_flags | HMM_PFN_COMPOUND;
 	spin_unlock(ptl);

 	return 0;
--
2.20.1
2020 Jun 19
0
[PATCH 09/16] mm/hmm: add output flag for compound page mapping
...hmm_vma_walk_pud(pud_t *pudp, unsigned long start, unsigned long end,
@@ -484,7 +488,7 @@ static int hmm_vma_walk_hugetlb_entry(pte_t *pte, unsigned long hmask,
 	pfn = pte_pfn(entry) + ((start & ~hmask) >> PAGE_SHIFT);
 	for (; addr < end; addr += PAGE_SIZE, i++, pfn++)
-		range->hmm_pfns[i] = pfn | cpu_flags;
+		range->hmm_pfns[i] = pfn | cpu_flags | HMM_PFN_COMPOUND;
 	spin_unlock(ptl);

 	return 0;
--
2.20.1
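A sketch of a driver-side consumer of the proposed output flag (only
meaningful with this series applied; later revisions of the work replace
HMM_PFN_COMPOUND with hmm_pfn_to_map_order()):

	/* Only consider a large device mapping when the CPU itself maps
	 * the page as a compound page. */
	if ((pfns[i] & (HMM_PFN_VALID | HMM_PFN_COMPOUND)) ==
	    (HMM_PFN_VALID | HMM_PFN_COMPOUND))
		;	/* a PMD/PUD-sized device PTE may be safe */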
2020 Jun 22
1
[PATCH 08/16] nouveau/hmm: fault one page at a time
..._range_fault() to populate and
> return the page frame number of system memory mapped by the CPU.
> In preparation for supporting large pages to be mapped by the GPU,
> process faults one page at a time. In addition, use the hmm_range
> default_flags to fix a corner case where the input hmm_pfns array
> is not reinitialized after hmm_range_fault() returns -EBUSY and must
> be called again.

Are you sure? hmm_range_fault is pretty expensive per call..

Jason
2020 May 05
1
[PATCH hmm v2 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
...st
> index 9924f2caa0184c..c9f2329113a47f 100644
> --- a/Documentation/vm/hmm.rst
> +++ b/Documentation/vm/hmm.rst
> @@ -185,9 +185,6 @@ The usage pattern is::
>  	 range.start = ...;
>  	 range.end = ...;
>  	 range.pfns = ...;

That should be:

	 range.hmm_pfns = ...;

> -	 range.flags = ...;
> -	 range.values = ...;
> -	 range.pfn_shift = ...;
>
>  	 if (!mmget_not_zero(interval_sub->notifier.mm))
>  		 return -EFAULT;
> @@ -229,15 +226,10 @@ The hmm_range struct has 2 fields, default_flags and pfn_fla...
2020 Jun 19
22
[PATCH 00/16] mm/hmm/nouveau: THP mapping and migration
These patches apply to linux-5.8.0-rc1. Patches 1-3 should probably go into 5.8, the others can be queued for 5.9. Patches 4-6 improve the HMM self tests. Patch 7-8 prepare nouveau for the meat of this series which adds support and testing for compound page mapping of system memory (patches 9-11) and compound page migration to device private memory (patches 12-16). Since these changes are split