search for: hmm_pfn_to_page

Displaying 20 results from an estimated 44 matches for "hmm_pfn_to_page".

2019 Aug 15
4
[PATCH 04/15] mm: remove the pgmap field from struct hmm_vma_walk
...400, Jerome Glisse wrote:
> So neither HMM nor the driver should dereference the struct page (i do not
> think any iommu driver would either),

Er, they do technically deref the struct page:

 nouveau_dmem_convert_pfn(struct nouveau_drm *drm,
                          struct hmm_range *range)
     struct page *page;
     page = hmm_pfn_to_page(range, range->pfns[i]);
     if (!nouveau_dmem_page(drm, page)) {

 nouveau_dmem_page(struct nouveau_drm *drm, struct page *page)
 {
     return is_device_private_page(page) && drm->dmem == page_to_dmem(page)

Which does touch 'page->pgmap'

Is this OK without having a get_dev_pag...
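For context, the pattern being questioned reduces to something like the sketch below. This is an illustrative reduction, not nouveau's actual code; my_dev_owns_page and my_pgmap are made-up names. The point is that the ownership test dereferences page->pgmap, which is only safe if something else keeps the pagemap alive, exactly the get_dev_pagemap question raised above.

#include <linux/memremap.h>
#include <linux/mm.h>

/*
 * Hypothetical sketch: decide whether a struct page belongs to this
 * driver's device-private memory. Both the is_device_private_page()
 * check and the comparison read page->pgmap, so the pagemap must not
 * be torn down while this runs.
 */
static bool my_dev_owns_page(struct dev_pagemap *my_pgmap, struct page *page)
{
	return is_device_private_page(page) && page->pgmap == my_pgmap;
}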
2019 Jul 01
0
[PATCH 22/22] mm: remove the legacy hmm_pfn_* APIs
...dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index 40c47d6a7d78..534069ffe20a 100644
--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
@@ -853,7 +853,7 @@ nouveau_dmem_convert_pfn(struct nouveau_drm *drm,
 		struct page *page;
 		uint64_t addr;

-		page = hmm_pfn_to_page(range, range->pfns[i]);
+		page = hmm_device_entry_to_page(range, range->pfns[i]);
 		if (page == NULL)
 			continue;

diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 3457cf9182e5..9799fde71f2e 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -290,42 +290,6 @@ stati...
2019 Jul 03
0
[PATCH 5/5] mm: remove the legacy hmm_pfn_* APIs
...dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index 42c026010938..b9ced2e61667 100644
--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
@@ -844,7 +844,7 @@ nouveau_dmem_convert_pfn(struct nouveau_drm *drm,
 		struct page *page;
 		uint64_t addr;

-		page = hmm_pfn_to_page(range, range->pfns[i]);
+		page = hmm_device_entry_to_page(range, range->pfns[i]);
 		if (page == NULL)
 			continue;

diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index 657606f48796..cdcd78627393 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -290,40 +290,6 @@ stati...
2020 May 08
2
[PATCH 4/6] mm/hmm: add output flag for compound page mapping
On Fri, May 08, 2020 at 12:20:07PM -0700, Ralph Campbell wrote: > hmm_range_fault() returns an array of page frame numbers and flags for > how the pages are mapped in the requested process' page tables. The PFN > can be used to get the struct page with hmm_pfn_to_page() and the page size > order can be determined with compound_order(page) but if the page is larger > than order 0 (PAGE_SIZE), there is no indication that the page is mapped > using a larger page size. To be fully general, hmm_range_fault() would need > to return the mapping size to hand...
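The quoted paragraph describes the usage pattern that motivates the whole series; a minimal sketch of it follows, written against the reworked hmm_range_fault() output format (range->hmm_pfns[], HMM_PFN_VALID) that appears in other results on this page. This is my illustration, not code from the patch, and inspect_hmm_output is a made-up name.

#include <linux/hmm.h>
#include <linux/mm.h>

/*
 * Walk the hmm_range_fault() output: each range->hmm_pfns[] entry
 * carries a PFN plus flag bits. compound_order() reveals how big the
 * page itself is, but not how large the CPU mapping is -- the gap
 * these patches want to close.
 */
static void inspect_hmm_output(struct hmm_range *range)
{
	unsigned long npages = (range->end - range->start) >> PAGE_SHIFT;
	unsigned long i;

	for (i = 0; i < npages; i++) {
		struct page *page;

		if (!(range->hmm_pfns[i] & HMM_PFN_VALID))
			continue;
		page = hmm_pfn_to_page(range->hmm_pfns[i]);
		/* order of the compound page, not of the CPU mapping */
		pr_debug("entry %lu: order %u\n", i,
			 compound_order(compound_head(page)));
	}
}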
2020 Jun 22
2
[PATCH 09/16] mm/hmm: add output flag for compound page mapping
On Fri, Jun 19, 2020 at 02:56:42PM -0700, Ralph Campbell wrote: > hmm_range_fault() returns an array of page frame numbers and flags for > how the pages are mapped in the requested process' page tables. The PFN > can be used to get the struct page with hmm_pfn_to_page() and the page size > order can be determined with compound_order(page) but if the page is larger > than order 0 (PAGE_SIZE), there is no indication that the page is mapped > using a larger page size. To be fully general, hmm_range_fault() would need > to return the mapping size to hand...
2020 May 08
2
[PATCH 0/6] nouveau/hmm: add support for mapping large pages
...ew Wilcox wrote: > On Fri, May 08, 2020 at 12:20:03PM -0700, Ralph Campbell wrote: >> hmm_range_fault() returns an array of page frame numbers and flags for >> how the pages are mapped in the requested process' page tables. The PFN >> can be used to get the struct page with hmm_pfn_to_page() and the page size >> order can be determined with compound_order(page) but if the page is larger >> than order 0 (PAGE_SIZE), there is no indication that the page is mapped >> using a larger page size. To be fully general, hmm_range_fault() would need >> to return the mapp...
2019 Jul 03
8
hmm_range_fault related fixes and legacy API removal
Hi Jérôme, Ben and Jason, below is a series against the hmm tree which fixes up the mmap_sem locking in nouveau and, while at it, also removes leftover legacy HMM APIs only used by nouveau.
2020 Jun 30
1
[PATCH v2 2/5] mm/hmm: add output flags for PMD/PUD page mapping
On Tue, Jun 30, 2020 at 12:57:34PM -0700, Ralph Campbell wrote: > hmm_range_fault() returns an array of page frame numbers and flags for > how the pages are mapped in the requested process' page tables. The PFN > can be used to get the struct page with hmm_pfn_to_page() and the page > size order can be determined with compound_order(page) but if the page > is larger than order 0 (PAGE_SIZE), there is no indication that a > compound page is mapped by the CPU using a larger page size. Without > this information, the caller can't safely use a large...
2020 Jul 01
0
[PATCH v3 2/5] mm/hmm: add hmm_mapping order
hmm_range_fault() returns an array of page frame numbers and flags for how the pages are mapped in the requested process' page tables. The PFN can be used to get the struct page with hmm_pfn_to_page() and the page size order can be determined with compound_order(page) but if the page is larger than order 0 (PAGE_SIZE), there is no indication that a compound page is mapped by the CPU using a larger page size. Without this information, the caller can't safely use a large device PTE to map th...
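This v3 addresses the gap by encoding the CPU mapping order in each output entry. A sketch of how a driver consumes it is below; the reader helper hmm_pfn_to_map_order() is the name this work ended up with in mainline include/linux/hmm.h, so treat the sketch as written against that merged form rather than this exact posting.

#include <linux/hmm.h>
#include <linux/mm.h>

/*
 * Size of the CPU mapping backing one hmm_range_fault() entry;
 * order 0 means a base PAGE_SIZE mapping.
 */
static unsigned long hmm_entry_map_size(unsigned long hmm_pfn)
{
	return PAGE_SIZE << hmm_pfn_to_map_order(hmm_pfn);
}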
2020 May 08
11
[PATCH 0/6] nouveau/hmm: add support for mapping large pages
hmm_range_fault() returns an array of page frame numbers and flags for how the pages are mapped in the requested process' page tables. The PFN can be used to get the struct page with hmm_pfn_to_page() and the page size order can be determined with compound_order(page) but if the page is larger than order 0 (PAGE_SIZE), there is no indication that the page is mapped using a larger page size. To be fully general, hmm_range_fault() would need to return the mapping size to handle cases like a 1GB...
2019 Aug 16
2
[PATCH 04/15] mm: remove the pgmap field from struct hmm_vma_walk
...mu driver would either),
> >
> > Er, they do technically deref the struct page:
> >
> >  nouveau_dmem_convert_pfn(struct nouveau_drm *drm,
> >                           struct hmm_range *range)
> >      struct page *page;
> >      page = hmm_pfn_to_page(range, range->pfns[i]);
> >      if (!nouveau_dmem_page(drm, page)) {
> >
> >  nouveau_dmem_page(struct nouveau_drm *drm, struct page *page)
> >  {
> >      return is_device_private_page(page) && drm->dmem == page_to_dmem(page)
>...
2020 Jun 22
1
[PATCH 09/16] mm/hmm: add output flag for compound page mapping
...> On Fri, Jun 19, 2020 at 02:56:42PM -0700, Ralph Campbell wrote: > > > hmm_range_fault() returns an array of page frame numbers and flags for > > > how the pages are mapped in the requested process' page tables. The PFN > > > can be used to get the struct page with hmm_pfn_to_page() and the page size > > > order can be determined with compound_order(page) but if the page is larger > > > than order 0 (PAGE_SIZE), there is no indication that the page is mapped > > > using a larger page size. To be fully general, hmm_range_fault() would need > >...
2020 May 26
1
[PATCH 0/6] nouveau/hmm: add support for mapping large pages
...Gunthorpe wrote: > On Fri, May 08, 2020 at 12:20:03PM -0700, Ralph Campbell wrote: >> hmm_range_fault() returns an array of page frame numbers and flags for >> how the pages are mapped in the requested process' page tables. The PFN >> can be used to get the struct page with hmm_pfn_to_page() and the page size >> order can be determined with compound_order(page) but if the page is larger >> than order 0 (PAGE_SIZE), there is no indication that the page is mapped >> using a larger page size. To be fully general, hmm_range_fault() would need >> to return the mapp...
2020 May 26
1
[PATCH 4/6] mm/hmm: add output flag for compound page mapping
...>> On Fri, May 08, 2020 at 12:20:07PM -0700, Ralph Campbell wrote: >>> hmm_range_fault() returns an array of page frame numbers and flags for >>> how the pages are mapped in the requested process' page tables. The PFN >>> can be used to get the struct page with hmm_pfn_to_page() and the page size >>> order can be determined with compound_order(page) but if the page is larger >>> than order 0 (PAGE_SIZE), there is no indication that the page is mapped >>> using a larger page size. To be fully general, hmm_range_fault() would need >>> to...
2020 Apr 22
2
[PATCH hmm 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
...nge->start) >> PAGE_SHIFT;
>  	for (i = 0; i < npages; ++i) {
>  		struct page *page;
>
> +		if (!(range->hmm_pfns[i] & HMM_PFN_VALID)) {
> +			ioctl_addr[i] = 0;
>  			continue;
> +		}

Can't we rely on the caller pre-zeroing the array?

> +		page = hmm_pfn_to_page(range->hmm_pfns[i]);
> +		if (is_device_private_page(page))
> +			ioctl_addr[i] = nouveau_dmem_page_addr(page) |
> +				NVIF_VMM_PFNMAP_V0_V |
> +				NVIF_VMM_PFNMAP_V0_VRAM;
> +		else
> +			ioctl_addr[i] = page_to_phys(page) |
> +				NVIF_VMM_PFNMAP_V0_V |
> +				NVI...
2020 Jun 30
6
[PATCH v2 0/5] mm/hmm/nouveau: add PMD system memory mapping
The goal for this series is to introduce the hmm_range_fault() output array flags HMM_PFN_PMD and HMM_PFN_PUD. This allows a device driver to know that a given 4K PFN is actually mapped by the CPU using either a PMD sized or PUD sized CPU page table entry and therefore the device driver can safely map system memory using larger device MMU PTEs. The series is based on 5.8.0-rc3 and is intended for
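A sketch of the intended consumer side is below, using the HMM_PFN_PMD/HMM_PFN_PUD flag names exactly as this v2 cover letter proposes them; note these names come from the posting under review and may differ from what was eventually merged, and hmm_entry_map_shift is a made-up helper name.

#include <linux/hmm.h>
#include <linux/mm.h>

/*
 * Map one hmm_range_fault() output entry to the shift of the CPU
 * page table entry that maps it, per the proposed output flags.
 */
static unsigned int hmm_entry_map_shift(unsigned long hmm_pfn)
{
	if (hmm_pfn & HMM_PFN_PUD)	/* PUD-sized, e.g. 1GB on x86-64 */
		return PUD_SHIFT;
	if (hmm_pfn & HMM_PFN_PMD)	/* PMD-sized, e.g. 2MB on x86-64 */
		return PMD_SHIFT;
	return PAGE_SHIFT;		/* base page, e.g. 4KB */
}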
2020 Apr 22
11
[PATCH hmm 0/5] Adjust hmm_range_fault() API
...r hmm_range_fault() to populate pages,
 	 * it shouldn't return an error entry on success.
 	 */
-	WARN_ON(*pfns == range->values[HMM_PFN_ERROR]);
+	WARN_ON(*pfns & HMM_PFN_ERROR);
+	WARN_ON(!(*pfns & HMM_PFN_VALID));

-	page = hmm_device_entry_to_page(range, *pfns);
+	page = hmm_pfn_to_page(*pfns);
 	WARN_ON(!page);

 	entry = page;
-	if (*pfns & range->flags[HMM_PFN_WRITE])
+	if (*pfns & HMM_PFN_WRITE)
 		entry = xa_tag_pointer(entry, DPT_XA_TAG_WRITE);
-	else if (range->default_flags & range->flags[HMM_PFN_WRITE])
+	else if (WARN_ON(range->default_fla...
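The diff above captures the core of the API change: the old format looked flags up through a per-driver range->flags[] translation table, while the new one uses fixed HMM_PFN_* bits, so a test collapses to a plain bitwise check, as in this small sketch (hmm_entry_is_writable is an illustrative name, and "entry" is one element of range->hmm_pfns[]):

#include <linux/hmm.h>

/* Fixed flag bits replace the old per-driver range->flags[] table. */
static bool hmm_entry_is_writable(unsigned long entry)
{
	return (entry & HMM_PFN_VALID) && (entry & HMM_PFN_WRITE);
}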
2020 Apr 22
0
[PATCH hmm 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
...@ -867,7 +853,7 @@ int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, struct page **pages)
 	 * the notifier_lock, and mmu_interval_read_retry() must be done first.
 	 */
 	for (i = 0; i < ttm->num_pages; i++)
-		pages[i] = hmm_device_entry_to_page(range, range->pfns[i]);
+		pages[i] = hmm_pfn_to_page(range->hmm_pfns[i]);

 	gtt->range = range;
 	mmput(mm);

@@ -877,7 +863,7 @@ int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, struct page **pages)
 out_unlock:
 	up_read(&mm->mmap_sem);
 out_free_pfns:
-	kvfree(range->pfns);
+	kvfree(range->hmm_pfns);
 out_free_ranges:
 	k...
2019 Jul 03
10
hmm_range_fault related fixes and legacy API removal v2
Hi Jérôme, Ben and Jason, below is a series against the hmm tree which fixes up the mmap_sem locking in nouveau and, while at it, also removes leftover legacy HMM APIs only used by nouveau.

Changes since v1:
 - don't return the valid state from hmm_range_unregister
 - additional nouveau cleanups
2020 May 01
0
[PATCH hmm v2 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
...@ -867,7 +853,7 @@ int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, struct page **pages)
 	 * the notifier_lock, and mmu_interval_read_retry() must be done first.
 	 */
 	for (i = 0; i < ttm->num_pages; i++)
-		pages[i] = hmm_device_entry_to_page(range, range->pfns[i]);
+		pages[i] = hmm_pfn_to_page(range->hmm_pfns[i]);

 	gtt->range = range;
 	mmput(mm);

@@ -877,7 +863,7 @@ int amdgpu_ttm_tt_get_user_pages(struct amdgpu_bo *bo, struct page **pages)
 out_unlock:
 	up_read(&mm->mmap_sem);
 out_free_pfns:
-	kvfree(range->pfns);
+	kvfree(range->hmm_pfns);
 out_free_ranges:
 	k...