search for: hmm_pfn_req_fault

Displaying 20 results from an estimated 20 matches for "hmm_pfn_req_fault".

2020 May 05
1
[PATCH hmm v2 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
...e.flags[HMM_PFN_VALID] = (1 << 63);
> -  range.flags[HMM_PFN_WRITE] = (1 << 62);
> -
> -and the device driver wants pages for a range with at least read permission,
> -it sets::
> -
> -  range->default_flags = (1 << 63);
> +  range->default_flags = HMM_PFN_REQ_FAULT;
>    range->pfn_flags_mask = 0;
>
>  and calls hmm_range_fault() as described above. This will fill fault all pages
> @@ -246,18 +238,18 @@ in the range with at least read permission.
>  Now let's say the driver wants to do the same except for one page in the range fo...

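For context, this is the call pattern the series converges on: fixed HMM_PFN_REQ_* input flags instead of driver-chosen bit positions. A minimal sketch, assuming a post-5.8 kernel; the function name, locking, and the omitted retry loop are illustrative, not taken from the patch:

	#include <linux/hmm.h>
	#include <linux/mm.h>
	#include <linux/mmu_notifier.h>

	/* Fault all pages in [start, end) with at least read permission
	 * and collect the results in hmm_pfns[]. */
	static int example_fault_readable(struct mmu_interval_notifier *notifier,
					  unsigned long start, unsigned long end,
					  unsigned long *hmm_pfns)
	{
		struct hmm_range range = {
			.notifier = notifier,
			.start = start,
			.end = end,
			.hmm_pfns = hmm_pfns,
			/* Fixed input flag; no more driver-chosen (1 << 63). */
			.default_flags = HMM_PFN_REQ_FAULT,
			.pfn_flags_mask = 0,
		};
		int ret;

		range.notifier_seq = mmu_interval_read_begin(notifier);
		mmap_read_lock(notifier->mm);
		ret = hmm_range_fault(&range);	/* 0 or -errno after this series */
		mmap_read_unlock(notifier->mm);

		/* A real driver restarts from mmu_interval_read_begin() when
		 * mmu_interval_read_retry() reports a concurrent invalidation. */
		return ret;
	}
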
2020 Jun 19
0
[PATCH 08/16] nouveau/hmm: fault one page at a time
...;	/* Have HMM fault pages within the fault window to the GPU. */
+	unsigned long hmm_pfns[1];
 	struct hmm_range range = {
 		.notifier = &notifier->notifier,
 		.start = notifier->notifier.interval_tree.start,
 		.end = notifier->notifier.interval_tree.last + 1,
-		.pfn_flags_mask = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_WRITE,
+		.default_flags = hmm_flags,
 		.hmm_pfns = hmm_pfns,
 	};
 	struct mm_struct *mm = notifier->notifier.mm;
@@ -575,11 +571,6 @@ static int nouveau_range_fault(struct nouveau_svmm *svmm,
 	ret = hmm_range_fault(&range);
 	mmap_read_unlock(mm);
 	if (ret) {
-		/*
-...

2020 Jul 01
0
[PATCH v3 1/5] nouveau/hmm: fault one page at a time
...;	/* Have HMM fault pages within the fault window to the GPU. */
+	unsigned long hmm_pfns[1];
 	struct hmm_range range = {
 		.notifier = &notifier->notifier,
 		.start = notifier->notifier.interval_tree.start,
 		.end = notifier->notifier.interval_tree.last + 1,
-		.pfn_flags_mask = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_WRITE,
+		.default_flags = hmm_flags,
 		.hmm_pfns = hmm_pfns,
 	};
 	struct mm_struct *mm = notifier->notifier.mm;
@@ -575,11 +571,6 @@ static int nouveau_range_fault(struct nouveau_svmm *svmm,
 	ret = hmm_range_fault(&range);
 	mmap_read_unlock(mm);
 	if (ret) {
-		/*
-...

2020 May 01
0
[PATCH hmm v2 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
...+permission, it sets::

-  range.flags[HMM_PFN_VALID] = (1 << 63);
-  range.flags[HMM_PFN_WRITE] = (1 << 62);
-
-and the device driver wants pages for a range with at least read permission,
-it sets::
-
-  range->default_flags = (1 << 63);
+  range->default_flags = HMM_PFN_REQ_FAULT;
   range->pfn_flags_mask = 0;

 and calls hmm_range_fault() as described above. This will fill fault all pages
@@ -246,18 +238,18 @@ in the range with at least read permission.
 Now let's say the driver wants to do the same except for one page in the range for which it wants to have wri...

2020 Apr 22
1
[PATCH hmm 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
...; 63);
> -  range.flags[HMM_PFN_WRITE] = (1 << 62);
> -
> -and the device driver wants pages for a range with at least read permission,
> -it sets::
> -
> -  range->default_flags = (1 << 63);
> +  range->default_flags = HMM_PFN_REQ_VALID;

This should be HMM_PFN_REQ_FAULT.

>    range->pfn_flags_mask = 0;
>
>  and calls hmm_range_fault() as described above. This will fill fault all pages
> @@ -246,18 +238,18 @@ in the range with at least read permission.
>  Now let's say the driver wants to do the same except for one page in the range for...

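The documentation text truncated above continues with the per-page variant. For reference, roughly as it reads in Documentation/vm/hmm.rst after this series (index_of_write being the one page that needs write access):

    range->default_flags = HMM_PFN_REQ_FAULT;
    range->pfn_flags_mask = HMM_PFN_REQ_WRITE;
    range->hmm_pfns[index_of_write] = HMM_PFN_REQ_WRITE;

Here default_flags applies uniformly to every page, while pfn_flags_mask selects which per-page input bits in hmm_pfns[] are honored.
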
2020 Apr 22
0
[PATCH hmm 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
...gs = hmm_range_flags;
-	range->values = hmm_range_values;
-	range->pfn_shift = PAGE_SHIFT;
 	range->start = bo->notifier.interval_tree.start;
 	range->end = bo->notifier.interval_tree.last + 1;
-	range->default_flags = hmm_range_flags[HMM_PFN_VALID];
+	range->default_flags = HMM_PFN_REQ_FAULT;
 	if (!amdgpu_ttm_tt_is_readonly(ttm))
-		range->default_flags |= range->flags[HMM_PFN_WRITE];
+		range->default_flags |= HMM_PFN_REQ_WRITE;
-	range->pfns = kvmalloc_array(ttm->num_pages, sizeof(*range->pfns),
-				     GFP_KERNEL);
-	if (unlikely(!range->pfns)) {
+	range->...

2020 May 01
13
[PATCH hmm v2 0/5] Adjust hmm_range_fault() API
...es. This small series removes the customizable pfn format and simplifies the return code of hmm_range_fault(). All the drivers are adjusted to process in the simplified format. I would appreciate tested-by's for the two drivers, thanks!

v2:
 - Move call chain to commit message
 - Fix typo of HMM_PFN_REQ_FAULT
 - Move nouveau_hmm_convert_pfn() to nouveau_svm.c
 - Add acks and tested-bys

v1: https://lore.kernel.org/r/0-v1-4eb72686de3c+5062-hmm_no_flags_jgg at mellanox.com

Cc: Christoph Hellwig <hch at lst.de>
Cc: John Hubbard <jhubbard at nvidia.com>
Cc: Jérôme Glisse <jglisse at redhat.co...

2020 Apr 22
11
[PATCH hmm 0/5] Adjust hmm_range_fault() API
...->notifier,
-		.pfns = pfns,
-		.flags = dmirror_hmm_flags,
-		.values = dmirror_hmm_values,
-		.pfn_shift = DPT_SHIFT,
+		.hmm_pfns = pfns,
 		.pfn_flags_mask = 0,
-		.default_flags = dmirror_hmm_flags[HMM_PFN_VALID] |
-			(write ? dmirror_hmm_flags[HMM_PFN_WRITE] : 0),
+		.default_flags =
+			HMM_PFN_REQ_FAULT | (write ? HMM_PFN_REQ_WRITE : 0),
 		.dev_private_owner = dmirror->mdevice,
 	};
 	int ret = 0;
@@ -754,19 +734,20 @@ static int dmirror_migrate(struct dmirror *dmirror,
 }

 static void dmirror_mkentry(struct dmirror *dmirror, struct hmm_range *range,
-			    unsigned char *perm, uint64_t ent...

2020 Jun 26
1
[PATCH v2] nouveau: fix page fault on device private memory
...au/nouveau_svm.c index ba9f9359c30e..6586d9d39874 100644
--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
@@ -562,6 +562,7 @@ static int nouveau_range_fault(struct nouveau_svmm *svmm,
 		.end = notifier->notifier.interval_tree.last + 1,
 		.pfn_flags_mask = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_WRITE,
 		.hmm_pfns = hmm_pfns,
+		.dev_private_owner = drm->dev,
 	};
 	struct mm_struct *mm = notifier->notifier.mm;
 	int ret;
--
2.20.1

2020 Jun 26
2
[PATCH] nouveau: fix page fault on device private memory
...au/nouveau_svm.c index ba9f9359c30e..6586d9d39874 100644
--- a/drivers/gpu/drm/nouveau/nouveau_svm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
@@ -562,6 +562,7 @@ static int nouveau_range_fault(struct nouveau_svmm *svmm,
 		.end = notifier->notifier.interval_tree.last + 1,
 		.pfn_flags_mask = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_WRITE,
 		.hmm_pfns = hmm_pfns,
+		.dev_private_owner = drm->dev,
 	};
 	struct mm_struct *mm = notifier->notifier.mm;
 	int ret;
--
2.20.1

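The one-line fix matters because hmm_range_fault() only returns device private pages whose pgmap owner matches dev_private_owner; anything else gets faulted back to system memory first. With the field set, the initializer in nouveau_range_fault() reads (the patched code, annotated; drm and notifier are locals of that function):

	struct hmm_range range = {
		.notifier = &notifier->notifier,
		.start = notifier->notifier.interval_tree.start,
		.end = notifier->notifier.interval_tree.last + 1,
		.pfn_flags_mask = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_WRITE,
		.hmm_pfns = hmm_pfns,
		/* Without this, hmm_range_fault() treats nouveau's own
		 * device private pages as inaccessible and migrates them
		 * back to system memory on a GPU fault, instead of letting
		 * the driver map them in place. */
		.dev_private_owner = drm->dev,
	};
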
2020 Jul 01
0
[PATCH v3 2/5] mm/hmm: add hmm_mapping order
...| 14 +++++++++++---
 2 files changed, 33 insertions(+), 5 deletions(-)

diff --git a/include/linux/hmm.h b/include/linux/hmm.h
index f4a09ed223ac..e7a21a21f11f 100644
--- a/include/linux/hmm.h
+++ b/include/linux/hmm.h
@@ -37,16 +37,17 @@
  * will fail. Must be combined with HMM_PFN_REQ_FAULT.
  */
 enum hmm_pfn_flags {
-	/* Output flags */
+	/* Output fields and flags */
 	HMM_PFN_VALID = 1UL << (BITS_PER_LONG - 1),
 	HMM_PFN_WRITE = 1UL << (BITS_PER_LONG - 2),
 	HMM_PFN_ERROR = 1UL << (BITS_PER_LONG - 3),
+	HMM_PFN_ORDER_SHIFT = (BITS_PER_LONG - 8),

 	/* Input flag...

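The new order field sits in the bits above HMM_PFN_ORDER_SHIFT, and the series pairs it with the hmm_pfn_to_map_order() accessor named in the v3 cover letter below. A rough consumer-side sketch (the function name and debug print are illustrative):

	#include <linux/hmm.h>

	/* Decode one hmm_range_fault() output entry. The order field says
	 * the CPU maps this PFN with a 2^order * PAGE_SIZE page table
	 * entry, so the driver may use a matching large device PTE. */
	static void example_decode(unsigned long hmm_pfn)
	{
		struct page *page = hmm_pfn_to_page(hmm_pfn);
		unsigned int order = hmm_pfn_to_map_order(hmm_pfn);

		pr_debug("page %px: map order %u, %s\n", page, order,
			 (hmm_pfn & HMM_PFN_WRITE) ? "writable" : "read-only");
	}
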
2024 Oct 15
5
[PATCH v1 0/4] GPU Direct RDMA (P2P DMA) for Device Private Pages
From: Yonatan Maman <Ymaman at Nvidia.com> This patch series aims to enable Peer-to-Peer (P2P) DMA access in GPU-centric applications that utilize RDMA and private device pages. This enhancement is crucial for minimizing data transfer overhead by allowing the GPU to directly expose device private page data to devices such as NICs, eliminating the need to traverse system RAM, which is the
2020 Jun 30
6
[PATCH v2 0/5] mm/hmm/nouveau: add PMD system memory mapping
The goal for this series is to introduce the hmm_range_fault() output array flags HMM_PFN_PMD and HMM_PFN_PUD. This allows a device driver to know that a given 4K PFN is actually mapped by the CPU using either a PMD sized or PUD sized CPU page table entry and therefore the device driver can safely map system memory using larger device MMU PTEs. The series is based on 5.8.0-rc3 and is intended for
2020 Jul 01
8
[PATCH v3 0/5] mm/hmm/nouveau: add PMD system memory mapping
The goal for this series is to introduce the hmm_pfn_to_map_order() function. This allows a device driver to know that a given 4K PFN is actually mapped by the CPU using a larger sized CPU page table entry and therefore the device driver can safely map system memory using larger device MMU PTEs. The series is based on 5.8.0-rc3 and is intended for Jason Gunthorpe's hmm tree. These were
2020 May 08
11
[PATCH 0/6] nouveau/hmm: add support for mapping large pages
hmm_range_fault() returns an array of page frame numbers and flags for how the pages are mapped in the requested process' page tables. The PFN can be used to get the struct page with hmm_pfn_to_page() and the page size order can be determined with compound_order(page) but if the page is larger than order 0 (PAGE_SIZE), there is no indication that the page is mapped using a larger page size. To
2020 May 08
0
[PATCH 4/6] mm/hmm: add output flag for compound page mapping
...+ b/include/linux/hmm.h
@@ -41,12 +41,14 @@ enum hmm_pfn_flags {
 	HMM_PFN_VALID = 1UL << (BITS_PER_LONG - 1),
 	HMM_PFN_WRITE = 1UL << (BITS_PER_LONG - 2),
 	HMM_PFN_ERROR = 1UL << (BITS_PER_LONG - 3),
+	HMM_PFN_COMPOUND = 1UL << (BITS_PER_LONG - 4),

 	/* Input flags */
 	HMM_PFN_REQ_FAULT = HMM_PFN_VALID,
 	HMM_PFN_REQ_WRITE = HMM_PFN_WRITE,

-	HMM_PFN_FLAGS = HMM_PFN_VALID | HMM_PFN_WRITE | HMM_PFN_ERROR,
+	HMM_PFN_FLAGS = HMM_PFN_VALID | HMM_PFN_WRITE | HMM_PFN_ERROR |
+			HMM_PFN_COMPOUND,
 };

 /*

diff --git a/mm/hmm.c b/mm/hmm.c
index 41673a6d8d46..a9dd06e190a1 100644
--- a/m...

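On the consuming side, this single flag is what lets a driver tell a large CPU mapping apart from a 4K mapping of a compound page, which compound_order() alone cannot. A sketch under that assumption (function name illustrative; later revisions of this work replaced the flag with PMD/PUD flags and finally an order field):

	#include <linux/hmm.h>
	#include <linux/mm.h>

	/* Pick a device mapping order from the new output flag. */
	static unsigned int example_device_map_order(unsigned long hmm_pfn)
	{
		struct page *page = hmm_pfn_to_page(hmm_pfn);

		/* Flag set: the CPU uses a large page table entry, so a
		 * device PTE covering the whole compound page is safe.
		 * Flag clear: map 4K, even if the page itself is compound. */
		return (hmm_pfn & HMM_PFN_COMPOUND) ? compound_order(page) : 0;
	}
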
2020 Jun 19
0
[PATCH 09/16] mm/hmm: add output flag for compound page mapping
...+ b/include/linux/hmm.h
@@ -41,12 +41,14 @@ enum hmm_pfn_flags {
 	HMM_PFN_VALID = 1UL << (BITS_PER_LONG - 1),
 	HMM_PFN_WRITE = 1UL << (BITS_PER_LONG - 2),
 	HMM_PFN_ERROR = 1UL << (BITS_PER_LONG - 3),
+	HMM_PFN_COMPOUND = 1UL << (BITS_PER_LONG - 4),

 	/* Input flags */
 	HMM_PFN_REQ_FAULT = HMM_PFN_VALID,
 	HMM_PFN_REQ_WRITE = HMM_PFN_WRITE,

-	HMM_PFN_FLAGS = HMM_PFN_VALID | HMM_PFN_WRITE | HMM_PFN_ERROR,
+	HMM_PFN_FLAGS = HMM_PFN_VALID | HMM_PFN_WRITE | HMM_PFN_ERROR |
+			HMM_PFN_COMPOUND,
 };

 /*

diff --git a/mm/hmm.c b/mm/hmm.c
index e9a545751108..d145d44256df 100644
--- a/m...

2020 Jun 30
0
[PATCH v2 2/5] mm/hmm: add output flags for PMD/PUD page mapping
...num hmm_pfn_flags {
 	HMM_PFN_VALID = 1UL << (BITS_PER_LONG - 1),
 	HMM_PFN_WRITE = 1UL << (BITS_PER_LONG - 2),
 	HMM_PFN_ERROR = 1UL << (BITS_PER_LONG - 3),
+	HMM_PFN_PMD = 1UL << (BITS_PER_LONG - 4),
+	HMM_PFN_PUD = 1UL << (BITS_PER_LONG - 5),

 	/* Input flags */
 	HMM_PFN_REQ_FAULT = HMM_PFN_VALID,
 	HMM_PFN_REQ_WRITE = HMM_PFN_WRITE,

-	HMM_PFN_FLAGS = HMM_PFN_VALID | HMM_PFN_WRITE | HMM_PFN_ERROR,
+	HMM_PFN_FLAGS = HMM_PFN_VALID | HMM_PFN_WRITE | HMM_PFN_ERROR |
+			HMM_PFN_PMD | HMM_PFN_PUD,
 };

 /*

diff --git a/mm/hmm.c b/mm/hmm.c
index e9a545751108..d9de95450be3 10064...

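This v2 encoding names the CPU mapping size explicitly, so a driver-side translation might look like the following sketch (function name illustrative; PMD_SIZE/PUD_SIZE are the usual page table constants; v3 of the series replaced these two flags with a generic order field):

	#include <linux/hmm.h>
	#include <linux/mm.h>

	/* Translate the v2 output flags into a device mapping size. */
	static unsigned long example_mapping_size(unsigned long hmm_pfn)
	{
		if (hmm_pfn & HMM_PFN_PUD)
			return PUD_SIZE;	/* typically 1GB on x86-64 */
		if (hmm_pfn & HMM_PFN_PMD)
			return PMD_SIZE;	/* typically 2MB on x86-64 */
		return PAGE_SIZE;
	}
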
2024 Dec 01
5
[RFC 0/5] GPU Direct RDMA (P2P DMA) for Device Private Pages
From: Yonatan Maman <Ymaman at Nvidia.com> Based on: Provide a new two step DMA mapping API patchset https://lore.kernel.org/kvm/20241114170247.GA5813 at lst.de/T/#t This patch series aims to enable Peer-to-Peer (P2P) DMA access in GPU-centric applications that utilize RDMA and private device pages. This enhancement reduces data transfer overhead by allowing the GPU to directly expose
2020 Jun 19
22
[PATCH 00/16] mm/hmm/nouveau: THP mapping and migration
These patches apply to linux-5.8.0-rc1. Patches 1-3 should probably go into 5.8, the others can be queued for 5.9. Patches 4-6 improve the HMM self tests. Patches 7-8 prepare nouveau for the meat of this series, which adds support and testing for compound page mapping of system memory (patches 9-11) and compound page migration to device private memory (patches 12-16). Since these changes are split