
Displaying 20 results from an estimated 1000 matches similar to: "[PATCH v2] nouveau: fix page fault on device private memory"

2020 Jun 26
2
[PATCH] nouveau: fix page fault on device private memory
If system memory is migrated to device private memory and no GPU MMU page table entry exists, the GPU will fault and call hmm_range_fault() to get the PFN for the page. Since the .dev_private_owner pointer in struct hmm_range is not set, hmm_range_fault() returns an error, which results in the GPU program stopping with a fatal fault. Fix this by setting .dev_private_owner appropriately. Fixes:
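
A minimal sketch of the fix described in this snippet, using the upstream struct hmm_range fields from this era; the notifier setup and fault/retry loop are elided, and using drm->dev as the owner cookie assumes it matches the owner stamped on nouveau's device private pagemap:

    /* Claim ownership of our device private pages so that
     * hmm_range_fault() hands back their PFNs instead of failing. */
    struct hmm_range range = {
            .notifier = &notifier->notifier,
            .start = start,
            .end = end,
            .hmm_pfns = hmm_pfns,
            .default_flags = HMM_PFN_REQ_FAULT,
            .dev_private_owner = drm->dev,
    };
    int ret = hmm_range_fault(&range);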
2020 Apr 22
11
[PATCH hmm 0/5] Adjust hmm_range_fault() API
From: Jason Gunthorpe <jgg at mellanox.com> The API is a bit complicated for the uses we actually have, and discussions for simplifying have come up a number of times. This small series removes the customizable pfn format and simplifies the return code of hmm_range_fault(). All the drivers are adjusted to process in the simplified format. I would appreciate tested-by's for the two
2020 May 01
13
[PATCH hmm v2 0/5] Adjust hmm_range_fault() API
From: Jason Gunthorpe <jgg at mellanox.com> The API is a bit complicated for the uses we actually have, and discussions for simplifying have come up a number of times. This small series removes the customizable pfn format and simplifies the return code of hmm_range_fault(). All the drivers are adjusted to process in the simplified format. I would appreciate tested-by's for the two
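
A hypothetical caller sketching the simplified convention this series (v1 above, v2 here) settles on: one unsigned long per page carrying HMM_PFN_* flag bits, and a plain 0/-errno return. The mmu_interval_notifier sequence/retry loop required in real code is elided, and the names (notifier, addr) are assumptions:

    unsigned long hmm_pfns[64], i;
    struct hmm_range range = {
            .notifier = &notifier,
            .start = addr,
            .end = addr + 64 * PAGE_SIZE,
            .hmm_pfns = hmm_pfns,
            .default_flags = HMM_PFN_REQ_FAULT,
    };
    int ret = hmm_range_fault(&range);      /* 0 or -errno, no pfn count */

    if (ret)
            return ret;
    for (i = 0; i < 64; i++) {
            struct page *page = hmm_pfn_to_page(hmm_pfns[i]);
            bool writable = hmm_pfns[i] & HMM_PFN_WRITE;
            /* ... DMA map page and build the device PTE, honoring
             * writable ... */
    }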
2020 May 05
1
[PATCH hmm v2 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
On 2020-05-01 11:20, Jason Gunthorpe wrote: > From: Jason Gunthorpe <jgg at mellanox.com> > > Presumably the intent here was that hmm_range_fault() could put the data > into some HW specific format and thus avoid some work. However, nothing > actually does that, and it isn't clear how anything actually could do that > as hmm_range_fault() provides CPU addresses which
2020 Apr 22
1
[PATCH hmm 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
[+Philip Yang] On 2020-04-21 at 8:21 p.m., Jason Gunthorpe wrote: > From: Jason Gunthorpe <jgg at mellanox.com> > > Presumably the intent here was that hmm_range_fault() could put the data > into some HW specific format and thus avoid some work. However, nothing > actually does that, and it isn't clear how anything actually could do that > as hmm_range_fault()
2020 Apr 22
0
[PATCH hmm 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
From: Jason Gunthorpe <jgg at mellanox.com> Presumably the intent here was that hmm_range_fault() could put the data into some HW specific format and thus avoid some work. However, nothing actually does that, and it isn't clear how anything actually could do that as hmm_range_fault() provides CPU addresses which must be DMA mapped. Perhaps there is some special HW that does not need
2020 May 01
0
[PATCH hmm v2 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
From: Jason Gunthorpe <jgg at mellanox.com> Presumably the intent here was that hmm_range_fault() could put the data into some HW specific format and thus avoid some work. However, nothing actually does that, and it isn't clear how anything actually could do that as hmm_range_fault() provides CPU addresses which must be DMA mapped. Perhaps there is some special HW that does not need
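
The DMA-mapping point this snippet makes, in code form; a sketch only, with dev, hmm_pfns, and i assumed from the surrounding driver code:

    struct page *page = hmm_pfn_to_page(hmm_pfns[i]);
    dma_addr_t addr = dma_map_page(dev, page, 0, PAGE_SIZE,
                                   DMA_BIDIRECTIONAL);
    if (dma_mapping_error(dev, addr))
            return -EFAULT;
    /* The device consumes this dma_addr_t, so a HW specific pfn
     * encoding from hmm_range_fault() would have to be unpacked
     * here anyway. */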
2020 Jun 19
0
[PATCH 08/16] nouveau/hmm: fault one page at a time
The SVM page fault handler groups faults into a range of contiguous virtual addresses and requests hmm_range_fault() to populate and return the page frame number of system memory mapped by the CPU. In preparation for supporting large pages to be mapped by the GPU, process faults one page at a time. In addition, use the hmm_range default_flags to fix a corner case where the input hmm_pfns array is
2020 Jul 01
0
[PATCH v3 1/5] nouveau/hmm: fault one page at a time
The SVM page fault handler groups faults into a range of contiguous virtual addresses and requests hmm_range_fault() to populate and return the page frame number of system memory mapped by the CPU. In preparation for supporting large pages to be mapped by the GPU, process faults one page at a time. In addition, use the hmm_range default_flags to fix a corner case where the input hmm_pfns array is
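
A sketch of the one-page-at-a-time scheme both versions of this patch describe, assuming the post-rework API: the fault requirements move into default_flags, so whatever the input hmm_pfns array holds on entry no longer matters:

    unsigned long hmm_pfn;
    struct hmm_range range = {
            .notifier = &notifier,
            .start = faddr & PAGE_MASK,
            .end = (faddr & PAGE_MASK) + PAGE_SIZE,  /* exactly one page */
            .hmm_pfns = &hmm_pfn,                    /* no pre-initialization needed */
            .default_flags = HMM_PFN_REQ_FAULT |
                             (write_fault ? HMM_PFN_REQ_WRITE : 0),
    };
    int ret = hmm_range_fault(&range);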
2020 Jun 30
6
[PATCH v2 0/5] mm/hmm/nouveau: add PMD system memory mapping
The goal for this series is to introduce the hmm_range_fault() output array flags HMM_PFN_PMD and HMM_PFN_PUD. This allows a device driver to know that a given 4K PFN is actually mapped by the CPU using either a PMD sized or PUD sized CPU page table entry and therefore the device driver can safely map system memory using larger device MMU PTEs. The series is based on 5.8.0-rc3 and is intended for
2020 Jul 01
8
[PATCH v3 0/5] mm/hmm/nouveau: add PMD system memory mapping
The goal for this series is to introduce the hmm_pfn_to_map_order() function. This allows a device driver to know that a given 4K PFN is actually mapped by the CPU using a larger sized CPU page table entry and therefore the device driver can safely map system memory using larger device MMU PTEs. The series is based on 5.8.0-rc3 and is intended for Jason Gunthorpe's hmm tree. These were
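
How a driver might consume the new helper, as a sketch; the PMD-order comparison is an illustrative policy, not code taken from the series:

    unsigned int order = hmm_pfn_to_map_order(hmm_pfns[i]);

    if (order >= PMD_SHIFT - PAGE_SHIFT) {
            /* The CPU maps this 4K entry with at least a PMD sized
             * entry, so a matching large device MMU PTE is safe,
             * subject to the device checking address alignment. */
    } else {
            /* Below PMD size: fall back to a 4K device PTE. */
    }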
2020 May 08
11
[PATCH 0/6] nouveau/hmm: add support for mapping large pages
hmm_range_fault() returns an array of page frame numbers and flags for how the pages are mapped in the requested process' page tables. The PFN can be used to get the struct page with hmm_pfn_to_page() and the page size order can be determined with compound_order(page) but if the page is larger than order 0 (PAGE_SIZE), there is no indication that the page is mapped using a larger page size. To
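
The gap this cover letter describes, in code form (a sketch; variable names are illustrative):

    struct page *page = hmm_pfn_to_page(hmm_pfns[i]);
    unsigned int order = compound_order(compound_head(page));
    /*
     * order can be HPAGE_PMD_ORDER because the backing page is a
     * THP, yet the CPU may still map this address with 4K PTEs
     * (e.g. after a partial mprotect() split the PMD).  Nothing in
     * the output array says which, so the page size alone cannot
     * justify installing a large device PTE.
     */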
2020 Apr 22
2
[PATCH hmm 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
On Tue, Apr 21, 2020 at 09:21:46PM -0300, Jason Gunthorpe wrote: > +void nouveau_hmm_convert_pfn(struct nouveau_drm *drm, struct hmm_range *range, > + u64 *ioctl_addr) > { > unsigned long i, npages; > > + /* > + * The ioctl_addr prepared here is passed through nvif_object_ioctl() > + * to an eventual DMA map on some call chain like: > + *
2020 Jun 19
22
[PATCH 00/16] mm/hmm/nouveau: THP mapping and migration
These patches apply to linux-5.8.0-rc1. Patches 1-3 should probably go into 5.8; the others can be queued for 5.9. Patches 4-6 improve the HMM selftests. Patches 7-8 prepare nouveau for the meat of this series, which adds support and testing for compound page mapping of system memory (patches 9-11) and compound page migration to device private memory (patches 12-16). Since these changes are split
2020 Mar 16
14
ensure device private pages have an owner v2
When acting on device private mappings a driver needs to know if the device (or other entity in case of kvmppc) actually owns this private mapping. This series adds an owner field and converts the migrate_vma code over to check it. I looked into doing the same for hmm_range_fault, but as far as I can tell that code has never been wired up to actually work for device private memory, so instead of
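
A sketch of the ownership handshake this series adds, with field names as they appear in the series (src_owner was later renamed upstream); dmem and drm stand in for the driver's private state:

    /* At pagemap creation: stamp the device private memory with an
     * owner cookie unique to this driver instance. */
    dmem->pagemap.type = MEMORY_DEVICE_PRIVATE;
    dmem->pagemap.owner = drm->dev;

    /* When migrating device private pages back to system memory,
     * pass the same cookie so migrate_vma skips private pages owned
     * by anyone else (another device, or kvmppc). */
    struct migrate_vma args = {
            .vma = vma,
            .start = start,
            .end = end,
            .src = src_pfns,
            .dst = dst_pfns,
            .src_owner = drm->dev,
    };
    int ret = migrate_vma_setup(&args);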
2019 May 20
3
[PATCH] drm/nouveau/svm: Convert to use hmm_range_fault()
Convert to use hmm_range_fault(). Signed-off-by: Souptick Joarder <jrdr.linux at gmail.com> --- drivers/gpu/drm/nouveau/nouveau_svm.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c index 93ed43c..8d56bd6 100644 --- a/drivers/gpu/drm/nouveau/nouveau_svm.c +++
2020 Mar 17
2
[PATCH 3/4] mm: simplify device private page handling in hmm_range_fault
On Tue, Mar 17, 2020 at 09:15:36AM -0300, Jason Gunthorpe wrote: > > Getting rid of HMM_PFN_DEVICE_PRIVATE seems reasonable to me since a driver can > > look at the struct page but what if a driver needs to fault in a page from > > another device's private memory? Should it call handle_mm_fault()? > > Isn't that what this series basically does? > > The
2020 Mar 16
4
[PATCH 3/4] mm: simplify device private page handling in hmm_range_fault
On 3/16/20 12:32 PM, Christoph Hellwig wrote: > Remove the code to fault device private pages back into system memory > that has never been used by any driver. Also replace the usage of the > HMM_PFN_DEVICE_PRIVATE flag in the pfns array with a simple > is_device_private_page check in nouveau. > > Signed-off-by: Christoph Hellwig <hch at lst.de> Getting rid of
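
The replacement check the patch describes, sketched with the later hmm_pfn_to_page() spelling for brevity (the patch itself predates the pfn-format rework):

    struct page *page = hmm_pfn_to_page(range->hmm_pfns[i]);

    if (page && is_device_private_page(page)) {
            /* The page already lives in this device's memory:
             * derive the on-device address instead of DMA mapping
             * system memory. */
    }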
2020 Mar 19
2
ensure device private pages have an owner v2
On Wed, Mar 18, 2020 at 09:28:49PM -0300, Jason Gunthorpe wrote: > > Changes since v1: > > - split out the pgmap->owner addition into a separate patch > > - check pgmap->owner is set for device private mappings > > - rename the dev_private_owner field in struct migrate_vma to src_owner > > - refuse to migrate private pages if src_owner is not set > >
2020 Mar 16
6
[PATCH 2/2] mm: remove device private page support from hmm_range_fault
On 3/16/20 10:52 AM, Christoph Hellwig wrote: > No driver has actually properly wired up and supported this feature. > There is various code related to it in nouveau, but as far as I can tell > it never actually got turned on, and the only changes since the initial > commit are global cleanups. This is not actually true. OpenCL 2.x does support SVM with nouveau and device private