search for: hmm_pfn_device_priv

Displaying 20 results from an estimated 33 matches for "hmm_pfn_device_priv".

2020 Mar 16
4
[PATCH 3/4] mm: simplify device private page handling in hmm_range_fault
On 3/16/20 12:32 PM, Christoph Hellwig wrote:
> Remove the code to fault device private pages back into system memory
> that has never been used by any driver. Also replace the usage of the
> HMM_PFN_DEVICE_PRIVATE flag in the pfns array with a simple
> is_device_private_page check in nouveau.
>
> Signed-off-by: Christoph Hellwig <hch at lst.de>

Getting rid of HMM_PFN_DEVICE_PRIVATE seems reasonable to me since a driver can
look at the struct page but what if a driver needs to fault in a p...
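The replacement Ralph is reviewing drops the per-range flag in favor of the
driver testing the struct page directly. A minimal sketch of that pattern,
assuming the v5.6-era hmm API (hmm_device_entry_to_page() is the helper of
that era; the demo_ wrapper itself is hypothetical):

#include <linux/hmm.h>
#include <linux/mm.h>

/*
 * Sketch: after hmm_range_fault() fills range->pfns, detect device
 * private pages by inspecting the struct page rather than testing an
 * HMM_PFN_DEVICE_PRIVATE output flag.
 */
static void demo_scan_pfns(struct hmm_range *range)
{
	unsigned long npages = (range->end - range->start) >> PAGE_SHIFT;
	unsigned long i;

	for (i = 0; i < npages; ++i) {
		struct page *page;

		page = hmm_device_entry_to_page(range, range->pfns[i]);
		if (!page || !is_device_private_page(page))
			continue;

		/*
		 * Device private page: a real driver would translate it
		 * to a device address here (nouveau rewrites the pfn
		 * entry in place).
		 */
	}
}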
2020 Mar 16
2
[PATCH 2/2] mm: remove device private page support from hmm_range_fault
...; device private memory via clEnqueueSVMMigrateMem().
> > Also, Ben Skeggs has accepted a set of patches to map GPU memory after being
> > migrated and this change would conflict with that.
>
> Can you explain to me how we actually invoke this code?
>
> For that we'd need HMM_PFN_DEVICE_PRIVATE NVIF_VMM_PFNMAP_V0_VRAM
> set in ->pfns before calling hmm_range_fault, which isn't happening.

Oh, I got tripped on this too.

The logic is backwards from what you'd think.. If you *set*
HMM_PFN_DEVICE_PRIVATE then this triggers:

hmm_pte_need_fault():

	if ((cpu_flags & range->...
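Spelled out, the pre-removal behavior Jason describes looked roughly like
this, condensed from the v5.6-era hmm_pte_need_fault() (the same code removed
by the diff quoted further down): the flag in the caller's *input* pfns array
means "fault this device private page back to system memory", not "I
understand device private pages".

#include <linux/hmm.h>

/*
 * Condensed sketch of the old device-private branch. cpu_flags describes
 * what the PTE points at; pfns is the caller's input request.
 */
static void sketch_need_fault(const struct hmm_range *range,
			      uint64_t pfns, uint64_t cpu_flags,
			      bool *fault, bool *write_fault)
{
	/* The PTE points at device private memory... */
	if (cpu_flags & range->flags[HMM_PFN_DEVICE_PRIVATE]) {
		/*
		 * ...so fault only if the caller *set* the flag in its
		 * input entry, i.e. setting it requests migration back
		 * to system memory instead of opting in to device pages.
		 */
		if (pfns & range->flags[HMM_PFN_DEVICE_PRIVATE]) {
			*write_fault = pfns & range->flags[HMM_PFN_WRITE];
			*fault = true;
		}
	}
}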
2020 Mar 16
6
[PATCH 2/2] mm: remove device private page support from hmm_range_fault
...s/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
> @@ -776,7 +776,6 @@ struct amdgpu_ttm_tt {
>  static const uint64_t hmm_range_flags[HMM_PFN_FLAG_MAX] = {
>  	(1 << 0), /* HMM_PFN_VALID */
>  	(1 << 1), /* HMM_PFN_WRITE */
> -	0 /* HMM_PFN_DEVICE_PRIVATE */
>  };
>
>  static const uint64_t hmm_range_values[HMM_PFN_VALUE_MAX] = {
> diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
> index 7605c4c48985..42808efceaf2 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
> +++ b/dr...
2020 Mar 16
0
[PATCH 3/4] mm: simplify device private page handling in hmm_range_fault
Remove the code to fault device private pages back into system memory
that has never been used by any driver. Also replace the usage of the
HMM_PFN_DEVICE_PRIVATE flag in the pfns array with a simple
is_device_private_page check in nouveau.

Signed-off-by: Christoph Hellwig <hch at lst.de>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 1 -
 drivers/gpu/drm/nouveau/nouveau_dmem.c  | 5 +++--
 drivers/gpu/drm/nouveau/nouveau_svm.c   | 1 -
 include/l...
2020 Mar 16
4
ensure device private pages have an owner
When acting on device private mappings, a driver needs to know if the device (or other entity in the case of kvmppc) actually owns this private mapping. This series adds an owner field and converts the migrate_vma code over to check it. I looked into doing the same for hmm_range_fault, but as far as I can tell that code has never been wired up to actually work for device private memory, so instead of
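Concretely, the series pairs an owner cookie on the device's pagemap with the
same cookie passed to migrate_vma, so migration skips device private pages
that belong to someone else. A sketch under the v5.7-era API, where the
migrate_vma field was initially named src_owner (later renamed pgmap_owner);
the demo_ names are hypothetical:

#include <linux/memremap.h>
#include <linux/migrate.h>

/* Per-driver cookie; only the address is compared, never dereferenced. */
static int demo_pagemap_owner;

/* At pagemap creation time, stamp the pagemap with the cookie. */
static void demo_init_pagemap(struct dev_pagemap *pgmap)
{
	pgmap->type = MEMORY_DEVICE_PRIVATE;
	pgmap->owner = &demo_pagemap_owner;
}

/*
 * Pass the same cookie when migrating so migrate_vma_setup() only
 * collects device private pages this driver actually owns.
 */
static int demo_migrate(struct vm_area_struct *vma,
			unsigned long start, unsigned long end,
			unsigned long *src, unsigned long *dst)
{
	struct migrate_vma args = {
		.vma		= vma,
		.start		= start,
		.end		= end,
		.src		= src,
		.dst		= dst,
		.src_owner	= &demo_pagemap_owner,	/* v5.7 spelling */
	};

	return migrate_vma_setup(&args);
}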
2020 Mar 16
0
[PATCH 2/2] mm: remove device private page support from hmm_range_fault
....90821ce5e6ca 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -776,7 +776,6 @@ struct amdgpu_ttm_tt {
 static const uint64_t hmm_range_flags[HMM_PFN_FLAG_MAX] = {
 	(1 << 0), /* HMM_PFN_VALID */
 	(1 << 1), /* HMM_PFN_WRITE */
-	0 /* HMM_PFN_DEVICE_PRIVATE */
 };

 static const uint64_t hmm_range_values[HMM_PFN_VALUE_MAX] = {
diff --git a/drivers/gpu/drm/nouveau/nouveau_dmem.c b/drivers/gpu/drm/nouveau/nouveau_dmem.c
index 7605c4c48985..42808efceaf2 100644
--- a/drivers/gpu/drm/nouveau/nouveau_dmem.c
+++ b/drivers/gpu/drm/nouveau/nouveau_dmem.c
@...
2020 Mar 16
0
[PATCH 2/2] mm: remove device private page support from hmm_range_fault
...ory via clEnqueueSVMMigrateMem().
>>> Also, Ben Skeggs has accepted a set of patches to map GPU memory after being
>>> migrated and this change would conflict with that.
>>
>> Can you explain to me how we actually invoke this code?
>>
>> For that we'd need HMM_PFN_DEVICE_PRIVATE NVIF_VMM_PFNMAP_V0_VRAM
>> set in ->pfns before calling hmm_range_fault, which isn't happening.
>
> Oh, I got tripped on this too.
>
> The logic is backwards from what you'd think.. If you *set*
> HMM_PFN_DEVICE_PRIVATE then this triggers:
>
> hmm_pte_need...
2020 Mar 16
14
ensure device private pages have an owner v2
When acting on device private mappings, a driver needs to know if the device (or other entity in the case of kvmppc) actually owns this private mapping. This series adds an owner field and converts the migrate_vma code over to check it. I looked into doing the same for hmm_range_fault, but as far as I can tell that code has never been wired up to actually work for device private memory, so instead of
2020 Mar 16
1
[PATCH 3/4] mm: simplify device private page handling in hmm_range_fault
...te_need_fault(const struct hmm_vma_walk *hmm_vma_walk,
>  	/* We aren't ask to do anything ... */
>  	if (!(pfns & range->flags[HMM_PFN_VALID]))
>  		return;
> -	/* If this is device memory then only fault if explicitly requested */
> -	if ((cpu_flags & range->flags[HMM_PFN_DEVICE_PRIVATE])) {
> -		/* Do we fault on device memory ? */
> -		if (pfns & range->flags[HMM_PFN_DEVICE_PRIVATE]) {
> -			*write_fault = pfns & range->flags[HMM_PFN_WRITE];
> -			*fault = true;
> -		}
> -		return;
> -	}

Yes, this is an elegant solution to the input flags....
2019 Aug 16
2
[PATCH 04/15] mm: remove the pgmap field from struct hmm_vma_walk
...ed pgmap.

I was also wondering if that is a problem.. just blindly doing a
container_of on the page->pgmap does seem like it assumes that only
this driver is using DEVICE_PRIVATE.

It seems like something missing in hmm_range_fault, it should be told
what DEVICE_PRIVATE is acceptable to trigger HMM_PFN_DEVICE_PRIVATE
and fault all others?

Jason
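A guarded variant of the container_of() pattern Jason is questioning could
look like the sketch below, assuming the owner field that the "ensure device
private pages have an owner" series adds to struct dev_pagemap (all demo_
names hypothetical):

#include <linux/memremap.h>
#include <linux/mm.h>

struct demo_chunk {
	struct dev_pagemap pagemap;
	/* ... device-specific bookkeeping ... */
};

static int demo_owner;	/* cookie stored in pgmap->owner at creation */

/*
 * Only container_of() back to our chunk if the page is device private
 * *and* the pagemap carries our owner cookie; otherwise another
 * driver's DEVICE_PRIVATE page would be misinterpreted.
 */
static struct demo_chunk *demo_page_to_chunk(struct page *page)
{
	if (!is_device_private_page(page) ||
	    page->pgmap->owner != &demo_owner)
		return NULL;
	return container_of(page->pgmap, struct demo_chunk, pagemap);
}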
2020 Mar 17
2
[PATCH 3/4] mm: simplify device private page handling in hmm_range_fault
On Tue, Mar 17, 2020 at 09:15:36AM -0300, Jason Gunthorpe wrote:
> > Getting rid of HMM_PFN_DEVICE_PRIVATE seems reasonable to me since a driver can
> > look at the struct page but what if a driver needs to fault in a page from
> > another device's private memory? Should it call handle_mm_fault()?
>
> Isn't that what this series basically does?
>
> The dev_private_own...
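The truncated "dev_private_own..." is the hmm_range_fault() side of the same
ownership idea: the caller declares which device private pages it
understands, and pages owned by anyone else are faulted back to system
memory. Roughly, using the dev_private_owner field name as it eventually
landed upstream and a hypothetical demo_ caller:

#include <linux/hmm.h>

static int demo_owner;	/* the cookie stored in our pgmap->owner */

static long demo_fault_range(struct hmm_range *range)
{
	/*
	 * Device private pages whose pgmap->owner matches are returned
	 * as-is for the driver to interpret; everything else is faulted
	 * back to system memory transparently.
	 */
	range->dev_private_owner = &demo_owner;

	/* v5.7-era signature; the flags argument was dropped later. */
	return hmm_range_fault(range, 0);
}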
2020 Mar 17
0
[PATCH 3/4] mm: simplify device private page handling in hmm_range_fault
On Mon, Mar 16, 2020 at 03:49:51PM -0700, Ralph Campbell wrote:
>
> On 3/16/20 12:32 PM, Christoph Hellwig wrote:
> > Remove the code to fault device private pages back into system memory
> > that has never been used by any driver. Also replace the usage of the
> > HMM_PFN_DEVICE_PRIVATE flag in the pfns array with a simple
> > is_device_private_page check in nouveau.
> >
> > Signed-off-by: Christoph Hellwig <hch at lst.de>
>
> Getting rid of HMM_PFN_DEVICE_PRIVATE seems reasonable to me since a driver can
> look at the struct page but what if...
2020 Mar 17
0
[PATCH 3/4] mm: simplify device private page handling in hmm_range_fault
On Mon, Mar 16, 2020 at 03:49:51PM -0700, Ralph Campbell wrote:
> On 3/16/20 12:32 PM, Christoph Hellwig wrote:
>> Remove the code to fault device private pages back into system memory
>> that has never been used by any driver. Also replace the usage of the
>> HMM_PFN_DEVICE_PRIVATE flag in the pfns array with a simple
>> is_device_private_page check in nouveau.
>>
>> Signed-off-by: Christoph Hellwig <hch at lst.de>
>
> Getting rid of HMM_PFN_DEVICE_PRIVATE seems reasonable to me since a driver can
> look at the struct page but what if a dri...
2020 Mar 17
1
[PATCH 3/4] mm: simplify device private page handling in hmm_range_fault
...On Mon, Mar 16, 2020 at 03:49:51PM -0700, Ralph Campbell wrote:
>> On 3/16/20 12:32 PM, Christoph Hellwig wrote:
>>> Remove the code to fault device private pages back into system memory
>>> that has never been used by any driver. Also replace the usage of the
>>> HMM_PFN_DEVICE_PRIVATE flag in the pfns array with a simple
>>> is_device_private_page check in nouveau.
>>>
>>> Signed-off-by: Christoph Hellwig <hch at lst.de>
>>
>> Getting rid of HMM_PFN_DEVICE_PRIVATE seems reasonable to me since a driver can
>> look at the struc...
2019 Jul 26
0
[PATCH v2 2/7] mm/hmm: a few more C style and comment clean ups
..._walk,
 	/* We aren't ask to do anything ... */
 	if (!(pfns & range->flags[HMM_PFN_VALID]))
 		return;
-	/* If this is device memory than only fault if explicitly requested */
+	/* If this is device memory then only fault if explicitly requested */
 	if ((cpu_flags & range->flags[HMM_PFN_DEVICE_PRIVATE])) {
 		/* Do we fault on device memory ? */
 		if (pfns & range->flags[HMM_PFN_DEVICE_PRIVATE]) {
@@ -502,7 +502,7 @@ static int hmm_vma_handle_pmd(struct mm_walk *walk,
 	hmm_vma_walk->last = end;
 	return 0;
 #else
-	/* If THP is not enabled then we should never reach that code ! */...
2019 Aug 16
0
[PATCH 04/15] mm: remove the pgmap field from struct hmm_vma_walk
...dering if that is a problem.. just blindly doing a
> container_of on the page->pgmap does seem like it assumes that only
> this driver is using DEVICE_PRIVATE.
>
> It seems like something missing in hmm_range_fault, it should be told
> what DEVICE_PRIVATE is acceptable to trigger HMM_PFN_DEVICE_PRIVATE
> and fault all others?

The whole device private handling in hmm and migrate_vma seems pretty
broken as far as I can tell, and I have some WIP patches. Basically we
should not touch (or possibly call migrate to ram eventually in the
future) device private pages not owned by the c...
2019 Aug 16
1
[PATCH 04/15] mm: remove the pgmap field from struct hmm_vma_walk
On Fri, Aug 16, 2019 at 10:21:41AM -0700, Dan Williams wrote:
> > We can do a get_dev_pagemap inside the page_walk and touch the pgmap,
> > or we can do the 'device mutex && retry' pattern and touch the pgmap
> > in the driver, under that lock.
> >
> > However in all cases the current get_dev_pagemap()'s in the page walk
> > are not
2020 Mar 16
0
[PATCH 2/2] mm: remove device private page support from hmm_range_fault
...pport SVM with nouveau and
> device private memory via clEnqueueSVMMigrateMem().
> Also, Ben Skeggs has accepted a set of patches to map GPU memory after being
> migrated and this change would conflict with that.

Can you explain to me how we actually invoke this code?

For that we'd need HMM_PFN_DEVICE_PRIVATE NVIF_VMM_PFNMAP_V0_VRAM
set in ->pfns before calling hmm_range_fault, which isn't happening.
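For context: making the old path trigger would have required the driver to
pre-set the flag in every input pfns entry before the call, along these
lines (v5.6-era sketch, demo_ name hypothetical), which is exactly the step
no driver performs:

#include <linux/hmm.h>
#include <linux/mm.h>

/*
 * Request that device private pages in [range->start, range->end) be
 * faulted back to system memory by setting HMM_PFN_DEVICE_PRIVATE
 * (plus HMM_PFN_VALID) in the input array before the walk.
 */
static long demo_prefill_and_fault(struct hmm_range *range)
{
	unsigned long npages = (range->end - range->start) >> PAGE_SHIFT;
	unsigned long i;

	for (i = 0; i < npages; ++i)
		range->pfns[i] = range->flags[HMM_PFN_VALID] |
				 range->flags[HMM_PFN_DEVICE_PRIVATE];

	return hmm_range_fault(range, 0);	/* v5.6-era signature */
}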
2020 Mar 16
0
[PATCH 2/2] mm: remove device private page support from hmm_range_fault
...hat and work with Christoph's 1/2 patch - Ralph can you check, test, etc?

diff --git a/mm/hmm.c b/mm/hmm.c
index 9e8f68eb83287a..9fa4748da1b779 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -300,6 +300,20 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 			range->flags[HMM_PFN_DEVICE_PRIVATE];
 		cpu_flags |= is_write_device_private_entry(entry) ?
 			range->flags[HMM_PFN_WRITE] : 0;
+
+		/*
+		 * If the caller understands this kind of device_private
+		 * page, then leave it as is, otherwise fault it.
+		 */
+		hmm_vma_walk->pgmap = get_dev_pagemap(
+			device_private...
2020 Mar 17
0
[PATCH 3/4] mm: simplify device private page handling in hmm_range_fault
On Tue, Mar 17, 2020 at 01:24:45PM +0100, Christoph Hellwig wrote:
> On Tue, Mar 17, 2020 at 09:15:36AM -0300, Jason Gunthorpe wrote:
> > > Getting rid of HMM_PFN_DEVICE_PRIVATE seems reasonable to me since a driver can
> > > look at the struct page but what if a driver needs to fault in a page from
> > > another device's private memory? Should it call handle_mm_fault()?
> >
> > Isn't that what this series basically does?
> >...