Jason Gunthorpe
2019-Aug-16 00:43 UTC
[Nouveau] [PATCH 04/15] mm: remove the pgmap field from struct hmm_vma_walk
On Thu, Aug 15, 2019 at 04:51:33PM -0400, Jerome Glisse wrote:

> struct page. In this case any way we can update the
> nouveau_dmem_page() to check that page page->pgmap == the
> expected pgmap.

I was also wondering if that is a problem.. just blindly doing a
container_of on the page->pgmap does seem like it assumes that only
this driver is using DEVICE_PRIVATE.

It seems like something is missing in hmm_range_fault: it should be told
which DEVICE_PRIVATE is acceptable to trigger HMM_PFN_DEVICE_PRIVATE
and fault all others?

Jason
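[Editor's note: a minimal sketch of the kind of ownership check Jerome suggests, assuming the driver embeds its struct dev_pagemap in a per-device container; the struct layout and field names here are illustrative, not the actual nouveau code.]

	#include <linux/kernel.h>
	#include <linux/memremap.h>
	#include <linux/mm.h>

	/* Hypothetical per-device container embedding the DEVICE_PRIVATE pgmap. */
	struct nouveau_dmem {
		struct dev_pagemap pagemap;
		/* ... per-device state ... */
	};

	struct nouveau_drm {
		struct nouveau_dmem *dmem;
		/* ... */
	};

	/*
	 * Only claim the page if its pgmap is the one this driver instance
	 * registered; otherwise a blind container_of() would reinterpret some
	 * other driver's (or another device's) pgmap as a nouveau_dmem.
	 */
	static inline bool
	nouveau_dmem_page(struct nouveau_drm *drm, struct page *page)
	{
		return is_device_private_page(page) &&
		       drm->dmem && page->pgmap == &drm->dmem->pagemap;
	}

	static inline struct nouveau_dmem *
	nouveau_page_to_dmem(struct page *page)
	{
		return container_of(page->pgmap, struct nouveau_dmem, pagemap);
	}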
Christoph Hellwig
2019-Aug-16 04:44 UTC
[Nouveau] [PATCH 04/15] mm: remove the pgmap field from struct hmm_vma_walk
On Fri, Aug 16, 2019 at 12:43:07AM +0000, Jason Gunthorpe wrote:
> On Thu, Aug 15, 2019 at 04:51:33PM -0400, Jerome Glisse wrote:
>
> > struct page. In this case any way we can update the
> > nouveau_dmem_page() to check that page page->pgmap == the
> > expected pgmap.
>
> I was also wondering if that is a problem.. just blindly doing a
> container_of on the page->pgmap does seem like it assumes that only
> this driver is using DEVICE_PRIVATE.
>
> It seems like something is missing in hmm_range_fault: it should be told
> which DEVICE_PRIVATE is acceptable to trigger HMM_PFN_DEVICE_PRIVATE
> and fault all others?

The whole device private handling in hmm and migrate_vma seems pretty
broken as far as I can tell, and I have some WIP patches. Basically we
should not touch (or possibly, in the future, migrate to ram) device
private pages not owned by the caller, where I try to define the caller
by the dev_pagemap_ops instance.
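[Editor's note: a rough sketch of the filtering Christoph describes, keyed on the dev_pagemap_ops instance. The caller_ops parameter being threaded through the walk is a hypothetical interface, not something that exists in the tree at this point.]

	#include <linux/memremap.h>
	#include <linux/mm.h>

	/*
	 * Sketch only: treat a DEVICE_PRIVATE page as "owned by the caller"
	 * when its pgmap->ops matches the ops instance the caller supplied.
	 * Pages belonging to someone else would not be touched (or would,
	 * eventually, be migrated back to ram) instead of being reported as
	 * HMM_PFN_DEVICE_PRIVATE.
	 */
	static bool hmm_is_callers_device_private(struct page *page,
						  const struct dev_pagemap_ops *caller_ops)
	{
		if (!is_device_private_page(page))
			return false;
		return page->pgmap->ops == caller_ops;
	}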
Jason Gunthorpe
2019-Aug-16 12:30 UTC
[Nouveau] [PATCH 04/15] mm: remove the pgmap field from struct hmm_vma_walk
On Fri, Aug 16, 2019 at 06:44:48AM +0200, Christoph Hellwig wrote:
> On Fri, Aug 16, 2019 at 12:43:07AM +0000, Jason Gunthorpe wrote:
> > On Thu, Aug 15, 2019 at 04:51:33PM -0400, Jerome Glisse wrote:
> >
> > > struct page. In this case any way we can update the
> > > nouveau_dmem_page() to check that page page->pgmap == the
> > > expected pgmap.
> >
> > I was also wondering if that is a problem.. just blindly doing a
> > container_of on the page->pgmap does seem like it assumes that only
> > this driver is using DEVICE_PRIVATE.
> >
> > It seems like something is missing in hmm_range_fault: it should be told
> > which DEVICE_PRIVATE is acceptable to trigger HMM_PFN_DEVICE_PRIVATE
> > and fault all others?
>
> The whole device private handling in hmm and migrate_vma seems pretty
> broken as far as I can tell, and I have some WIP patches. Basically we
> should not touch (or possibly, in the future, migrate to ram) device
> private pages not owned by the caller, where I try to define the caller
> by the dev_pagemap_ops instance.

I think it needs to be more elaborate. For instance, a system may have
multiple DEVICE_PRIVATE maps owned by the same driver, but multiple
physical devices using that driver.

Each physical device's driver should only ever get DEVICE_PRIVATE pages
for its own on-device memory, never a DEVICE_PRIVATE for another
device's memory. The dev_pagemap_ops would not be unique enough, right?

Probably also clusters of same-driver struct devices can share a
DEVICE_PRIVATE, at least high end GPUs now have private memory coherency
busses between their devices.

Since we want to trigger migration to CPU on incompatible DEVICE_PRIVATE
pages, it seems best to sort this out in hmm_range_fault. Maybe some
sort of unique ID inside the page->pgmap, passed as input?

Jason
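[Editor's note: a sketch of the "unique ID" idea as an opaque owner cookie recorded next to the pgmap and handed to hmm_range_fault by the caller. Neither the wrapper struct nor the owner field exists at this point; the names are purely illustrative.]

	#include <linux/kernel.h>
	#include <linux/memremap.h>
	#include <linux/mm.h>

	/*
	 * Illustrative only: an opaque per-pgmap owner cookie, finer grained
	 * than dev_pagemap_ops. A driver would set it to something unique per
	 * device (or per coherency cluster of devices), and hmm_range_fault
	 * would be given the same cookie so anything that does not match can
	 * be faulted / migrated to CPU memory instead of being handed out.
	 */
	struct hmm_owned_pagemap {
		struct dev_pagemap pgmap;
		void *owner;		/* e.g. the driver's per-device struct */
	};

	static bool hmm_device_private_matches_owner(struct page *page, void *owner)
	{
		struct hmm_owned_pagemap *map;

		if (!is_device_private_page(page))
			return false;
		map = container_of(page->pgmap, struct hmm_owned_pagemap, pgmap);
		return map->owner == owner;
	}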