2019 Jul 30
[PATCH 07/13] mm: remove the page_shift member from struct hmm_range
...multi-level page
table, so we could have 2M entries that map to a single DMA or to
another page table w/ 4k pages (have to check on this)
But the driver isn't set up to do that right now.
> The best API for mlx4 would of course be to pass a biovec-style
> variable length structure that hmm_fault could fill out, but that would
> be a major restructure.
It would work, but the driver has to expand that into a page list
right away anyhow.
We can't even dma map the biovec with today's dma API as it needs the
ability to remap on a page granularity.
Jason
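The biovec-style interface floated above would have hmm_fault return one variable-length entry per physically contiguous range, which a driver like mlx4 would then have to expand back into a flat page list. A minimal userspace sketch of that expansion, with a hypothetical `struct hmm_vec` standing in for the biovec-style entry (this is not the kernel API, just an illustration of the shape of the work):

```c
#include <stddef.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* Hypothetical biovec-style entry: one physically contiguous range. */
struct hmm_vec {
	uint64_t addr;	/* physical start, page aligned */
	uint64_t len;	/* length in bytes, multiple of PAGE_SIZE */
};

/*
 * Expand variable-length entries into a flat page list -- the step a
 * page-list-only driver would have to perform right away anyhow.
 * Returns the number of page addresses written.
 */
static size_t expand_to_pages(const struct hmm_vec *vec, size_t nvec,
			      uint64_t *pages, size_t max_pages)
{
	size_t n = 0;

	for (size_t i = 0; i < nvec; i++) {
		for (uint64_t off = 0; off < vec[i].len; off += PAGE_SIZE) {
			if (n == max_pages)
				return n;
			pages[n++] = vec[i].addr + off;
		}
	}
	return n;
}
```

A single 2M entry would expand into 512 4k page addresses here, which is why the variable-length form only pays off for hardware that can consume it directly.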
2019 Jul 30
[PATCH 07/13] mm: remove the page_shift member from struct hmm_range
On Tue, Jul 30, 2019 at 08:51:57AM +0300, Christoph Hellwig wrote:
> All users pass PAGE_SIZE here, and if we wanted to support single
> entries for huge pages we should really just add a HMM_FAULT_HUGEPAGE
> flag instead that uses the huge page size instead of having the
> caller calculate that size once, just for the hmm code to verify it.
I suspect this was added for the ODP conversion that does use both
page sizes. I think the ODP code for this is kind of broken, but I
haven't...
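The flag-based alternative described in the quoted text would move the size decision inside the hmm code: instead of every caller computing a page_shift that hmm then has to verify, a single flag selects between the base and huge page size. A rough userspace sketch, assuming x86-64 shift values and a hypothetical flag bit (the real HMM_FAULT_* flag set does not include this):

```c
#include <stdint.h>

#define BASE_PAGE_SHIFT	12	/* 4k base pages */
#define PMD_SHIFT	21	/* 2M huge pages on x86-64 */

/* Hypothetical flag bit, proposed in the thread but not merged as-is. */
#define HMM_FAULT_HUGEPAGE	(1u << 0)

/*
 * With page_shift removed from struct hmm_range, the hmm code derives
 * the per-entry size from a flag instead of a caller-supplied shift.
 */
static unsigned int hmm_entry_shift(uint32_t flags)
{
	return (flags & HMM_FAULT_HUGEPAGE) ? PMD_SHIFT : BASE_PAGE_SHIFT;
}
```

The point of the design is that all existing callers pass PAGE_SIZE anyway, so the only information actually needed is one bit, not a full shift value.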
2019 Jul 30
[PATCH 07/13] mm: remove the page_shift member from struct hmm_range
.... AFAIK ODP is only used by mlx5, and mlx5 unlike other
IB HCAs can use scatterlist style MRs with variable length per entry,
so even if we pass multiple pages per entry from hmm it could coalesce
them. The best API for mlx4 would of course be to pass a biovec-style
variable length structure that hmm_fault could fill out, but that would
be a major restructure.
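The coalescing step mentioned for mlx5 is straightforward in isolation: merge runs of physically contiguous pages into scatterlist-style entries with variable length per entry. A minimal userspace sketch under that assumption (struct and function names are hypothetical, not the kernel scatterlist API):

```c
#include <stddef.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* Hypothetical scatterlist-style entry with variable length. */
struct sg_ent {
	uint64_t addr;
	uint64_t len;
};

/*
 * Coalesce a flat page-address list into variable-length entries by
 * merging physically contiguous pages -- roughly what an HCA that
 * supports variable length per MR entry could do with hmm output.
 * Returns the number of entries written.
 */
static size_t coalesce_pages(const uint64_t *pages, size_t npages,
			     struct sg_ent *out, size_t max_out)
{
	size_t n = 0;

	for (size_t i = 0; i < npages; i++) {
		if (n && out[n - 1].addr + out[n - 1].len == pages[i]) {
			out[n - 1].len += PAGE_SIZE;	/* extend the run */
		} else {
			if (n == max_out)
				break;
			out[n].addr = pages[i];		/* start a new run */
			out[n].len = PAGE_SIZE;
			n++;
		}
	}
	return n;
}
```

This is the direction the quoted text argues for: since the HCA can consume variable-length entries, multiple pages per hmm entry would not force a flat expansion the way they would on mlx4.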