search for: hmm_walk_ops

Displaying 10 results from an estimated 10 matches for "hmm_walk_ops".

2019 Sep 11
0
[PATCH 1/4] mm/hmm: make full use of walk_page_range()
...low write access either. HMM does not support architectures
+	 * that allow write without read.
+	 */
+	if (!(vma->vm_flags & VM_READ)) {
+		(void) hmm_pfns_fill(start, end, range, HMM_PFN_NONE);
+		return -EPERM;
+	}
+
+	return 0;
 }

 /*
@@ -857,6 +879,7 @@ static const struct mm_walk_ops hmm_walk_ops = {
 	.pmd_entry = hmm_vma_walk_pmd,
 	.pte_hole = hmm_vma_walk_hole,
 	.hugetlb_entry = hmm_vma_walk_hugetlb_entry,
+	.test_walk = hmm_vma_walk_test,
 };

 /**
@@ -889,63 +912,27 @@ static const struct mm_walk_ops hmm_walk_ops = {
  */
 long hmm_range_fault(struct hmm_range *range, unsigned int f...
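For readers unfamiliar with the pagewalk API: ->test_walk is called once per VMA before its page tables are visited; returning 1 skips the VMA, a negative value aborts the walk, and 0 descends. A minimal sketch with hypothetical demo_* names (the callback signatures are those of <linux/pagewalk.h> after the 2019 refactor):

#include <linux/pagewalk.h>

/* Mirror hmm_vma_walk_test(): refuse VMAs we cannot handle. */
static int demo_test_walk(unsigned long start, unsigned long end,
			  struct mm_walk *walk)
{
	if (!(walk->vma->vm_flags & VM_READ))
		return 1;		/* skip this VMA, keep walking */
	return 0;			/* descend into the page tables */
}

/* Count present PTEs as a stand-in for real per-entry work. */
static int demo_pte_entry(pte_t *pte, unsigned long addr,
			  unsigned long next, struct mm_walk *walk)
{
	unsigned long *count = walk->private;

	if (pte_present(*pte))
		(*count)++;
	return 0;
}

static const struct mm_walk_ops demo_walk_ops = {
	.pte_entry = demo_pte_entry,
	.test_walk = demo_test_walk,
};

/* Caller holds mmap_sem for read:
 *	walk_page_range(mm, start, end, &demo_walk_ops, &count);
 */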
2020 Apr 22
0
[PATCH hmm 2/5] mm/hmm: make hmm_range_fault return 0 or -1
...@ static int hmm_vma_walk_test(unsigned long start, unsigned long end,
 		return -EFAULT;

 	hmm_pfns_fill(start, end, range, HMM_PFN_ERROR);
-	hmm_vma_walk->last = end;

 	/* Skip this vma and continue processing the next vma. */
 	return 1;
@@ -555,9 +547,7 @@ static const struct mm_walk_ops hmm_walk_ops = {
  * hmm_range_fault - try to fault some address in a virtual address range
  * @range: argument structure
  *
- * Return: the number of valid pages in range->pfns[] (from range start
- * address), which may be zero. On error one of the following status codes
- * can be returned:
+ * Return:...
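With the simplified contract (0 on success, -errno on failure, no more partial page counts), the caller's loop collapses to something like the following — a hypothetical sketch, assuming the single-argument hmm_range_fault() of this period and the mmap_sem naming of the time:

long ret;

again:
	range.notifier_seq = mmu_interval_read_begin(range.notifier);
	down_read(&mm->mmap_sem);
	ret = hmm_range_fault(&range);
	up_read(&mm->mmap_sem);
	if (ret == -EBUSY)
		goto again;	/* raced with an invalidation: just retry */
	if (ret)
		return ret;	/* -EFAULT, -EPERM, ...: the walk failed */
	/* success: the pfn array is valid for the entire range */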
2020 May 01
0
[PATCH hmm v2 2/5] mm/hmm: make hmm_range_fault return 0 or -1
...@ static int hmm_vma_walk_test(unsigned long start, unsigned long end,
 		return -EFAULT;

 	hmm_pfns_fill(start, end, range, HMM_PFN_ERROR);
-	hmm_vma_walk->last = end;

 	/* Skip this vma and continue processing the next vma. */
 	return 1;
@@ -555,9 +547,7 @@ static const struct mm_walk_ops hmm_walk_ops = {
  * hmm_range_fault - try to fault some address in a virtual address range
  * @range: argument structure
  *
- * Return: the number of valid pages in range->pfns[] (from range start
- * address), which may be zero. On error one of the following status codes
- * can be returned:
+ * Returns...
2019 Nov 12
0
[PATCH v3 03/14] mm/hmm: allow hmm_range to be used with a mmu_interval_notifier or hmm_mirror
...(struct hmm_range *range)
 }
 EXPORT_SYMBOL(hmm_range_unregister);

+static bool needs_retry(struct hmm_range *range)
+{
+	if (range->notifier)
+		return mmu_interval_check_retry(range->notifier,
+						range->notifier_seq);
+	return !range->valid;
+}
+
 static const struct mm_walk_ops hmm_walk_ops = {
 	.pud_entry = hmm_vma_walk_pud,
 	.pmd_entry = hmm_vma_walk_pmd,
@@ -898,18 +906,23 @@ long hmm_range_fault(struct hmm_range *range, unsigned int flags)
 	const unsigned long device_vma = VM_IO | VM_PFNMAP | VM_MIXEDMAP;
 	unsigned long start = range->start, end;
 	struct hmm_vma_walk hmm_v...
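The notifier_seq handling above exists to support the collision-retry idiom: fault without holding the driver lock, then validate the sequence number under the lock before committing anything to device page tables. A hypothetical driver-side sketch (mni is an assumed mmu_interval_notifier already set up with mmu_interval_notifier_insert(); driver_lock is an assumed mutex serializing against the invalidate callback; two-argument hmm_range_fault() as in this series):

again:
	range.notifier_seq = mmu_interval_read_begin(&mni);
	down_read(&mm->mmap_sem);
	ret = hmm_range_fault(&range, 0);
	up_read(&mm->mmap_sem);
	if (ret < 0)
		return ret;

	mutex_lock(&driver_lock);
	if (mmu_interval_read_retry(&mni, range.notifier_seq)) {
		mutex_unlock(&driver_lock);
		goto again;	/* an invalidation ran during the walk */
	}
	/* safe: program the device page table from range.pfns[] */
	mutex_unlock(&driver_lock);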
2020 May 01
13
[PATCH hmm v2 0/5] Adjust hmm_range_fault() API
From: Jason Gunthorpe <jgg at mellanox.com> The API is a bit complicated for the uses we actually have, and discussions about simplifying it have come up a number of times. This small series removes the customizable pfn format and simplifies the return code of hmm_range_fault(). All the drivers are adjusted to process the simplified format. I would appreciate tested-by's for the two
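For orientation, once the customizable format is gone drivers read one fixed encoding out of the pfn array. A hypothetical post-fault loop (HMM_PFN_VALID, HMM_PFN_WRITE and hmm_pfn_to_page() are the reworked include/linux/hmm.h names; dev and dev_map_page() are invented):

unsigned long i, npages = (range.end - range.start) >> PAGE_SHIFT;

for (i = 0; i < npages; i++) {
	unsigned long entry = range.hmm_pfns[i];

	if (!(entry & HMM_PFN_VALID))
		continue;	/* hole or non-present entry */
	/* map writable only if the CPU entry is writable */
	dev_map_page(dev, range.start + (i << PAGE_SHIFT),
		     hmm_pfn_to_page(entry),
		     entry & HMM_PFN_WRITE);
}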
2019 Sep 11
6
[PATCH 0/4] HMM tests and minor fixes
These changes are based on Jason's latest hmm branch. Patch 1 was previously posted here [1] but was dropped from the original series. Hopefully, the tests will reduce concerns about edge conditions. I'm sure more tests could usefully be added, but I thought this was a good starting point. [1] https://lore.kernel.org/linux-mm/20190726005650.2566-6-rcampbell at nvidia.com/ Ralph Campbell
2019 Nov 12
0
[PATCH v3 13/14] mm/hmm: remove hmm_mirror and related
..., sizeof(range->hmm));
-}
-EXPORT_SYMBOL(hmm_range_unregister);
-
-static bool needs_retry(struct hmm_range *range)
-{
-	if (range->notifier)
-		return mmu_interval_check_retry(range->notifier,
-						range->notifier_seq);
-	return !range->valid;
-}
-
 static const struct mm_walk_ops hmm_walk_ops = {
 	.pud_entry = hmm_vma_walk_pud,
 	.pmd_entry = hmm_vma_walk_pmd,
@@ -906,20 +638,16 @@ long hmm_range_fault(struct hmm_range *range, unsigned int flags)
 	const unsigned long device_vma = VM_IO | VM_PFNMAP | VM_MIXEDMAP;
 	unsigned long start = range->start, end;
 	struct hmm_vma_walk hmm_v...
2020 Apr 22
11
[PATCH hmm 0/5] Adjust hmm_range_fault() API
From: Jason Gunthorpe <jgg at mellanox.com> The API is a bit complicated for the uses we actually have, and discussions about simplifying it have come up a number of times. This small series removes the customizable pfn format and simplifies the return code of hmm_range_fault(). All the drivers are adjusted to process the simplified format. I would appreciate tested-by's for the two
2019 Nov 12
20
[PATCH hmm v3 00/14] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg at mellanox.com> 8 of the mmu_notifier-using drivers (i915_gem, radeon_mn, umem_odp, hfi1, scif_dma, vhost, gntdev, hmm) are using a common pattern where they only use invalidate_range_start/end and immediately check the invalidating range against some driver data structure to tell if the driver is interested. Half of them use an interval_tree, the others
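The pattern the cover letter describes — take invalidate_range_start() and test the invalidated span against driver-owned data — looks roughly like this hypothetical sketch built on the shared interval-tree primitives the series consolidates (all drv_* names are invented):

#include <linux/mmu_notifier.h>
#include <linux/interval_tree.h>

struct drv_object {
	struct interval_tree_node node;	/* [start, last] in user VA */
	/* ... device mapping state ... */
};

static struct rb_root_cached drv_itree = RB_ROOT_CACHED;

static int drv_invalidate_range_start(struct mmu_notifier *mn,
				      const struct mmu_notifier_range *range)
{
	struct interval_tree_node *it;

	/* Visit only the objects overlapping the invalidated span. */
	for (it = interval_tree_iter_first(&drv_itree, range->start,
					   range->end - 1);
	     it; it = interval_tree_iter_next(it, range->start,
					      range->end - 1)) {
		struct drv_object *obj =
			container_of(it, struct drv_object, node);

		drv_unmap_object(obj);	/* invented: drop device mappings */
	}
	return 0;
}

static const struct mmu_notifier_ops drv_mn_ops = {
	.invalidate_range_start = drv_invalidate_range_start,
};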
2019 Oct 28
32
[PATCH v2 00/15] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg at mellanox.com> 8 of the mmu_notifier-using drivers (i915_gem, radeon_mn, umem_odp, hfi1, scif_dma, vhost, gntdev, hmm) are using a common pattern where they only use invalidate_range_start/end and immediately check the invalidating range against some driver data structure to tell if the driver is interested. Half of them use an interval_tree, the others