Displaying 5 results from an estimated 5 matches for "1004,14".
2019 Aug 07
2
[PATCH 04/15] mm: remove the pgmap field from struct hmm_vma_walk
...@@ static int hmm_vma_walk_pud(pud_t *pudp,
 			pfns[i] = hmm_device_entry_from_pfn(range, pfn) |
 				  cpu_flags;
 		}
-		if (hmm_vma_walk->pgmap) {
-			put_dev_pagemap(hmm_vma_walk->pgmap);
-			hmm_vma_walk->pgmap = NULL;
-		}
 		hmm_vma_walk->last = end;
 		return 0;
 	}
@@ -1026,6 +1004,14 @@ long hmm_range_fault(struct hmm_range *range, unsigned int flags)
 		/* Keep trying while the range is valid. */
 	} while (ret == -EBUSY && range->valid);
 
+	/*
+	 * We do put_dev_pagemap() here so that we can leverage
+	 * get_dev_pagemap() optimization which will not re-ta...
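The lines being removed above implement a pagemap reference cache: get_dev_pagemap() accepts an optional previously returned pagemap and hands it back unchanged while the pfn still falls inside it, so a walker only pays the lookup cost when it crosses into a different pagemap. A minimal sketch of that pattern follows, assuming a simplified walker; walk_state, visit_device_pfn and walk_done are hypothetical names, not the kernel's hmm code.

/*
 * Minimal sketch of the cached-reference pattern the removed lines
 * implement. walk_state stands in for struct hmm_vma_walk; this is
 * not the kernel's actual code.
 */
#include <linux/memremap.h>	/* get_dev_pagemap(), put_dev_pagemap() */

struct walk_state {
	struct dev_pagemap *pgmap;	/* reference cached across pfns */
};

static void visit_device_pfn(struct walk_state *state, unsigned long pfn)
{
	/*
	 * Passing the cached pagemap lets get_dev_pagemap() return it
	 * directly while pfn stays inside it, instead of re-taking a
	 * reference through a fresh lookup.
	 */
	state->pgmap = get_dev_pagemap(pfn, state->pgmap);
}

static void walk_done(struct walk_state *state)
{
	/* Drop the final cached reference, as the removed hunk did. */
	if (state->pgmap) {
		put_dev_pagemap(state->pgmap);
		state->pgmap = NULL;
	}
}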
2019 Aug 14
0
[PATCH 04/15] mm: remove the pgmap field from struct hmm_vma_walk
...@@ static int hmm_vma_walk_pud(pud_t *pudp,
 			pfns[i] = hmm_device_entry_from_pfn(range, pfn) |
 				  cpu_flags;
 		}
-		if (hmm_vma_walk->pgmap) {
-			put_dev_pagemap(hmm_vma_walk->pgmap);
-			hmm_vma_walk->pgmap = NULL;
-		}
 		hmm_vma_walk->last = end;
 		return 0;
 	}
@@ -1026,6 +1004,14 @@ long hmm_range_fault(struct hmm_range *range, unsigned int flags)
 		/* Keep trying while the range is valid. */
 	} while (ret == -EBUSY && range->valid);
 
+	/*
+	 * We do put_dev_pagemap() here so that we can leverage
+	 * get_dev_pagemap() optimization which will not re-ta...
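The `} while (ret == -EBUSY && range->valid)` line in the hunk above shows the retry convention: the walk is repeated as long as the page-table walker reports -EBUSY and no concurrent invalidation has cleared the range's valid flag. A sketch of that shape follows; it is simplified, and do_walk() plus range_state are hypothetical stand-ins for the real walk_page_range() plumbing and struct hmm_range.

/*
 * Sketch of the retry loop visible in the hunk above; not the
 * kernel's actual hmm_range_fault().
 */
struct range_state {
	bool valid;	/* cleared by concurrent invalidation callbacks */
};

static long do_walk(struct range_state *range);	/* hypothetical walker */

static long fault_range(struct range_state *range)
{
	long ret;

	do {
		if (!range->valid)
			return -EBUSY;	/* let the caller re-synchronize */
		ret = do_walk(range);
		/* Keep trying while the range is valid. */
	} while (ret == -EBUSY && range->valid);

	return ret;
}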
2019 Aug 14
2
[PATCH 04/15] mm: remove the pgmap field from struct hmm_vma_walk
On Tue, Aug 13, 2019 at 06:36:33PM -0700, Dan Williams wrote:
> Section alignment constraints somewhat save us here. The only example
> I can think of where a PMD does not contain a uniform pgmap
> association for each pte is the case where the pgmap overlaps normal
> DRAM, i.e. shares the same 'struct memory_section' for a given span.
> Otherwise, distinct pgmaps arrange to manage
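The uniformity assumption Dan describes can be spelled out as a check: because device pagemaps are section-aligned, every pfn spanned by a single PMD normally resolves to the same dev_pagemap, so one lookup per PMD can suffice. A sketch of such a check follows, assuming the kernel's get_dev_pagemap()/put_dev_pagemap() interface; pmd_covers_one_pgmap() is a hypothetical helper, not an existing API.

/*
 * Sketch of the per-PMD uniformity assumption discussed above: with
 * section-aligned pagemaps, all pfns covered by one PMD normally
 * resolve to the same dev_pagemap. Hypothetical helper only.
 */
#include <linux/memremap.h>

static bool pmd_covers_one_pgmap(unsigned long first_pfn,
				 unsigned long npages)
{
	struct dev_pagemap *first, *cur;
	unsigned long i;

	first = get_dev_pagemap(first_pfn, NULL);
	if (!first)
		return false;

	cur = first;
	for (i = 1; i < npages; i++) {
		/*
		 * get_dev_pagemap() returns the cached pagemap unchanged
		 * while the pfn stays inside it; otherwise it drops that
		 * reference and looks up the new owner.
		 */
		cur = get_dev_pagemap(first_pfn + i, cur);
		if (cur != first) {
			if (cur)
				put_dev_pagemap(cur);
			return false;
		}
	}
	put_dev_pagemap(cur);
	return true;
}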
2019 Aug 07
0
[PATCH 04/15] mm: remove the pgmap field from struct hmm_vma_walk
...if (hmm_vma_walk->pgmap) {
> -		put_dev_pagemap(hmm_vma_walk->pgmap);
> -		hmm_vma_walk->pgmap = NULL;
> -	}
> 	hmm_vma_walk->last = end;
> 	return 0;
> 	}
> @@ -1026,6 +1004,14 @@ long hmm_range_fault(struct hmm_range *range, unsigned int flags)
> 		/* Keep trying while the range is valid. */
> 	} while (ret == -EBUSY && range->valid);
>
> +	/*
> +	 * We do put_dev_pagemap() here...
2019 Aug 06
24
hmm cleanups, v2
Hi Jérôme, Ben, Felix and Jason,
below is a series against the hmm tree which cleans up various minor
bits and allows HMM_MIRROR to be built on all architectures.
Diffstat:
11 files changed, 94 insertions(+), 210 deletions(-)
A git tree is also available at:
git://git.infradead.org/users/hch/misc.git hmm-cleanups.2
Gitweb: