search for: config_transparent_hugepage

Displaying 20 results from an estimated 81 matches for "config_transparent_hugepage".

2019 Jul 30
0
[PATCH 11/13] mm: cleanup the hmm_vma_handle_pmd stub
Stub out the whole function when CONFIG_TRANSPARENT_HUGEPAGE is not set to make the function easier to read. Signed-off-by: Christoph Hellwig <hch at lst.de> --- mm/hmm.c | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/mm/hmm.c b/mm/hmm.c index 4d3bd41b6522..f4e90ea5779f 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@ -...
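The cleanup follows the usual Kconfig stub pattern: compile the real body only when CONFIG_TRANSPARENT_HUGEPAGE is set, and keep a trivial stub otherwise so the surrounding page-walk code builds unchanged. A stand-alone sketch of that shape (handle_pmd and its body are illustrative, not the actual mm/hmm.c code):

#include <stdio.h>

/* Stand-in for the kernel's CONFIG_TRANSPARENT_HUGEPAGE Kconfig symbol. */
#define CONFIG_TRANSPARENT_HUGEPAGE 1

#ifdef CONFIG_TRANSPARENT_HUGEPAGE
/* Real implementation, only meaningful when THP support is compiled in. */
static int handle_pmd(unsigned long addr)
{
	printf("handling huge PMD at %#lx\n", addr);
	return 0;
}
#else
/* Stub: keeps callers compiling; a walker never reaches it without THP. */
static int handle_pmd(unsigned long addr)
{
	(void)addr;
	return -1;
}
#endif

int main(void)
{
	return handle_pmd(0x200000UL);
}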
2019 Jul 30
1
[PATCH 11/13] mm: cleanup the hmm_vma_handle_pmd stub
On Tue, Jul 30, 2019 at 08:52:01AM +0300, Christoph Hellwig wrote: > Stub out the whole function when CONFIG_TRANSPARENT_HUGEPAGE is not set > to make the function easier to read. > > Signed-off-by: Christoph Hellwig <hch at lst.de> > mm/hmm.c | 18 +++++++++--------- > 1 file changed, 9 insertions(+), 9 deletions(-) > > diff --git a/mm/hmm.c b/mm/hmm.c > index 4d3bd41b6522..f4e90ea5779f 10064...
2019 Aug 06
0
[PATCH 11/15] mm: cleanup the hmm_vma_handle_pmd stub
Stub out the whole function when CONFIG_TRANSPARENT_HUGEPAGE is not set to make the function easier to read. Signed-off-by: Christoph Hellwig <hch at lst.de> --- mm/hmm.c | 18 ++++++++---------- 1 file changed, 8 insertions(+), 10 deletions(-) diff --git a/mm/hmm.c b/mm/hmm.c index 5e7afe685213..4aa7135f1094 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@...
2020 Jun 22
1
[PATCH 14/16] mm/thp: add THP allocation helper
...ugepage); > +#endif > + > static void insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr, > pmd_t *pmd, pfn_t pfn, pgprot_t prot, bool write, > pgtable_t pgtable) > -- > 2.20.1 Why use CONFIG_ARCH_ENABLE_THP_MIGRATION to guard the THP allocator helper? Shouldn't CONFIG_TRANSPARENT_HUGEPAGE be used? Also the helper still allocates a THP even if transparent_hugepage_enabled(vma) is false, which is wrong, right? -- Best Regards, Yan Zi
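For reference, the shape Zi Yan seems to be asking for would guard the helper with CONFIG_TRANSPARENT_HUGEPAGE and respect the THP policy at runtime. The sketch below only illustrates that request, assuming the in-tree alloc_pages_vma()/prep_transhuge_page() helpers of that era; it does not reproduce the posted patch:

#ifdef CONFIG_TRANSPARENT_HUGEPAGE
struct page *alloc_transhugepage(struct vm_area_struct *vma, unsigned long addr)
{
	struct page *page;

	/* Honour the per-VMA/global THP policy before allocating anything. */
	if (!transparent_hugepage_enabled(vma))
		return NULL;

	page = alloc_pages_vma(GFP_TRANSHUGE, HPAGE_PMD_ORDER, vma,
			       addr & HPAGE_PMD_MASK, numa_node_id(), true);
	if (page)
		prep_transhuge_page(page);	/* initialise compound THP state */
	return page;
}
#else
static inline struct page *alloc_transhugepage(struct vm_area_struct *vma,
					       unsigned long addr)
{
	return NULL;	/* callers fall back to order-0 pages */
}
#endif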
2019 Jul 30
29
hmm_range_fault related fixes and legacy API removal v3
Hi Jérôme, Ben, Felix and Jason, below is a series against the hmm tree which cleans up various minor bits and allows HMM_MIRROR to be built on all architectures. Diffstat: 7 files changed, 81 insertions(+), 171 deletions(-) A git tree is also available at: git://git.infradead.org/users/hch/misc.git hmm-cleanups Gitweb:
2020 Jun 22
2
[PATCH 13/16] mm: support THP migration to device private memory
...n entry, then it should be handled and the > VM_BUG_ON() should be that thp_migration_supported() is true > (or maybe remove the VM_BUG_ON?). I disagree. A device private entry is independent of a PMD migration entry, since a device private entry is just a swap entry, which is available whenever CONFIG_TRANSPARENT_HUGEPAGE is set. So for architectures that support THP but not THP migration (like ARM64), your code should still work. I would suggest you check all the uses of is_swap_pmd() and make sure the code can handle is_device_private_entry(). For new device private code, you might need to guard it either statically or dy...
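The suggestion above boils down to auditing every is_swap_pmd() caller so that it can see device private entries as well as migration entries. A rough sketch of such a check, with locking, pmd_trans_huge handling and retries omitted (illustrative only):

	if (is_swap_pmd(pmd)) {
		swp_entry_t entry = pmd_to_swp_entry(pmd);

		if (is_migration_entry(entry)) {
			/* a THP is being migrated: wait and retry the walk */
		} else if (is_device_private_entry(entry)) {
			/* the huge page lives in device memory: fault it back
			 * or hand it to the caller, as appropriate */
		}
	}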
2019 Aug 06
24
hmm cleanups, v2
Hi Jérôme, Ben, Felix and Jason, below is a series against the hmm tree which cleans up various minor bits and allows HMM_MIRROR to be built on all architectures. Diffstat: 11 files changed, 94 insertions(+), 210 deletions(-) A git tree is also available at: git://git.infradead.org/users/hch/misc.git hmm-cleanups.2 Gitweb:
2020 Jun 22
2
[PATCH 13/16] mm: support THP migration to device private memory
...>>> VM_BUG_ON() should be that thp_migration_supported() is true >>> (or maybe remove the VM_BUG_ON?). >> >> I disagree. A device private entry is independent of a PMD migration entry, since a device private >> entry is just a swap entry, which is available when CONFIG_TRANSPARENT_HUGEPAGE. So for architectures >> support THP but not THP migration (like ARM64), your code should still work. > > I'll fix this up for v2 and you can double check me. Sure. > >> I would suggest you to check all the use of is_swap_pmd() and make sure the code >> can handle i...
2019 Aug 07
2
[PATCH 04/15] mm: remove the pgmap field from struct hmm_vma_walk
...,6 @@ EXPORT_SYMBOL(hmm_mirror_unregister); > > struct hmm_vma_walk { > struct hmm_range *range; > - struct dev_pagemap *pgmap; > unsigned long last; > unsigned int flags; > }; > @@ -475,6 +474,7 @@ static int hmm_vma_handle_pmd(struct mm_walk *walk, > #ifdef CONFIG_TRANSPARENT_HUGEPAGE > struct hmm_vma_walk *hmm_vma_walk = walk->private; > struct hmm_range *range = hmm_vma_walk->range; > + struct dev_pagemap *pgmap = NULL; > unsigned long pfn, npages, i; > bool fault, write_fault; > uint64_t cpu_flags; > @@ -490,17 +490,14 @@ static int hmm_vm...
2020 May 08
0
[PATCH 4/6] mm/hmm: add output flag for compound page mapping
...g pmd_to_hmm_pfn_flags(struct hmm_range *range, { if (pmd_protnone(pmd)) return 0; - return pmd_write(pmd) ? (HMM_PFN_VALID | HMM_PFN_WRITE) : HMM_PFN_VALID; + return pmd_write(pmd) ? + (HMM_PFN_VALID | HMM_PFN_COMPOUND | HMM_PFN_WRITE) : + (HMM_PFN_VALID | HMM_PFN_COMPOUND); } #ifdef CONFIG_TRANSPARENT_HUGEPAGE @@ -389,7 +391,9 @@ static inline unsigned long pud_to_hmm_pfn_flags(struct hmm_range *range, { if (!pud_present(pud)) return 0; - return pud_write(pud) ? (HMM_PFN_VALID | HMM_PFN_WRITE) : HMM_PFN_VALID; + return pud_write(pud) ? + (HMM_PFN_VALID | HMM_PFN_COMPOUND | HMM_PFN_WRITE) : + (H...
2020 Jun 19
0
[PATCH 09/16] mm/hmm: add output flag for compound page mapping
...g pmd_to_hmm_pfn_flags(struct hmm_range *range, { if (pmd_protnone(pmd)) return 0; - return pmd_write(pmd) ? (HMM_PFN_VALID | HMM_PFN_WRITE) : HMM_PFN_VALID; + return pmd_write(pmd) ? + (HMM_PFN_VALID | HMM_PFN_COMPOUND | HMM_PFN_WRITE) : + (HMM_PFN_VALID | HMM_PFN_COMPOUND); } #ifdef CONFIG_TRANSPARENT_HUGEPAGE @@ -389,7 +391,9 @@ static inline unsigned long pud_to_hmm_pfn_flags(struct hmm_range *range, { if (!pud_present(pud)) return 0; - return pud_write(pud) ? (HMM_PFN_VALID | HMM_PFN_WRITE) : HMM_PFN_VALID; + return pud_write(pud) ? + (HMM_PFN_VALID | HMM_PFN_COMPOUND | HMM_PFN_WRITE) : + (H...
2020 Nov 06
0
[PATCH v3 4/6] mm/thp: add THP allocation helper
...+++ b/include/linux/gfp.h @@ -564,6 +564,16 @@ static inline struct page *alloc_pages(gfp_t gfp_mask, unsigned int order) #define alloc_page(gfp_mask) alloc_pages(gfp_mask, 0) #define alloc_page_vma(gfp_mask, vma, addr) \ alloc_pages_vma(gfp_mask, 0, vma, addr, numa_node_id(), false) +#ifdef CONFIG_TRANSPARENT_HUGEPAGE +extern struct page *alloc_transhugepage(struct vm_area_struct *vma, + unsigned long addr); +#else +static inline struct page *alloc_transhugepage(struct vm_area_struct *vma, + unsigned long addr) +{ + return NULL; +} +#endif extern unsigned long __get_free_pages(gfp_t gfp_mask, unsigne...
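Because the !CONFIG_TRANSPARENT_HUGEPAGE stub above simply returns NULL, callers are expected to fall back to ordinary pages. A hypothetical caller, shown only to illustrate the intended calling convention (names and gfp flags are not from the patch):

	/* Try a THP first; fall back to a single base page if that fails
	 * or if THP support is not compiled in. */
	unsigned int order = HPAGE_PMD_ORDER;
	struct page *page = alloc_transhugepage(vma, addr);

	if (!page) {
		page = alloc_page_vma(GFP_HIGHUSER_MOVABLE, vma, addr);
		order = 0;
	}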
2020 Mar 16
0
[PATCH 4/4] mm: check the device private page owner in hmm_range_fault
...pfn_shift; + void *dev_private_owner; }; /* diff --git a/mm/hmm.c b/mm/hmm.c index cfad65f6a67b..b75b3750e03d 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@ -216,6 +216,14 @@ int hmm_vma_handle_pmd(struct mm_walk *walk, unsigned long addr, unsigned long end, uint64_t *pfns, pmd_t pmd); #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ +static inline bool hmm_is_device_private_entry(struct hmm_range *range, + swp_entry_t entry) +{ + return is_device_private_entry(entry) && + device_private_entry_to_page(entry)->pgmap->owner == + range->dev_private_owner; +} + static inline uint64_t pte_to_hmm_pfn_flags(s...
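On the driver side, the new dev_private_owner field is meant to carry the same cookie the driver stored as the owner of its device private pagemap, so hmm_range_fault() only treats that driver's own pages as known. A hypothetical pairing, purely illustrative (my_driver_cookie and the surrounding setup are made up):

	/* When creating the device private pagemap: tag the memory as ours. */
	pgmap->type = MEMORY_DEVICE_PRIVATE;
	pgmap->owner = my_driver_cookie;

	/* When walking user memory: only pages tagged with the same cookie
	 * are recognized as our device private memory. */
	struct hmm_range range = {
		.start = start,
		.end = end,
		.pfns = pfns,
		.dev_private_owner = my_driver_cookie,
	};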
2020 Mar 20
2
[PATCH 4/4] mm: check the device private page owner in hmm_range_fault
...PM +0100, Christoph Hellwig wrote: > diff --git a/mm/hmm.c b/mm/hmm.c > index cfad65f6a67b..b75b3750e03d 100644 > +++ b/mm/hmm.c > @@ -216,6 +216,14 @@ int hmm_vma_handle_pmd(struct mm_walk *walk, unsigned long addr, > unsigned long end, uint64_t *pfns, pmd_t pmd); > #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ > > +static inline bool hmm_is_device_private_entry(struct hmm_range *range, > + swp_entry_t entry) > +{ > + return is_device_private_entry(entry) && > + device_private_entry_to_page(entry)->pgmap->owner == > + range->dev_private_owner; > +} Thinkin...
2020 Sep 01
0
[PATCH 3/3] drm/ttm: remove io_reserve_lru handling v2
...&ctx)) { - ret = VM_FAULT_OOM; - goto out_io_unlock; - } + if (ttm_tt_populate(bo->ttm, &ctx)) + return VM_FAULT_OOM; } else { /* Iomem should not be marked encrypted */ prot = pgprot_decrypted(prot); } /* We don't prefault on huge faults. Yet. */ - if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && fault_page_size != 1) { - ret = ttm_bo_vm_insert_huge(vmf, bo, page_offset, - fault_page_size, prot); - goto out_io_unlock; - } + if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) && fault_page_size != 1) + return ttm_bo_vm_insert_huge(vmf, bo, page_offset, + fault...
2020 Jun 30
0
[PATCH v2 2/5] mm/hmm: add output flags for PMD/PUD page mapping
...signed long pmd_to_hmm_pfn_flags(struct hmm_range *range, { if (pmd_protnone(pmd)) return 0; - return pmd_write(pmd) ? (HMM_PFN_VALID | HMM_PFN_WRITE) : HMM_PFN_VALID; + return pmd_write(pmd) ? + (HMM_PFN_VALID | HMM_PFN_PMD | HMM_PFN_WRITE) : + (HMM_PFN_VALID | HMM_PFN_PMD); } #ifdef CONFIG_TRANSPARENT_HUGEPAGE @@ -389,7 +391,9 @@ static inline unsigned long pud_to_hmm_pfn_flags(struct hmm_range *range, { if (!pud_present(pud)) return 0; - return pud_write(pud) ? (HMM_PFN_VALID | HMM_PFN_WRITE) : HMM_PFN_VALID; + return pud_write(pud) ? + (HMM_PFN_VALID | HMM_PFN_PUD | HMM_PFN_WRITE) : + (HMM_PF...
2020 Jul 01
0
[PATCH v3 2/5] mm/hmm: add hmm_mapping order
...e *range, { if (pmd_protnone(pmd)) return 0; - return pmd_write(pmd) ? (HMM_PFN_VALID | HMM_PFN_WRITE) : HMM_PFN_VALID; + return ((unsigned long)(PMD_SHIFT - PAGE_SHIFT) << + HMM_PFN_ORDER_SHIFT) | + pmd_write(pmd) ? (HMM_PFN_VALID | HMM_PFN_WRITE) : + HMM_PFN_VALID; } #ifdef CONFIG_TRANSPARENT_HUGEPAGE @@ -389,7 +392,10 @@ static inline unsigned long pud_to_hmm_pfn_flags(struct hmm_range *range, { if (!pud_present(pud)) return 0; - return pud_write(pud) ? (HMM_PFN_VALID | HMM_PFN_WRITE) : HMM_PFN_VALID; + return ((unsigned long)(PUD_SHIFT - PAGE_SHIFT) << + HMM_PFN_ORDER_SHIFT) | +...
2019 Aug 06
0
[PATCH 04/15] mm: remove the pgmap field from struct hmm_vma_walk
...--- a/mm/hmm.c +++ b/mm/hmm.c @@ -278,7 +278,6 @@ EXPORT_SYMBOL(hmm_mirror_unregister); struct hmm_vma_walk { struct hmm_range *range; - struct dev_pagemap *pgmap; unsigned long last; unsigned int flags; }; @@ -475,6 +474,7 @@ static int hmm_vma_handle_pmd(struct mm_walk *walk, #ifdef CONFIG_TRANSPARENT_HUGEPAGE struct hmm_vma_walk *hmm_vma_walk = walk->private; struct hmm_range *range = hmm_vma_walk->range; + struct dev_pagemap *pgmap = NULL; unsigned long pfn, npages, i; bool fault, write_fault; uint64_t cpu_flags; @@ -490,17 +490,14 @@ static int hmm_vma_handle_pmd(struct mm_walk *walk,...
2019 Aug 07
0
[PATCH 04/15] mm: remove the pgmap field from struct hmm_vma_walk
...; > struct hmm_range *range; > > - struct dev_pagemap *pgmap; > > unsigned long last; > > unsigned int flags; > > }; > > @@ -475,6 +474,7 @@ static int hmm_vma_handle_pmd(struct mm_walk *walk, > > #ifdef CONFIG_TRANSPARENT_HUGEPAGE > > struct hmm_vma_walk *hmm_vma_walk = walk->private; > > struct hmm_range *range = hmm_vma_walk->range; > > + struct dev_pagemap *pgmap = NULL; > > unsigned long pfn, npages, i; > > bool fault, write_fault; > > uint64_t c...
2020 Jun 22
0
[PATCH 13/16] mm: support THP migration to device private memory
...be handled and the >> VM_BUG_ON() should be that thp_migration_supported() is true >> (or maybe remove the VM_BUG_ON?). > > I disagree. A device private entry is independent of a PMD migration entry, since a device private > entry is just a swap entry, which is available when CONFIG_TRANSPARENT_HUGEPAGE. So for architectures > support THP but not THP migration (like ARM64), your code should still work. I'll fix this up for v2 and you can double check me. > I would suggest you to check all the use of is_swap_pmd() and make sure the code > can handle is_device_private_entry(). OK. &...