search for: is_huge_zero_pmd

Displaying 9 results from an estimated 9 matches for "is_huge_zero_pmd".

2020 Sep 02
1
[PATCH v2 1/7] mm/thp: fix __split_huge_pmd_locked() for migration PMD
> ...to
> think that __split_huge_pmd_locked() can handle splitting a migrating PMD.
> However, the code always increments the page->_mapcount and adjusts the
> memory control group accounting assuming the page is mapped.
> Also, if the PMD entry is a migration PMD entry, the call to
> is_huge_zero_pmd(*pmd) is incorrect because it calls pmd_pfn(pmd) instead
> of migration_entry_to_pfn(pmd_to_swp_entry(pmd)).
> Fix these problems by checking for a PMD migration entry.
>
> Signed-off-by: Ralph Campbell <rcampbell at nvidia.com>

Thanks for the fix. You can add
Reviewed-by: Zi Yan...
2020 Sep 02
0
[PATCH v2 1/7] mm/thp: fix __split_huge_pmd_locked() for migration PMD
...pmd_locked() leading one to think that __split_huge_pmd_locked() can handle splitting a migrating PMD. However, the code always increments the page->_mapcount and adjusts the memory control group accounting assuming the page is mapped. Also, if the PMD entry is a migration PMD entry, the call to is_huge_zero_pmd(*pmd) is incorrect because it calls pmd_pfn(pmd) instead of migration_entry_to_pfn(pmd_to_swp_entry(pmd)). Fix these problems by checking for a PMD migration entry.

Signed-off-by: Ralph Campbell <rcampbell at nvidia.com>
---
 mm/huge_memory.c | 42 +++++++++++++++++++++++-------------------...
2020 Sep 03
1
[PATCH v3] mm/thp: fix __split_huge_pmd_locked() for migration PMD
...pmd_locked() leading one to think that __split_huge_pmd_locked() can handle splitting a migrating PMD. However, the code always increments the page->_mapcount and adjusts the memory control group accounting assuming the page is mapped. Also, if the PMD entry is a migration PMD entry, the call to is_huge_zero_pmd(*pmd) is incorrect because it calls pmd_pfn(pmd) instead of migration_entry_to_pfn(pmd_to_swp_entry(pmd)). Fix these problems by checking for a PMD migration entry.

Fixes: 84c3fc4e9c56 ("mm: thp: check pmd migration entry in common path")
cc: stable at vger.kernel.org # 4.14+
Signed-off-...
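The pattern behind the fix described in these entries, in kernel style: a minimal sketch assuming the <linux/swapops.h> and <linux/huge_mm.h> helpers named above, not the verbatim hunk from the patch.

	/*
	 * Sketch only: a migration PMD is a swap-style (non-present) entry,
	 * so the page must be recovered through the swap entry rather than
	 * via pmd_page()/pmd_pfn().
	 */
	bool pmd_migration = is_pmd_migration_entry(*pmd);
	struct page *page;

	if (unlikely(pmd_migration)) {
		swp_entry_t entry = pmd_to_swp_entry(*pmd);

		/*
		 * is_huge_zero_pmd(*pmd) must not be consulted on this
		 * path: it would read pmd_pfn() from a non-present entry
		 * and decode garbage.
		 */
		page = pfn_to_page(swp_offset(entry));
	} else {
		if (is_huge_zero_pmd(*pmd)) {
			/* present huge zero page: dedicated split path */
			__split_huge_zero_page_pmd(vma, haddr, pmd);
			return;
		}
		page = pmd_page(*pmd);
	}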
2020 Sep 02
10
[PATCH v2 0/7] mm/hmm/nouveau: add THP migration to migrate_vma_*
This series adds support for transparent huge page migration to migrate_vma_*() and adds nouveau SVM and HMM selftests as consumers. An earlier version was posted as [1]. This version now supports splitting a THP midway in the migration process, which led to a number of changes. The patches apply cleanly to the current linux-mm tree. Since there are a couple of patches in linux-mm from Dan
2020 Nov 06
0
[PATCH v3 3/6] mm: support THP migration to device private memory
..._struct *vma, pmd_t *pmd,
+			unsigned long address, struct page *page);
 static inline int split_huge_page(struct page *page)
 {
 	return split_huge_page_to_list(page, NULL);
@@ -456,6 +458,11 @@ static inline bool is_huge_zero_page(struct page *page)
 	return false;
 }

+static inline bool is_huge_zero_pmd(pmd_t pmd)
+{
+	return false;
+}
+
 static inline bool is_huge_zero_pud(pud_t pud)
 {
 	return false;
diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index 86c6c368ce9b..9b39a896af37 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -87,6 +87,15 @@ struct dev_...
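The hunk above adds the usual CONFIG_TRANSPARENT_HUGEPAGE fallback stub to include/linux/huge_mm.h. A condensed sketch of that convention (illustrative layout, not the exact header):

	#ifdef CONFIG_TRANSPARENT_HUGEPAGE
	/* real test against the huge zero page lives here */
	bool is_huge_zero_pmd(pmd_t pmd);
	#else
	/*
	 * THP compiled out: a huge zero PMD can never exist, so an inline
	 * stub lets callers test unconditionally, with no #ifdef at each
	 * call site.
	 */
	static inline bool is_huge_zero_pmd(pmd_t pmd)
	{
		return false;
	}
	#endif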
2020 Jun 19
0
[PATCH 13/16] mm: support THP migration to device private memory
...ntry), vma);
+	}
+
+	ptl = pmd_lock(mm, pmdp);
+
+	if (check_stable_address_space(mm))
+		goto unlock_abort;
+
+	/*
+	 * Check for userfaultfd but do not deliver the fault. Instead,
+	 * just back off.
+	 */
+	if (userfaultfd_missing(vma))
+		goto unlock_abort;
+
+	if (pmd_present(*pmdp)) {
+		if (!is_huge_zero_pmd(*pmdp))
+			goto unlock_abort;
+		flush = true;
+	} else if (!pmd_none(*pmdp))
+		goto unlock_abort;
+
+	get_page(page);
+	page_add_new_anon_rmap(page, vma, haddr, true);
+	if (!is_device_private_page(page))
+		lru_cache_add_active_or_unevictable(page, vma);
+	if (flush) {
+		pte_free(mm, pgtable);...
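What the pmd_present() gate in this excerpt encodes: the only pre-existing present mapping the insert path will overwrite is the huge zero page, which is read-only and shared and can therefore be dropped without losing data. Restated with comments (the comments are an editorial gloss, not from the patch):

	if (pmd_present(*pmdp)) {
		if (!is_huge_zero_pmd(*pmdp))
			goto unlock_abort;	/* real data mapped: never replace */
		flush = true;			/* zero page: flush TLB after replacing */
	} else if (!pmd_none(*pmdp))
		goto unlock_abort;		/* some swap/migration entry already there */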
2020 Nov 06
12
[PATCH v3 0/6] mm/hmm/nouveau: add THP migration to migrate_vma_*
This series adds support for transparent huge page migration to migrate_vma_*() and adds nouveau SVM and HMM selftests as consumers. Earlier versions were posted as [1] and [2]. The patches apply cleanly to the linux-mm 5.10.0-rc2 tree. There are a lot of other THP patches being posted. I don't think there are any semantic conflicts, but there may be some merge conflicts depending on
2020 Jun 21
2
[PATCH 13/16] mm: support THP migration to device private memory
...check_stable_address_space(mm))
> +		goto unlock_abort;
> +
> +	/*
> +	 * Check for userfaultfd but do not deliver the fault. Instead,
> +	 * just back off.
> +	 */
> +	if (userfaultfd_missing(vma))
> +		goto unlock_abort;
> +
> +	if (pmd_present(*pmdp)) {
> +		if (!is_huge_zero_pmd(*pmdp))
> +			goto unlock_abort;
> +		flush = true;
> +	} else if (!pmd_none(*pmdp))
> +		goto unlock_abort;
> +
> +	get_page(page);
> +	page_add_new_anon_rmap(page, vma, haddr, true);
> +	if (!is_device_private_page(page))
> +		lru_cache_add_active_or_unevictable(page, v...
2020 Jun 19
22
[PATCH 00/16] mm/hmm/nouveau: THP mapping and migration
These patches apply to linux-5.8.0-rc1. Patches 1-3 should probably go into 5.8; the others can be queued for 5.9. Patches 4-6 improve the HMM selftests. Patches 7-8 prepare nouveau for the meat of this series, which adds support and testing for compound page mapping of system memory (patches 9-11) and compound page migration to device private memory (patches 12-16). Since these changes are split