search for: orig_pmd

Displaying 12 results from an estimated 12 matches for "orig_pmd".

2020 Jun 21
2
[PATCH 13/16] mm: support THP migration to device private memory
...100644 > --- a/mm/huge_memory.c > +++ b/mm/huge_memory.c > @@ -1663,23 +1663,35 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma, > } else { > struct page *page = NULL; > int flush_needed = 1; > + bool is_anon = false; > > if (pmd_present(orig_pmd)) { > page = pmd_page(orig_pmd); > + is_anon = PageAnon(page); > page_remove_rmap(page, true); > VM_BUG_ON_PAGE(page_mapcount(page) < 0, page); > VM_BUG_ON_PAGE(!PageHead(page), page); > } else if (thp_migration_supported()) { > swp_entry_t entry; ...
2020 Jun 22
2
[PATCH 13/16] mm: support THP migration to device private memory
....c >>> @@ -1663,23 +1663,35 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma, >>> } else { >>> struct page *page = NULL; >>> int flush_needed = 1; >>> + bool is_anon = false; >>> >>> if (pmd_present(orig_pmd)) { >>> page = pmd_page(orig_pmd); >>> + is_anon = PageAnon(page); >>> page_remove_rmap(page, true); >>> VM_BUG_ON_PAGE(page_mapcount(page) < 0, page); >>> VM_BUG_ON_PAGE(!PageHead(page), page); >>> } else if (thp_mi...
2020 Jun 22
2
[PATCH 13/16] mm: support THP migration to device private memory
..._pmd(struct mmu_gather *tlb, struct vm_area_struct *vma, >>>>> } else { >>>>> struct page *page = NULL; >>>>> int flush_needed = 1; >>>>> + bool is_anon = false; >>>>> >>>>> if (pmd_present(orig_pmd)) { >>>>> page = pmd_page(orig_pmd); >>>>> + is_anon = PageAnon(page); >>>>> page_remove_rmap(page, true); >>>>> VM_BUG_ON_PAGE(page_mapcount(page) < 0, page); >>>>> VM_BUG_ON_PAGE(!PageHead(page...
2020 Jun 22
0
[PATCH 13/16] mm: support THP migration to device private memory
...>> +++ b/mm/huge_memory.c >> @@ -1663,23 +1663,35 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma, >> } else { >> struct page *page = NULL; >> int flush_needed = 1; >> + bool is_anon = false; >> >> if (pmd_present(orig_pmd)) { >> page = pmd_page(orig_pmd); >> + is_anon = PageAnon(page); >> page_remove_rmap(page, true); >> VM_BUG_ON_PAGE(page_mapcount(page) < 0, page); >> VM_BUG_ON_PAGE(!PageHead(page), page); >> } else if (thp_migration_supported()) { ...
2020 Jun 19
0
[PATCH 13/16] mm: support THP migration to device private memory
..._memory.c index 78c84bee7e29..25d95f7b1e98 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -1663,23 +1663,35 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma, } else { struct page *page = NULL; int flush_needed = 1; + bool is_anon = false; if (pmd_present(orig_pmd)) { page = pmd_page(orig_pmd); + is_anon = PageAnon(page); page_remove_rmap(page, true); VM_BUG_ON_PAGE(page_mapcount(page) < 0, page); VM_BUG_ON_PAGE(!PageHead(page), page); } else if (thp_migration_supported()) { swp_entry_t entry; - VM_BUG_ON(!is_pmd_migration_ent...
2020 Jun 22
0
[PATCH 13/16] mm: support THP migration to device private memory
...+1663,35 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma, >>>> } else { >>>> struct page *page = NULL; >>>> int flush_needed = 1; >>>> + bool is_anon = false; >>>> >>>> if (pmd_present(orig_pmd)) { >>>> page = pmd_page(orig_pmd); >>>> + is_anon = PageAnon(page); >>>> page_remove_rmap(page, true); >>>> VM_BUG_ON_PAGE(page_mapcount(page) < 0, page); >>>> VM_BUG_ON_PAGE(!PageHead(page), page); >>...
2020 Jun 22
0
[PATCH 13/16] mm: support THP migration to device private memory
...} else { > >>>>> struct page *page = NULL; > >>>>> int flush_needed = 1; > >>>>> + bool is_anon = false; > >>>>> > >>>>> if (pmd_present(orig_pmd)) { > >>>>> page = pmd_page(orig_pmd); > >>>>> + is_anon = PageAnon(page); > >>>>> page_remove_rmap(page, true); > >>>>> VM_BUG_ON_PA...
2020 Nov 06
0
[PATCH v3 3/6] mm: support THP migration to device private memory
..._memory.c index b4141f12ff31..a073e66d0ee2 100644 --- a/mm/huge_memory.c +++ b/mm/huge_memory.c @@ -1682,23 +1682,35 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma, } else { struct page *page = NULL; int flush_needed = 1; + bool is_anon = false; if (pmd_present(orig_pmd)) { page = pmd_page(orig_pmd); + is_anon = PageAnon(page); page_remove_rmap(page, true); VM_BUG_ON_PAGE(page_mapcount(page) < 0, page); VM_BUG_ON_PAGE(!PageHead(page), page); } else if (thp_migration_supported()) { swp_entry_t entry; - VM_BUG_ON(!is_pmd_migration_ent...
2020 Jun 22
2
[PATCH 13/16] mm: support THP migration to device private memory
...> >>>>> struct page *page = NULL; > > >>>>> int flush_needed = 1; > > >>>>> + bool is_anon = false; > > >>>>> > > >>>>> if (pmd_present(orig_pmd)) { > > >>>>> page = pmd_page(orig_pmd); > > >>>>> + is_anon = PageAnon(page); > > >>>>> page_remove_rmap(page, true); > > >>>>>...
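Every excerpt above quotes the same mm/huge_memory.c hunk from this patch, just at different reply depths and cut off at different points (the Nov 06 v3 posting carries it at @@ -1682,23 +1682,35 @@ instead). Reassembled from the fragments that are actually visible, it reads roughly as follows; this is a reconstruction for readability, not the authoritative patch, and the tail of the hunk is truncated in every result:

@@ -1663,23 +1663,35 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	} else {
 		struct page *page = NULL;
 		int flush_needed = 1;
+		bool is_anon = false;

 		if (pmd_present(orig_pmd)) {
 			page = pmd_page(orig_pmd);
+			is_anon = PageAnon(page);
 			page_remove_rmap(page, true);
 			VM_BUG_ON_PAGE(page_mapcount(page) < 0, page);
 			VM_BUG_ON_PAGE(!PageHead(page), page);
 		} else if (thp_migration_supported()) {
 			swp_entry_t entry;

[the rest of the hunk, beginning with the removal of a VM_BUG_ON(!is_pmd_migration_ent... assertion, is cut off in every excerpt]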
2020 Jun 19
22
[PATCH 00/16] mm/hmm/nouveau: THP mapping and migration
These patches apply to linux-5.8.0-rc1. Patches 1-3 should probably go into 5.8; the others can be queued for 5.9. Patches 4-6 improve the HMM self tests. Patches 7-8 prepare nouveau for the meat of this series, which adds support and testing for compound page mapping of system memory (patches 9-11) and compound page migration to device private memory (patches 12-16). Since these changes are split
2020 Nov 06
12
[PATCH v3 0/6] mm/hmm/nouveau: add THP migration to migrate_vma_*
This series adds support for transparent huge page migration to migrate_vma_*() and adds nouveau SVM and HMM selftests as consumers. Earlier versions were posted [1] and [2]. The patches apply cleanly to the linux-mm 5.10.0-rc2 tree. There are a lot of other THP patches being posted; I don't think there are any semantic conflicts, but there may be some merge conflicts depending on
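For orientation, the migrate_vma_*() interface that this series teaches to handle transparent huge pages is driven from a device driver roughly as sketched below. This is a minimal illustration of the pre-existing API (struct migrate_vma fields as of roughly v5.9), not code from the series; example_migrate_range() and the owner argument are hypothetical placeholders.

#include <linux/migrate.h>
#include <linux/mm.h>
#include <linux/slab.h>

/* Hypothetical driver helper: migrate the pages backing [start, end) of vma. */
static int example_migrate_range(struct vm_area_struct *vma,
				 unsigned long start, unsigned long end,
				 void *owner)
{
	unsigned long npages = (end - start) >> PAGE_SHIFT;
	unsigned long *src, *dst;
	struct migrate_vma args = {
		.vma		= vma,
		.start		= start,
		.end		= end,
		.pgmap_owner	= owner,
		.flags		= MIGRATE_VMA_SELECT_SYSTEM,
	};
	int ret = -ENOMEM;

	/* One migrate PFN slot per base page in the range. */
	src = kcalloc(npages, sizeof(*src), GFP_KERNEL);
	dst = kcalloc(npages, sizeof(*dst), GFP_KERNEL);
	if (!src || !dst)
		goto out;
	args.src = src;
	args.dst = dst;

	/* Collect and isolate the source pages covering the range. */
	ret = migrate_vma_setup(&args);
	if (ret)
		goto out;

	/*
	 * The driver would now allocate destination pages (e.g. device
	 * private memory), fill args.dst[], and copy the data across
	 * before committing the migration.
	 */
	migrate_vma_pages(&args);
	migrate_vma_finalize(&args);
out:
	kfree(src);
	kfree(dst);
	return ret;
}

Per the cover letters quoted here, the series extends this path so that a THP can be migrated as a single compound page rather than always being split into base pages first.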
2020 Sep 02
10
[PATCH v2 0/7] mm/hmm/nouveau: add THP migration to migrate_vma_*
This series adds support for transparent huge page migration to migrate_vma_*() and adds nouveau SVM and HMM selftests as consumers. An earlier version was posted [1]. This version now supports splitting a THP midway in the migration process, which led to a number of changes. The patches apply cleanly to the current linux-mm tree. Since there are a couple of patches in linux-mm from Dan