search for: thp_migration_support

Displaying 19 results from an estimated 19 matches for "thp_migration_support".

2020 Jun 22
2
[PATCH 13/16] mm: support THP migration to device private memory
...ig_pmd)) {
>>> 		page = pmd_page(orig_pmd);
>>> +		is_anon = PageAnon(page);
>>> 		page_remove_rmap(page, true);
>>> 		VM_BUG_ON_PAGE(page_mapcount(page) < 0, page);
>>> 		VM_BUG_ON_PAGE(!PageHead(page), page);
>>> 	} else if (thp_migration_supported()) {
>>> 		swp_entry_t entry;
>>>
>>> -		VM_BUG_ON(!is_pmd_migration_entry(orig_pmd));
>>> 		entry = pmd_to_swp_entry(orig_pmd);
>>> -		page = pfn_to_page(swp_offset(entry));
>>> +		if (is_device_private_entry(entry)) {
>>> +...
2020 Jun 22
2
[PATCH 13/16] mm: support THP migration to device private memory
...(orig_pmd);
>>>>> +		is_anon = PageAnon(page);
>>>>> 		page_remove_rmap(page, true);
>>>>> 		VM_BUG_ON_PAGE(page_mapcount(page) < 0, page);
>>>>> 		VM_BUG_ON_PAGE(!PageHead(page), page);
>>>>> 	} else if (thp_migration_supported()) {
>>>>> 		swp_entry_t entry;
>>>>>
>>>>> -		VM_BUG_ON(!is_pmd_migration_entry(orig_pmd));
>>>>> 		entry = pmd_to_swp_entry(orig_pmd);
>>>>> -		page = pfn_to_page(swp_offset(entry));
>>>>> +		if...
2020 Jun 22
0
[PATCH 13/16] mm: support THP migration to device private memory
...;
>> 	if (pmd_present(orig_pmd)) {
>> 		page = pmd_page(orig_pmd);
>> +		is_anon = PageAnon(page);
>> 		page_remove_rmap(page, true);
>> 		VM_BUG_ON_PAGE(page_mapcount(page) < 0, page);
>> 		VM_BUG_ON_PAGE(!PageHead(page), page);
>> 	} else if (thp_migration_supported()) {
>> 		swp_entry_t entry;
>>
>> -		VM_BUG_ON(!is_pmd_migration_entry(orig_pmd));
>> 		entry = pmd_to_swp_entry(orig_pmd);
>> -		page = pfn_to_page(swp_offset(entry));
>> +		if (is_device_private_entry(entry)) {
>> +			page = device_private_en...
2020 Jun 21
2
[PATCH 13/16] mm: support THP migration to device private memory
...bool is_anon = false;
>
> 	if (pmd_present(orig_pmd)) {
> 		page = pmd_page(orig_pmd);
> +		is_anon = PageAnon(page);
> 		page_remove_rmap(page, true);
> 		VM_BUG_ON_PAGE(page_mapcount(page) < 0, page);
> 		VM_BUG_ON_PAGE(!PageHead(page), page);
> 	} else if (thp_migration_supported()) {
> 		swp_entry_t entry;
>
> -		VM_BUG_ON(!is_pmd_migration_entry(orig_pmd));
> 		entry = pmd_to_swp_entry(orig_pmd);
> -		page = pfn_to_page(swp_offset(entry));
> +		if (is_device_private_entry(entry)) {
> +			page = device_private_entry_to_page(entry);
> +...
2020 Jun 22
0
[PATCH 13/16] mm: support THP migration to device private memory
...page = pmd_page(orig_pmd);
>>>> +		is_anon = PageAnon(page);
>>>> 		page_remove_rmap(page, true);
>>>> 		VM_BUG_ON_PAGE(page_mapcount(page) < 0, page);
>>>> 		VM_BUG_ON_PAGE(!PageHead(page), page);
>>>> 	} else if (thp_migration_supported()) {
>>>> 		swp_entry_t entry;
>>>>
>>>> -		VM_BUG_ON(!is_pmd_migration_entry(orig_pmd));
>>>> 		entry = pmd_to_swp_entry(orig_pmd);
>>>> -		page = pfn_to_page(swp_offset(entry));
>>>> +		if (is_device_private_entr...
2020 Jun 22
0
[PATCH 13/16] mm: support THP migration to device private memory
...page_remove_rmap(page, true);
> >>>>> 		VM_BUG_ON_PAGE(page_mapcount(page) < 0, page);
> >>>>> 		VM_BUG_ON_PAGE(!PageHead(page), page);
> >>>>> 	} else if (thp_migration_supported()) {
> >>>>> 		swp_entry_t entry;
> >>>>>
> >>>>> -		VM_BUG_ON(!is_pmd_migration_entry(orig_pmd));
> >>>>> 		entry = pmd_to_swp_entry(orig_pmd);
> >>>...
2020 Jun 22
2
[PATCH 13/16] mm: support THP migration to device private memory
...page_remove_rmap(page, true);
> > >>>>> 		VM_BUG_ON_PAGE(page_mapcount(page) < 0, page);
> > >>>>> 		VM_BUG_ON_PAGE(!PageHead(page), page);
> > >>>>> 	} else if (thp_migration_supported()) {
> > >>>>> 		swp_entry_t entry;
> > >>>>>
> > >>>>> -		VM_BUG_ON(!is_pmd_migration_entry(orig_pmd));
> > >>>>> 		entry = pmd_to_swp_entry(orig_pmd...
2020 Nov 06
0
[PATCH v3 3/6] mm: support THP migration to device private memory
...e = NULL;
 	int flush_needed = 1;
+	bool is_anon = false;
 
 	if (pmd_present(orig_pmd)) {
 		page = pmd_page(orig_pmd);
+		is_anon = PageAnon(page);
 		page_remove_rmap(page, true);
 		VM_BUG_ON_PAGE(page_mapcount(page) < 0, page);
 		VM_BUG_ON_PAGE(!PageHead(page), page);
 	} else if (thp_migration_supported()) {
 		swp_entry_t entry;
 
-		VM_BUG_ON(!is_pmd_migration_entry(orig_pmd));
 		entry = pmd_to_swp_entry(orig_pmd);
-		page = pfn_to_page(swp_offset(entry));
+		if (is_device_private_entry(entry)) {
+			page = device_private_entry_to_page(entry);
+			is_anon = PageAnon(page);
+			page_re...
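The hunk above recurs throughout this thread: it teaches zap_huge_pmd() that a non-present PMD may encode a device-private entry as well as a migration entry, which is why the unconditional VM_BUG_ON(!is_pmd_migration_entry(orig_pmd)) is dropped. A minimal sketch of the resulting decode pattern, assuming the 5.8/5.10-era <linux/swapops.h> helpers; the wrapper function itself is illustrative and not part of the patch (device_private_entry_to_page() was later renamed pfn_swap_entry_to_page()):

	/* Illustrative helper, not from the patch: decode the page behind
	 * a non-present huge PMD once device-private entries are possible. */
	static struct page *huge_zap_swap_page(pmd_t orig_pmd, bool *is_anon)
	{
		swp_entry_t entry = pmd_to_swp_entry(orig_pmd);
		struct page *page;

		if (is_device_private_entry(entry)) {
			/* ZONE_DEVICE private THP */
			page = device_private_entry_to_page(entry);
			*is_anon = PageAnon(page);
		} else {
			/* otherwise this must still be a migration entry */
			VM_BUG_ON(!is_pmd_migration_entry(orig_pmd));
			page = pfn_to_page(swp_offset(entry));
		}
		return page;
	}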
2019 Jul 26
0
[PATCH v2 6/7] mm/hmm: remove hugetlbfs check in hmm_vma_walk_pmd
...d58 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -630,9 +630,6 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
 	if (pmd_none(pmd))
 		return hmm_vma_walk_hole(start, end, walk);
 
-	if (pmd_huge(pmd) && (range->vma->vm_flags & VM_HUGETLB))
-		return hmm_pfns_bad(start, end, walk);
-
 	if (thp_migration_supported() && is_pmd_migration_entry(pmd)) {
 		bool fault, write_fault;
 		unsigned long npages;
-- 
2.20.1
2020 Jun 19
0
[PATCH 13/16] mm: support THP migration to device private memory
...e = NULL;
 	int flush_needed = 1;
+	bool is_anon = false;
 
 	if (pmd_present(orig_pmd)) {
 		page = pmd_page(orig_pmd);
+		is_anon = PageAnon(page);
 		page_remove_rmap(page, true);
 		VM_BUG_ON_PAGE(page_mapcount(page) < 0, page);
 		VM_BUG_ON_PAGE(!PageHead(page), page);
 	} else if (thp_migration_supported()) {
 		swp_entry_t entry;
 
-		VM_BUG_ON(!is_pmd_migration_entry(orig_pmd));
 		entry = pmd_to_swp_entry(orig_pmd);
-		page = pfn_to_page(swp_offset(entry));
+		if (is_device_private_entry(entry)) {
+			page = device_private_entry_to_page(entry);
+			is_anon = PageAnon(page);
+			page_re...
2020 Nov 06
12
[PATCH v3 0/6] mm/hmm/nouveau: add THP migration to migrate_vma_*
This series adds support for transparent huge page migration to migrate_vma_*() and adds nouveau SVM and HMM selftests as consumers. Earlier versions were posted [1] and [2]. The patches apply cleanly to the linux-mm 5.10.0-rc2 tree. There are a lot of other THP patches being posted. I don't think there are any semantic conflicts, but there may be some merge conflicts depending on
2020 Sep 02
10
[PATCH v2 0/7] mm/hmm/nouveau: add THP migration to migrate_vma_*
This series adds support for transparent huge page migration to migrate_vma_*() and adds nouveau SVM and HMM selftests as consumers. An earlier version was posted [1]. This version now supports splitting a THP midway in the migration process, which led to a number of changes. The patches apply cleanly to the current linux-mm tree. Since there are a couple of patches in linux-mm from Dan
2020 Jun 19
22
[PATCH 00/16] mm/hmm/nouveau: THP mapping and migration
These patches apply to linux-5.8.0-rc1. Patches 1-3 should probably go into 5.8; the others can be queued for 5.9. Patches 4-6 improve the HMM self tests. Patches 7-8 prepare nouveau for the meat of this series, which adds support and testing for compound page mapping of system memory (patches 9-11) and compound page migration to device private memory (patches 12-16). Since these changes are split
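All three cover letters above extend the migrate_vma_*() path, whose driver-facing call sequence is a stable three-phase pattern. A rough sketch only, assuming the 5.8-era struct migrate_vma layout (fields beyond vma/start/end/src/dst changed across these revisions, and the device page allocation/copy step is elided):

	unsigned long src_pfn = 0, dst_pfn = 0;
	struct migrate_vma args = {
		.vma   = vma,              /* VMA covering the range */
		.start = start,
		.end   = start + PAGE_SIZE,
		.src   = &src_pfn,         /* one entry per PAGE_SIZE page */
		.dst   = &dst_pfn,
	};

	if (migrate_vma_setup(&args))      /* collect and unmap source pages */
		return;
	/* ... allocate destination pages, fill args.dst[], copy the data ... */
	migrate_vma_pages(&args);          /* install the destination pages */
	migrate_vma_finalize(&args);       /* restore CPU mappings, drop refs */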
2020 Apr 22
0
[PATCH hmm 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
...&range->hmm_pfns[(start - range->start) >> PAGE_SHIFT];
 	unsigned long npages = (end - start) >> PAGE_SHIFT;
 	unsigned long addr = start;
 	pte_t *ptep;
@@ -333,16 +324,16 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
 		return hmm_vma_walk_hole(start, end, -1, walk);
 
 	if (thp_migration_supported() && is_pmd_migration_entry(pmd)) {
-		if (hmm_range_need_fault(hmm_vma_walk, pfns, npages, 0)) {
+		if (hmm_range_need_fault(hmm_vma_walk, hmm_pfns, npages, 0)) {
 			hmm_vma_walk->last = addr;
 			pmd_migration_entry_wait(walk->mm, pmdp);
 			return -EBUSY;
 		}
-		return hmm_pfns...
2020 May 01
0
[PATCH hmm v2 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
...&range->hmm_pfns[(start - range->start) >> PAGE_SHIFT];
 	unsigned long npages = (end - start) >> PAGE_SHIFT;
 	unsigned long addr = start;
 	pte_t *ptep;
@@ -333,16 +324,16 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
 		return hmm_vma_walk_hole(start, end, -1, walk);
 
 	if (thp_migration_supported() && is_pmd_migration_entry(pmd)) {
-		if (hmm_range_need_fault(hmm_vma_walk, pfns, npages, 0)) {
+		if (hmm_range_need_fault(hmm_vma_walk, hmm_pfns, npages, 0)) {
 			hmm_vma_walk->last = addr;
 			pmd_migration_entry_wait(walk->mm, pmdp);
 			return -EBUSY;
 		}
-		return hmm_pfns...
2020 Apr 22
1
[PATCH hmm 5/5] mm/hmm: remove the customizable pfn format from hmm_range_fault
...ange->start) >> PAGE_SHIFT];
> 	unsigned long npages = (end - start) >> PAGE_SHIFT;
> 	unsigned long addr = start;
> 	pte_t *ptep;
> @@ -333,16 +324,16 @@ static int hmm_vma_walk_pmd(pmd_t *pmdp,
> 		return hmm_vma_walk_hole(start, end, -1, walk);
>
> 	if (thp_migration_supported() && is_pmd_migration_entry(pmd)) {
> -		if (hmm_range_need_fault(hmm_vma_walk, pfns, npages, 0)) {
> +		if (hmm_range_need_fault(hmm_vma_walk, hmm_pfns, npages, 0)) {
> 			hmm_vma_walk->last = addr;
> 			pmd_migration_entry_wait(walk->mm, pmdp);
> 			return -EBUSY...
2020 Apr 22
11
[PATCH hmm 0/5] Adjust hmm_range_fault() API
From: Jason Gunthorpe <jgg at mellanox.com>
The API is a bit complicated for the uses we actually have, and discussions about simplifying have come up a number of times. This small series removes the customizable pfn format and simplifies the return code of hmm_range_fault(). All the drivers are adjusted to process in the simplified format. I would appreciate tested-by's for the two
2019 Jul 26
13
[PATCH v2 0/7] mm/hmm: more HMM clean up
Here are seven more patches for things I found to clean up. This was based on top of Christoph's seven patches: "hmm_range_fault related fixes and legacy API removal v3". I assume this will go into Jason's tree since there will likely be more HMM changes in this cycle. Changes from v1 to v2: Added AMD GPU to hmm_update removal. Added 2 patches from Christoph. Added 2 patches as
2020 May 01
13
[PATCH hmm v2 0/5] Adjust hmm_range_fault() API
From: Jason Gunthorpe <jgg at mellanox.com>
The API is a bit complicated for the uses we actually have, and discussions about simplifying have come up a number of times. This small series removes the customizable pfn format and simplifies the return code of hmm_range_fault(). All the drivers are adjusted to process in the simplified format. I would appreciate tested-by's for the two
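After this series, a caller of hmm_range_fault() works with one unsigned long per page in hmm_pfns[], as the hunks above show. A rough sketch of the simplified usage under stated assumptions: NPAGES is a placeholder, and the surrounding mmu_interval_notifier setup, notifier_seq handling, locking, and the -EBUSY retry loop are elided:

	unsigned long hmm_pfns[NPAGES];
	struct hmm_range range = {
		.notifier = &notifier,     /* caller's mmu_interval_notifier */
		.start    = addr,
		.end      = addr + NPAGES * PAGE_SIZE,
		.hmm_pfns = hmm_pfns,
	};
	int i, ret;

	for (i = 0; i < NPAGES; i++)
		hmm_pfns[i] = HMM_PFN_REQ_FAULT;   /* input: fault pages in */

	ret = hmm_range_fault(&range);     /* now 0 or -errno, not a pfn count */
	/* on success each hmm_pfns[i] carries HMM_PFN_VALID plus the PFN */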