David Hildenbrand
2025-Jan-29 11:54 UTC
[PATCH v1 12/12] mm/rmap: keep mapcount untouched for device-exclusive entries
Now that conversion to device-exclusive no longer performs an rmap walk
and the main page_vma_mapped_walk() users were taught to properly handle
nonswap entries, let's treat device-exclusive entries just as if they
were present, similar to how we handle device-private entries already.

This fixes swapout/migration of folios with device-exclusive entries.

Likely there are still some page_vma_mapped_walk() callers that are not
fully prepared for these entries, and where we simply want to refuse
!pte_present() entries. They have to be fixed independently; the ones in
mm/rmap.c are prepared.

Fixes: b756a3b5e7ea ("mm: device exclusive memory access")
Signed-off-by: David Hildenbrand <david at redhat.com>
---
 mm/memory.c | 17 +----------------
 mm/rmap.c   |  7 -------
 2 files changed, 1 insertion(+), 23 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index db38d6ae4e74..cd689cd8a7c8 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -743,20 +743,6 @@ static void restore_exclusive_pte(struct vm_area_struct *vma,
 
 	VM_BUG_ON_FOLIO(pte_write(pte) && (!folio_test_anon(folio) &&
 					   PageAnonExclusive(page)), folio);
-
-	/*
-	 * No need to take a page reference as one was already
-	 * created when the swap entry was made.
-	 */
-	if (folio_test_anon(folio))
-		folio_add_anon_rmap_pte(folio, page, vma, address, RMAP_NONE);
-	else
-		/*
-		 * Currently device exclusive access only supports anonymous
-		 * memory so the entry shouldn't point to a filebacked page.
-		 */
-		WARN_ON_ONCE(1);
-
 	set_pte_at(vma->vm_mm, address, ptep, pte);
 
 	/*
@@ -1628,8 +1614,7 @@ static inline int zap_nonpresent_ptes(struct mmu_gather *tlb,
 		 */
 		WARN_ON_ONCE(!vma_is_anonymous(vma));
 		rss[mm_counter(folio)]--;
-		if (is_device_private_entry(entry))
-			folio_remove_rmap_pte(folio, page, vma);
+		folio_remove_rmap_pte(folio, page, vma);
 		folio_put(folio);
 	} else if (!non_swap_entry(entry)) {
 		/* Genuine swap entries, hence a private anon pages */
diff --git a/mm/rmap.c b/mm/rmap.c
index 9e2002d97d6f..4acc9f6d743a 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -2495,13 +2495,6 @@ struct page *make_device_exclusive(struct mm_struct *mm, unsigned long addr,
 	/* The pte is writable, uffd-wp does not apply. */
 	set_pte_at(mm, addr, fw.ptep, swp_pte);
 
-	/*
-	 * TODO: The device-exclusive non-swap PTE holds a folio reference but
-	 * does not count as a mapping (mapcount), which is wrong and must be
-	 * fixed, otherwise RMAP walks don't behave as expected.
-	 */
-	folio_remove_rmap_pte(folio, page, vma);
-
 	folio_walk_end(&fw, vma);
 	*foliop = folio;
 	return page;
-- 
2.48.1
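[Editor's illustration, not part of the patch: a minimal sketch of the pattern a page_vma_mapped_walk() caller needs once device-exclusive entries are treated as mapped. The helper name is made up; the real handling lives in the mm/rmap.c walkers updated by this series.]

/*
 * Hypothetical helper, not part of the patch: recover the mapped page
 * for a PTE found during an rmap walk. With this change, a
 * device-exclusive entry still counts as a mapping (and holds a folio
 * reference), it just isn't pte_present(), so the page must be taken
 * from the non-swap entry instead of from the PTE's PFN.
 */
static struct page *rmap_walk_page_of_pte(pte_t pteval)
{
	swp_entry_t entry;

	if (pte_present(pteval))
		return pfn_to_page(pte_pfn(pteval));

	entry = pte_to_swp_entry(pteval);
	if (is_device_private_entry(entry) ||
	    is_device_exclusive_entry(entry))
		return pfn_swap_entry_to_page(entry);

	/* Anything else (swap, migration, ...) is not treated as mapped here. */
	return NULL;
}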
Simona Vetter
2025-Jan-30 10:37 UTC
[PATCH v1 12/12] mm/rmap: keep mapcount untouched for device-exclusive entries
On Wed, Jan 29, 2025 at 12:54:10PM +0100, David Hildenbrand wrote:
> Now that conversion to device-exclusive no longer performs an rmap walk
> and the main page_vma_mapped_walk() users were taught to properly handle
> nonswap entries, let's treat device-exclusive entries just as if they
> were present, similar to how we handle device-private entries already.

So the reason for handling device-private entries in rmap is so that
drivers can rely on try_to_migrate and related code to invalidate all the
various ptes even for device-private memory. Otherwise no one should hit
this path, at least if my understanding is correct.

So I'm very much worried about opening a can of worms here because I think
this adds a genuine new case to all the various callers.

> This fixes swapout/migration of folios with device-exclusive entries.
>
> Likely there are still some page_vma_mapped_walk() callers that are not
> fully prepared for these entries, and where we simply want to refuse
> !pte_present() entries. They have to be fixed independently; the ones in
> mm/rmap.c are prepared.

The other worry is that maybe breaking migration is a feature, at least in
parts. If THP constantly reassembles a PMD entry because all the memory is
contiguous and userspace allocated a chunk of memory to place atomics that
alternate between CPU and GPU, nicely separated by 4k pages, then we'll
thrash around invalidating PTEs to no end. So there might be more fallout
here.
-Sima

-- 
Simona Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch
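[Editor's illustration of the point about drivers relying on the rmap walk: a hedged sketch, not taken from any real driver, with a made-up function name. Device-private migration works because try_to_migrate() finds and replaces every PTE mapping the folio, including the non-present device-private ones.]

/*
 * Made-up example: a driver-side step that depends on the rmap walk
 * seeing device-private entries. try_to_migrate() replaces all
 * mappings of the folio with migration entries before the data is
 * copied between system and device memory.
 */
static int demo_unmap_for_migration(struct folio *folio)
{
	if (!folio_trylock(folio))
		return -EBUSY;

	/* Rmap walk: also tears down device-private PTEs. */
	try_to_migrate(folio, 0);

	if (folio_mapped(folio)) {
		/* Someone still maps it; caller has to retry. */
		folio_unlock(folio);
		return -EAGAIN;
	}

	/* ... copy the data, then restore PTEs via remove_migration_ptes() ... */
	folio_unlock(folio);
	return 0;
}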
David Hildenbrand
2025-Jan-30 11:42 UTC
[PATCH v1 12/12] mm/rmap: keep mapcount untouched for device-exclusive entries
On 30.01.25 11:37, Simona Vetter wrote:
> On Wed, Jan 29, 2025 at 12:54:10PM +0100, David Hildenbrand wrote:
>> Now that conversion to device-exclusive no longer performs an rmap walk
>> and the main page_vma_mapped_walk() users were taught to properly handle
>> nonswap entries, let's treat device-exclusive entries just as if they
>> were present, similar to how we handle device-private entries already.
>
> So the reason for handling device-private entries in rmap is so that
> drivers can rely on try_to_migrate and related code to invalidate all the
> various ptes even for device-private memory. Otherwise no one should hit
> this path, at least if my understanding is correct.

Right, device-private entries probably only happen to be seen on the
migration path so far.

> So I'm very much worried about opening a can of worms here because I think
> this adds a genuine new case to all the various callers.

To be clear: it can all already happen. Assume you have a THP (or any mTHP
today). You can easily trigger the scenario that folio_mapcount() != 0 with
active device-exclusive entries, and you start doing rmap walks, stumble
over these device-exclusive entries, and do *not* handle them properly.

Note that more and more systems are configured to just give you THP unless
you explicitly opted out using MADV_NOHUGEPAGE early.

Note that b756a3b5e7ea added the hunk that still walks these
device-exclusive entries in rmap code, but didn't actually update the rmap
walkers:

@@ -102,7 +104,8 @@ static bool check_pte(struct page_vma_mapped_walk *pvmw)
 
 		/* Handle un-addressable ZONE_DEVICE memory */
 		entry = pte_to_swp_entry(*pvmw->pte);
-		if (!is_device_private_entry(entry))
+		if (!is_device_private_entry(entry) &&
+		    !is_device_exclusive_entry(entry))
 			return false;
 
 		pfn = swp_offset(entry);

That was the right thing to do, because these entries resemble PROT_NONE
entries, not migration entries or anything else that doesn't hold a folio
reference.

Fortunately, it's only the page_vma_mapped_walk() callers that need care.
mm/rmap.c is handled with this series.

mm/page_vma_mapped.c: should work already.
mm/migrate.c: does not apply.
mm/page_idle.c: likely should just skip !pte_present().
mm/ksm.c: might be fine, but likely we should just reject !pte_present().
kernel/events/uprobes.c: likely should reject !pte_present().
mm/damon/paddr.c: likely should reject !pte_present().

I briefly thought about a flag to indicate whether a page_vma_mapped_walk()
caller supports these non-present entries, but likely just fixing them up
is easier and cleaner. Now that I've looked at them all, I might just write
patches for them.

>> This fixes swapout/migration of folios with device-exclusive entries.
>>
>> Likely there are still some page_vma_mapped_walk() callers that are not
>> fully prepared for these entries, and where we simply want to refuse
>> !pte_present() entries. They have to be fixed independently; the ones in
>> mm/rmap.c are prepared.
>
> The other worry is that maybe breaking migration is a feature, at least in
> parts.

Maybe breaking swap and migration is a feature in some reality; in this
reality it's a BUG. :)

> If THP constantly reassembles a PMD entry because all the memory is
> contiguous and userspace allocated a chunk of memory to place atomics that
> alternate between CPU and GPU, nicely separated by 4k pages, then we'll
> thrash around invalidating PTEs to no end. So there might be more fallout
> here.

khugepaged will back off once it sees an exclusive entry, so collapsing
could only happen once everything is non-exclusive. See
__collapse_huge_page_isolate() as an example.

It's really only page_vma_mapped_walk() callers that are affected by this
change, not any other page table walkers.

It's unfortunate that we now have to fix it all up; that original series
should never have been merged that way.

-- 
Cheers,

David / dhildenb
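[Editor's illustration of the khugepaged back-off: a simplified, paraphrased sketch with a made-up helper name; see __collapse_huge_page_isolate() for the real checks. The point is that a device-exclusive entry is !pte_present(), so the range stays non-collapsible until the exclusive access is torn down.]

/*
 * Made-up helper paraphrasing khugepaged's back-off: any !pte_present()
 * entry (e.g. a device-exclusive entry) in the range blocks collapse.
 */
static bool demo_ptes_collapsible(pte_t *ptep, unsigned int nr_ptes)
{
	unsigned int i;

	for (i = 0; i < nr_ptes; i++) {
		pte_t pteval = ptep_get(ptep + i);

		if (pte_none(pteval))
			continue;
		if (!pte_present(pteval))
			return false;	/* e.g. a device-exclusive entry */
	}
	return true;
}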