search for: folio_unlock

Displaying 9 results from an estimated 9 matches for "folio_unlock".

2025 Jan 24
3
[PATCH v1 0/2] nouveau/svm: fix + cleanup for nouveau_atomic_range_fault()
One fix and a minor cleanup. Only compile-tested due to lack of HW, so I'd be happy if someone with access to HW could test. But not sure how easy this is to trigger. Likely some concurrent MADV_DONTNEED on the PTE we just converted might be able to trigger it. Cc: Karol Herbst <kherbst at redhat.com> Cc: Lyude Paul <lyude at redhat.com> Cc: Danilo Krummrich <dakr at
2023 Mar 28
3
[PATCH] mm: Take a page reference when removing device exclusive entries
...ner(&range, MMU_NOTIFY_EXCLUSIVE, 0, vma, vma->vm_mm,
	vmf->address & PAGE_MASK,
	(vmf->address & PAGE_MASK) + PAGE_SIZE, NULL);
@@ -3637,6 +3648,7 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 	folio_unlock(folio);
+	put_page(vmf->page);
 	mmu_notifier_invalidate_range_end(&range);
 	return 0;
--
2.39.2
2023 Mar 30
4
[PATCH v2] mm: Take a page reference when removing device exclusive entries
...it_owner(&range, MMU_NOTIFY_EXCLUSIVE, 0, vma->vm_mm,
	vmf->address & PAGE_MASK,
	(vmf->address & PAGE_MASK) + PAGE_SIZE, NULL);
@@ -3577,6 +3590,7 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 	folio_unlock(folio);
+	folio_put(folio);
 	mmu_notifier_invalidate_range_end(&range);
 	return 0;
--
2.39.2
2023 Mar 07
3
remove most callers of write_one_page v4
Hi all, this series removes most users of the write_one_page API. These helpers internally call ->writepage which we are gradually removing from the kernel. Changes since v3: - drop all patches merged in v6.3-rc1 - re-add the jfs patch Changes since v2: - more minix error handling fixes Changes since v1: - drop the btrfs changes (queue up in the btrfs tree) - drop the final move to
2023 Mar 29
1
[PATCH] mm: Take a page reference when removing device exclusive entries
...LUSIVE, 0, vma,
> 	vma->vm_mm, vmf->address & PAGE_MASK,
> 	(vmf->address & PAGE_MASK) + PAGE_SIZE, NULL);
> @@ -3637,6 +3648,7 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
>
> 	pte_unmap_unlock(vmf->pte, vmf->ptl);
> 	folio_unlock(folio);
> +	put_page(vmf->page);

folio_put(folio)

There, I just saved you 3 calls to compound_head(), saving roughly 150 bytes of kernel text.

> 	mmu_notifier_invalidate_range_end(&range);
> 	return 0;
> --
> 2.39.2
2022 Jul 08
0
[PATCH v2 07/19] mm/migrate: Convert expected_page_refs() to folio_expected_refs()
... > -	if (folio_get_private(folio) && folio_trylock(folio)) {
> -		if (folio_get_private(folio))
> +	if (folio_counted_private(folio) &&
> +	    folio_trylock(folio)) {
> +		if (folio_counted_private(folio))
> 			filemap_release_folio(folio, 0);
> 		folio_unlock(folio);
> 	}
2023 Mar 28
1
[PATCH] mm: Take a page reference when removing device exclusive entries
...LUSIVE, 0, vma,
> 	vma->vm_mm, vmf->address & PAGE_MASK,
> 	(vmf->address & PAGE_MASK) + PAGE_SIZE, NULL);
> @@ -3637,6 +3648,7 @@ static vm_fault_t remove_device_exclusive_entry(struct vm_fault *vmf)
>
> 	pte_unmap_unlock(vmf->pte, vmf->ptl);
> 	folio_unlock(folio);
> +	put_page(vmf->page);
>
> 	mmu_notifier_invalidate_range_end(&range);
> 	return 0;
2023 Jun 18
11
[PATCH v1 0/5] clean up block_commit_write
*** BLURB HERE *** Bean Huo (5): fs/buffer: clean up block_commit_write fs/buffer.c: convert block_commit_write to return void ext4: No need to check return value of block_commit_write() fs/ocfs2: No need to check return value of block_commit_write() udf: No need to check return value of block_commit_write() fs/buffer.c | 24 +++++++-----------------