search for: unpin_user_pages

Displaying 17 results from an estimated 17 matches for "unpin_user_pages".

2020 May 29
6
[PATCH 0/2] vhost, docs: convert to pin_user_pages(), new "case 5"
Hi, It recently became clear to me that there are some get_user_pages*() callers that don't fit neatly into any of the four cases that are so far listed in pin_user_pages.rst. vhost.c is one of those. Add a Case 5 to the documentation, and refer to that when converting vhost.c. Thanks to Jan Kara for helping me (again) in understanding the interaction between get_user_pages() and page
2020 Jun 12
1
[PATCH 1/2] docs: mm/gup: pin_user_pages.rst: add a "case 5"
...okes that pattern. In > +other words, if the code is neither Case 1 nor Case 2, it may still require > +FOLL_PIN, for patterns like this: > + > +Correct (uses FOLL_PIN calls): > + pin_user_pages() > + access the data within the pages > + set_page_dirty_lock() > + unpin_user_pages() > + > +INCORRECT (uses FOLL_GET calls): > + get_user_pages() > + access the data within the pages > + set_page_dirty_lock() > + put_page() Why does this case need to pin? Why can't it just do ... get_user_pages() lock_page(page); ... modify the data ... se...
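For readers skimming the thread, here is a minimal sketch of the "Case 5" pattern being debated in that excerpt, in kernel-style C. pin_user_pages(), set_page_dirty_lock() and unpin_user_pages() are the real mm/gup APIs under discussion; the buffer-access helper and the surrounding function are hypothetical, added only for illustration.

/*
 * Illustrative sketch of the "Case 5" pattern from the excerpt above.
 * do_something_with() and this function are made up; the gup calls use
 * the 5-argument pin_user_pages() form of that era (NULL vmas).
 */
static int case5_access(unsigned long uaddr, unsigned long npages,
			struct page **pages)
{
	long pinned;
	long i;

	/* Correct: take a FOLL_PIN reference for the duration of the access. */
	pinned = pin_user_pages(uaddr, npages, FOLL_WRITE, pages, NULL);
	if (pinned <= 0)
		return pinned ? pinned : -EFAULT;

	for (i = 0; i < pinned; i++) {
		do_something_with(pages[i]);	/* access the data within the page */
		set_page_dirty_lock(pages[i]);
	}

	/* Correct: drop the pin with unpin_user_pages(), not put_page(). */
	unpin_user_pages(pages, pinned);
	return 0;
}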
2020 Jun 01
3
[PATCH v2 0/2] vhost, docs: convert to pin_user_pages(), new "case 5"
This is based on Linux 5.7, plus one prerequisite patch: "mm/gup: update pin_user_pages.rst for "case 3" (mmu notifiers)" [1] Changes since v1: removed references to set_page_dirty*(), in response to Souptick Joarder's review (thanks!). Cover letter for v1, edited/updated slightly: It recently became clear to me that there are some get_user_pages*() callers that
2023 Mar 10
0
[PATCH] vhost-vdpa: cleanup memory maps when closing vdpa fds
..._user_pages > > > vhost_vdpa_map > > > iommu_map > > > > > > 3. kill QEMU > > > > > > 4. vhost_vdpa_release > > > vhost_vdpa_free_domain > > > > > > In this case, we have no opportunity to invoke unpin_user_pages or > > > iommu_unmap to free the memory. > > > > We do: > > > > vhost_vdpa_cleanup() > > vhost_vdpa_remove_as() > > vhost_vdpa_iotlb_unmap() > > vhost_vdpa_pa_unmap() > > unpin_user_pages() > ...
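A rough sketch (not the vdpa driver itself) of the teardown responsibility the reply points at: every range that was set up with pin_user_pages() + iommu_map() has to be unmapped and unpinned on release, or the pins and IOMMU entries leak when QEMU is killed. The struct and field names below are hypothetical; iommu_unmap(), set_page_dirty_lock() and unpin_user_pages() are the real kernel APIs named in the excerpt.

#include <linux/mm.h>
#include <linux/iommu.h>

/* Hypothetical bookkeeping for one pinned + IOMMU-mapped range. */
struct example_map {
	u64 iova;
	u64 size;
	bool writable;
	unsigned long npages;
	struct page **pages;
};

static void example_unmap_range(struct iommu_domain *domain,
				struct example_map *map)
{
	/* Drop the IOMMU translation for the guest IOVA range first. */
	iommu_unmap(domain, map->iova, map->size);

	/* Pages the device may have written must be dirtied before unpin. */
	if (map->writable) {
		unsigned long i;

		for (i = 0; i < map->npages; i++)
			set_page_dirty_lock(map->pages[i]);
	}

	/* Release the FOLL_PIN references taken at map time. */
	unpin_user_pages(map->pages, map->npages);
}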
2020 May 31
1
[PATCH 1/2] docs: mm/gup: pin_user_pages.rst: add a "case 5"
...okes that pattern. In > +other words, if the code is neither Case 1 nor Case 2, it may still require > +FOLL_PIN, for patterns like this: > + > +Correct (uses FOLL_PIN calls): > + pin_user_pages() > + access the data within the pages > + set_page_dirty_lock() > + unpin_user_pages() > + > +INCORRECT (uses FOLL_GET calls): > + get_user_pages() > + access the data within the pages > + set_page_dirty_lock() > + put_page() > + > page_maybe_dma_pinned(): the whole point of pinning > =================================================== >...
2020 Nov 03
0
[PATCH 1/2] Revert "vhost-vdpa: fix page pinning leakage in error path"
...) > lock_limit) { > - ret = -ENOMEM; > - goto unlock; > - } > > - pinned = pin_user_pages(msg->uaddr & PAGE_MASK, npages, gup_flags, > - page_list, vmas); > - if (npages != pinned) { > - if (pinned < 0) { > - ret = pinned; > - } else { > - unpin_user_pages(page_list, pinned); > - ret = -ENOMEM; > - } > - goto unlock; > + if (locked > lock_limit) { > + ret = -ENOMEM; > + goto out; > } > > + cur_base = msg->uaddr & PAGE_MASK; > iova &= PAGE_MASK; > - map_pfn = page_to_pfn(page_list[0]); >...
2020 Oct 01
0
[PATCH] vhost-vdpa: fix page pinning leakage in error path
...t = -ENOMEM; - goto out; + goto unlock; } - cur_base = msg->uaddr & PAGE_MASK; - iova &= PAGE_MASK; + pinned = pin_user_pages(msg->uaddr & PAGE_MASK, npages, gup_flags, + page_list, vmas); + if (npages != pinned) { + if (pinned < 0) { + ret = pinned; + } else { + unpin_user_pages(page_list, pinned); + ret = -ENOMEM; + } + goto unlock; + } - while (npages) { - pinned = min_t(unsigned long, npages, list_size); - ret = pin_user_pages(cur_base, pinned, - gup_flags, page_list, NULL); - if (ret != pinned) - goto out; - - if (!last_pfn) - map_pfn = page_to_pf...
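The fix boils down to the error-handling pattern below: pin the whole range in one call and, if fewer pages than requested came back, release the partial pin before failing. This is a condensed sketch of that pattern rather than the full patch; the variable names follow the excerpt, and the wrapper function is hypothetical.

/*
 * Sketch of the error path visible in the diff above: a partial pin
 * must be released with unpin_user_pages() before returning an error,
 * otherwise the leftover pins leak.
 */
static long pin_range_or_cleanup(unsigned long uaddr, unsigned long npages,
				 unsigned int gup_flags,
				 struct page **page_list)
{
	long pinned;

	pinned = pin_user_pages(uaddr & PAGE_MASK, npages, gup_flags,
				page_list, NULL);
	if (pinned != npages) {
		if (pinned < 0)
			return pinned;		/* hard error from gup */
		/* Partial pin: drop what we got, then fail cleanly. */
		unpin_user_pages(page_list, pinned);
		return -ENOMEM;
	}
	return pinned;
}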
2020 Oct 01
0
[PATCH v2] vhost-vdpa: fix page pinning leakage in error path
...t = -ENOMEM; - goto out; + goto unlock; } - cur_base = msg->uaddr & PAGE_MASK; - iova &= PAGE_MASK; + pinned = pin_user_pages(msg->uaddr & PAGE_MASK, npages, gup_flags, + page_list, vmas); + if (npages != pinned) { + if (pinned < 0) { + ret = pinned; + } else { + unpin_user_pages(page_list, pinned); + ret = -ENOMEM; + } + goto unlock; + } - while (npages) { - pinned = min_t(unsigned long, npages, list_size); - ret = pin_user_pages(cur_base, pinned, - gup_flags, page_list, NULL); - if (ret != pinned) - goto out; - - if (!last_pfn) - map_pfn = page_to_pf...
2020 May 29
0
[PATCH 1/2] docs: mm/gup: pin_user_pages.rst: add a "case 5"
...e 1, plus Case 2, plus anything that invokes that pattern. In +other words, if the code is neither Case 1 nor Case 2, it may still require +FOLL_PIN, for patterns like this: + +Correct (uses FOLL_PIN calls): + pin_user_pages() + access the data within the pages + set_page_dirty_lock() + unpin_user_pages() + +INCORRECT (uses FOLL_GET calls): + get_user_pages() + access the data within the pages + set_page_dirty_lock() + put_page() + page_maybe_dma_pinned(): the whole point of pinning =================================================== -- 2.26.2
2020 May 29
0
[PATCH 2/2] vhost: convert get_user_pages() --> pin_user_pages()
This code was using get_user_pages*(), in approximately a "Case 5" scenario (accessing the data within a page), using the categorization from [1]. That means that it's time to convert the get_user_pages*() + put_page() calls to pin_user_pages*() + unpin_user_pages() calls. There is some helpful background in [2]: basically, this is a small part of fixing a long-standing disconnect between pinning pages, and file systems' use of those pages. [1] Documentation/core-api/pin_user_pages.rst [2] "Explicit pinning of user-space pages": https://...
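As a hedged before/after illustration (not the actual vhost hunk), the conversion swaps the FOLL_GET pair get_user_pages*()/put_page() for the FOLL_PIN pair pin_user_pages*()/unpin_user_pages() around the same data access. The single-page helpers below are made up; only the gup/pin APIs themselves are real.

/* Before: FOLL_GET reference, released with put_page(). */
static int touch_page_get(unsigned long uaddr)
{
	struct page *page;
	int ret;

	ret = get_user_pages_fast(uaddr, 1, FOLL_WRITE, &page);
	if (ret != 1)
		return ret < 0 ? ret : -EFAULT;
	/* ... access the data within the page ... */
	set_page_dirty_lock(page);
	put_page(page);
	return 0;
}

/* After: FOLL_PIN reference, released with unpin_user_pages(). */
static int touch_page_pin(unsigned long uaddr)
{
	struct page *page;
	int ret;

	ret = pin_user_pages_fast(uaddr, 1, FOLL_WRITE, &page);
	if (ret != 1)
		return ret < 0 ? ret : -EFAULT;
	/* ... access the data within the page ... */
	set_page_dirty_lock(page);
	unpin_user_pages(&page, 1);
	return 0;
}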
2020 Jun 01
0
[PATCH v2 1/2] docs: mm/gup: pin_user_pages.rst: add a "case 5"
...idered a +superset of Case 1, plus Case 2, plus anything that invokes that pattern. In +other words, if the code is neither Case 1 nor Case 2, it may still require +FOLL_PIN, for patterns like this: + +Correct (uses FOLL_PIN calls): + pin_user_pages() + write to the data within the pages + unpin_user_pages() + +INCORRECT (uses FOLL_GET calls): + get_user_pages() + write to the data within the pages + put_page() + page_maybe_dma_pinned(): the whole point of pinning =================================================== -- 2.26.2
2020 Sep 25
0
[PATCH 2/2] mm: remove extra ZONE_DEVICE struct page refcount
ZONE_DEVICE struct pages have an extra reference count that complicates the code for put_page() and several places in the kernel that need to check the reference count to see that a page is not being used (gup, compaction, migration, etc.). Clean up the code so the reference count doesn't need to be treated specially for ZONE_DEVICE. Signed-off-by: Ralph Campbell <rcampbell at
2020 Oct 01
0
[RFC PATCH v3 2/2] mm: remove extra ZONE_DEVICE struct page refcount
ZONE_DEVICE struct pages have an extra reference count that complicates the code for put_page() and several places in the kernel that need to check the reference count to see that a page is not being used (gup, compaction, migration, etc.). Clean up the code so the reference count doesn't need to be treated specially for ZONE_DEVICE. Signed-off-by: Ralph Campbell <rcampbell at
2023 Feb 16
0
[RFC PATCH v1 07/12] vsock/virtio: MGS_ZEROCOPY flag support
On Mon, Feb 06, 2023 at 07:00:35AM +0000, Arseniy Krasnov wrote: >This adds main logic of MSG_ZEROCOPY flag processing for packet >creation. When this flag is set and user's iov iterator fits for >zerocopy transmission, call 'get_user_pages()' and add returned >pages to the newly created skb. > >Signed-off-by: Arseniy Krasnov <AVKrasnov at sberdevices.ru>
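A hedged sketch of the idea being reviewed there (not the posted patch): take references on the pages backing the user buffer and attach them to the skb as fragments instead of copying. The function and its length accounting are simplified assumptions; get_user_pages_fast() and skb_fill_page_desc() are real kernel APIs.

#include <linux/mm.h>
#include <linux/skbuff.h>

static int attach_user_pages_to_skb(struct sk_buff *skb,
				    unsigned long uaddr, size_t len)
{
	struct page *pages[MAX_SKB_FRAGS];
	unsigned long offset = uaddr & ~PAGE_MASK;
	int npages = DIV_ROUND_UP(offset + len, PAGE_SIZE);
	int got, i;

	if (npages > MAX_SKB_FRAGS)
		return -EMSGSIZE;

	/* Read-only access for transmit, so no FOLL_WRITE needed. */
	got = get_user_pages_fast(uaddr & PAGE_MASK, npages, 0, pages);
	if (got != npages) {
		for (i = 0; i < got; i++)
			put_page(pages[i]);
		return got < 0 ? got : -EFAULT;
	}

	for (i = 0; i < npages; i++) {
		size_t chunk = min_t(size_t, len, PAGE_SIZE - offset);

		/* Hang the user page off the skb as a fragment. */
		skb_fill_page_desc(skb, i, pages[i], offset, chunk);
		skb->len += chunk;
		skb->data_len += chunk;
		skb->truesize += chunk;
		len -= chunk;
		offset = 0;
	}
	return 0;
}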
2020 Sep 25
6
[RFC PATCH v2 0/2] mm: remove extra ZONE_DEVICE struct page refcount
Matthew Wilcox, Ira Weiny, and others have complained that ZONE_DEVICE struct page reference counting is ugly because they are "free" when the reference count is one instead of zero. This leads to explicit checks for ZONE_DEVICE pages in places like put_page(), GUP, THP splitting, and page migration which have to adjust the expected reference count when determining if the page is
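For context, this is roughly the shape of the put_page() special-casing the series wants to remove, paraphrased from v5.8-era code; helper names shifted between releases, so treat it as a paraphrase rather than the exact tree.

/*
 * Approximate ~v5.8 put_page(): because driver-managed ZONE_DEVICE
 * pages are "free" at refcount == 1 rather than 0, they must be
 * diverted before the normal refcount-zero release path.
 */
static inline void put_page(struct page *page)
{
	page = compound_head(page);

	/*
	 * Devmap-managed pages never reach refcount zero in normal use;
	 * the driver is notified when the count drops back to one.
	 */
	if (page_is_devmap_managed(page)) {
		put_devmap_managed_page(page);
		return;
	}

	if (put_page_testzero(page))
		__put_page(page);
}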
2020 Oct 01
8
[RFC PATCH v3 0/2] mm: remove extra ZONE_DEVICE struct page refcount
This is still an RFC because after looking at the pmem/dax code some more, I realized that the ZONE_DEVICE struct pages are being inserted into the process' page tables with vmf_insert_mixed() and a zero refcount on the ZONE_DEVICE struct page. This is sort of OK because insert_pfn() increments the reference count on the pgmap which is what prevents memunmap_pages() from freeing the struct
2020 Sep 14
5
[PATCH] mm: remove extra ZONE_DEVICE struct page refcount
ZONE_DEVICE struct pages have an extra reference count that complicates the code for put_page() and several places in the kernel that need to check the reference count to see that a page is not being used (gup, compaction, migration, etc.). Clean up the code so the reference count doesn't need to be treated specially for ZONE_DEVICE. Signed-off-by: Ralph Campbell <rcampbell at