search for: set_page_dirty_lock

Displaying 20 results from an estimated 66 matches for "set_page_dirty_lock".

2020 May 29
6
[PATCH 0/2] vhost, docs: convert to pin_user_pages(), new "case 5"
Hi, It recently became clear to me that there are some get_user_pages*() callers that don't fit neatly into any of the four cases that are so far listed in pin_user_pages.rst. vhost.c is one of those. Add a Case 5 to the documentation, and refer to that when converting vhost.c. Thanks to Jan Kara for helping me (again) in understanding the interaction between get_user_pages() and page...
2020 Jun 12
1
[PATCH 1/2] docs: mm/gup: pin_user_pages.rst: add a "case 5"
...s Case 2, plus anything that invokes that pattern. In > +other words, if the code is neither Case 1 nor Case 2, it may still require > +FOLL_PIN, for patterns like this: > + > +Correct (uses FOLL_PIN calls): > + pin_user_pages() > + access the data within the pages > + set_page_dirty_lock() > + unpin_user_pages() > + > +INCORRECT (uses FOLL_GET calls): > + get_user_pages() > + access the data within the pages > + set_page_dirty_lock() > + put_page() Why does this case need to pin? Why can't it just do ... get_user_pages() lock_page(page);...
2020 May 31
1
[PATCH 1/2] docs: mm/gup: pin_user_pages.rst: add a "case 5"
...s Case 2, plus anything that invokes that pattern. In > +other words, if the code is neither Case 1 nor Case 2, it may still require > +FOLL_PIN, for patterns like this: > + > +Correct (uses FOLL_PIN calls): > + pin_user_pages() > + access the data within the pages > + set_page_dirty_lock() > + unpin_user_pages() > + > +INCORRECT (uses FOLL_GET calls): > + get_user_pages() > + access the data within the pages > + set_page_dirty_lock() > + put_page() > + > page_maybe_dma_pinned(): the whole point of pinning > ============================...
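The excerpts above keep the two patterns side by side; as a minimal, self-contained sketch of the Correct one, the following assumes pin_user_pages_fast() (whose signature has been stable, unlike pin_user_pages() with its since-removed vmas argument), a made-up helper name write_to_user_page(), and a write that stays within one page:

#include <linux/mm.h>
#include <linux/highmem.h>

/* Sketch only: assumes uaddr + len does not cross a page boundary. */
static int write_to_user_page(unsigned long uaddr, const void *src, size_t len)
{
	struct page *page;
	void *kaddr;
	int ret;

	ret = pin_user_pages_fast(uaddr, 1, FOLL_WRITE, &page);
	if (ret != 1)
		return ret < 0 ? ret : -EFAULT;

	kaddr = kmap_atomic(page);
	memcpy(kaddr + offset_in_page(uaddr), src, len);
	kunmap_atomic(kaddr);

	set_page_dirty_lock(page);	/* dirty while the pin is still held */
	unpin_user_page(page);		/* FOLL_PIN release, not put_page() */
	return 0;
}

For arrays of pages, unpin_user_pages_dirty_lock() combines the last two steps into one call.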
2019 Mar 07
3
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...45:57AM +0800, Jason Wang wrote: > > On 2019/3/7 12:31, Michael S. Tsirkin wrote: > > > +static void vhost_set_vmap_dirty(struct vhost_vmap *used) > > > +{ > > > + int i; > > > + > > > + for (i = 0; i < used->npages; i++) > > > + set_page_dirty_lock(used->pages[i]); > > This seems to rely on the page lock to mark the page dirty. > > > > Could it happen that page writeback will check the > > page, find it clean, and then you mark it dirty and then > > the invalidate callback is called? > > > > > > Yes....
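Reconstructed from the flattened diff in that excerpt (struct vhost_vmap with an npages count and a pages[] array is as posted in the RFC), the loop under discussion is simply:

static void vhost_set_vmap_dirty(struct vhost_vmap *used)
{
	int i;

	for (i = 0; i < used->npages; i++)
		set_page_dirty_lock(used->pages[i]);
}

The concern raised is ordering: writeback can sample a page as clean before this loop marks it dirty, with the invalidate callback racing in between.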
2019 Apr 09
2
[PATCH net] vhost: flush dcache page when logging dirty pages
...c b/drivers/vhost/vhost.c index 351af88231ad..34a1cedbc5ba 100644 --- a/drivers/vhost/vhost.c +++ b/drivers/vhost/vhost.c @@ -1711,6 +1711,7 @@ static int set_bit_to_user(int nr, void __user *addr) base = kmap_atomic(page); set_bit(bit, base); kunmap_atomic(base); + flush_dcache_page(page); set_page_dirty_lock(page); put_page(page); return 0; -- 2.19.1
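Put back together from the diff, the patched drivers/vhost/vhost.c function would look roughly like this; the context lines around the hunk are abbreviated in the excerpt, so the surrounding code is a best-effort reconstruction of the vhost.c of that era:

static int set_bit_to_user(int nr, void __user *addr)
{
	unsigned long log = (unsigned long)addr;
	struct page *page;
	void *base;
	int bit = nr + (log % PAGE_SIZE) * 8;
	int r;

	r = get_user_pages_fast(log, 1, 1, &page);
	if (r < 0)
		return r;
	BUG_ON(r != 1);
	base = kmap_atomic(page);
	set_bit(bit, base);
	kunmap_atomic(base);
	flush_dcache_page(page);	/* the one-line addition: keep the dcache coherent */
	set_page_dirty_lock(page);
	put_page(page);
	return 0;
}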
2020 May 29
0
[PATCH 1/2] docs: mm/gup: pin_user_pages.rst: add a "case 5"
...nsidered a +superset of Case 1, plus Case 2, plus anything that invokes that pattern. In +other words, if the code is neither Case 1 nor Case 2, it may still require +FOLL_PIN, for patterns like this: + +Correct (uses FOLL_PIN calls): + pin_user_pages() + access the data within the pages + set_page_dirty_lock() + unpin_user_pages() + +INCORRECT (uses FOLL_GET calls): + get_user_pages() + access the data within the pages + set_page_dirty_lock() + put_page() + page_maybe_dma_pinned(): the whole point of pinning =================================================== -- 2.26.2
2019 Mar 07
1
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...t; > > On 2019/3/7 ??12:31, Michael S. Tsirkin wrote: > > > > +static void vhost_set_vmap_dirty(struct vhost_vmap *used) > > > > +{ > > > > + int i; > > > > + > > > > + for (i = 0; i < used->npages; i++) > > > > + set_page_dirty_lock(used->pages[i]); > > > This seems to rely on page lock to mark page dirty. > > > > > > Could it happen that page writeback will check the > > > page, find it clean, and then you mark it dirty and then > > > invalidate callback is called? > > &g...
2019 Mar 07
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...backed by anon or tmpfs or hugetlbfs to solve this of course). It sounds like we should at least optimize away the _lock from set_page_dirty if it's anon/hugetlbfs/tmpfs; it would be nice if there were a clean way to do that. Now assuming we don't nak the use on ext4 VM_SHARED and we stick to set_page_dirty_lock for such a case: could you recap how that __writepage ext4 crash was solved if try_to_free_buffers() ran on a pinned GUP page (in our vhost case try_to_unmap would have gotten rid of the pins through the mmu notifier and the page would have been freed just fine). The first two things that come to mi...
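The optimization being hinted at might look something like the sketch below; vhost_set_dirty() is a made-up name, and whether skipping the page lock is actually safe for these backing stores is precisely what the thread is debating:

#include <linux/mm.h>
#include <linux/hugetlb.h>
#include <linux/shmem_fs.h>

/* Hypothetical: skip the _lock variant for pages with no writeback race. */
static void vhost_set_dirty(struct page *page)
{
	struct address_space *mapping = page_mapping(page);

	if (PageAnon(page) || PageHuge(page) ||
	    (mapping && shmem_mapping(mapping)))
		set_page_dirty(page);		/* anon/hugetlbfs/tmpfs */
	else
		set_page_dirty_lock(page);	/* VM_SHARED file mappings, e.g. ext4 */
}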
2019 May 07
4
[PATCH RFC] vhost: don't use kmap() to log dirty pages
...< 0) return r; BUG_ON(r != 1); - base = kmap_atomic(page); - set_bit(bit, base); - kunmap_atomic(base); + + r = futex_atomic_cmpxchg_inatomic(&old_log, addr, 0, 0); + if (r < 0) + return r; + + old_log |= 1 << nr; + r = put_user(old_log, addr); + if (r < 0) + return r; + set_page_dirty_lock(page); put_page(page); return 0; @@ -1727,8 +1730,8 @@ static int log_write(void __user *log_base, write_length += write_address % VHOST_PAGE_SIZE; for (;;) { u64 base = (u64)(unsigned long)log_base; - u64 log = base + write_page / 8; - int bit = write_page % 8; + u64 log = base + wri...
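Reassembled from the diff, the new bit-setting path would be approximately the following; old_log's type is assumed to be u32, and the error handling is tidied here so the page reference is not leaked on failure (the excerpt's hunk returns directly):

static int set_bit_to_user(int nr, u32 __user *addr)
{
	struct page *page;
	u32 old_log;
	int r;

	r = get_user_pages_fast((unsigned long)addr, 1, 1, &page);
	if (r < 0)
		return r;
	BUG_ON(r != 1);

	/* cmpxchg with old == new == 0 just reads the current log word. */
	r = futex_atomic_cmpxchg_inatomic(&old_log, addr, 0, 0);
	if (r < 0)
		goto out;
	old_log |= 1 << nr;
	r = put_user(old_log, addr);	/* read-modify-write: not atomic overall */
out:
	set_page_dirty_lock(page);
	put_page(page);
	return r;
}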
2019 Mar 06
2
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...normal copy_user() implementation gracefully. The > invalidation was synchronized with the datapath through the vq mutex, and in > order to avoid holding the vq mutex during range checking, the MMU notifier was > torn down when trying to modify vq metadata. > > Dirty page checking is done by calling set_page_dirty_lock() > explicitly for the pages that the used ring stays in after each round of > processing. > > Note that this was only done when the device IOTLB is not enabled. We > could use a similar method to optimize it in the future. > > Tests show at most about a 22% improvement in TX PPS when usin...
2019 Jul 24
20
[PATCH 00/12] block/bio, fs: convert put_page() to put_user_page*()
From: John Hubbard <jhubbard at nvidia.com> Hi, This is mostly Jerome's work, converting the block/bio and related areas to call put_user_page*() instead of put_page(). Because I've changed Jerome's patches, in some cases significantly, I'd like to get his feedback before we actually leave him listed as the author (he might want to disown some or all of these). I added a...
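The shape of the conversion is roughly the following; the function names here are made up for illustration, and the sketch uses the three-argument put_user_pages_dirty_lock() form the API eventually settled on (this family was later renamed to unpin_user_page*()):

#include <linux/mm.h>

/* Before: generic page release, wrong bookkeeping for GUP references. */
static void release_bio_pages_old(struct page **pages, unsigned long npages,
				  bool dirty)
{
	unsigned long i;

	for (i = 0; i < npages; i++) {
		if (dirty)
			set_page_dirty_lock(pages[i]);
		put_page(pages[i]);
	}
}

/* After: one call that dirties and releases GUP-pinned pages. */
static void release_bio_pages_new(struct page **pages, unsigned long npages,
				  bool dirty)
{
	put_user_pages_dirty_lock(pages, npages, dirty);
}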
2019 May 09
2
[RFC PATCH V2] vhost: don't use kmap() to log dirty pages
...r = get_user_pages_fast(log, 1, 1, &page); if (r < 0) return r; BUG_ON(r != 1); - base = kmap_atomic(page); - set_bit(bit, base); - kunmap_atomic(base); + + r = arch_futex_atomic_op_inuser(FUTEX_OP_ADD, 1 << nr, &old, addr); + /* TODO: fallback to kmap() when -ENOSYS? */ + set_page_dirty_lock(page); put_page(page); - return 0; + return r; } -static int log_write(void __user *log_base, +static int log_write(u32 __user *log_base, u64 write_address, u64 write_length) { u64 write_page = write_address / VHOST_PAGE_SIZE; @@ -1726,12 +1727,10 @@ static int log_write(void __user...
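Reassembled from this V2 diff, the bit-setting path becomes a single atomic operation on user memory. Note that FUTEX_OP_ADD with 1 << nr only behaves like a bit-set while the bit is still clear (FUTEX_OP_OR would be the bitwise variant), and the quoted TODO notes a kmap() fallback is still needed for -ENOSYS:

static int set_bit_to_user(int nr, u32 __user *addr)
{
	struct page *page;
	int old, r;

	r = get_user_pages_fast((unsigned long)addr, 1, 1, &page);
	if (r < 0)
		return r;
	BUG_ON(r != 1);

	/* Atomically add (effectively OR, for a clear bit) in the user page. */
	r = arch_futex_atomic_op_inuser(FUTEX_OP_ADD, 1 << nr, &old, addr);
	/* TODO (from the RFC): fall back to kmap() when -ENOSYS? */

	set_page_dirty_lock(page);
	put_page(page);
	return r;
}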
2019 Mar 08
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...anon memory or tmpfs or > hugetlbfs as backing store for the virtio ring. It wouldn't make sense > for qemu to risk triggering I/O on a VM_SHARED ext4, so we shouldn't > even be exposed to what seems to be an orthogonal kernel bug. > > I suppose whatever solution will fix the set_page_dirty_lock on > VM_SHARED ext4 for the other places that don't or can't use mmu > notifiers will then work for vhost too, which uses mmu notifiers and > will be less affected from the start, if anything. > > Reading the lwn link about the discussion about the long term GUP pin > from...
2019 Mar 07
3
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...ring qemu will only use anon memory or tmpfs or hugetlbfs as backing store for the virtio ring. It wouldn't make sense for qemu to risk triggering I/O on a VM_SHARED ext4, so we shouldn't even be exposed to what seems to be an orthogonal kernel bug. I suppose whatever solution will fix the set_page_dirty_lock on VM_SHARED ext4 for the other places that don't or can't use mmu notifiers will then work for vhost too, which uses mmu notifiers and will be less affected from the start, if anything. Reading the lwn link about the discussion about the long term GUP pin from Jan vs set_page_dirty_lock:...