search for: gup_fast

Displaying 20 results from an estimated 23 matches for "gup_fast".

2019 Mar 08
3
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...s by just dropping vmap/vunmap. You can just use kmap (or kmap_atomic if you're in preemptible section, should work from bh/irq). In short the mmu notifier to invalidate only sets a "struct page * userringpage" pointer to NULL without calls to vunmap. In all cases immediately after gup_fast returns you can always call put_page immediately (which explains why I'd like an option to drop FOLL_GET from gup_fast to speed it up). Then you can check the sequence_counter and inc/dec counter increased by _start/_end. That will tell you if the page you got and you called put_page to immedi...
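The scheme this excerpt describes, spelled out as a minimal sketch: pin with gup_fast, drop the pin immediately, and detect invalidation with a sequence count driven by the notifier's invalidate_range_start()/_end(). All names are illustrative and this is not vhost code; it only assumes the current gup_flags-taking get_user_pages_fast() signature.

    #include <linux/mm.h>
    #include <linux/seqlock.h>

    /* Sketch only: map_seq is assumed to be bumped (write side) by the mmu
     * notifier's invalidate_range_start()/_end() callbacks, as in the mail. */
    static seqcount_t map_seq = SEQCNT_ZERO(map_seq);

    static struct page *get_ring_page(unsigned long uaddr)
    {
            struct page *page;
            unsigned int seq;

            do {
                    seq = read_seqcount_begin(&map_seq);
                    if (get_user_pages_fast(uaddr, 1, FOLL_WRITE, &page) != 1)
                            return NULL;
                    /* drop the pin right away, as the mail suggests */
                    put_page(page);
            } while (read_seqcount_retry(&map_seq, seq));

            return page;    /* usable until the next invalidate */
    }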
2019 Mar 11
4
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...map_atomic if you're in preemptible > > section, should work from bh/irq). > > > > In short the mmu notifier to invalidate only sets a "struct page * > > userringpage" pointer to NULL without calls to vunmap. > > > > In all cases immediately after gup_fast returns you can always call > > put_page immediately (which explains why I'd like an option to drop > > FOLL_GET from gup_fast to speed it up). > > > > Then you can check the sequence_counter and inc/dec counter increased > > by _start/_end. That will tell you if...
2019 Mar 11
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...> You can just use kmap (or kmap_atomic if you're in preemptible > section, should work from bh/irq). > > In short the mmu notifier to invalidate only sets a "struct page * > userringpage" pointer to NULL without calls to vunmap. > > In all cases immediately after gup_fast returns you can always call > put_page immediately (which explains why I'd like an option to drop > FOLL_GET from gup_fast to speed it up). > > Then you can check the sequence_counter and inc/dec counter increased > by _start/_end. That will tell you if the page you got and you c...
2019 Mar 08
2
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
On 2019/3/8 5:27, Andrea Arcangeli wrote: > Hello Jerome, > > On Thu, Mar 07, 2019 at 03:17:22PM -0500, Jerome Glisse wrote: >> So for the above the easiest thing is to call set_page_dirty() from >> the mmu notifier callback. It is always safe to use the non locking >> variant from such callback. Well it is safe only if the page was >> mapped with write permission
2019 Mar 12
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...u're in preemptible >>> section, should work from bh/irq). >>> >>> In short the mmu notifier to invalidate only sets a "struct page * >>> userringpage" pointer to NULL without calls to vunmap. >>> >>> In all cases immediately after gup_fast returns you can always call >>> put_page immediately (which explains why I'd like an option to drop >>> FOLL_GET from gup_fast to speed it up). >>> >>> Then you can check the sequence_counter and inc/dec counter increased >>> by _start/_end. That wil...
2019 Mar 12
1
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...section, should work from bh/irq). > > > > > > > > In short the mmu notifier to invalidate only sets a "struct page * > > > > userringpage" pointer to NULL without calls to vunmap. > > > > > > > > In all cases immediately after gup_fast returns you can always call > > > > put_page immediately (which explains why I'd like an option to drop > > > > FOLL_GET from gup_fast to speed it up). > > > > > > > > Then you can check the sequence_counter and inc/dec counter increased > >...
2019 Mar 08
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...pping around, the pages for used ring was marked as > dirty after a round of virtqueue processing when we're sure vhost wrote > something there. Thanks for the clarification. So we need to convert it to set_page_dirty and move it to the mmu notifier invalidate but in those cases where gup_fast was called with write=1 (1 out of 3). If using ->invalidate_range the page pin also must be removed immediately after get_user_pages returns (not ok to hold the pin in vmap until ->invalidate_range is called) to avoid false positive gup pin checks in things like KSM, or the pin must be relea...
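A sketch of the direction discussed here: dirty the page from the notifier invalidate, but only for mappings that came from a gup_fast(write=1) call. The structure and names are made up for illustration, not vhost's.

    #include <linux/mm.h>

    struct ring_map {
            struct page *page;
            bool writable;          /* true iff gup_fast was called with write=1 */
    };

    /* Called from the mmu notifier invalidate: dirty the page only in the
     * write case, then drop the record so the datapath re-pins later. */
    static void ring_map_invalidate(struct ring_map *map)
    {
            if (map->page && map->writable)
                    set_page_dirty(map->page);      /* non-locking variant, per the thread */
            map->page = NULL;
    }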
2019 Mar 07
3
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...could re-instantiate a mapping to the old page in between the set_pte_at and the invalidate_range_end (which internally calls ->invalidate_range). Jerome documented it nicely in Documentation/vm/mmu_notifier.rst . Now you don't really walk the pagetable in hardware in vhost, but if you use gup_fast after usemm() it's similar. For vhost the invalidate would be really fast, there are no IPI to deliver at all, the problem is just the mutex. > That's a separate issue from set_page_dirty when memory is file backed. Yes. I don't yet know why the ext4 internal __writepage cannot re...
2020 Mar 21
1
[PATCH 4/4] mm: check the device private page owner in hmm_range_fault
...d we do something like if (is_device_private_entry()) { rcu_read_lock() if (READ_ONCE(*ptep) != pte) return -EBUSY; hmm_is_device_private_entry() rcu_read_unlock() } ? Then pgmap needs a synchronize_rcu before the struct page's are destroyed (possibly gup_fast already requires this?) I've got some other patches trying to close some of these styles of bugs, but > note that current mainline doesn't even use it for this path.. Don't follow? Jason
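The recheck floated in this excerpt, written out as a sketch. The rcu_read_lock()/READ_ONCE pattern follows the quoted proposal and relies on pgmap teardown doing a synchronize_rcu(); the function name and the error-handling shape are assumptions, not the code that was merged.

    #include <linux/mm.h>
    #include <linux/swapops.h>
    #include <linux/rcupdate.h>

    /* Confirm the PTE has not changed since it was sampled before touching
     * the device private page it points at. */
    static int check_device_private(pte_t *ptep, pte_t pte)
    {
            swp_entry_t entry = pte_to_swp_entry(pte);
            int ret = 0;

            if (!is_device_private_entry(entry))
                    return 0;

            rcu_read_lock();
            if (!pte_same(READ_ONCE(*ptep), pte)) {
                    ret = -EBUSY;   /* PTE changed under us, caller retries */
            } else {
                    /* ... inspect the device private page / its owner ... */
            }
            rcu_read_unlock();
            return ret;
    }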
2019 Mar 14
2
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
On 2019/3/14 6:42, Michael S. Tsirkin wrote: >>>>> Which means after we fix vhost to add the flush_dcache_page after >>>>> kunmap, Parisc will get a double hit (but it also means Parisc >>>>> was the only one of those archs needed explicit cache flushes, >>>>> where vhost worked correctly so far.. so it kind of proves your
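The kunmap-then-flush ordering under discussion, as a tiny sketch. The helper is hypothetical; flush_dcache_page() is a no-op on x86 but a real flush on virtually indexed caches such as PA-RISC.

    #include <linux/highmem.h>
    #include <linux/string.h>

    /* Write into a guest-visible page through a temporary kernel mapping,
     * then flush the D-cache so userspace sees the update everywhere. */
    static void copy_to_guest_page(struct page *page, unsigned int offset,
                                   const void *src, size_t len)
    {
            void *vaddr = kmap_atomic(page);

            memcpy(vaddr + offset, src, len);
            kunmap_atomic(vaddr);
            flush_dcache_page(page);
    }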
2019 Mar 14
0
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
...essy, and it would prevent the use of gigapages in the direct mapping too and it'd require vmap for 4k tracking. To make sure set_page_dirty is run a single time no matter if the invalidate known when a mapping is tear down, I suggested the below model: access = FOLL_WRITE repeat: page = gup_fast(access) put_page(page) /* need a way to drop FOLL_GET from gup_fast instead! */ spin_lock(mmu_notifier_lock); if (race with invalidate) { spin_unlock.. goto repeat; } if (access == FOLL_WRITE) set_page_dirty(page) establish writable mapping in secondary MMU on page spin_u...
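The model quoted in this excerpt, reflowed into a sketch. The lock and the race flag stand in for whatever state the mmu notifier implementation would maintain; this is not existing vhost code.

    #include <linux/mm.h>
    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(mmu_notifier_lock);
    static bool invalidate_raced;   /* set by invalidate_range_start() */

    static struct page *map_guest_page(unsigned long uaddr, unsigned int access)
    {
            struct page *page;

    again:
            if (get_user_pages_fast(uaddr, 1, access, &page) != 1)
                    return NULL;
            /* need a way to drop FOLL_GET from gup_fast instead, per the mail */
            put_page(page);

            spin_lock(&mmu_notifier_lock);
            if (invalidate_raced) {         /* raced with an invalidate */
                    invalidate_raced = false;
                    spin_unlock(&mmu_notifier_lock);
                    goto again;
            }
            if (access & FOLL_WRITE)
                    set_page_dirty(page);
            /* ... establish the writable mapping in the secondary MMU ... */
            spin_unlock(&mmu_notifier_lock);

            return page;
    }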
2019 Mar 08
1
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...y in > > Documentation/vm/mmu_notifier.rst . > > > Right, I've actually gone through this several times but some details were > missed by me obviously. > > > > > > Now you don't really walk the pagetable in hardware in vhost, but if > > you use gup_fast after usemm() it's similar. > > > > For vhost the invalidate would be really fast, there are no IPI to > > deliver at all, the problem is just the mutex. > > > Yes. A possible solution is to introduce a valid flag for VA. Vhost may only > try to access kernel VA...
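One way the "valid flag for VA" idea could look, assuming the notifier clears the flag and the datapath checks it under a lock before touching the vmap()ed metadata. Names and layout are illustrative only.

    #include <linux/errno.h>
    #include <linux/spinlock.h>
    #include <linux/string.h>

    struct vq_meta {
            spinlock_t lock;
            bool va_valid;          /* cleared by invalidate_range_start() */
            void *kva;              /* vmap()ed metadata */
    };

    /* Returns 0 on a fast-path read, -EAGAIN if the mapping was invalidated
     * and the caller must fall back to copy_from_user(). */
    static int vq_meta_read(struct vq_meta *m, size_t off, void *dst, size_t len)
    {
            int ret = -EAGAIN;

            spin_lock(&m->lock);
            if (m->va_valid) {
                    memcpy(dst, (char *)m->kva + off, len);
                    ret = 0;
            }
            spin_unlock(&m->lock);
            return ret;
    }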
2020 Mar 20
2
[PATCH 4/4] mm: check the device private page owner in hmm_range_fault
On Mon, Mar 16, 2020 at 08:32:16PM +0100, Christoph Hellwig wrote: > diff --git a/mm/hmm.c b/mm/hmm.c > index cfad65f6a67b..b75b3750e03d 100644 > +++ b/mm/hmm.c > @@ -216,6 +216,14 @@ int hmm_vma_handle_pmd(struct mm_walk *walk, unsigned long addr, > unsigned long end, uint64_t *pfns, pmd_t pmd); > #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ > > +static inline bool
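A sketch of the owner check the subject line describes: accept a device private entry only when its pgmap owner matches the owner the caller declared in hmm_range. The dev_private_owner field and the accessor names are assumptions taken from the surrounding discussion, not a copy of the merged patch.

    #include <linux/hmm.h>
    #include <linux/memremap.h>
    #include <linux/swapops.h>

    static inline bool hmm_is_device_private_entry(struct hmm_range *range,
                                                   swp_entry_t entry)
    {
            return is_device_private_entry(entry) &&
                   device_private_entry_to_page(entry)->pgmap->owner ==
                            range->dev_private_owner;
    }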
2019 Mar 11
0
[RFC PATCH V2 5/5] vhost: access vq metadata through kernel virtual address
...>> Documentation/vm/mmu_notifier.rst . >> >> Right, I've actually gone through this several times but some details were >> missed by me obviously. >> >> >>> Now you don't really walk the pagetable in hardware in vhost, but if >>> you use gup_fast after usemm() it's similar. >>> >>> For vhost the invalidate would be really fast, there are no IPI to >>> deliver at all, the problem is just the mutex. >> >> Yes. A possible solution is to introduce a valid flag for VA. Vhost may only >> try to acc...