search for: mmu_notifier_lock

Displaying 3 results from an estimated 3 matches for "mmu_notifier_lock".

2019 Mar 14
2
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
On 2019/3/14 6:42, Michael S. Tsirkin wrote: >>>>> Which means after we fix vhost to add the flush_dcache_page after kunmap, Parisc will get a double hit (but it also means Parisc was the only one of those archs that needed explicit cache flushes, where vhost worked correctly so far... so it kind of proves your...
2019 Mar 14
2
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
On 2019/3/14 6:42, Michael S. Tsirkin wrote: >>>>> Which means after we fix vhost to add the flush_dcache_page after kunmap, Parisc will get a double hit (but it also means Parisc was the only one of those archs that needed explicit cache flushes, where vhost worked correctly so far... so it kind of proves your...
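
For orientation, the ordering being debated in these two messages (an explicit flush after kunmap) looks roughly like the sketch below. The helper name copy_to_guest_page() and its parameters are illustrative only, not taken from the vhost patches.

#include <linux/highmem.h>
#include <linux/string.h>

/* Illustrative only: write into a guest-visible page through a temporary
 * kernel mapping, then flush the D-cache so architectures with
 * virtually-indexed caches (PARISC in the thread) see coherent data
 * through the userspace alias of the same page. */
static void copy_to_guest_page(struct page *page, unsigned int offset,
			       const void *src, size_t len)
{
	void *vaddr = kmap(page);

	memcpy(vaddr + offset, src, len);
	kunmap(page);

	/* the "double hit" being discussed: PARISC's kunmap() already
	 * flushes, so this explicit flush makes it flush twice there */
	flush_dcache_page(page);
}
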
2019 Mar 14
0
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
...4k tracking. To make sure set_page_dirty is run a single time, no matter whether the invalidate knows when a mapping is torn down, I suggested the below model:

	access = FOLL_WRITE

repeat:
	page = gup_fast(access)
	put_page(page) /* need a way to drop FOLL_GET from gup_fast instead! */

	spin_lock(mmu_notifier_lock);
	if (race with invalidate) {
		spin_unlock..
		goto repeat;
	}
	if (access == FOLL_WRITE)
		set_page_dirty(page)
	establish writable mapping in secondary MMU on page
	spin_unlock

(replace spin_lock with mutex_lock for vhost of course if you stick to a mutex and _start/_end instead of...
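
A minimal C sketch of that retry model, for orientation only: it assumes the invalidate_range_start()/_end() callbacks bump an invalidate_seq counter under the same map_lock taken here, it uses the current gup_flags-based get_user_pages_fast() signature, and the names (struct vhost_map, map_lock, invalidate_seq, vhost_map_page) are hypothetical, not taken from the thread. Per the parenthetical above, a mutex stands in for the spinlock since vhost would pair it with _start/_end rather than the non-sleepable ->invalidate_range.

#include <linux/mm.h>
#include <linux/highmem.h>
#include <linux/mutex.h>

/* Sketch only: struct and field names are illustrative, not from the patches. */
struct vhost_map {
	struct mutex	map_lock;	/* also taken by invalidate_range_start/_end */
	unsigned long	invalidate_seq;	/* bumped under map_lock by every invalidate */
	struct page	*page;		/* page backing the secondary-MMU mapping */
	void		*addr;		/* kernel address used for direct access */
};

static int vhost_map_page(struct vhost_map *m, unsigned long uaddr, bool write)
{
	unsigned int gup_flags = write ? FOLL_WRITE : 0;
	unsigned long seq;
	struct page *page;
	int ret;

repeat:
	seq = READ_ONCE(m->invalidate_seq);	/* snapshot before pinning */

	ret = get_user_pages_fast(uaddr, 1, gup_flags, &page);
	if (ret != 1)
		return ret < 0 ? ret : -EFAULT;
	/* "need a way to drop FOLL_GET from gup_fast instead!": the model
	 * drops the reference right away and relies on the MMU notifier,
	 * not the refcount, to keep the mapping coherent. */
	put_page(page);

	mutex_lock(&m->map_lock);
	if (seq != READ_ONCE(m->invalidate_seq)) {
		/* raced with an invalidate: start over */
		mutex_unlock(&m->map_lock);
		goto repeat;
	}
	if (write)
		set_page_dirty(page);
	/* establish the mapping in the secondary MMU while still holding
	 * the lock, so no invalidate can slip in between */
	m->page = page;
	m->addr = page_address(page);	/* stand-in for the real vmap setup */
	mutex_unlock(&m->map_lock);

	return 0;
}

Dropping the page reference immediately after gup_fast() is deliberate in this model: the MMU notifier, not the page refcount, is what keeps the secondary mapping coherent, which is exactly the point under discussion in the thread.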