search for: srcu_read_lock

Displaying 20 results from an estimated 51 matches for "srcu_read_lock".

2020 Jul 21
0
[PATCH v9 34/84] KVM: x86: page_track: add support for preread, prewrite and preexec
...pa_t gpa, gva_t gva,
+			     int bytes)
+{
+	struct kvm_page_track_notifier_head *head;
+	struct kvm_page_track_notifier_node *n;
+	int idx;
+	bool ret = true;
+
+	head = &vcpu->kvm->arch.track_notifier_head;
+
+	if (hlist_empty(&head->track_notifier_list))
+		return ret;
+
+	idx = srcu_read_lock(&head->track_srcu);
+	hlist_for_each_entry_rcu(n, &head->track_notifier_list, node)
+		if (n->track_preread)
+			if (!n->track_preread(vcpu, gpa, gva, bytes, n))
+				ret = false;
+	srcu_read_unlock(&head->track_srcu, idx);
+	return ret;
+}
+
+/*
+ * Notify the node that...
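The excerpt is a textbook SRCU reader: the cookie returned by srcu_read_lock() must be handed back to the matching srcu_read_unlock(), and the notifier list is walked with the RCU-safe hlist iterator. A minimal self-contained sketch of the same pattern (the demo_* names are illustrative, not the patch's identifiers):

#include <linux/srcu.h>
#include <linux/rculist.h>

/* Illustrative SRCU domain and notifier list; not the patch's own names. */
DEFINE_STATIC_SRCU(demo_srcu);
static HLIST_HEAD(demo_notifier_list);

struct demo_notifier {
	struct hlist_node node;
	bool (*callback)(void *data);
};

/* Walk the notifier list under SRCU protection, as the excerpt does. */
static bool demo_notify_all(void *data)
{
	struct demo_notifier *n;
	bool ret = true;
	int idx;

	idx = srcu_read_lock(&demo_srcu);
	hlist_for_each_entry_rcu(n, &demo_notifier_list, node)
		if (n->callback && !n->callback(data))
			ret = false;
	srcu_read_unlock(&demo_srcu, idx);
	return ret;
}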
2019 Aug 09
0
[RFC PATCH v6 27/92] kvm: introspection: use page track
...nvolving access
+ * bits that we've specifically cleared
+ */
+	if ((~allowed_access) & access)
+		return true;
+
+	return false;
+}
+
+static void kvmi_clear_mem_access(struct kvm *kvm)
+{
+	void **slot;
+	struct radix_tree_iter iter;
+	struct kvmi *ikvm = IKVM(kvm);
+	int idx;
+
+	idx = srcu_read_lock(&kvm->srcu);
+	spin_lock(&kvm->mmu_lock);
+	write_lock(&ikvm->access_tree_lock);
+
+	radix_tree_for_each_slot(slot, &ikvm->access_tree, &iter, 0) {
+		struct kvmi_mem_access *m = *slot;
+
+		m->access = full_access;
+		kvmi_arch_update_page_tracking(kvm, NULL, m);...
2020 Jul 21
0
[PATCH v9 77/84] KVM: introspection: add KVMI_VM_SET_PAGE_ACCESS
...-241,12 +251,38 @@ static void free_vcpui(struct kvm_vcpu *vcpu, bool restore_interception)
 	kvmi_make_request(vcpu, false);
 }
 
+static void kvmi_clear_mem_access(struct kvm *kvm)
+{
+	struct kvm_introspection *kvmi = KVMI(kvm);
+	struct radix_tree_iter iter;
+	void **slot;
+	int idx;
+
+	idx = srcu_read_lock(&kvm->srcu);
+	spin_lock(&kvm->mmu_lock);
+
+	radix_tree_for_each_slot(slot, &kvmi->access_tree, &iter, 0) {
+		struct kvmi_mem_access *m = *slot;
+
+		m->access = full_access;
+		kvmi_arch_update_page_tracking(kvm, NULL, m);
+
+		radix_tree_iter_delete(&kvmi->acc...
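Both introspection versions drain the access tree with the pre-xarray radix tree API, which permits deleting the current slot mid-iteration via radix_tree_iter_delete(). A stripped-down sketch of that drain loop, assuming that older interface and hypothetical demo_* names:

#include <linux/radix-tree.h>
#include <linux/slab.h>

/* Illustrative tree and payload; not the patches' own structures. */
static RADIX_TREE(demo_tree, GFP_KERNEL);

struct demo_entry {
	unsigned long value;
};

/* Drain every entry, deleting slots while iterating, as the excerpt does. */
static void demo_clear_tree(void)
{
	struct radix_tree_iter iter;
	void **slot;

	radix_tree_for_each_slot(slot, &demo_tree, &iter, 0) {
		struct demo_entry *e = *slot;

		radix_tree_iter_delete(&demo_tree, &iter, slot);
		kfree(e);
	}
}

Note that current kernels have largely replaced this API with the xarray; the sketch matches the 2019/2020 vintage of the patches above.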
2019 Mar 14
2
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
On 2019/3/14 6:42, Michael S. Tsirkin wrote:
>>>>> Which means after we fix vhost to add the flush_dcache_page after
>>>>> kunmap, Parisc will get a double hit (but it also means Parisc
>>>>> was the only one of those archs that needed explicit cache flushes,
>>>>> where vhost worked correctly so far... so it kind of proves your
2019 Mar 14
0
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
...happen in fast and frequent paths like the write() syscall. Dropping mm_take_all_locks would bring other downsides: a regular quiescent point could happen in between _start/_end, and since _start must always be called first, all the mmu notifier retry counters we rely on would break. One way would be to add srcu_read_lock _before_ you can call mm_has_notifiers(mm); then yes, we could replace mm_take_all_locks with synchronize_srcu. It would save a lot of CPU and a ton of locked operations, but it'd potentially increase the latency of registration, so the first O_DIRECT write() in a process could still p...
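The pairing being proposed here is the standard SRCU publish/wait idiom: readers check for registered notifiers only inside an SRCU read-side section, and registration publishes its state and then waits out a grace period with synchronize_srcu(). A rough sketch under those assumptions (illustrative names, not mm code):

#include <linux/srcu.h>

DEFINE_STATIC_SRCU(demo_srcu);
static bool demo_has_notifiers;

/* Reader side: test the flag only inside the SRCU section. */
static void demo_reader(void)
{
	int idx = srcu_read_lock(&demo_srcu);

	if (READ_ONCE(demo_has_notifiers)) {
		/* ... invalidate_range_start/_end style work ... */
	}
	srcu_read_unlock(&demo_srcu, idx);
}

/* Updater side: publish the flag, then wait for pre-existing readers. */
static void demo_register(void)
{
	WRITE_ONCE(demo_has_notifiers, true);
	synchronize_srcu(&demo_srcu);
	/*
	 * Readers that began before this point have finished; readers that
	 * begin after it are guaranteed to observe the flag as true.
	 */
}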
2016 Dec 19
2
[PATCH v7 08/11] x86, kvm/x86.c: support vcpu preempted check
...ext here.
	 * kvm_write_guest_offset_cached() would call might_fault()
@@ -2853,7 +2854,13 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
 	 * paging.
 	 */
 	pagefault_disable();
+	/*
+	 * kvm_memslots() will be called by
+	 * kvm_write_guest_offset_cached() so take the srcu lock.
+	 */
+	idx = srcu_read_lock(&vcpu->kvm->srcu);
 	kvm_steal_time_set_preempted(vcpu);
+	srcu_read_unlock(&vcpu->kvm->srcu, idx);
 	pagefault_enable();
 	kvm_x86_ops->vcpu_put(vcpu);
 	kvm_put_guest_fpu(vcpu);
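The diff wraps a single call in a per-VM SRCU read-side section (KVM protects its memslots with a per-VM srcu_struct, which readers may hold while sleeping). As a reminder of the API shape, a generic sketch of the SRCU lifecycle, not KVM code:

#include <linux/srcu.h>

/* A dynamically initialized SRCU domain embedded in a larger object. */
struct demo_obj {
	struct srcu_struct srcu;
};

static int demo_init(struct demo_obj *obj)
{
	return init_srcu_struct(&obj->srcu);
}

static void demo_read(struct demo_obj *obj)
{
	int idx;

	idx = srcu_read_lock(&obj->srcu);
	/* ... access SRCU-protected state; unlike RCU, this may sleep ... */
	srcu_read_unlock(&obj->srcu, idx);
}

static void demo_teardown(struct demo_obj *obj)
{
	synchronize_srcu(&obj->srcu);	/* wait for in-flight readers */
	cleanup_srcu_struct(&obj->srcu);
}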
2019 Aug 02
5
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...r the various mm locks is a deadlock situation.

> Then I try spinlock and mutex:
>
> 1) spinlock: adds lots of overhead on the datapath, which leads to zero
> performance improvement.

I think the topic here is correctness, not performance improvement.

> 2) SRCU: a full memory barrier is required on srcu_read_lock(), which still
> yields little performance improvement.
>
> 3) mutex: a possible issue is the need to wait for the page to be swapped in
> (is this unacceptable?), another issue is that we need to hold the vq lock
> during the range overlap check.

I have a feeling that mmu notifiers cannot safely bec...
2019 Aug 05
0
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...ock and mutex:
>>
>> 1) spinlock: adds lots of overhead on the datapath, which leads to zero
>> performance improvement.
> I think the topic here is correctness, not performance improvement.

But the whole series is to speed up vhost.

>> 2) SRCU: a full memory barrier is required on srcu_read_lock(), which still
>> yields little performance improvement.
>
>> 3) mutex: a possible issue is the need to wait for the page to be swapped in
>> (is this unacceptable?), another issue is that we need to hold the vq lock
>> during the range overlap check.
> I have a feeling that mmu no...
2020 Jul 21
0
[PATCH v9 04/84] KVM: add kvm_get_max_gfn()
..._vm_ioctl_set_memory_region(struct kvm *kvm,
 	return kvm_set_memory_region(kvm, mem);
 }
 
+gfn_t kvm_get_max_gfn(struct kvm *kvm)
+{
+	u32 skip_mask = KVM_MEM_READONLY | KVM_MEMSLOT_INVALID;
+	struct kvm_memory_slot *memslot;
+	struct kvm_memslots *slots;
+	gfn_t max_gfn = 0;
+	int idx;
+
+	idx = srcu_read_lock(&kvm->srcu);
+	spin_lock(&kvm->mmu_lock);
+
+	slots = kvm_memslots(kvm);
+	kvm_for_each_memslot(memslot, slots)
+		if (memslot->id < KVM_USER_MEM_SLOTS &&
+		    (memslot->flags & skip_mask) == 0)
+			max_gfn = max(max_gfn, memslot->base_gfn +
+					       memslot->...
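Several excerpts above share the same lock ordering: enter the (sleepable) SRCU read-side section first, then take a spinlock for the non-sleeping inner critical section. A minimal sketch of that nesting, with illustrative names:

#include <linux/srcu.h>
#include <linux/spinlock.h>

DEFINE_STATIC_SRCU(demo_srcu);
static DEFINE_SPINLOCK(demo_lock);

/* SRCU outside, spinlock inside: srcu_read_lock() never sleeps, so this
 * nesting is safe in either order, but taking SRCU first keeps the
 * spinlock-held region minimal. */
static void demo_walk(void)
{
	int idx;

	idx = srcu_read_lock(&demo_srcu);
	spin_lock(&demo_lock);
	/* ... walk SRCU-published structures while holding the lock ... */
	spin_unlock(&demo_lock);
	srcu_read_unlock(&demo_srcu, idx);
}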
2019 Aug 01
3
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
On Thu, Aug 01, 2019 at 01:02:18PM +0800, Jason Wang wrote:
>
> On 2019/8/1 3:30, Jason Gunthorpe wrote:
> > On Wed, Jul 31, 2019 at 09:28:20PM +0800, Jason Wang wrote:
> > > On 2019/7/31 8:39, Jason Gunthorpe wrote:
> > > > On Wed, Jul 31, 2019 at 04:46:53AM -0400, Jason Wang wrote:
> > > > > We used to use RCU to synchronize MMU notifier with
2020 Feb 07
0
[RFC PATCH v7 31/78] KVM: x86: page track: add track_create_slot() callback
...x;
 	int i;
 
 	for (i = 0; i < KVM_PAGE_TRACK_MAX; i++) {
@@ -45,6 +48,17 @@ int kvm_page_track_create_memslot(struct kvm_memory_slot *slot,
 			goto track_free;
 	}
 
+	head = &kvm->arch.track_notifier_head;
+
+	if (hlist_empty(&head->track_notifier_list))
+		return 0;
+
+	idx = srcu_read_lock(&head->track_srcu);
+	hlist_for_each_entry_rcu(n, &head->track_notifier_list, node)
+		if (n->track_create_slot)
+			n->track_create_slot(kvm, slot, npages, n);
+	srcu_read_unlock(&head->track_srcu, idx);
+
 	return 0;
 
 track_free:
diff --git a/arch/x86/kvm/x86.c b/arch...
2019 Oct 28
0
[PATCH v2 02/15] mm/mmu_notifier: add an interval tree notifier
...n_mm,
+				struct mm_struct *mm)
 {
 	struct mmu_notifier *mn;
 	int id;
 
+	if (mmn_mm->has_interval)
+		mn_itree_release(mmn_mm, mm);
+
+	if (hlist_empty(&mmn_mm->list))
+		return;
+
 	/*
 	 * SRCU here will block mmu_notifier_unregister until
 	 * ->release returns.
 	 */
 	id = srcu_read_lock(&srcu);
-	hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist)
+	hlist_for_each_entry_rcu(mn, &mmn_mm->list, hlist)
 		/*
 		 * If ->release runs before mmu_notifier_unregister it must be
 		 * handled, as it's the only way for the driver to flush all
@@ -72,9...
2019 Nov 12
0
[PATCH v3 02/14] mm/mmu_notifier: add an interval tree notifier
.... */
-void __mmu_notifier_release(struct mm_struct *mm)
+static void mn_hlist_release(struct mmu_notifier_mm *mmn_mm,
+			     struct mm_struct *mm)
 {
 	struct mmu_notifier *mn;
 	int id;
@@ -62,7 +307,7 @@ void __mmu_notifier_release(struct mm_struct *mm)
 	 * ->release returns.
 	 */
 	id = srcu_read_lock(&srcu);
-	hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist)
+	hlist_for_each_entry_rcu(mn, &mmn_mm->list, hlist)
 		/*
 		 * If ->release runs before mmu_notifier_unregister it must be
 		 * handled, as it's the only way for the driver to flush all
@@ -72,1...
2019 Aug 02
0
[PATCH V2 7/9] vhost: do not use RCU to synchronize MMU notifier with worker
...er barrier, like a spinlock, mutex, or
> synchronize_rcu.

I started with synchronize_rcu(), but both you and Michael raised some
concerns. Then I tried spinlock and mutex:

1) spinlock: adds lots of overhead on the datapath, which leads to zero
performance improvement.

2) SRCU: a full memory barrier is required on srcu_read_lock(), which still
yields little performance improvement.

3) mutex: a possible issue is the need to wait for the page to be swapped in
(is this unacceptable?), another issue is that we need to hold the vq lock
during the range overlap check.

4) using vhost_flush_work() instead of synchronize_rcu(): still need to...
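The cost difference being weighed in option 2 is visible in the read-side primitives themselves: rcu_read_lock() is essentially free (a preemption marker), while srcu_read_lock() increments a per-CPU counter and issues a full memory barrier so that grace periods can tolerate sleeping readers. A rough illustration of the two datapath shapes (illustrative names, not vhost code):

#include <linux/rcupdate.h>
#include <linux/srcu.h>

DEFINE_STATIC_SRCU(demo_srcu);

/* RCU read side: no barrier, but the reader must not sleep. */
static void datapath_rcu(void (*process)(void))
{
	rcu_read_lock();
	process();		/* must stay non-blocking */
	rcu_read_unlock();
}

/* SRCU read side: may sleep, but pays a full memory barrier on entry. */
static void datapath_srcu(void (*process)(void))
{
	int idx = srcu_read_lock(&demo_srcu);

	process();		/* may block, e.g. fault in userspace pages */
	srcu_read_unlock(&demo_srcu, idx);
}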
2016 Dec 19
0
[PATCH v7 08/11] x86, kvm/x86.c: support vcpu preempted check
...) would call might_fault()
> @@ -2853,7 +2854,13 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
>  	 * paging.
>  	 */
>  	pagefault_disable();
> +	/*
> +	 * kvm_memslots() will be called by
> +	 * kvm_write_guest_offset_cached() so take the srcu lock.
> +	 */
> +	idx = srcu_read_lock(&vcpu->kvm->srcu);
>  	kvm_steal_time_set_preempted(vcpu);
> +	srcu_read_unlock(&vcpu->kvm->srcu, idx);
>  	pagefault_enable();
>  	kvm_x86_ops->vcpu_put(vcpu);
>  	kvm_put_guest_fpu(vcpu);
2019 Oct 29
1
[PATCH v2 02/15] mm/mmu_notifier: add an interval tree notifier
...se. See my comments below, I think one of them is wrong. I suspect this
one, because __mmu_notifier_release follows the same pattern as the other
notifiers.

> +
>  	/*
>  	 * SRCU here will block mmu_notifier_unregister until
>  	 * ->release returns.
>  	 */
>  	id = srcu_read_lock(&srcu);
> -	hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist)
> +	hlist_for_each_entry_rcu(mn, &mmn_mm->list, hlist)
>  		/*
>  		 * If ->release runs before mmu_notifier_unregister it must be
>  		 * handled, as it's the only way for the...