search for: synchronize_srcu

Displaying 11 results from an estimated 11 matches for "synchronize_srcu".

2012 May 16
1
Very High Load on Dovecot 2 and Errors in mail.err.
...796 Current Load: 08:55:44 up 14:04, 3 users, load average: 19,64, 14,47, 10,49. Load is increasing. The only strange thing I can see is this:

# ps -ostat,pid,time,wchan='WCHAN-xxxxxxxxxxxxxxxxxxxx',cmd ax |grep D
STAT   PID  TIME     WCHAN-xxxxxxxxxxxxxxxxxxxx CMD
D    18713 00:00:00 synchronize_srcu            dovecot/imap
D    18736 00:00:00 synchronize_srcu            dovecot/imap
D    18775 00:00:05 synchronize_srcu            dovecot/imap
D    20330 00:00:00 synchronize_srcu            dovecot/imap
D    20357 00:00:00 synchronize_srcu            dovecot/imap
D    20422 00:00:00 synchronize_srcu...
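A task in D state with WCHAN synchronize_srcu is asleep in the kernel waiting for an SRCU grace period to end; for an imap process this is typically reached through a kernel subsystem the process uses (fsnotify/inotify, which dovecot relies on, is a common SRCU user), not dovecot's own code. Below is a minimal kernel-module-style sketch of the pattern with illustrative names (my_srcu and my_data are not from this thread): the writer blocks in synchronize_srcu() until every reader that entered srcu_read_lock() beforehand has left its critical section.

#include <linux/slab.h>
#include <linux/srcu.h>

DEFINE_STATIC_SRCU(my_srcu);

struct my_data { int val; };
static struct my_data __rcu *global_ptr;

/* Read side: cheap, may even sleep, never blocked by the writer. */
static int reader(void)
{
	int idx = srcu_read_lock(&my_srcu);
	struct my_data *p = srcu_dereference(global_ptr, &my_srcu);
	int v = p ? p->val : -1;

	srcu_read_unlock(&my_srcu, idx);
	return v;
}

/* Write side: this call is where ps would show WCHAN == synchronize_srcu. */
static void writer(struct my_data *new)
{
	struct my_data *old = rcu_dereference_protected(global_ptr, 1);

	rcu_assign_pointer(global_ptr, new);
	synchronize_srcu(&my_srcu);	/* sleeps until all prior readers finish */
	kfree(old);
}

Many simultaneous processes parked here usually means one long-running (or stuck) SRCU read-side section is holding up the grace period for everyone.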
2013 Aug 21
2
High Load Average on POP/IMAP.
Hi, we have a serious issue on our POP/IMAP servers these days. The load average of the servers spikes up to 400-500 (as reported by uptime) for a particular time period, mostly around noon and in the evening, but it lasts only a few minutes. We have 2 servers running dovecot 1.1.20 behind a load balancer; we use keepalived (1.1.13) for load balancing. Server
2019 Aug 09
0
[RFC PATCH v6 27/92] kvm: introspection: use page track
...);
+	srcu_read_unlock(&kvm->srcu, idx);
+
+	kvmi_put(kvm);
+}
+
 static void kvmi_end_introspection(struct kvmi *ikvm)
 {
 	struct kvm *kvm = ikvm->kvm;
@@ -290,6 +563,22 @@ static void kvmi_end_introspection(struct kvmi *ikvm)
 	 */
 	kvmi_abort_events(kvm);

+	/*
+	 * This may sleep on synchronize_srcu() so it's not allowed to be
+	 * called under kvmi_put().
+	 * Also synchronize_srcu() may deadlock on (page tracking) read-side
+	 * regions that are waiting for reply to events, so must be called
+	 * after kvmi_abort_events().
+	 */
+	kvm_page_track_unregister_notifier(kvm, &ikvm->kpt...
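The comment in that hunk pins down an ordering rule: kvm_page_track_unregister_notifier() ends in synchronize_srcu(), and that grace period can never complete while a page-tracking read-side section is parked waiting for an event reply, so kvmi_abort_events() must run first. Here is a self-contained toy analogue of that deadlock and its fix; the names (track_srcu, event_reply, and both functions) are made up, not the patch code.

#include <linux/completion.h>
#include <linux/srcu.h>

DEFINE_STATIC_SRCU(track_srcu);
static DECLARE_COMPLETION(event_reply);

/* Read side: page-tracking work that waits for an introspection reply
 * while still inside an SRCU read-side section. */
static void tracked_vcpu_work(void)
{
	int idx = srcu_read_lock(&track_srcu);

	wait_for_completion(&event_reply);	/* sleeps inside the section */
	srcu_read_unlock(&track_srcu, idx);
}

/* Teardown: the kvmi_abort_events() analogue must run first; otherwise
 * the grace period below waits forever on the sleeping reader. */
static void end_introspection(void)
{
	complete_all(&event_reply);	/* wake readers stuck on replies */
	synchronize_srcu(&track_srcu);	/* now guaranteed to complete */
}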
2019 Oct 29
1
[PATCH v2 02/15] mm/mmu_notifier: add an interval tree notifier
...t; hlist_del_init_rcu(&mn->hlist);
> 	}
> -	spin_unlock(&mm->mmu_notifier_mm->lock);
> +	spin_unlock(&mmn_mm->lock);
> 	srcu_read_unlock(&srcu, id);
>
> /*
> @@ -100,6 +341,17 @@ void __mmu_notifier_release(struct mm_struct *mm)
> 	synchronize_srcu(&srcu);
> }
>
> +void __mmu_notifier_release(struct mm_struct *mm)
> +{
> +	struct mmu_notifier_mm *mmn_mm = mm->mmu_notifier_mm;
> +
> +	if (mmn_mm->has_interval)
> +		mn_itree_release(mmn_mm, mm);

If mmn_mm->list is not empty, this will be done twice bec...
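For context, the quoted hunk is the release path of the classic mmu_notifier SRCU scheme: callbacks run under srcu_read_lock(), entries are unlinked under a spinlock, and the function finishes with synchronize_srcu() so no CPU can still be inside a callback for a deleted entry. A toy analogue with made-up names (notifier_srcu, release_all, and the struct are illustrative, not the kernel code):

#include <linux/rculist.h>
#include <linux/spinlock.h>
#include <linux/srcu.h>

DEFINE_STATIC_SRCU(notifier_srcu);
static DEFINE_SPINLOCK(notifier_lock);
static HLIST_HEAD(notifier_list);

struct notifier {
	struct hlist_node hlist;
	void (*release)(struct notifier *n);
};

static void release_all(void)
{
	struct notifier *n;
	int id = srcu_read_lock(&notifier_srcu);

	spin_lock(&notifier_lock);
	while (!hlist_empty(&notifier_list)) {
		n = hlist_entry(notifier_list.first, struct notifier, hlist);
		/* unlink under the lock, as in the hunk ... */
		hlist_del_init_rcu(&n->hlist);
		spin_unlock(&notifier_lock);
		/* ... but run the callback outside it, under SRCU only */
		n->release(n);
		spin_lock(&notifier_lock);
	}
	spin_unlock(&notifier_lock);
	srcu_read_unlock(&notifier_srcu, id);

	/* The synchronize_srcu(&srcu) from the hunk: once this returns, no
	 * other CPU can still be running a callback on a deleted entry. */
	synchronize_srcu(&notifier_srcu);
}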
2019 Mar 14
2
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
On 2019/3/14 6:42 PM, Michael S. Tsirkin wrote:
>>>>> Which means after we fix vhost to add the flush_dcache_page after
>>>>> kunmap, Parisc will get a double hit (but it also means Parisc
>>>>> was the only one of those archs that needed explicit cache flushes,
>>>>> where vhost worked correctly so far.. so it kind of proves your
2019 Mar 14
0
[RFC PATCH V2 0/5] vhost: accelerate metadata access through vmap()
...: if a regular quiescent point can happen in between _start/_end (and _start must always be called first), all the mmu notifier retry counters we rely on would break. One way would be to add srcu_read_lock _before_ you can call mm_has_notifiers(mm); then yes, we could replace mm_take_all_locks with synchronize_srcu. It would save a lot of CPU and a ton of locked operations, but it'd potentially increase the latency of the registration, so the first O_DIRECT write() in a process could still potentially stall (still better than mm_take_all_locks, which would use a lot more CPU and hurt SMP scalability in thre...
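A hedged sketch of the registration scheme described in that paragraph: mm_has_notifiers(), hlist_add_head_rcu() and the mmu_notifier_mm list are real kernel names, while invalidate_range(), call_start_callbacks() and call_end_callbacks() are hypothetical stand-ins for the _start/_end callback walks.

#include <linux/mmu_notifier.h>
#include <linux/rculist.h>
#include <linux/srcu.h>

DEFINE_STATIC_SRCU(srcu);

/* Hypothetical helpers standing in for the _start/_end callback walks. */
static void call_start_callbacks(struct mm_struct *mm,
				 unsigned long start, unsigned long end) { }
static void call_end_callbacks(struct mm_struct *mm,
			       unsigned long start, unsigned long end) { }

/* Read side: enter SRCU *before* testing mm_has_notifiers(), so that a
 * concurrent registration's grace period is guaranteed to wait for us. */
static void invalidate_range(struct mm_struct *mm,
			     unsigned long start, unsigned long end)
{
	int idx = srcu_read_lock(&srcu);

	if (mm_has_notifiers(mm)) {
		call_start_callbacks(mm, start, end);
		/* ... perform the invalidation ... */
		call_end_callbacks(mm, start, end);
	}
	srcu_read_unlock(&srcu, idx);
}

/* Registration: publish the notifier, then wait one grace period instead
 * of calling mm_take_all_locks(). Afterwards, no reader can still be
 * inside a _start/_end pair that missed the new notifier. */
static void register_notifier(struct mm_struct *mm, struct mmu_notifier *mn)
{
	hlist_add_head_rcu(&mn->hlist, &mm->mmu_notifier_mm->list);
	synchronize_srcu(&srcu);
}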
2019 Oct 28
0
[PATCH v2 02/15] mm/mmu_notifier: add an interval tree notifier
...mu_notifier_release(struct mm_struct *mm)
 	 */
 		hlist_del_init_rcu(&mn->hlist);
 	}
-	spin_unlock(&mm->mmu_notifier_mm->lock);
+	spin_unlock(&mmn_mm->lock);
 	srcu_read_unlock(&srcu, id);

 	/*
@@ -100,6 +341,17 @@ void __mmu_notifier_release(struct mm_struct *mm)
 	synchronize_srcu(&srcu);
 }

+void __mmu_notifier_release(struct mm_struct *mm)
+{
+	struct mmu_notifier_mm *mmn_mm = mm->mmu_notifier_mm;
+
+	if (mmn_mm->has_interval)
+		mn_itree_release(mmn_mm, mm);
+
+	if (!hlist_empty(&mmn_mm->list))
+		mn_hlist_release(mmn_mm, mm);
+}
+
 /*
  * If no young b...
2019 Nov 12
0
[PATCH v3 02/14] mm/mmu_notifier: add an interval tree notifier
...mu_notifier_release(struct mm_struct *mm)
 	 */
 		hlist_del_init_rcu(&mn->hlist);
 	}
-	spin_unlock(&mm->mmu_notifier_mm->lock);
+	spin_unlock(&mmn_mm->lock);
 	srcu_read_unlock(&srcu, id);

 	/*
@@ -100,6 +344,17 @@ void __mmu_notifier_release(struct mm_struct *mm)
 	synchronize_srcu(&srcu);
 }

+void __mmu_notifier_release(struct mm_struct *mm)
+{
+	struct mmu_notifier_mm *mmn_mm = mm->mmu_notifier_mm;
+
+	if (mmn_mm->has_itree)
+		mn_itree_release(mmn_mm, mm);
+
+	if (!hlist_empty(&mmn_mm->list))
+		mn_hlist_release(mmn_mm, mm);
+}
+
 /*
  * If no young bitf...
2019 Oct 28
32
[PATCH v2 00/15] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg at mellanox.com> Eight of the mmu_notifier-using drivers (i915_gem, radeon_mn, umem_odp, hfi1, scif_dma, vhost, gntdev, hmm) use a common pattern: they only use invalidate_range_start/end and immediately check the invalidating range against some driver data structure to tell whether the driver is interested. Half of them use an interval_tree, the others
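A sketch of the open-coded pattern the series consolidates, assuming the 5.x-era invalidate_range_start() callback signature; every drv_* name is hypothetical, not taken from any of the eight drivers.

#include <linux/interval_tree.h>
#include <linux/mmu_notifier.h>
#include <linux/spinlock.h>

struct drv_mn {				/* hypothetical per-device state */
	struct mmu_notifier mn;
	struct rb_root_cached itree;	/* registered buffers */
	spinlock_t lock;
};

struct drv_buf {
	struct interval_tree_node node;	/* node.start / node.last */
	/* ... driver buffer state, e.g. a retry sequence count ... */
};

static void drv_invalidate(struct drv_buf *buf);	/* hypothetical */

/* The common pattern: on every invalidation, walk the interval tree and
 * only react if a registered buffer overlaps the invalidated range. */
static int drv_invalidate_range_start(struct mmu_notifier *mn,
				      const struct mmu_notifier_range *range)
{
	struct drv_mn *dmn = container_of(mn, struct drv_mn, mn);
	struct interval_tree_node *node;

	spin_lock(&dmn->lock);
	for (node = interval_tree_iter_first(&dmn->itree,
					     range->start, range->end - 1);
	     node;
	     node = interval_tree_iter_next(node,
					    range->start, range->end - 1))
		drv_invalidate(container_of(node, struct drv_buf, node));
	spin_unlock(&dmn->lock);
	return 0;
}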
2019 Nov 12
20
[PATCH hmm v3 00/14] Consolidate the mmu notifier interval_tree and locking
From: Jason Gunthorpe <jgg at mellanox.com> Eight of the mmu_notifier-using drivers (i915_gem, radeon_mn, umem_odp, hfi1, scif_dma, vhost, gntdev, hmm) use a common pattern: they only use invalidate_range_start/end and immediately check the invalidating range against some driver data structure to tell whether the driver is interested. Half of them use an interval_tree, the others
2019 Aug 09
117
[RFC PATCH v6 00/92] VM introspection
The KVM introspection subsystem provides a facility for applications running on the host or in a separate VM to control the execution of other VMs (pause, resume, shutdown), query the state of the vCPUs (GPRs, MSRs, etc.), alter the page access bits in the shadow page tables (only for the hardware-backed ones, e.g. Intel's EPT) and receive notifications when events of interest have taken place