search for: spin_yield

Displaying 9 results from an estimated 9 matches for "spin_yield".

2020 Jul 06
0
[PATCH v3 3/6] powerpc: move spinlock implementation to simple_spinlock
...processor is holding a lock,
+ * we put 0x80000000 | smp_processor_id() in the lock when it is
+ * held. Conveniently, we have a word in the paca that holds this
+ * value.
+ */
+
+#if defined(CONFIG_PPC_SPLPAR)
+/* We only yield to the hypervisor if we are in shared processor mode */
+void splpar_spin_yield(arch_spinlock_t *lock);
+void splpar_rw_yield(arch_rwlock_t *lock);
+#else /* SPLPAR */
+static inline void splpar_spin_yield(arch_spinlock_t *lock) {};
+static inline void splpar_rw_yield(arch_rwlock_t *lock) {};
+#endif
+
+static inline void spin_yield(arch_spinlock_t *lock)
+{
+	if (is_shared_pr...
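
For readability, here is a sketch of how the truncated spin_yield() body above plausibly continues, reconstructed from the splpar_spin_yield() declarations in the same hunk (a sketch, not a verbatim quote of the patch):

/* Sketch, reconstructed from the hunk above; the search snippet cuts
 * off mid-function. On a shared-processor LPAR we yield to the
 * hypervisor so the lock holder's vCPU can run; otherwise a compiler
 * barrier is enough. */
static inline void spin_yield(arch_spinlock_t *lock)
{
	if (is_shared_processor())
		splpar_spin_yield(lock);
	else
		barrier();
}
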
2016 Oct 21
3
[PATCH 2/5] stop_machine: yield CPU during stop machine
On Fri, Oct 21, 2016 at 01:58:55PM +0200, Christian Borntraeger wrote:
> stop_machine can take a very long time if the hypervisor does
> overcommitment for guest CPUs. When waiting for "the one", lets
> give up our CPU by using the new cpu_relax_yield.

This seems something that would apply to most other virt stuff. Lets
Cc a few more lists for that.

> Signed-off-by:
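
The pattern under review, sketched (simplified, not the literal diff; cpu_relax_yield() is the helper this series introduces):

/* Sketch only: every CPU spins in multi_cpu_stop() waiting for the
 * state machine to advance. Replacing cpu_relax() with the proposed
 * cpu_relax_yield() lets an overcommitted guest give its CPU back to
 * the hypervisor instead of busy-waiting. */
while (READ_ONCE(msdata->state) != MULTI_STOP_EXIT)
	cpu_relax_yield();	/* was: cpu_relax(); */
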
2020 Jul 03
7
[PATCH v2 0/6] powerpc: queued spinlocks and rwlocks
v2 is updated to account for feedback from Will, Peter, and Waiman
(thank you), and trims off a couple of RFC and unrelated patches.

Thanks,
Nick

Nicholas Piggin (6):
  powerpc/powernv: must include hvcall.h to get PAPR defines
  powerpc/pseries: move some PAPR paravirt functions to their own file
  powerpc: move spinlock implementation to simple_spinlock
  powerpc/64s: implement queued
2020 Jul 24
8
[PATCH v4 0/6] powerpc: queued spinlocks and rwlocks
Updated with everybody's feedback (thanks all), and more performance results. What I've found is I might have been measuring the worst load point for the paravirt case, and by looking at a range of loads it's clear that queued spinlocks are overall better even on PV, doubly so when you look at the generally much improved worst case latencies. I have defaulted it to N even though
2016 Oct 22
1
[PATCH 2/5] stop_machine: yield CPU during stop machine
...)? As a step to removing cpu_yield_lowlatency this series is nice so I
have no objection. But "general" kernel coders still have basically
no chance of using this properly.

I wonder what can be done about that. I've got that spin_do/while
series I'll rebase on top of this, but a spin_yield variant of them
is of no more help to the caller.

What makes this unique? Long latency and not performance critical?

Most places where we spin and maybe yield have been moved to arch
code, but I wonder whether we can make an easier to use architecture
independent API?

Thanks,
Nick
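
For illustration only, one shape such an architecture-independent helper could take; spin_until() is an invented name for this sketch, not an existing kernel interface:

/* Hypothetical sketch of an architecture-independent spin-wait
 * helper. The architecture would decide whether relaxing means a
 * pause instruction, a directed hypervisor yield, or nothing. */
#define spin_until(cond)		\
do {					\
	while (!(cond))			\
		cpu_relax();		\
} while (0)

A caller would then write spin_until(READ_ONCE(flag)), leaving the latency versus throughput trade-off to the architecture rather than to each call site.
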
2020 Jul 02
12
[PATCH 0/8] powerpc: queued spinlocks and rwlocks
This series adds an option to use queued spinlocks for powerpc, and makes it the default for the Book3S-64 subarch. This effort starts with the generic code so it's very simple but still very performant. There are optimisations that can be made to slowpaths, but I think it's better to attack those incrementally if/when we find things, and try to add the improvements to generic code as
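
As background on what a queued lock buys, here is a toy MCS lock in plain C11 (illustration only, not the kernel's qspinlock that this series wires up): each waiter spins on its own node, so handoff is FIFO and spinning stays cache-local.

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct mcs_node {
	_Atomic(struct mcs_node *) next;
	atomic_bool locked;
};

struct mcs_lock {
	_Atomic(struct mcs_node *) tail;
};

static void mcs_lock_acquire(struct mcs_lock *lock, struct mcs_node *me)
{
	struct mcs_node *prev;

	atomic_store(&me->next, NULL);
	atomic_store(&me->locked, true);
	/* Swing the tail to ourselves; any predecessor hands off to us. */
	prev = atomic_exchange(&lock->tail, me);
	if (prev) {
		atomic_store(&prev->next, me);
		while (atomic_load(&me->locked))
			;	/* spin on our own node, not a shared word */
	}
}

static void mcs_lock_release(struct mcs_lock *lock, struct mcs_node *me)
{
	struct mcs_node *next = atomic_load(&me->next);

	if (!next) {
		/* No known successor: try to swing the tail back to empty. */
		struct mcs_node *expected = me;
		if (atomic_compare_exchange_strong(&lock->tail, &expected, NULL))
			return;
		/* A successor is mid-enqueue; wait for its next pointer. */
		while (!(next = atomic_load(&me->next)))
			;
	}
	atomic_store(&next->locked, false);	/* hand the lock over */
}

The kernel's qspinlock compresses this idea into a single 32-bit word so it fits the existing spinlock ABI.
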
2016 Oct 24
0
[PATCH 2/5] stop_machine: yield CPU during stop machine
...cpu_yield_lowlatency this series is nice so I
> have no objection. But "general" kernel coders still have basically
> no chance of using this properly.
>
> I wonder what can be done about that. I've got that spin_do/while
> series I'll rebase on top of this, but a spin_yield variant of them
> is of no more help to the caller.
>
> What makes this unique? Long latency and not performance critical?

I think what makes this unique is that ALL cpus spin and wait for one.
It was really the only place that I noticed a regression with Heikos
first patch.

> Most p...
2020 Jul 06
0
[PATCH v3 2/6] powerpc/pseries: move some PAPR paravirt functions to their own file
...-{
-	if (!static_branch_unlikely(&shared_processor))
-		return false;
-	return !!(be32_to_cpu(lppaca_of(cpu).yield_count) & 1);
-}
-#endif
-
 static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
 {
 	return lock.slock == 0;
@@ -110,15 +97,6 @@ static inline void splpar_spin_yield(arch_spinlock_t *lock) {};
 static inline void splpar_rw_yield(arch_rwlock_t *lock) {};
 #endif
 
-static inline bool is_shared_processor(void)
-{
-#ifdef CONFIG_PPC_SPLPAR
-	return static_branch_unlikely(&shared_processor);
-#else
-	return false;
-#endif
-}
-
 static inline void spin_yield(arc...
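
The first hunk above removes a helper whose signature the snippet cuts off; from its body it reads like a vcpu_is_preempted()-style check, so here is a sketch (the name and signature are an assumption, only the body is taken from the hunk):

/* Sketch; signature assumed, body verbatim from the removed hunk.
 * PAPR bumps the lppaca yield_count on each preempt and each resume,
 * so an odd value means that vCPU is currently preempted. */
static inline bool vcpu_is_preempted(int cpu)
{
	if (!static_branch_unlikely(&shared_processor))
		return false;
	return !!(be32_to_cpu(lppaca_of(cpu).yield_count) & 1);
}

Per the patch subject, these helpers move to their own PAPR paravirt file rather than going away; the destination file name is not shown in this snippet.
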
2020 Jul 06
13
[PATCH v3 0/6] powerpc: queued spinlocks and rwlocks
v3 is updated to use __pv_queued_spin_unlock, noticed by Waiman
(thank you).

Thanks,
Nick

Nicholas Piggin (6):
  powerpc/powernv: must include hvcall.h to get PAPR defines
  powerpc/pseries: move some PAPR paravirt functions to their own file
  powerpc: move spinlock implementation to simple_spinlock
  powerpc/64s: implement queued spinlocks and rwlocks
  powerpc/pseries: implement paravirt