search for: smp_mb__after_spinlock

Displaying 20 results from an estimated 22 matches for "smp_mb__after_spinlock".

2020 Jul 02
2
[PATCH 5/8] powerpc/64s: implement queued spinlocks and rwlocks
...ck.h > @@ -0,0 +1,20 @@ > +/* SPDX-License-Identifier: GPL-2.0 */ > +#ifndef _ASM_POWERPC_QSPINLOCK_H > +#define _ASM_POWERPC_QSPINLOCK_H > + > +#include <asm-generic/qspinlock_types.h> > + > +#define _Q_PENDING_LOOPS (1 << 9) /* not tuned */ > + > +#define smp_mb__after_spinlock() smp_mb() > + > +static __always_inline int queued_spin_is_locked(struct qspinlock *lock) > +{ > + smp_mb(); > + return atomic_read(&lock->val); > +} Why do you need the smp_mb() here? Will
2020 Jul 02
3
[PATCH 5/8] powerpc/64s: implement queued spinlocks and rwlocks
...> >> +#ifndef _ASM_POWERPC_QSPINLOCK_H > >> +#define _ASM_POWERPC_QSPINLOCK_H > >> + > >> +#include <asm-generic/qspinlock_types.h> > >> + > >> +#define _Q_PENDING_LOOPS (1 << 9) /* not tuned */ > >> + > >> +#define smp_mb__after_spinlock() smp_mb() > >> + > >> +static __always_inline int queued_spin_is_locked(struct qspinlock *lock) > >> +{ > >> + smp_mb(); > >> + return atomic_read(&lock->val); > >> +} > > > > Why do you need the smp_mb() here? > >...
2020 Jul 02
0
[PATCH 5/8] powerpc/64s: implement queued spinlocks and rwlocks
...+/* SPDX-License-Identifier: GPL-2.0 */ >> +#ifndef _ASM_POWERPC_QSPINLOCK_H >> +#define _ASM_POWERPC_QSPINLOCK_H >> + >> +#include <asm-generic/qspinlock_types.h> >> + >> +#define _Q_PENDING_LOOPS (1 << 9) /* not tuned */ >> + >> +#define smp_mb__after_spinlock() smp_mb() >> + >> +static __always_inline int queued_spin_is_locked(struct qspinlock *lock) >> +{ >> + smp_mb(); >> + return atomic_read(&lock->val); >> +} > > Why do you need the smp_mb() here? A long and sad tale that ends here 51d7d5205d338...
2020 Jul 02
0
[PATCH 5/8] powerpc/64s: implement queued spinlocks and rwlocks
...POWERPC_QSPINLOCK_H >> >> +#define _ASM_POWERPC_QSPINLOCK_H >> >> + >> >> +#include <asm-generic/qspinlock_types.h> >> >> + >> >> +#define _Q_PENDING_LOOPS (1 << 9) /* not tuned */ >> >> + >> >> +#define smp_mb__after_spinlock() smp_mb() >> >> + >> >> +static __always_inline int queued_spin_is_locked(struct qspinlock *lock) >> >> +{ >> >> + smp_mb(); >> >> + return atomic_read(&lock->val); >> >> +} >> > >> > Why do you nee...
2020 Jul 06
0
[PATCH v3 3/6] powerpc: move spinlock implementation to simple_spinlock
...tile__("# write_unlock\n\t" + PPC_RELEASE_BARRIER: : :"memory"); + rw->lock = 0; +} + +#define arch_spin_relax(lock) spin_yield(lock) +#define arch_read_relax(lock) rw_yield(lock) +#define arch_write_relax(lock) rw_yield(lock) + +/* See include/linux/spinlock.h */ +#define smp_mb__after_spinlock() smp_mb() + +#endif /* __KERNEL__ */ +#endif /* __ASM_SIMPLE_SPINLOCK_H */ diff --git a/arch/powerpc/include/asm/simple_spinlock_types.h b/arch/powerpc/include/asm/simple_spinlock_types.h new file mode 100644 index 000000000000..7c2b48ce62dc --- /dev/null +++ b/arch/powerpc/include/asm/simple_sp...
2019 Dec 26
0
[PATCH v2 5/6] KVM: arm64: Add interface to support VCPU preempted check
...asm/spinlock.h index b093b287babf..45ff1b2949a6 100644 --- a/arch/arm64/include/asm/spinlock.h +++ b/arch/arm64/include/asm/spinlock.h @@ -7,8 +7,15 @@ #include <asm/qrwlock.h> #include <asm/qspinlock.h> +#include <asm/paravirt.h> /* See include/linux/spinlock.h */ #define smp_mb__after_spinlock() smp_mb() +#define vcpu_is_preempted vcpu_is_preempted +static inline bool vcpu_is_preempted(long cpu) +{ + return pv_vcpu_is_preempted(cpu); +} + #endif /* __ASM_SPINLOCK_H */ diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile index fc6488660f64..b23cdae433a4 100644 --- a/arc...
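Stripping the diff markers, the arm64 hunk quoted here adds a vcpu_is_preempted() hook next to the existing smp_mb__after_spinlock() definition; pv_vcpu_is_preempted() comes from the asm/paravirt.h header the new include pulls in. Only the portion visible in the match is shown (the SPDX line and header guard at the top of the file are omitted).

/* arch/arm64/include/asm/spinlock.h, after the patch */
#include <asm/qrwlock.h>
#include <asm/qspinlock.h>
#include <asm/paravirt.h>

/* See include/linux/spinlock.h */
#define smp_mb__after_spinlock()	smp_mb()

#define vcpu_is_preempted vcpu_is_preempted
static inline bool vcpu_is_preempted(long cpu)
{
	/* Ask the paravirt layer whether the host has scheduled this VCPU out. */
	return pv_vcpu_is_preempted(cpu);
}

#endif /* __ASM_SPINLOCK_H */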
2020 Jul 03
7
[PATCH v2 0/6] powerpc: queued spinlocks and rwlocks
v2 is updated to account for feedback from Will, Peter, and Waiman (thank you), and trims off a couple of RFC and unrelated patches. Thanks, Nick Nicholas Piggin (6): powerpc/powernv: must include hvcall.h to get PAPR defines powerpc/pseries: move some PAPR paravirt functions to their own file powerpc: move spinlock implementation to simple_spinlock powerpc/64s: implement queued
2020 Jul 02
0
[PATCH 5/8] powerpc/64s: implement queued spinlocks and rwlocks
...- /dev/null +++ b/arch/powerpc/include/asm/qspinlock.h @@ -0,0 +1,20 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _ASM_POWERPC_QSPINLOCK_H +#define _ASM_POWERPC_QSPINLOCK_H + +#include <asm-generic/qspinlock_types.h> + +#define _Q_PENDING_LOOPS (1 << 9) /* not tuned */ + +#define smp_mb__after_spinlock() smp_mb() + +static __always_inline int queued_spin_is_locked(struct qspinlock *lock) +{ + smp_mb(); + return atomic_read(&lock->val); +} +#define queued_spin_is_locked queued_spin_is_locked + +#include <asm-generic/qspinlock.h> + +#endif /* _ASM_POWERPC_QSPINLOCK_H */ diff --git a/...
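For reference, the complete new header as quoted in this match, with the diff markers stripped (the block comment is added here and is not part of the patch):

/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_POWERPC_QSPINLOCK_H
#define _ASM_POWERPC_QSPINLOCK_H

#include <asm-generic/qspinlock_types.h>

#define _Q_PENDING_LOOPS	(1 << 9)	/* not tuned */

#define smp_mb__after_spinlock()	smp_mb()

static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
{
	/*
	 * Full barrier before reading the lock word -- this is the smp_mb()
	 * Will asks about; later revisions attribute it to commit
	 * 51d7d5205d338 and note that arm64 dropped it in c6f5d02b6a0f.
	 */
	smp_mb();
	return atomic_read(&lock->val);
}
#define queued_spin_is_locked queued_spin_is_locked

#include <asm-generic/qspinlock.h>

#endif /* _ASM_POWERPC_QSPINLOCK_H */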
2020 Jul 24
8
[PATCH v4 0/6] powerpc: queued spinlocks and rwlocks
Updated with everybody's feedback (thanks all), and more performance results. What I've found is I might have been measuring the worst load point for the paravirt case, and by looking at a range of loads it's clear that queued spinlocks are overall better even on PV, doubly so when you look at the generally much improved worst case latencies. I have defaulted it to N even though
2020 Jul 06
0
[PATCH v3 4/6] powerpc/64s: implement queued spinlocks and rwlocks
...- /dev/null +++ b/arch/powerpc/include/asm/qspinlock.h @@ -0,0 +1,25 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef _ASM_POWERPC_QSPINLOCK_H +#define _ASM_POWERPC_QSPINLOCK_H + +#include <asm-generic/qspinlock_types.h> + +#define _Q_PENDING_LOOPS (1 << 9) /* not tuned */ + +#define smp_mb__after_spinlock() smp_mb() + +static __always_inline int queued_spin_is_locked(struct qspinlock *lock) +{ + /* + * This barrier was added to simple spinlocks by commit 51d7d5205d338, + * but it should now be possible to remove it, as arm64 has done with + * commit c6f5d02b6a0f. + */ + smp_mb(); + return ato...
2020 Jul 02
12
[PATCH 0/8] powerpc: queued spinlocks and rwlocks
This series adds an option to use queued spinlocks for powerpc, and makes it the default for the Book3S-64 subarch. This effort starts with the generic code so it's very simple but still very performant. There are optimisations that can be made to slowpaths, but I think it's better to attack those incrementally if/when we find things, and try to add the improvements to generic code as
2020 Jul 02
0
[PATCH 6/8] powerpc/pseries: implement paravirt qspinlocks for SPLPAR
...l); +#endif + +static __always_inline void queued_spin_lock(struct qspinlock *lock) +{ + u32 val = 0; + + if (likely(atomic_try_cmpxchg_acquire(&lock->val, &val, _Q_LOCKED_VAL))) + return; + + queued_spin_lock_slowpath(lock, val); +} +#define queued_spin_lock queued_spin_lock + #define smp_mb__after_spinlock() smp_mb() static __always_inline int queued_spin_is_locked(struct qspinlock *lock) @@ -15,6 +42,34 @@ static __always_inline int queued_spin_is_locked(struct qspinlock *lock) } #define queued_spin_is_locked queued_spin_is_locked +#ifdef CONFIG_PARAVIRT_SPINLOCKS +#define SPIN_THRESHOLD (1...
2020 Jul 03
0
[PATCH v2 5/6] powerpc/pseries: implement paravirt qspinlocks for SPLPAR
...l); +#endif + +static __always_inline void queued_spin_lock(struct qspinlock *lock) +{ + u32 val = 0; + + if (likely(atomic_try_cmpxchg_acquire(&lock->val, &val, _Q_LOCKED_VAL))) + return; + + queued_spin_lock_slowpath(lock, val); +} +#define queued_spin_lock queued_spin_lock + #define smp_mb__after_spinlock() smp_mb() static __always_inline int queued_spin_is_locked(struct qspinlock *lock) @@ -20,6 +47,34 @@ static __always_inline int queued_spin_is_locked(struct qspinlock *lock) } #define queued_spin_is_locked queued_spin_is_locked +#ifdef CONFIG_PARAVIRT_SPINLOCKS +#define SPIN_THRESHOLD (1...
2020 Jul 06
0
[PATCH v3 5/6] powerpc/pseries: implement paravirt qspinlocks for SPLPAR
...l); +#endif + +static __always_inline void queued_spin_lock(struct qspinlock *lock) +{ + u32 val = 0; + + if (likely(atomic_try_cmpxchg_acquire(&lock->val, &val, _Q_LOCKED_VAL))) + return; + + queued_spin_lock_slowpath(lock, val); +} +#define queued_spin_lock queued_spin_lock + #define smp_mb__after_spinlock() smp_mb() static __always_inline int queued_spin_is_locked(struct qspinlock *lock) @@ -20,6 +58,34 @@ static __always_inline int queued_spin_is_locked(struct qspinlock *lock) } #define queued_spin_is_locked queued_spin_is_locked +#ifdef CONFIG_PARAVIRT_SPINLOCKS +#define SPIN_THRESHOLD (1...
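The queued_spin_lock() fast path quoted in these three SPLPAR entries is the same in each revision; reconstructed from the excerpts with the diff markers removed (the CONFIG_PARAVIRT_SPINLOCKS plumbing and SPIN_THRESHOLD definition that the previews cut off are not shown):

static __always_inline void queued_spin_lock(struct qspinlock *lock)
{
	u32 val = 0;

	/* Uncontended case: a single acquire cmpxchg takes the lock. */
	if (likely(atomic_try_cmpxchg_acquire(&lock->val, &val, _Q_LOCKED_VAL)))
		return;

	/* Lock word was non-zero: fall back to the queued slow path. */
	queued_spin_lock_slowpath(lock, val);
}
#define queued_spin_lock queued_spin_lock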
2019 Dec 26
1
[PATCH v2 5/6] KVM: arm64: Add interface to support VCPU preempted check
Hi Zengruan, Thank you for the patch! Yet something to improve: [auto build test ERROR on kvmarm/next] [also build test ERROR on kvm/linux-next linus/master v5.5-rc3 next-20191220] [cannot apply to arm64/for-next/core] [if your patch is applied to the wrong git tree, please drop us a note to help improve the system. BTW, we also suggest to use '--base' option to specify the base tree in
2020 Jul 06
13
[PATCH v3 0/6] powerpc: queued spinlocks and rwlocks
v3 is updated to use __pv_queued_spin_unlock, noticed by Waiman (thank you). Thanks, Nick Nicholas Piggin (6): powerpc/powernv: must include hvcall.h to get PAPR defines powerpc/pseries: move some PAPR paravirt functions to their own file powerpc: move spinlock implementation to simple_spinlock powerpc/64s: implement queued spinlocks and rwlocks powerpc/pseries: implement paravirt
2019 Dec 26
7
[PATCH v2 0/6] KVM: arm64: VCPU preempted check support
This patch set aims to support the vcpu_is_preempted() functionality under KVM/arm64, which allows the guest to find out whether a VCPU is currently running or not. This will enhance lock performance on overcommitted hosts (more runnable VCPUs than physical CPUs in the system), as busy-waiting on preempted VCPUs hurts system performance far worse than yielding early. We have observed some
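To illustrate the point about busy-waiting (a hypothetical sketch, not code from this series -- spin_on_owner() and struct my_lock are made-up names modeled on the kernel's owner-spinning helpers): a spinner can give up as soon as it sees that the lock owner's VCPU has been preempted by the host.

/* Hypothetical sketch only. */
static bool spin_on_owner(struct my_lock *lock, struct task_struct *owner)
{
	while (READ_ONCE(lock->owner) == owner) {
		/*
		 * The owner's VCPU has been scheduled out by the host, so the
		 * lock cannot be released soon; stop spinning and yield instead.
		 */
		if (vcpu_is_preempted(task_cpu(owner)))
			return false;
		cpu_relax();
	}
	return true;
}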