search for: 51d7d5205d338

Displaying 8 results from an estimated 8 matches for "51d7d5205d338".

2020 Jul 02
3
[PATCH 5/8] powerpc/64s: implement queued spinlocks and rwlocks
...>> +static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
> >> +{
> >> +	smp_mb();
> >> +	return atomic_read(&lock->val);
> >> +}
> >
> > Why do you need the smp_mb() here?
>
> A long and sad tale that ends here 51d7d5205d338
>
> Should probably at least refer to that commit from here, since this one
> is not going to git blame back there. I'll add something.

Is this still an issue, though?

See 38b850a73034 (where we added a similar barrier on arm64) and then
c6f5d02b6a0f (where we removed it).

Will
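For context, the barrier being discussed guards against the classic
store-buffering reordering: a CPU that has just taken one lock must not be
able to read a second lock's word "early" and see it unlocked while another
CPU does the same in mirror image, which is the pattern commit 51d7d5205d338
closed for the simple spinlocks. A minimal standalone sketch of that pattern,
using C11 atomics in place of kernel primitives (lock_a, lock_b, and the
function names are hypothetical stand-ins, not kernel code):

#include <stdatomic.h>
#include <stdio.h>

static atomic_int lock_a, lock_b;	/* hypothetical lock words */

static int cpu0_sees_b_unlocked(void)
{
	atomic_store_explicit(&lock_a, 1, memory_order_relaxed);	/* "take" A */
	/* Without a full fence, the load below may complete before the
	 * store above becomes visible to the other CPU. */
	atomic_thread_fence(memory_order_seq_cst);			/* ~smp_mb() */
	return atomic_load_explicit(&lock_b, memory_order_relaxed) == 0;
}

static int cpu1_sees_a_unlocked(void)
{
	atomic_store_explicit(&lock_b, 1, memory_order_relaxed);	/* "take" B */
	atomic_thread_fence(memory_order_seq_cst);			/* ~smp_mb() */
	return atomic_load_explicit(&lock_a, memory_order_relaxed) == 0;
}

int main(void)
{
	/* Run sequentially here just to show the shape; the interesting
	 * case is the two functions racing on different CPUs.  With the
	 * fences, at most one of them can return 1 in that race; with
	 * the fences removed, both can. */
	int a = cpu0_sees_b_unlocked();
	int b = cpu1_sees_a_unlocked();
	printf("%d %d\n", a, b);
	return 0;
}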
2020 Jul 02
2
[PATCH 5/8] powerpc/64s: implement queued spinlocks and rwlocks
On Thu, Jul 02, 2020 at 05:48:36PM +1000, Nicholas Piggin wrote:
> diff --git a/arch/powerpc/include/asm/qspinlock.h b/arch/powerpc/include/asm/qspinlock.h
> new file mode 100644
> index 000000000000..f84da77b6bb7
> --- /dev/null
> +++ b/arch/powerpc/include/asm/qspinlock.h
> @@ -0,0 +1,20 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +#ifndef _ASM_POWERPC_QSPINLOCK_H
>
2020 Jul 02
0
[PATCH 5/8] powerpc/64s: implement queued spinlocks and rwlocks
..._after_spinlock() smp_mb()
>> +
>> +static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
>> +{
>> +	smp_mb();
>> +	return atomic_read(&lock->val);
>> +}
>
> Why do you need the smp_mb() here?

A long and sad tale that ends here 51d7d5205d338

Should probably at least refer to that commit from here, since this one
is not going to git blame back there. I'll add something.

Thanks,
Nick
2020 Jul 02
0
[PATCH 5/8] powerpc/64s: implement queued spinlocks and rwlocks
...int queued_spin_is_locked(struct qspinlock *lock)
>> >> +{
>> >> +	smp_mb();
>> >> +	return atomic_read(&lock->val);
>> >> +}
>> >
>> > Why do you need the smp_mb() here?
>>
>> A long and sad tale that ends here 51d7d5205d338
>>
>> Should probably at least refer to that commit from here, since this one
>> is not going to git blame back there. I'll add something.
>
> Is this still an issue, though?
>
> See 38b850a73034 (where we added a similar barrier on arm64) and then
> c6f5d02...
2020 Jul 06
0
[PATCH v3 4/6] powerpc/64s: implement queued spinlocks and rwlocks
...H
+
+#include <asm-generic/qspinlock_types.h>
+
+#define _Q_PENDING_LOOPS (1 << 9) /* not tuned */
+
+#define smp_mb__after_spinlock() smp_mb()
+
+static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
+{
+	/*
+	 * This barrier was added to simple spinlocks by commit 51d7d5205d338,
+	 * but it should now be possible to remove it, as arm64 has done with
+	 * commit c6f5d02b6a0f.
+	 */
+	smp_mb();
+	return atomic_read(&lock->val);
+}
+#define queued_spin_is_locked queued_spin_is_locked
+
+#include <asm-generic/qspinlock.h>
+
+#endif /* _ASM_POWERPC_QSPINLOCK_H */...
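The trailing #define in the patch above is what makes the override work:
asm-generic/qspinlock.h only supplies its default queued_spin_is_locked()
when that macro is not already defined. A minimal standalone sketch of the
pattern, with a simplified stand-in type and a plain read where the kernel
uses atomic_read():

#include <stdio.h>

struct qspinlock { int val; };	/* simplified stand-in for the kernel type */

/* "Arch" header: supplies its own implementation, then defines the
 * macro so the "generic" header below skips its default. */
static inline int queued_spin_is_locked(struct qspinlock *lock)
{
	/* on powerpc this is where the smp_mb() would sit */
	return lock->val;
}
#define queued_spin_is_locked queued_spin_is_locked

/* "Generic" header: provides a default only if no arch override exists. */
#ifndef queued_spin_is_locked
static inline int queued_spin_is_locked(struct qspinlock *lock)
{
	return lock->val;
}
#endif

int main(void)
{
	struct qspinlock lock = { .val = 1 };
	printf("locked: %d\n", queued_spin_is_locked(&lock));	/* prints 1 */
	return 0;
}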
2020 Jul 03
7
[PATCH v2 0/6] powerpc: queued spinlocks and rwlocks
v2 is updated to account for feedback from Will, Peter, and Waiman
(thank you), and trims off a couple of RFC and unrelated patches.

Thanks,
Nick

Nicholas Piggin (6):
  powerpc/powernv: must include hvcall.h to get PAPR defines
  powerpc/pseries: move some PAPR paravirt functions to their own file
  powerpc: move spinlock implementation to simple_spinlock
  powerpc/64s: implement queued
2020 Jul 24
8
[PATCH v4 0/6] powerpc: queued spinlocks and rwlocks
Updated with everybody's feedback (thanks all), and more performance results. What I've found is I might have been measuring the worst load point for the paravirt case, and by looking at a range of loads it's clear that queued spinlocks are overall better even on PV, doubly so when you look at the generally much improved worst case latencies. I have defaulted it to N even though
2020 Jul 06
13
[PATCH v3 0/6] powerpc: queued spinlocks and rwlocks
v3 is updated to use __pv_queued_spin_unlock, noticed by Waiman (thank
you).

Thanks,
Nick

Nicholas Piggin (6):
  powerpc/powernv: must include hvcall.h to get PAPR defines
  powerpc/pseries: move some PAPR paravirt functions to their own file
  powerpc: move spinlock implementation to simple_spinlock
  powerpc/64s: implement queued spinlocks and rwlocks
  powerpc/pseries: implement paravirt