Will Deacon
2020-Jul-02 10:35 UTC
[PATCH 5/8] powerpc/64s: implement queued spinlocks and rwlocks
On Thu, Jul 02, 2020 at 08:25:43PM +1000, Nicholas Piggin wrote:
> Excerpts from Will Deacon's message of July 2, 2020 6:02 pm:
> > On Thu, Jul 02, 2020 at 05:48:36PM +1000, Nicholas Piggin wrote:
> >> diff --git a/arch/powerpc/include/asm/qspinlock.h b/arch/powerpc/include/asm/qspinlock.h
> >> new file mode 100644
> >> index 000000000000..f84da77b6bb7
> >> --- /dev/null
> >> +++ b/arch/powerpc/include/asm/qspinlock.h
> >> @@ -0,0 +1,20 @@
> >> +/* SPDX-License-Identifier: GPL-2.0 */
> >> +#ifndef _ASM_POWERPC_QSPINLOCK_H
> >> +#define _ASM_POWERPC_QSPINLOCK_H
> >> +
> >> +#include <asm-generic/qspinlock_types.h>
> >> +
> >> +#define _Q_PENDING_LOOPS (1 << 9) /* not tuned */
> >> +
> >> +#define smp_mb__after_spinlock() smp_mb()
> >> +
> >> +static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
> >> +{
> >> +	smp_mb();
> >> +	return atomic_read(&lock->val);
> >> +}
> >
> > Why do you need the smp_mb() here?
>
> A long and sad tale that ends here 51d7d5205d338
>
> Should probably at least refer to that commit from here, since this one
> is not going to git blame back there. I'll add something.

Is this still an issue, though?

See 38b850a73034 (where we added a similar barrier on arm64) and then
c6f5d02b6a0f (where we removed it).

Will
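For context, a minimal sketch of the two variants under discussion, assuming
the asm-generic/qspinlock.h definition of the time; the
queued_spin_is_locked_nobarrier name is illustrative only, not a real kernel
symbol:

    #include <linux/atomic.h>	/* atomic_t, atomic_read() */

    struct qspinlock {
    	atomic_t val;
    };

    /*
     * powerpc variant from the patch above: the full barrier orders the
     * caller's prior stores (e.g. its own lock acquisition) before the
     * read of the lock word, the property 51d7d5205d338 introduced for
     * arch_spin_is_locked().
     */
    static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
    {
    	smp_mb();
    	return atomic_read(&lock->val);
    }

    /*
     * Generic-style variant without the barrier (illustrative name): any
     * non-zero value means locked, but callers get no ordering guarantee.
     * arm64 ended up with this form once c6f5d02b6a0f removed its barrier.
     */
    static __always_inline int
    queued_spin_is_locked_nobarrier(struct qspinlock *lock)
    {
    	return atomic_read(&lock->val);
    }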
Nicholas Piggin
2020-Jul-02 10:47 UTC
[PATCH 5/8] powerpc/64s: implement queued spinlocks and rwlocks
Excerpts from Will Deacon's message of July 2, 2020 8:35 pm:
> On Thu, Jul 02, 2020 at 08:25:43PM +1000, Nicholas Piggin wrote:
>> Excerpts from Will Deacon's message of July 2, 2020 6:02 pm:
>> > On Thu, Jul 02, 2020 at 05:48:36PM +1000, Nicholas Piggin wrote:
>> >> diff --git a/arch/powerpc/include/asm/qspinlock.h b/arch/powerpc/include/asm/qspinlock.h
>> >> new file mode 100644
>> >> index 000000000000..f84da77b6bb7
>> >> --- /dev/null
>> >> +++ b/arch/powerpc/include/asm/qspinlock.h
>> >> @@ -0,0 +1,20 @@
>> >> +/* SPDX-License-Identifier: GPL-2.0 */
>> >> +#ifndef _ASM_POWERPC_QSPINLOCK_H
>> >> +#define _ASM_POWERPC_QSPINLOCK_H
>> >> +
>> >> +#include <asm-generic/qspinlock_types.h>
>> >> +
>> >> +#define _Q_PENDING_LOOPS (1 << 9) /* not tuned */
>> >> +
>> >> +#define smp_mb__after_spinlock() smp_mb()
>> >> +
>> >> +static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
>> >> +{
>> >> +	smp_mb();
>> >> +	return atomic_read(&lock->val);
>> >> +}
>> >
>> > Why do you need the smp_mb() here?
>>
>> A long and sad tale that ends here 51d7d5205d338
>>
>> Should probably at least refer to that commit from here, since this one
>> is not going to git blame back there. I'll add something.
>
> Is this still an issue, though?
>
> See 38b850a73034 (where we added a similar barrier on arm64) and then
> c6f5d02b6a0f (where we removed it).
>

Oh nice, I didn't know that went away. Thanks for the heads up.

I'm going to say I'm too scared to remove it while changing the
spinlock algorithm, but I'll open an issue and we should look at
removing it.

Thanks,
Nick
Will Deacon
2020-Jul-02 10:48 UTC
[PATCH 5/8] powerpc/64s: implement queued spinlocks and rwlocks
On Thu, Jul 02, 2020 at 08:47:05PM +1000, Nicholas Piggin wrote:
> Excerpts from Will Deacon's message of July 2, 2020 8:35 pm:
> > On Thu, Jul 02, 2020 at 08:25:43PM +1000, Nicholas Piggin wrote:
> >> Excerpts from Will Deacon's message of July 2, 2020 6:02 pm:
> >> > On Thu, Jul 02, 2020 at 05:48:36PM +1000, Nicholas Piggin wrote:
> >> >> diff --git a/arch/powerpc/include/asm/qspinlock.h b/arch/powerpc/include/asm/qspinlock.h
> >> >> new file mode 100644
> >> >> index 000000000000..f84da77b6bb7
> >> >> --- /dev/null
> >> >> +++ b/arch/powerpc/include/asm/qspinlock.h
> >> >> @@ -0,0 +1,20 @@
> >> >> +/* SPDX-License-Identifier: GPL-2.0 */
> >> >> +#ifndef _ASM_POWERPC_QSPINLOCK_H
> >> >> +#define _ASM_POWERPC_QSPINLOCK_H
> >> >> +
> >> >> +#include <asm-generic/qspinlock_types.h>
> >> >> +
> >> >> +#define _Q_PENDING_LOOPS (1 << 9) /* not tuned */
> >> >> +
> >> >> +#define smp_mb__after_spinlock() smp_mb()
> >> >> +
> >> >> +static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
> >> >> +{
> >> >> +	smp_mb();
> >> >> +	return atomic_read(&lock->val);
> >> >> +}
> >> >
> >> > Why do you need the smp_mb() here?
> >>
> >> A long and sad tale that ends here 51d7d5205d338
> >>
> >> Should probably at least refer to that commit from here, since this one
> >> is not going to git blame back there. I'll add something.
> >
> > Is this still an issue, though?
> >
> > See 38b850a73034 (where we added a similar barrier on arm64) and then
> > c6f5d02b6a0f (where we removed it).
> >
>
> Oh nice, I didn't know that went away. Thanks for the heads up.
>
> I'm going to say I'm too scared to remove it while changing the
> spinlock algorithm, but I'll open an issue and we should look at
> removing it.

Makes sense to me -- it certainly needs a deeper look! In the meantime,
please put some of this in a comment next to the barrier.

Cheers,

Will
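A sketch of the kind of comment Will is asking for; the wording below is an
assumption for illustration, not the text that was eventually merged:

    static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
    {
    	/*
    	 * Hypothetical comment wording -- not the merged text.
    	 *
    	 * 51d7d5205d338 added this full barrier so that the caller's
    	 * prior stores (typically its own lock acquisition) are
    	 * ordered before the read of this lock word; the nested
    	 * locking scheme in ipc/sem.c broke without it. arm64 grew a
    	 * similar barrier in 38b850a73034 and dropped it again in
    	 * c6f5d02b6a0f once callers no longer relied on it, so this
    	 * one may be removable too -- kept for now while the spinlock
    	 * algorithm is changing.
    	 */
    	smp_mb();
    	return atomic_read(&lock->val);
    }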
Michael Ellerman
2020-Jul-03 10:52 UTC
[PATCH 5/8] powerpc/64s: implement queued spinlocks and rwlocks
Nicholas Piggin <npiggin at gmail.com> writes:
> Excerpts from Will Deacon's message of July 2, 2020 8:35 pm:
>> On Thu, Jul 02, 2020 at 08:25:43PM +1000, Nicholas Piggin wrote:
>>> Excerpts from Will Deacon's message of July 2, 2020 6:02 pm:
>>> > On Thu, Jul 02, 2020 at 05:48:36PM +1000, Nicholas Piggin wrote:
>>> >> diff --git a/arch/powerpc/include/asm/qspinlock.h b/arch/powerpc/include/asm/qspinlock.h
>>> >> new file mode 100644
>>> >> index 000000000000..f84da77b6bb7
>>> >> --- /dev/null
>>> >> +++ b/arch/powerpc/include/asm/qspinlock.h
>>> >> @@ -0,0 +1,20 @@
>>> >> +/* SPDX-License-Identifier: GPL-2.0 */
>>> >> +#ifndef _ASM_POWERPC_QSPINLOCK_H
>>> >> +#define _ASM_POWERPC_QSPINLOCK_H
>>> >> +
>>> >> +#include <asm-generic/qspinlock_types.h>
>>> >> +
>>> >> +#define _Q_PENDING_LOOPS (1 << 9) /* not tuned */
>>> >> +
>>> >> +#define smp_mb__after_spinlock() smp_mb()
>>> >> +
>>> >> +static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
>>> >> +{
>>> >> +	smp_mb();
>>> >> +	return atomic_read(&lock->val);
>>> >> +}
>>> >
>>> > Why do you need the smp_mb() here?
>>>
>>> A long and sad tale that ends here 51d7d5205d338
>>>
>>> Should probably at least refer to that commit from here, since this one
>>> is not going to git blame back there. I'll add something.
>>
>> Is this still an issue, though?
>>
>> See 38b850a73034 (where we added a similar barrier on arm64) and then
>> c6f5d02b6a0f (where we removed it).
>>
>
> Oh nice, I didn't know that went away. Thanks for the heads up.

Argh! I spent so much time chasing that damn bug in the ipc code.

> I'm going to say I'm too scared to remove it while changing the
> spinlock algorithm, but I'll open an issue and we should look at
> removing it.

Sounds good.

cheers