Waiman Long
2023-Sep-11 02:35 UTC
[PATCH V11 04/17] locking/qspinlock: Improve xchg_tail for number of cpus >= 16k
On 9/10/23 04:28, guoren at kernel.org wrote:
> From: Guo Ren <guoren at linux.alibaba.com>
>
> The target of xchg_tail is to write the tail to the lock value, so
> adding prefetchw could help the next cmpxchg step, which may
> decrease the cmpxchg retry loops of xchg_tail. Some processors may
> utilize this feature to give a forward guarantee, e.g., RISC-V
> XuanTie processors would block the snoop channel & irq for several
> cycles when prefetch.w instruction (from Zicbop extension) retired,
> which guarantees the next cmpxchg succeeds.
>
> Signed-off-by: Guo Ren <guoren at linux.alibaba.com>
> Signed-off-by: Guo Ren <guoren at kernel.org>
> ---
>  kernel/locking/qspinlock.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
> index d3f99060b60f..96b54e2ade86 100644
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -223,7 +223,10 @@ static __always_inline void clear_pending_set_locked(struct qspinlock *lock)
>   */
>  static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
>  {
> -	u32 old, new, val = atomic_read(&lock->val);
> +	u32 old, new, val;
> +
> +	prefetchw(&lock->val);
> +	val = atomic_read(&lock->val);
>
>  	for (;;) {
>  		new = (val & _Q_LOCKED_PENDING_MASK) | tail;

That looks a bit weird. You prefetch and then immediately read the same
word. How much performance gain do you get from this change alone? Maybe
you can define an arch-specific primitive that defaults back to
atomic_read() if it is not defined.

Cheers,
Longman
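A minimal sketch of the arch-overridable primitive suggested above. The
helper name arch_atomic_read_prefetchw is hypothetical (it does not appear
in the posted patch): generic code falls back to a plain atomic_read(),
while an architecture that benefits from write-prefetching the lock word
(e.g. RISC-V with Zicbop) can override it, keeping the prefetch out of the
generic slow path.

#include <linux/atomic.h>
#include <linux/prefetch.h>

/*
 * Hypothetical fallback: architectures that do not provide their own
 * definition get a plain atomic_read() with no prefetch.
 */
#ifndef arch_atomic_read_prefetchw
static __always_inline u32 arch_atomic_read_prefetchw(atomic_t *v)
{
	return (u32)atomic_read(v);
}
#endif

/*
 * An architecture with a write-prefetch instruction could instead supply:
 *
 *	#define arch_atomic_read_prefetchw arch_atomic_read_prefetchw
 *	static __always_inline u32 arch_atomic_read_prefetchw(atomic_t *v)
 *	{
 *		prefetchw(v);
 *		return (u32)atomic_read(v);
 *	}
 */

static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail)
{
	u32 old, new;
	u32 val = arch_atomic_read_prefetchw(&lock->val);

	for (;;) {
		new = (val & _Q_LOCKED_PENDING_MASK) | tail;
		/*
		 * Relaxed semantics suffice here: the caller publishes the
		 * MCS node before updating the tail.
		 */
		old = atomic_cmpxchg_relaxed(&lock->val, val, new);
		if (old == val)
			break;
		val = old;
	}
	return old;	/* previous lock value, containing the old tail */
}

With such a helper, the generic xchg_tail() stays unchanged on every
architecture that does not opt in, which is the point of Longman's
suggestion.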