Since Waiman seems incapable of doing simple things, here's my take on the paravirt crap. The first few patches are taken from Waiman's latest series, but the virt support is completely new. Its primary aim is to not mess up the native code.

I've not stress tested it, but the virt and paravirt (kvm) cases boot on simple SMP guests. I've not done Xen, but the patch should be simple and similar.

I ripped out all the unfair nonsense as it's not at all required for paravirt, and optimizations that make paravirt better at the cost of code clarity and/or native performance are just not worth it. Also, if we were to ever add some of that unfair nonsense, you do so _after_ you have the simple things working.

The thing I'm least sure about is the head tracking; I chose to do something different from what Waiman did because his is O(nr_cpus) and assumes that guests have a small nr_cpus, which AFAIK is not at all true. The biggest problem I have with what I did is that it contains wait loops itself.
Peter Zijlstra
2014-Jun-15 12:46 UTC
[PATCH 01/11] qspinlock: A simple generic 4-byte queue spinlock
From: Waiman Long <Waiman.Long at hp.com> This patch introduces a new generic queue spinlock implementation that can serve as an alternative to the default ticket spinlock. Compared with the ticket spinlock, this queue spinlock should be almost as fair as the ticket spinlock. It has about the same speed in single-thread and it can be much faster in high contention situations especially when the spinlock is embedded within the data structure to be protected. Only in light to moderate contention where the average queue depth is around 1-3 will this queue spinlock be potentially a bit slower due to the higher slowpath overhead. This queue spinlock is especially suit to NUMA machines with a large number of cores as the chance of spinlock contention is much higher in those machines. The cost of contention is also higher because of slower inter-node memory traffic. Due to the fact that spinlocks are acquired with preemption disabled, the process will not be migrated to another CPU while it is trying to get a spinlock. Ignoring interrupt handling, a CPU can only be contending in one spinlock at any one time. Counting soft IRQ, hard IRQ and NMI, a CPU can only have a maximum of 4 concurrent lock waiting activities. By allocating a set of per-cpu queue nodes and used them to form a waiting queue, we can encode the queue node address into a much smaller 24-bit size (including CPU number and queue node index) leaving one byte for the lock. Please note that the queue node is only needed when waiting for the lock. Once the lock is acquired, the queue node can be released to be used later. Signed-off-by: Waiman Long <Waiman.Long at hp.com> Signed-off-by: Peter Zijlstra <peterz at infradead.org> --- include/asm-generic/qspinlock.h | 118 ++++++++++++++++++++ include/asm-generic/qspinlock_types.h | 61 ++++++++++ kernel/Kconfig.locks | 7 + kernel/locking/Makefile | 1 kernel/locking/mcs_spinlock.h | 1 kernel/locking/qspinlock.c | 197 ++++++++++++++++++++++++++++++++++ 6 files changed, 385 insertions(+) create mode 100644 include/asm-generic/qspinlock.h create mode 100644 include/asm-generic/qspinlock_types.h create mode 100644 kernel/locking/qspinlock.c Index: linux-2.6/include/asm-generic/qspinlock.h ==================================================================--- /dev/null +++ linux-2.6/include/asm-generic/qspinlock.h @@ -0,0 +1,118 @@ +/* + * Queue spinlock + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * (C) Copyright 2013-2014 Hewlett-Packard Development Company, L.P. + * + * Authors: Waiman Long <waiman.long at hp.com> + */ +#ifndef __ASM_GENERIC_QSPINLOCK_H +#define __ASM_GENERIC_QSPINLOCK_H + +#include <asm-generic/qspinlock_types.h> + +/** + * queue_spin_is_locked - is the spinlock locked? + * @lock: Pointer to queue spinlock structure + * Return: 1 if it is locked, 0 otherwise + */ +static __always_inline int queue_spin_is_locked(struct qspinlock *lock) +{ + return atomic_read(&lock->val); +} + +/** + * queue_spin_value_unlocked - is the spinlock structure unlocked? 
+ * @lock: queue spinlock structure + * Return: 1 if it is unlocked, 0 otherwise + * + * N.B. Whenever there are tasks waiting for the lock, it is considered + * locked wrt the lockref code to avoid lock stealing by the lockref + * code and change things underneath the lock. This also allows some + * optimizations to be applied without conflict with lockref. + */ +static __always_inline int queue_spin_value_unlocked(struct qspinlock lock) +{ + return !atomic_read(&lock.val); +} + +/** + * queue_spin_is_contended - check if the lock is contended + * @lock : Pointer to queue spinlock structure + * Return: 1 if lock contended, 0 otherwise + */ +static __always_inline int queue_spin_is_contended(struct qspinlock *lock) +{ + return atomic_read(&lock->val) & ~_Q_LOCKED_MASK; +} +/** + * queue_spin_trylock - try to acquire the queue spinlock + * @lock : Pointer to queue spinlock structure + * Return: 1 if lock acquired, 0 if failed + */ +static __always_inline int queue_spin_trylock(struct qspinlock *lock) +{ + if (!atomic_read(&lock->val) && + (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) == 0)) + return 1; + return 0; +} + +extern void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val); + +/** + * queue_spin_lock - acquire a queue spinlock + * @lock: Pointer to queue spinlock structure + */ +static __always_inline void queue_spin_lock(struct qspinlock *lock) +{ + u32 val; + + val = atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL); + if (likely(val == 0)) + return; + queue_spin_lock_slowpath(lock, val); +} + +#ifndef queue_spin_unlock +/** + * queue_spin_unlock - release a queue spinlock + * @lock : Pointer to queue spinlock structure + */ +static __always_inline void queue_spin_unlock(struct qspinlock *lock) +{ + /* + * smp_mb__before_atomic() in order to guarantee release semantics + */ + smp_mb__before_atomic_dec(); + atomic_sub(_Q_LOCKED_VAL, &lock->val); +} +#endif + +/* + * Initializier + */ +#define __ARCH_SPIN_LOCK_UNLOCKED { ATOMIC_INIT(0) } + +/* + * Remapping spinlock architecture specific functions to the corresponding + * queue spinlock functions. + */ +#define arch_spin_is_locked(l) queue_spin_is_locked(l) +#define arch_spin_is_contended(l) queue_spin_is_contended(l) +#define arch_spin_value_unlocked(l) queue_spin_value_unlocked(l) +#define arch_spin_lock(l) queue_spin_lock(l) +#define arch_spin_trylock(l) queue_spin_trylock(l) +#define arch_spin_unlock(l) queue_spin_unlock(l) +#define arch_spin_lock_flags(l, f) queue_spin_lock(l) + +#endif /* __ASM_GENERIC_QSPINLOCK_H */ Index: linux-2.6/include/asm-generic/qspinlock_types.h ==================================================================--- /dev/null +++ linux-2.6/include/asm-generic/qspinlock_types.h @@ -0,0 +1,61 @@ +/* + * Queue spinlock + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * (C) Copyright 2013-2014 Hewlett-Packard Development Company, L.P. 
+ * + * Authors: Waiman Long <waiman.long at hp.com> + */ +#ifndef __ASM_GENERIC_QSPINLOCK_TYPES_H +#define __ASM_GENERIC_QSPINLOCK_TYPES_H + +/* + * Including atomic.h with PARAVIRT on will cause compilation errors because + * of recursive header file incluson via paravirt_types.h. A workaround is + * to include paravirt_types.h here. + */ +#ifdef CONFIG_PARAVIRT +#include <asm/paravirt_types.h> +#else +#include <linux/types.h> +#include <linux/atomic.h> +#include <linux/bug.h> +#endif + +typedef struct qspinlock { + atomic_t val; +} arch_spinlock_t; + +/* + * Bitfields in the atomic value: + * + * 0- 7: locked byte + * 8- 9: tail index + * 10-31: tail cpu (+1) + */ +#define _Q_SET_MASK(type) (((1U << _Q_ ## type ## _BITS) - 1)\ + << _Q_ ## type ## _OFFSET) +#define _Q_LOCKED_OFFSET 0 +#define _Q_LOCKED_BITS 8 +#define _Q_LOCKED_MASK _Q_SET_MASK(LOCKED) + +#define _Q_TAIL_IDX_OFFSET (_Q_LOCKED_OFFSET + _Q_LOCKED_BITS) +#define _Q_TAIL_IDX_BITS 2 +#define _Q_TAIL_IDX_MASK _Q_SET_MASK(TAIL_IDX) + +#define _Q_TAIL_CPU_OFFSET (_Q_TAIL_IDX_OFFSET + _Q_TAIL_IDX_BITS) +#define _Q_TAIL_CPU_BITS (32 - _Q_TAIL_CPU_OFFSET) +#define _Q_TAIL_CPU_MASK _Q_SET_MASK(TAIL_CPU) + +#define _Q_LOCKED_VAL (1U << _Q_LOCKED_OFFSET) + +#endif /* __ASM_GENERIC_QSPINLOCK_TYPES_H */ Index: linux-2.6/kernel/Kconfig.locks ==================================================================--- linux-2.6.orig/kernel/Kconfig.locks +++ linux-2.6/kernel/Kconfig.locks @@ -224,6 +224,13 @@ config MUTEX_SPIN_ON_OWNER def_bool y depends on SMP && !DEBUG_MUTEXES && !PARISC +config ARCH_USE_QUEUE_SPINLOCK + bool + +config QUEUE_SPINLOCK + def_bool y if ARCH_USE_QUEUE_SPINLOCK + depends on SMP && !PARAVIRT_SPINLOCKS + config ARCH_USE_QUEUE_RWLOCK bool Index: linux-2.6/kernel/locking/Makefile ==================================================================--- linux-2.6.orig/kernel/locking/Makefile +++ linux-2.6/kernel/locking/Makefile @@ -16,6 +16,7 @@ endif obj-$(CONFIG_SMP) += spinlock.o obj-$(CONFIG_SMP) += lglock.o obj-$(CONFIG_PROVE_LOCKING) += spinlock.o +obj-$(CONFIG_QUEUE_SPINLOCK) += qspinlock.o obj-$(CONFIG_RT_MUTEXES) += rtmutex.o obj-$(CONFIG_DEBUG_RT_MUTEXES) += rtmutex-debug.o obj-$(CONFIG_RT_MUTEX_TESTER) += rtmutex-tester.o Index: linux-2.6/kernel/locking/mcs_spinlock.h ==================================================================--- linux-2.6.orig/kernel/locking/mcs_spinlock.h +++ linux-2.6/kernel/locking/mcs_spinlock.h @@ -17,6 +17,7 @@ struct mcs_spinlock { struct mcs_spinlock *next; int locked; /* 1 if lock acquired */ + int count; }; #ifndef arch_mcs_spin_lock_contended Index: linux-2.6/kernel/locking/qspinlock.c ==================================================================--- /dev/null +++ linux-2.6/kernel/locking/qspinlock.c @@ -0,0 +1,197 @@ +/* + * Queue spinlock + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License as published by + * the Free Software Foundation; either version 2 of the License, or + * (at your option) any later version. + * + * This program is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the + * GNU General Public License for more details. + * + * (C) Copyright 2013-2014 Hewlett-Packard Development Company, L.P. 
+ * + * Authors: Waiman Long <waiman.long at hp.com> + * Peter Zijlstra <pzijlstr at redhat.com> + */ +#include <linux/smp.h> +#include <linux/bug.h> +#include <linux/cpumask.h> +#include <linux/percpu.h> +#include <linux/hardirq.h> +#include <linux/mutex.h> +#include <asm/qspinlock.h> + +/* + * The basic principle of a queue-based spinlock can best be understood + * by studying a classic queue-based spinlock implementation called the + * MCS lock. The paper below provides a good description for this kind + * of lock. + * + * http://www.cise.ufl.edu/tr/DOC/REP-1992-71.pdf + * + * This queue spinlock implementation is based on the MCS lock, however to make + * it fit the 4 bytes we assume spinlock_t to be, and preserve its existing + * API, we must modify it some. + * + * In particular; where the traditional MCS lock consists of a tail pointer + * (8 bytes) and needs the next pointer (another 8 bytes) of its own node to + * unlock the next pending (next->locked), we compress both these: {tail, + * next->locked} into a single u32 value. + * + * Since a spinlock disables recursion of its own context and there is a limit + * to the contexts that can nest; namely: task, softirq, hardirq, nmi, we can + * encode the tail as and index indicating this context and a cpu number. + * + * We can further change the first spinner to spin on a bit in the lock word + * instead of its node; whereby avoiding the need to carry a node from lock to + * unlock, and preserving API. + */ + +#include "mcs_spinlock.h" + +/* + * Per-CPU queue node structures; we can never have more than 4 nested + * contexts: task, softirq, hardirq, nmi. + * + * Exactly fits one cacheline. + */ +static DEFINE_PER_CPU_ALIGNED(struct mcs_spinlock, mcs_nodes[4]); + +/* + * We must be able to distinguish between no-tail and the tail at 0:0, + * therefore increment the cpu number by one. + */ + +static inline u32 encode_tail(int cpu, int idx) +{ + u32 tail; + + tail = (cpu + 1) << _Q_TAIL_CPU_OFFSET; + tail |= idx << _Q_TAIL_IDX_OFFSET; /* assume < 4 */ + + return tail; +} + +static inline struct mcs_spinlock *decode_tail(u32 tail) +{ + int cpu = (tail >> _Q_TAIL_CPU_OFFSET) - 1; + int idx = (tail & _Q_TAIL_IDX_MASK) >> _Q_TAIL_IDX_OFFSET; + + return per_cpu_ptr(&mcs_nodes[idx], cpu); +} + +/** + * queue_spin_lock_slowpath - acquire the queue spinlock + * @lock: Pointer to queue spinlock structure + * @val: Current value of the queue spinlock 32-bit word + * + * (queue tail, lock bit) + * + * fast : slow : unlock + * : : + * uncontended (0,0) --:--> (0,1) --------------------------------:--> (*,0) + * : | ^--------. / : + * : v \ | : + * uncontended : (n,x) --+--> (n,0) | : + * queue : | ^--' | : + * : v | : + * contended : (*,x) --+--> (*,0) -----> (*,1) ---' : + * queue : ^--' : + * + */ +void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val) +{ + struct mcs_spinlock *prev, *next, *node; + u32 new, old, tail; + int idx; + + BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS)); + + node = this_cpu_ptr(&mcs_nodes[0]); + idx = node->count++; + tail = encode_tail(smp_processor_id(), idx); + + node += idx; + node->locked = 0; + node->next = NULL; + + /* + * trylock || xchg(lock, node) + * + * 0,0 -> 0,1 ; trylock + * p,x -> n,x ; prev = xchg(lock, node) + */ + for (;;) { + new = _Q_LOCKED_VAL; + if (val) + new = tail | (val & _Q_LOCKED_MASK); + + old = atomic_cmpxchg(&lock->val, val, new); + if (old == val) + break; + + val = old; + } + + /* + * we won the trylock; forget about queueing. 
+ */ + if (new == _Q_LOCKED_VAL) + goto release; + + /* + * if there was a previous node; link it and wait. + */ + if (old & ~_Q_LOCKED_MASK) { + prev = decode_tail(old); + ACCESS_ONCE(prev->next) = node; + + arch_mcs_spin_lock_contended(&node->locked); + } + + /* + * we're at the head of the waitqueue, wait for the owner to go away. + * + * *,x -> *,0 + */ + while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK) + cpu_relax(); + + /* + * claim the lock: + * + * n,0 -> 0,1 : lock, uncontended + * *,0 -> *,1 : lock, contended + */ + for (;;) { + new = _Q_LOCKED_VAL; + if (val != tail) + new |= val; + + old = atomic_cmpxchg(&lock->val, val, new); + if (old == val) + break; + + val = old; + } + + /* + * contended path; wait for next, release. + */ + if (new != _Q_LOCKED_VAL) { + while (!(next = ACCESS_ONCE(node->next))) + cpu_relax(); + + arch_mcs_spin_unlock_contended(&next->locked); + } + +release: + /* + * release the node + */ + this_cpu_dec(mcs_nodes[0].count); +} +EXPORT_SYMBOL(queue_spin_lock_slowpath);
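[Note: for illustration, here is a small stand-alone user-space model of the tail code word encoding used above. It mirrors encode_tail()/decode_tail() and the _Q_TAIL_* constants from the patch; everything else (the test values, main()) is made up for the example and is obviously not kernel code.]

/* Stand-alone model of the 24-bit tail code word (illustration only). */
#include <stdio.h>
#include <assert.h>

#define _Q_LOCKED_OFFSET        0
#define _Q_LOCKED_BITS          8
#define _Q_TAIL_IDX_OFFSET      (_Q_LOCKED_OFFSET + _Q_LOCKED_BITS)
#define _Q_TAIL_IDX_BITS        2
#define _Q_TAIL_IDX_MASK        (((1U << _Q_TAIL_IDX_BITS) - 1) << _Q_TAIL_IDX_OFFSET)
#define _Q_TAIL_CPU_OFFSET      (_Q_TAIL_IDX_OFFSET + _Q_TAIL_IDX_BITS)

static unsigned int encode_tail(int cpu, int idx)
{
        /* cpu is stored +1 so that tail == 0 can mean "no tail" */
        return ((cpu + 1) << _Q_TAIL_CPU_OFFSET) | (idx << _Q_TAIL_IDX_OFFSET);
}

static void decode_tail(unsigned int tail, int *cpu, int *idx)
{
        *cpu = (tail >> _Q_TAIL_CPU_OFFSET) - 1;
        *idx = (tail & _Q_TAIL_IDX_MASK) >> _Q_TAIL_IDX_OFFSET;
}

int main(void)
{
        int cpu, idx;

        for (cpu = 0; cpu < 4; cpu++) {
                for (idx = 0; idx < 4; idx++) {
                        unsigned int tail = encode_tail(cpu, idx);
                        int c, i;

                        decode_tail(tail, &c, &i);
                        assert(c == cpu && i == idx);
                        assert(tail != 0);      /* 0 is reserved for "no tail" */
                        printf("cpu=%d idx=%d -> tail=0x%08x\n", cpu, idx, tail);
                }
        }
        return 0;
}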
Peter Zijlstra
2014-Jun-15 12:46 UTC
[PATCH 02/11] qspinlock, x86: Enable x86-64 to use queue spinlock
From: Waiman Long <Waiman.Long at hp.com> This patch makes the necessary changes at the x86 architecture specific layer to enable the use of queue spinlock for x86-64. As x86-32 machines are typically not multi-socket. The benefit of queue spinlock may not be apparent. So queue spinlock is not enabled. Currently, there is some incompatibilities between the para-virtualized spinlock code (which hard-codes the use of ticket spinlock) and the queue spinlock. Therefore, the use of queue spinlock is disabled when the para-virtualized spinlock is enabled. The arch/x86/include/asm/qspinlock.h header file includes some x86 specific optimization which will make the queue spinlock code perform better than the generic implementation. Signed-off-by: Waiman Long <Waiman.Long at hp.com> Signed-off-by: Peter Zijlstra <peterz at infradead.org> --- arch/x86/Kconfig | 1 + arch/x86/include/asm/qspinlock.h | 25 +++++++++++++++++++++++++ arch/x86/include/asm/spinlock.h | 5 +++++ arch/x86/include/asm/spinlock_types.h | 4 ++++ 4 files changed, 35 insertions(+) create mode 100644 arch/x86/include/asm/qspinlock.h --- a/arch/x86/Kconfig +++ b/arch/x86/Kconfig @@ -29,6 +29,7 @@ config X86 select ARCH_SUPPORTS_NUMA_BALANCING if X86_64 select ARCH_SUPPORTS_INT128 if X86_64 select ARCH_WANTS_PROT_NUMA_PROT_NONE + select ARCH_USE_QUEUE_SPINLOCK select HAVE_IDE select HAVE_OPROFILE select HAVE_PCSPKR_PLATFORM --- /dev/null +++ b/arch/x86/include/asm/qspinlock.h @@ -0,0 +1,25 @@ +#ifndef _ASM_X86_QSPINLOCK_H +#define _ASM_X86_QSPINLOCK_H + +#include <asm-generic/qspinlock_types.h> + +#if !defined(CONFIG_X86_OOSTORE) && !defined(CONFIG_X86_PPRO_FENCE) + +#define queue_spin_unlock queue_spin_unlock +/** + * queue_spin_unlock - release a queue spinlock + * @lock : Pointer to queue spinlock structure + * + * An effective smp_store_release() on the least-significant byte. + */ +static inline void queue_spin_unlock(struct qspinlock *lock) +{ + barrier(); + ACCESS_ONCE(*(u8 *)lock) = 0; +} + +#endif /* !CONFIG_X86_OOSTORE && !CONFIG_X86_PPRO_FENCE */ + +#include <asm-generic/qspinlock.h> + +#endif /* _ASM_X86_QSPINLOCK_H */ --- a/arch/x86/include/asm/spinlock.h +++ b/arch/x86/include/asm/spinlock.h @@ -42,6 +42,10 @@ extern struct static_key paravirt_ticketlocks_enabled; static __always_inline bool static_key_false(struct static_key *key); +#ifdef CONFIG_QUEUE_SPINLOCK +#include <asm/qspinlock.h> +#else + #ifdef CONFIG_PARAVIRT_SPINLOCKS static inline void __ticket_enter_slowpath(arch_spinlock_t *lock) @@ -180,6 +184,7 @@ static __always_inline void arch_spin_lo { arch_spin_lock(lock); } +#endif /* CONFIG_QUEUE_SPINLOCK */ static inline void arch_spin_unlock_wait(arch_spinlock_t *lock) { --- a/arch/x86/include/asm/spinlock_types.h +++ b/arch/x86/include/asm/spinlock_types.h @@ -23,6 +23,9 @@ typedef u32 __ticketpair_t; #define TICKET_SHIFT (sizeof(__ticket_t) * 8) +#ifdef CONFIG_QUEUE_SPINLOCK +#include <asm-generic/qspinlock_types.h> +#else typedef struct arch_spinlock { union { __ticketpair_t head_tail; @@ -33,6 +36,7 @@ typedef struct arch_spinlock { } arch_spinlock_t; #define __ARCH_SPIN_LOCK_UNLOCKED { { 0 } } +#endif /* CONFIG_QUEUE_SPINLOCK */ #ifdef CONFIG_QUEUE_RWLOCK #include <asm-generic/qrwlock_types.h>
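[Note: the x86 queue_spin_unlock() above relies on the locked byte being the least-significant byte of the 32-bit lock word, so a plain byte store releases the lock without disturbing the tail bits; on x86 the compiler barrier plus TSO gives this release semantics. A small user-space sketch of the layout property only, assuming a little-endian host as on x86; the tail value used is arbitrary.]

/* Sketch: clearing only the LSB releases the lock, tail bits untouched. */
#include <stdio.h>
#include <stdint.h>
#include <assert.h>

#define _Q_LOCKED_VAL   (1U << 0)

int main(void)
{
        union {
                uint32_t val;
                uint8_t  bytes[4];
        } lock;

        lock.val = 0x00000500 | _Q_LOCKED_VAL;  /* some tail bits + locked */

        lock.bytes[0] = 0;      /* the byte store in queue_spin_unlock() */

        assert(!(lock.val & _Q_LOCKED_VAL));    /* lock released */
        assert(lock.val == 0x00000500);         /* tail untouched */
        printf("val after unlock: 0x%08x\n", lock.val);
        return 0;
}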
Because the qspinlock needs to touch a second cacheline; add a pending bit and allow a single in-word spinner before we punt to the second cacheline. Signed-off-by: Peter Zijlstra <peterz at infradead.org> --- include/asm-generic/qspinlock_types.h | 12 ++- kernel/locking/qspinlock.c | 109 +++++++++++++++++++++++++++------- 2 files changed, 97 insertions(+), 24 deletions(-) --- a/include/asm-generic/qspinlock_types.h +++ b/include/asm-generic/qspinlock_types.h @@ -39,8 +39,9 @@ typedef struct qspinlock { * Bitfields in the atomic value: * * 0- 7: locked byte - * 8- 9: tail index - * 10-31: tail cpu (+1) + * 8: pending + * 9-10: tail index + * 11-31: tail cpu (+1) */ #define _Q_SET_MASK(type) (((1U << _Q_ ## type ## _BITS) - 1)\ << _Q_ ## type ## _OFFSET) @@ -48,7 +49,11 @@ typedef struct qspinlock { #define _Q_LOCKED_BITS 8 #define _Q_LOCKED_MASK _Q_SET_MASK(LOCKED) -#define _Q_TAIL_IDX_OFFSET (_Q_LOCKED_OFFSET + _Q_LOCKED_BITS) +#define _Q_PENDING_OFFSET (_Q_LOCKED_OFFSET + _Q_LOCKED_BITS) +#define _Q_PENDING_BITS 1 +#define _Q_PENDING_MASK _Q_SET_MASK(PENDING) + +#define _Q_TAIL_IDX_OFFSET (_Q_PENDING_OFFSET + _Q_PENDING_BITS) #define _Q_TAIL_IDX_BITS 2 #define _Q_TAIL_IDX_MASK _Q_SET_MASK(TAIL_IDX) @@ -57,5 +62,6 @@ typedef struct qspinlock { #define _Q_TAIL_CPU_MASK _Q_SET_MASK(TAIL_CPU) #define _Q_LOCKED_VAL (1U << _Q_LOCKED_OFFSET) +#define _Q_PENDING_VAL (1U << _Q_PENDING_OFFSET) #endif /* __ASM_GENERIC_QSPINLOCK_TYPES_H */ --- a/kernel/locking/qspinlock.c +++ b/kernel/locking/qspinlock.c @@ -83,24 +83,28 @@ static inline struct mcs_spinlock *decod return per_cpu_ptr(&mcs_nodes[idx], cpu); } +#define _Q_LOCKED_PENDING_MASK (_Q_LOCKED_MASK | _Q_PENDING_MASK) + /** * queue_spin_lock_slowpath - acquire the queue spinlock * @lock: Pointer to queue spinlock structure * @val: Current value of the queue spinlock 32-bit word * - * (queue tail, lock bit) - * - * fast : slow : unlock - * : : - * uncontended (0,0) --:--> (0,1) --------------------------------:--> (*,0) - * : | ^--------. / : - * : v \ | : - * uncontended : (n,x) --+--> (n,0) | : - * queue : | ^--' | : - * : v | : - * contended : (*,x) --+--> (*,0) -----> (*,1) ---' : - * queue : ^--' : + * (queue tail, pending bit, lock bit) * + * fast : slow : unlock + * : : + * uncontended (0,0,0) -:--> (0,0,1) ------------------------------:--> (*,*,0) + * : | ^--------.------. / : + * : v \ \ | : + * pending : (0,1,1) +--> (0,1,0) \ | : + * : | ^--' | | : + * : v | | : + * uncontended : (n,x,y) +--> (n,0,0) --' | : + * queue : | ^--' | : + * : v | : + * contended : (*,x,y) +--> (*,0,0) ---> (*,0,1) -' : + * queue : ^--' : */ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val) { @@ -110,6 +114,65 @@ void queue_spin_lock_slowpath(struct qsp BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS)); + /* + * trylock || pending + * + * 0,0,0 -> 0,0,1 ; trylock + * 0,0,1 -> 0,1,1 ; pending + */ + for (;;) { + /* + * If we observe any contention; queue. + */ + if (val & ~_Q_LOCKED_MASK) + goto queue; + + new = _Q_LOCKED_VAL; + if (val == new) + new |= _Q_PENDING_VAL; + + old = atomic_cmpxchg(&lock->val, val, new); + if (old == val) + break; + + val = old; + } + + /* + * we won the trylock + */ + if (new == _Q_LOCKED_VAL) + return; + + /* + * we're pending, wait for the owner to go away. + * + * *,1,1 -> *,1,0 + */ + while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK) + cpu_relax(); + + /* + * take ownership and clear the pending bit. 
+ * + * *,1,0 -> *,0,1 + */ + for (;;) { + new = (val & ~_Q_PENDING_MASK) | _Q_LOCKED_VAL; + + old = atomic_cmpxchg(&lock->val, val, new); + if (old == val) + break; + + val = old; + } + return; + + /* + * End of pending bit optimistic spinning and beginning of MCS + * queuing. + */ +queue: node = this_cpu_ptr(&mcs_nodes[0]); idx = node->count++; tail = encode_tail(smp_processor_id(), idx); @@ -119,15 +182,18 @@ void queue_spin_lock_slowpath(struct qsp node->next = NULL; /* + * we already touched the queueing cacheline; don't bother with pending + * stuff. + * * trylock || xchg(lock, node) * - * 0,0 -> 0,1 ; trylock - * p,x -> n,x ; prev = xchg(lock, node) + * 0,0,0 -> 0,0,1 ; trylock + * p,y,x -> n,y,x ; prev = xchg(lock, node) */ for (;;) { new = _Q_LOCKED_VAL; if (val) - new = tail | (val & _Q_LOCKED_MASK); + new = tail | (val & _Q_LOCKED_PENDING_MASK); old = atomic_cmpxchg(&lock->val, val, new); if (old == val) @@ -145,7 +211,7 @@ void queue_spin_lock_slowpath(struct qsp /* * if there was a previous node; link it and wait. */ - if (old & ~_Q_LOCKED_MASK) { + if (old & ~_Q_LOCKED_PENDING_MASK) { prev = decode_tail(old); ACCESS_ONCE(prev->next) = node; @@ -153,18 +219,19 @@ void queue_spin_lock_slowpath(struct qsp } /* - * we're at the head of the waitqueue, wait for the owner to go away. + * we're at the head of the waitqueue, wait for the owner & pending to + * go away. * - * *,x -> *,0 + * *,x,y -> *,0,0 */ - while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK) + while ((val = atomic_read(&lock->val)) & _Q_LOCKED_PENDING_MASK) cpu_relax(); /* * claim the lock: * - * n,0 -> 0,1 : lock, uncontended - * *,0 -> *,1 : lock, contended + * n,0,0 -> 0,0,1 : lock, uncontended + * *,0,0 -> *,0,1 : lock, contended */ for (;;) { new = _Q_LOCKED_VAL;
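[Note: to make the pending-bit hand-off above a bit more concrete, here is a stand-alone C11 model of the in-word spinner paths -- trylock, set pending, wait for the owner, take over -- using the same state encoding. The MCS queue path is left out, the function and constant names only mirror the patch, and this is an illustration rather than the kernel code.]

/* Minimal C11 model of the pending-bit fast path (illustration only). */
#include <stdatomic.h>
#include <stdbool.h>

#define _Q_LOCKED_VAL   (1U << 0)
#define _Q_PENDING_VAL  (1U << 8)
#define _Q_LOCKED_MASK  0xffU

/* Returns true if we got the lock via trylock or the pending bit. */
static bool pending_lock(atomic_uint *lock)
{
        unsigned int val, new;

        val = atomic_load(lock);
        for (;;) {
                if (val & ~_Q_LOCKED_MASK)      /* pending or tail set: must queue */
                        return false;

                new = _Q_LOCKED_VAL;            /* 0,0,0 -> 0,0,1 */
                if (val == new)
                        new |= _Q_PENDING_VAL;  /* 0,0,1 -> 0,1,1 */

                if (atomic_compare_exchange_weak(lock, &val, new))
                        break;
                /* val was reloaded by the failed cmpxchg; retry */
        }

        if (new == _Q_LOCKED_VAL)               /* won the trylock */
                return true;

        /* we are the pending spinner: wait for the owner, *,1,1 -> *,1,0 */
        while ((val = atomic_load(lock)) & _Q_LOCKED_MASK)
                ;

        /* take ownership and clear the pending bit, *,1,0 -> *,0,1 */
        for (;;) {
                new = (val & ~_Q_PENDING_VAL) | _Q_LOCKED_VAL;
                if (atomic_compare_exchange_weak(lock, &val, new))
                        return true;
        }
}

int main(void)
{
        atomic_uint lock = 0;

        if (pending_lock(&lock))                        /* uncontended: trylock path */
                atomic_fetch_sub(&lock, _Q_LOCKED_VAL); /* unlock */
        return 0;
}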
Peter Zijlstra
2014-Jun-15 12:47 UTC
[PATCH 04/11] qspinlock: Extract out the exchange of tail code word
From: Waiman Long <Waiman.Long at hp.com> This patch extracts the logic for the exchange of new and previous tail code words into a new xchg_tail() function which can be optimized in a later patch. Signed-off-by: Waiman Long <Waiman.Long at hp.com> Signed-off-by: Peter Zijlstra <peterz at infradead.org> --- include/asm-generic/qspinlock_types.h | 2 + kernel/locking/qspinlock.c | 58 +++++++++++++++++++++------------- 2 files changed, 38 insertions(+), 22 deletions(-) --- a/include/asm-generic/qspinlock_types.h +++ b/include/asm-generic/qspinlock_types.h @@ -61,6 +61,8 @@ typedef struct qspinlock { #define _Q_TAIL_CPU_BITS (32 - _Q_TAIL_CPU_OFFSET) #define _Q_TAIL_CPU_MASK _Q_SET_MASK(TAIL_CPU) +#define _Q_TAIL_MASK (_Q_TAIL_IDX_MASK | _Q_TAIL_CPU_MASK) + #define _Q_LOCKED_VAL (1U << _Q_LOCKED_OFFSET) #define _Q_PENDING_VAL (1U << _Q_PENDING_OFFSET) --- a/kernel/locking/qspinlock.c +++ b/kernel/locking/qspinlock.c @@ -86,6 +86,31 @@ static inline struct mcs_spinlock *decod #define _Q_LOCKED_PENDING_MASK (_Q_LOCKED_MASK | _Q_PENDING_MASK) /** + * xchg_tail - Put in the new queue tail code word & retrieve previous one + * @lock : Pointer to queue spinlock structure + * @tail : The new queue tail code word + * Return: The previous queue tail code word + * + * xchg(lock, tail) + * + * p,*,* -> n,*,* ; prev = xchg(lock, node) + */ +static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail) +{ + u32 old, new, val = atomic_read(&lock->val); + + for (;;) { + new = (val & _Q_LOCKED_PENDING_MASK) | tail; + old = atomic_cmpxchg(&lock->val, val, new); + if (old == val) + break; + + val = old; + } + return old; +} + +/** * queue_spin_lock_slowpath - acquire the queue spinlock * @lock: Pointer to queue spinlock structure * @val: Current value of the queue spinlock 32-bit word @@ -182,36 +207,25 @@ void queue_spin_lock_slowpath(struct qsp node->next = NULL; /* - * we already touched the queueing cacheline; don't bother with pending - * stuff. - * - * trylock || xchg(lock, node) - * - * 0,0,0 -> 0,0,1 ; trylock - * p,y,x -> n,y,x ; prev = xchg(lock, node) + * We touched a (possibly) cold cacheline in the per-cpu queue node; + * attempt the trylock once more in the hope someone let go while we + * weren't watching. */ - for (;;) { - new = _Q_LOCKED_VAL; - if (val) - new = tail | (val & _Q_LOCKED_PENDING_MASK); - - old = atomic_cmpxchg(&lock->val, val, new); - if (old == val) - break; - - val = old; - } + if (queue_spin_trylock(lock)) + goto release; /* - * we won the trylock; forget about queueing. + * we already touched the queueing cacheline; don't bother with pending + * stuff. + * + * p,*,* -> n,*,* */ - if (new == _Q_LOCKED_VAL) - goto release; + old = xchg_tail(lock, tail); /* * if there was a previous node; link it and wait. */ - if (old & ~_Q_LOCKED_PENDING_MASK) { + if (old & _Q_TAIL_MASK) { prev = decode_tail(old); ACCESS_ONCE(prev->next) = node;
Peter Zijlstra
2014-Jun-15 12:47 UTC
[PATCH 05/11] qspinlock: Optimize for smaller NR_CPUS
From: Peter Zijlstra <peterz at infradead.org> When we allow for a max NR_CPUS < 2^14 we can optimize the pending wait-acquire and the xchg_tail() operations. By growing the pending bit to a byte, we reduce the tail to 16bit. This means we can use xchg16 for the tail part and do away with all the repeated compxchg() operations. This in turn allows us to unconditionally acquire; the locked state as observed by the wait loops cannot change. And because both locked and pending are now a full byte we can use simple stores for the state transition, obviating one atomic operation entirely. All this is horribly broken on Alpha pre EV56 (and any other arch that cannot do single-copy atomic byte stores). Signed-off-by: Peter Zijlstra <peterz at infradead.org> --- include/asm-generic/qspinlock_types.h | 13 ++++ kernel/locking/qspinlock.c | 103 ++++++++++++++++++++++++++++++---- 2 files changed, 106 insertions(+), 10 deletions(-) --- a/include/asm-generic/qspinlock_types.h +++ b/include/asm-generic/qspinlock_types.h @@ -38,6 +38,14 @@ typedef struct qspinlock { /* * Bitfields in the atomic value: * + * When NR_CPUS < 16K + * 0- 7: locked byte + * 8: pending + * 9-15: not used + * 16-17: tail index + * 18-31: tail cpu (+1) + * + * When NR_CPUS >= 16K * 0- 7: locked byte * 8: pending * 9-10: tail index @@ -50,7 +58,11 @@ typedef struct qspinlock { #define _Q_LOCKED_MASK _Q_SET_MASK(LOCKED) #define _Q_PENDING_OFFSET (_Q_LOCKED_OFFSET + _Q_LOCKED_BITS) +#if CONFIG_NR_CPUS < (1U << 14) +#define _Q_PENDING_BITS 8 +#else #define _Q_PENDING_BITS 1 +#endif #define _Q_PENDING_MASK _Q_SET_MASK(PENDING) #define _Q_TAIL_IDX_OFFSET (_Q_PENDING_OFFSET + _Q_PENDING_BITS) @@ -61,6 +73,7 @@ typedef struct qspinlock { #define _Q_TAIL_CPU_BITS (32 - _Q_TAIL_CPU_OFFSET) #define _Q_TAIL_CPU_MASK _Q_SET_MASK(TAIL_CPU) +#define _Q_TAIL_OFFSET _Q_TAIL_IDX_OFFSET #define _Q_TAIL_MASK (_Q_TAIL_IDX_MASK | _Q_TAIL_CPU_MASK) #define _Q_LOCKED_VAL (1U << _Q_LOCKED_OFFSET) --- a/kernel/locking/qspinlock.c +++ b/kernel/locking/qspinlock.c @@ -22,6 +22,7 @@ #include <linux/percpu.h> #include <linux/hardirq.h> #include <linux/mutex.h> +#include <asm/byteorder.h> #include <asm/qspinlock.h> /* @@ -48,6 +49,9 @@ * We can further change the first spinner to spin on a bit in the lock word * instead of its node; whereby avoiding the need to carry a node from lock to * unlock, and preserving API. + * + * N.B. The current implementation only supports architectures that allow + * atomic operations on smaller 8-bit and 16-bit data types. */ #include "mcs_spinlock.h" @@ -85,6 +89,87 @@ static inline struct mcs_spinlock *decod #define _Q_LOCKED_PENDING_MASK (_Q_LOCKED_MASK | _Q_PENDING_MASK) +/* + * By using the whole 2nd least significant byte for the pending bit, we + * can allow better optimization of the lock acquisition for the pending + * bit holder. + */ +#if _Q_PENDING_BITS == 8 + +struct __qspinlock { + union { + atomic_t val; + struct { +#ifdef __LITTLE_ENDIAN + u16 locked_pending; + u16 tail; +#else + u16 tail; + u16 locked_pending; +#endif + }; + }; +}; + +/** + * clear_pending_set_locked - take ownership and clear the pending bit. + * @lock: Pointer to queue spinlock structure + * @val : Current value of the queue spinlock 32-bit word + * + * *,1,0 -> *,0,1 + * + * Lock stealing is not allowed if this function is used. 
+ */ +static __always_inline void +clear_pending_set_locked(struct qspinlock *lock, u32 val) +{ + struct __qspinlock *l = (void *)lock; + + ACCESS_ONCE(l->locked_pending) = _Q_LOCKED_VAL; +} + +/* + * xchg_tail - Put in the new queue tail code word & retrieve previous one + * @lock : Pointer to queue spinlock structure + * @tail : The new queue tail code word + * Return: The previous queue tail code word + * + * xchg(lock, tail) + * + * p,*,* -> n,*,* ; prev = xchg(lock, node) + */ +static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail) +{ + struct __qspinlock *l = (void *)lock; + + return (u32)xchg(&l->tail, tail >> _Q_TAIL_OFFSET) << _Q_TAIL_OFFSET; +} + +#else /* _Q_PENDING_BITS == 8 */ + +/** + * clear_pending_set_locked - take ownership and clear the pending bit. + * @lock: Pointer to queue spinlock structure + * @val : Current value of the queue spinlock 32-bit word + * + * *,1,0 -> *,0,1 + */ +static __always_inline void +clear_pending_set_locked(struct qspinlock *lock, u32 val) +{ + u32 new, old; + + for (;;) { + new = (val & ~_Q_PENDING_MASK) | _Q_LOCKED_VAL; + + old = atomic_cmpxchg(&lock->val, val, new); + if (old == val) + break; + + val = old; + } +} + /** * xchg_tail - Put in the new queue tail code word & retrieve previous one * @lock : Pointer to queue spinlock structure @@ -109,6 +194,7 @@ static __always_inline u32 xchg_tail(str } return old; } +#endif /* _Q_PENDING_BITS == 8 */ /** * queue_spin_lock_slowpath - acquire the queue spinlock @@ -173,8 +259,13 @@ void queue_spin_lock_slowpath(struct qsp * we're pending, wait for the owner to go away. * * *,1,1 -> *,1,0 + * + * this wait loop must be a load-acquire such that we match the + * store-release that clears the locked bit and create lock + * sequentiality; this because not all clear_pending_set_locked() + * implementations imply full barriers. */ - while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK) + while ((val = smp_load_acquire(&lock->val.counter)) & _Q_LOCKED_MASK) cpu_relax(); /* @@ -182,15 +273,7 @@ void queue_spin_lock_slowpath(struct qsp * * *,1,0 -> *,0,1 */ - for (;;) { - new = (val & ~_Q_PENDING_MASK) | _Q_LOCKED_VAL; - - old = atomic_cmpxchg(&lock->val, val, new); - if (old == val) - break; - - val = old; - } + clear_pending_set_locked(lock, val); return; /*
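[Note: as a sanity check on the layout games above -- with _Q_PENDING_BITS == 8 the locked and pending bytes together form the low 16-bit halfword and the tail the high halfword, which is what lets clear_pending_set_locked() be a single halfword store and xchg_tail() a 16-bit xchg. A user-space sketch of that layout, assuming little-endian; the tail value is arbitrary and this is not the kernel code.]

/* Layout sketch for the _Q_PENDING_BITS == 8 case (little-endian assumed). */
#include <stdio.h>
#include <stdint.h>
#include <assert.h>

#define _Q_LOCKED_VAL   (1U << 0)
#define _Q_PENDING_VAL  (1U << 8)

struct __qspinlock {
        union {
                uint32_t val;
                struct {
                        uint16_t locked_pending;
                        uint16_t tail;
                };
        };
};

int main(void)
{
        struct __qspinlock l = { .val = _Q_PENDING_VAL | (3U << 18) };

        /* clear_pending_set_locked(): *,1,0 -> *,0,1 with one halfword store */
        l.locked_pending = _Q_LOCKED_VAL;
        assert(l.val == (_Q_LOCKED_VAL | (3U << 18)));  /* tail untouched */

        /* xchg_tail(): the tail lives entirely in the high halfword */
        uint16_t old_tail = l.tail;
        l.tail = 0;
        printf("old tail halfword: 0x%04x, val now 0x%08x\n", old_tail, l.val);
        return 0;
}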
XXX: merge into the pending bit patch.. It is possible so observe the pending bit without the locked bit when the last owner has just released but the pending owner has not yet taken ownership. In this case we would normally queue -- because the pending bit is already taken. However, in this case the pending bit is guaranteed to be released 'soon', therefore wait for it and avoid queueing. Signed-off-by: Peter Zijlstra <peterz at infradead.org> --- kernel/locking/qspinlock.c | 10 ++++++++++ 1 file changed, 10 insertions(+) Index: linux-2.6/kernel/locking/qspinlock.c ==================================================================--- linux-2.6.orig/kernel/locking/qspinlock.c +++ linux-2.6/kernel/locking/qspinlock.c @@ -226,6 +226,16 @@ void queue_spin_lock_slowpath(struct qsp BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS)); /* + * wait for in-progress pending->locked hand-overs + * + * 0,1,0 -> 0,0,1 + */ + if (val == _Q_PENDING_VAL) { + while ((val = atomic_read(&lock->val)) == _Q_PENDING_VAL) + cpu_relax(); + } + + /* * trylock || pending * * 0,0,0 -> 0,0,1 ; trylock
Peter Zijlstra
2014-Jun-15 12:47 UTC
[PATCH 07/11] qspinlock: Use a simple write to grab the lock, if applicable
From: Waiman Long <Waiman.Long at hp.com> Currently, atomic_cmpxchg() is used to get the lock. However, this is not really necessary if there is more than one task in the queue and the queue head don't need to reset the queue code word. For that case, a simple write to set the lock bit is enough as the queue head will be the only one eligible to get the lock as long as it checks that both the lock and pending bits are not set. The current pending bit waiting code will ensure that the bit will not be set as soon as the queue code word (tail) in the lock is set. With that change, the are some slight improvement in the performance of the queue spinlock in the 5M loop micro-benchmark run on a 4-socket Westere-EX machine as shown in the tables below. [Standalone/Embedded - same node] # of tasks Before patch After patch %Change ---------- ----------- ---------- ------- 3 2324/2321 2248/2265 -3%/-2% 4 2890/2896 2819/2831 -2%/-2% 5 3611/3595 3522/3512 -2%/-2% 6 4281/4276 4173/4160 -3%/-3% 7 5018/5001 4875/4861 -3%/-3% 8 5759/5750 5563/5568 -3%/-3% [Standalone/Embedded - different nodes] # of tasks Before patch After patch %Change ---------- ----------- ---------- ------- 3 12242/12237 12087/12093 -1%/-1% 4 10688/10696 10507/10521 -2%/-2% It was also found that this change produced a much bigger performance improvement in the newer IvyBridge-EX chip and was essentially to close the performance gap between the ticket spinlock and queue spinlock. The disk workload of the AIM7 benchmark was run on a 4-socket Westmere-EX machine with both ext4 and xfs RAM disks at 3000 users on a 3.14 based kernel. The results of the test runs were: AIM7 XFS Disk Test kernel JPM Real Time Sys Time Usr Time ----- --- --------- -------- -------- ticketlock 5678233 3.17 96.61 5.81 qspinlock 5750799 3.13 94.83 5.97 AIM7 EXT4 Disk Test kernel JPM Real Time Sys Time Usr Time ----- --- --------- -------- -------- ticketlock 1114551 16.15 509.72 7.11 qspinlock 2184466 8.24 232.99 6.01 The ext4 filesystem run had a much higher spinlock contention than the xfs filesystem run. The "ebizzy -m" test was also run with the following results: kernel records/s Real Time Sys Time Usr Time ----- --------- --------- -------- -------- ticketlock 2075 10.00 216.35 3.49 qspinlock 3023 10.00 198.20 4.80 Signed-off-by: Waiman Long <Waiman.Long at hp.com> Signed-off-by: Peter Zijlstra <peterz at infradead.org> --- kernel/locking/qspinlock.c | 59 ++++++++++++++++++++++++++++++++------------- 1 file changed, 43 insertions(+), 16 deletions(-) --- a/kernel/locking/qspinlock.c +++ b/kernel/locking/qspinlock.c @@ -93,24 +93,33 @@ static inline struct mcs_spinlock *decod * By using the whole 2nd least significant byte for the pending bit, we * can allow better optimization of the lock acquisition for the pending * bit holder. + * + * This internal structure is also used by the set_locked function which + * is not restricted to _Q_PENDING_BITS == 8. */ -#if _Q_PENDING_BITS == 8 - struct __qspinlock { union { atomic_t val; - struct { #ifdef __LITTLE_ENDIAN + u8 locked; + struct { u16 locked_pending; u16 tail; + }; #else + struct { u16 tail; u16 locked_pending; -#endif }; + struct { + u8 reserved[3]; + u8 locked; + }; +#endif }; }; +#if _Q_PENDING_BITS == 8 /** * clear_pending_set_locked - take ownership and clear the pending bit. 
* @lock: Pointer to queue spinlock structure @@ -197,6 +206,19 @@ static __always_inline u32 xchg_tail(str #endif /* _Q_PENDING_BITS == 8 */ /** + * set_locked - Set the lock bit and own the lock + * @lock: Pointer to queue spinlock structure + * + * *,*,0 -> *,0,1 + */ +static __always_inline void set_locked(struct qspinlock *lock) +{ + struct __qspinlock *l = (void *)lock; + + ACCESS_ONCE(l->locked) = _Q_LOCKED_VAL; +} + +/** * queue_spin_lock_slowpath - acquire the queue spinlock * @lock: Pointer to queue spinlock structure * @val: Current value of the queue spinlock 32-bit word @@ -328,10 +350,13 @@ void queue_spin_lock_slowpath(struct qsp /* * we're at the head of the waitqueue, wait for the owner & pending to * go away. + * Load-acquired is used here because the set_locked() + * function below may not be a full memory barrier. * * *,x,y -> *,0,0 */ - while ((val = atomic_read(&lock->val)) & _Q_LOCKED_PENDING_MASK) + while ((val = smp_load_acquire(&lock->val.counter)) & + _Q_LOCKED_PENDING_MASK) cpu_relax(); /* @@ -339,15 +364,19 @@ void queue_spin_lock_slowpath(struct qsp * * n,0,0 -> 0,0,1 : lock, uncontended * *,0,0 -> *,0,1 : lock, contended + * + * If the queue head is the only one in the queue (lock value == tail), + * clear the tail code and grab the lock. Otherwise, we only need + * to grab the lock. */ for (;;) { - new = _Q_LOCKED_VAL; - if (val != tail) - new |= val; - - old = atomic_cmpxchg(&lock->val, val, new); - if (old == val) + if (val != tail) { + set_locked(lock); break; + } + old = atomic_cmpxchg(&lock->val, val, _Q_LOCKED_VAL); + if (old == val) + goto release; /* No contention */ val = old; } @@ -355,12 +384,10 @@ void queue_spin_lock_slowpath(struct qsp /* * contended path; wait for next, release. */ - if (new != _Q_LOCKED_VAL) { - while (!(next = ACCESS_ONCE(node->next))) - cpu_relax(); + while (!(next = ACCESS_ONCE(node->next))) + cpu_relax(); - arch_mcs_spin_unlock_contended(&next->locked); - } + arch_mcs_spin_unlock_contended(&next->locked); release: /*
Peter Zijlstra
2014-Jun-15 12:47 UTC
[PATCH 08/11] qspinlock: Revert to test-and-set on hypervisors
When we detect a hypervisor (!paravirt, see later patches), revert to a simple test-and-set lock to avoid the horrors of queue preemption. Signed-off-by: Peter Zijlstra <peterz at infradead.org> --- arch/x86/include/asm/qspinlock.h | 14 ++++++++++++++ include/asm-generic/qspinlock.h | 7 +++++++ kernel/locking/qspinlock.c | 3 +++ 3 files changed, 24 insertions(+) --- a/arch/x86/include/asm/qspinlock.h +++ b/arch/x86/include/asm/qspinlock.h @@ -1,6 +1,7 @@ #ifndef _ASM_X86_QSPINLOCK_H #define _ASM_X86_QSPINLOCK_H +#include <asm/cpufeature.h> #include <asm-generic/qspinlock_types.h> #if !defined(CONFIG_X86_OOSTORE) && !defined(CONFIG_X86_PPRO_FENCE) @@ -20,6 +21,19 @@ static inline void queue_spin_unlock(str #endif /* !CONFIG_X86_OOSTORE && !CONFIG_X86_PPRO_FENCE */ +#define virt_queue_spin_lock virt_queue_spin_lock + +static inline bool virt_queue_spin_lock(struct qspinlock *lock) +{ + if (!static_cpu_has(X86_FEATURE_HYPERVISOR)) + return false; + + while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0) + cpu_relax(); + + return true; +} + #include <asm-generic/qspinlock.h> #endif /* _ASM_X86_QSPINLOCK_H */ --- a/include/asm-generic/qspinlock.h +++ b/include/asm-generic/qspinlock.h @@ -98,6 +98,13 @@ static __always_inline void queue_spin_u } #endif +#ifndef virt_queue_spin_lock +static __always_inline bool virt_queue_spin_lock(struct qspinlock *lock) +{ + return false; +} +#endif + /* * Initializier */ --- a/kernel/locking/qspinlock.c +++ b/kernel/locking/qspinlock.c @@ -247,6 +247,9 @@ void queue_spin_lock_slowpath(struct qsp BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS)); + if (virt_queue_spin_lock(lock)) + return; + /* * wait for in-progress pending->locked hand-overs *
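[Note: the hypervisor fallback above is just a classic test-and-set spinlock on the lock word. For reference, a stand-alone C11 equivalent of that loop; function names are made up and this is an illustration, not the kernel code.]

/* C11 sketch of the test-and-set fallback used under a hypervisor. */
#include <stdatomic.h>

#define _Q_LOCKED_VAL   (1U << 0)

static void tas_lock(atomic_uint *lock)
{
        unsigned int expected;

        for (;;) {
                expected = 0;
                if (atomic_compare_exchange_weak(lock, &expected, _Q_LOCKED_VAL))
                        return;
                /* no queueing, no fairness: just retry until the lock is free */
        }
}

static void tas_unlock(atomic_uint *lock)
{
        atomic_store_explicit(lock, 0, memory_order_release);
}

int main(void)
{
        atomic_uint lock = 0;

        tas_lock(&lock);
        tas_unlock(&lock);
        return 0;
}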
Peter Zijlstra
2014-Jun-15 12:47 UTC
[PATCH 09/11] pvqspinlock, x86: Rename paravirt_ticketlocks_enabled
From: Waiman Long <Waiman.Long at hp.com> This patch renames the paravirt_ticketlocks_enabled static key to a more generic paravirt_spinlocks_enabled name. Signed-off-by: Waiman Long <Waiman.Long at hp.com> Signed-off-by: Peter Zijlstra <peterz at infradead.org> --- arch/x86/include/asm/spinlock.h | 4 ++-- arch/x86/kernel/kvm.c | 2 +- arch/x86/kernel/paravirt-spinlocks.c | 4 ++-- arch/x86/xen/spinlock.c | 2 +- 4 files changed, 6 insertions(+), 6 deletions(-) --- a/arch/x86/include/asm/spinlock.h +++ b/arch/x86/include/asm/spinlock.h @@ -39,7 +39,7 @@ /* How long a lock should spin before we consider blocking */ #define SPIN_THRESHOLD (1 << 15) -extern struct static_key paravirt_ticketlocks_enabled; +extern struct static_key paravirt_spinlocks_enabled; static __always_inline bool static_key_false(struct static_key *key); #ifdef CONFIG_QUEUE_SPINLOCK @@ -150,7 +150,7 @@ static inline void __ticket_unlock_slowp static __always_inline void arch_spin_unlock(arch_spinlock_t *lock) { if (TICKET_SLOWPATH_FLAG && - static_key_false(¶virt_ticketlocks_enabled)) { + static_key_false(¶virt_spinlocks_enabled)) { arch_spinlock_t prev; prev = *lock; --- a/arch/x86/kernel/kvm.c +++ b/arch/x86/kernel/kvm.c @@ -819,7 +819,7 @@ static __init int kvm_spinlock_init_jump if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) return 0; - static_key_slow_inc(¶virt_ticketlocks_enabled); + static_key_slow_inc(¶virt_spinlocks_enabled); printk(KERN_INFO "KVM setup paravirtual spinlock\n"); return 0; --- a/arch/x86/kernel/paravirt-spinlocks.c +++ b/arch/x86/kernel/paravirt-spinlocks.c @@ -16,5 +16,5 @@ struct pv_lock_ops pv_lock_ops = { }; EXPORT_SYMBOL(pv_lock_ops); -struct static_key paravirt_ticketlocks_enabled = STATIC_KEY_INIT_FALSE; -EXPORT_SYMBOL(paravirt_ticketlocks_enabled); +struct static_key paravirt_spinlocks_enabled = STATIC_KEY_INIT_FALSE; +EXPORT_SYMBOL(paravirt_spinlocks_enabled); --- a/arch/x86/xen/spinlock.c +++ b/arch/x86/xen/spinlock.c @@ -293,7 +293,7 @@ static __init int xen_init_spinlocks_jum if (!xen_domain()) return 0; - static_key_slow_inc(¶virt_ticketlocks_enabled); + static_key_slow_inc(¶virt_spinlocks_enabled); return 0; } early_initcall(xen_init_spinlocks_jump);
Add minimal paravirt support. The code aims for minimal impact on the native case. On the lock side we add one jump label (asm_goto) and 4 paravirt callee saved calls that default to NOPs. The only effects are the extra NOPs and some pointless MOVs to accomodate the calling convention. No register spills happen because of this (x86_64). On the unlock side we have one paravirt callee saved call, which defaults to the actual unlock sequence: "movb $0, (%rdi)" and a NOP. The actual paravirt code comes in 3 parts; - init_node; this initializes the extra data members required for PV state. PV state data is kept 1 cacheline ahead of the regular data. - link_and_wait_node/kick_node; these are paired with the regular MCS queueing and are placed resp. before/after the paired MCS ops. - wait_head/queue_unlock; the interesting part here is finding the head node to kick. Tracking the head is done in two parts, firstly the pv_wait_head will store its cpu number in whichever node is pointed to by the tail part of the lock word. Secondly, pv_link_and_wait_node() will propagate the existing head from the old to the new tail node. Signed-off-by: Peter Zijlstra <peterz at infradead.org> --- arch/x86/include/asm/paravirt.h | 39 +++++++ arch/x86/include/asm/paravirt_types.h | 15 ++ arch/x86/include/asm/qspinlock.h | 25 ++++ arch/x86/kernel/paravirt-spinlocks.c | 22 ++++ arch/x86/kernel/paravirt_patch_32.c | 7 + arch/x86/kernel/paravirt_patch_64.c | 7 + include/asm-generic/qspinlock.h | 11 ++ kernel/locking/qspinlock.c | 179 +++++++++++++++++++++++++++++++++- 8 files changed, 302 insertions(+), 3 deletions(-) Index: linux-2.6/arch/x86/include/asm/paravirt.h ==================================================================--- linux-2.6.orig/arch/x86/include/asm/paravirt.h +++ linux-2.6/arch/x86/include/asm/paravirt.h @@ -712,6 +712,44 @@ static inline void __set_fixmap(unsigned #if defined(CONFIG_SMP) && defined(CONFIG_PARAVIRT_SPINLOCKS) +#ifdef CONFIG_QUEUE_SPINLOCK + +static __always_inline void pv_init_node(struct mcs_spinlock *node) +{ + PVOP_VCALLEE1(pv_lock_ops.init_node, node); +} + +static __always_inline void pv_link_and_wait_node(u32 old, struct mcs_spinlock *node) +{ + PVOP_VCALLEE2(pv_lock_ops.link_and_wait_node, old, node); +} + +static __always_inline void pv_kick_node(struct mcs_spinlock *node) +{ + PVOP_VCALLEE1(pv_lock_ops.kick_node, node); +} + +static __always_inline void pv_wait_head(struct qspinlock *lock) +{ + PVOP_VCALLEE1(pv_lock_ops.wait_head, lock); +} + +static __always_inline void pv_queue_unlock(struct qspinlock *lock) +{ + PVOP_VCALLEE1(pv_lock_ops.queue_unlock, lock); +} + +static __always_inline void pv_wait(int *ptr, int val) +{ + PVOP_VCALL2(pv_lock_ops.wait, ptr, val); +} + +static __always_inline void pv_kick(int cpu) +{ + PVOP_VCALL1(pv_lock_ops.kick, cpu); +} + +#else static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, __ticket_t ticket) { @@ -723,6 +761,7 @@ static __always_inline void __ticket_unl { PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket); } +#endif #endif Index: linux-2.6/arch/x86/include/asm/paravirt_types.h ==================================================================--- linux-2.6.orig/arch/x86/include/asm/paravirt_types.h +++ linux-2.6/arch/x86/include/asm/paravirt_types.h @@ -326,6 +326,9 @@ struct pv_mmu_ops { phys_addr_t phys, pgprot_t flags); }; +struct mcs_spinlock; +struct qspinlock; + struct arch_spinlock; #ifdef CONFIG_SMP #include <asm/spinlock_types.h> @@ -334,8 +337,20 @@ typedef u16 __ticket_t; #endif struct 
pv_lock_ops { +#ifdef CONFIG_QUEUE_SPINLOCK + struct paravirt_callee_save init_node; + struct paravirt_callee_save link_and_wait_node; + struct paravirt_callee_save kick_node; + + struct paravirt_callee_save wait_head; + struct paravirt_callee_save queue_unlock; + + void (*wait)(int *ptr, int val); + void (*kick)(int cpu); +#else struct paravirt_callee_save lock_spinning; void (*unlock_kick)(struct arch_spinlock *lock, __ticket_t ticket); +#endif }; /* This contains all the paravirt structures: we get a convenient Index: linux-2.6/arch/x86/include/asm/qspinlock.h ==================================================================--- linux-2.6.orig/arch/x86/include/asm/qspinlock.h +++ linux-2.6/arch/x86/include/asm/qspinlock.h @@ -3,24 +3,45 @@ #include <asm/cpufeature.h> #include <asm-generic/qspinlock_types.h> +#include <asm/paravirt.h> #if !defined(CONFIG_X86_OOSTORE) && !defined(CONFIG_X86_PPRO_FENCE) -#define queue_spin_unlock queue_spin_unlock /** * queue_spin_unlock - release a queue spinlock * @lock : Pointer to queue spinlock structure * * An effective smp_store_release() on the least-significant byte. */ -static inline void queue_spin_unlock(struct qspinlock *lock) +static inline void native_queue_unlock(struct qspinlock *lock) { barrier(); ACCESS_ONCE(*(u8 *)lock) = 0; } +#else + +static inline void native_queue_unlock(struct qspinlock *lock) +{ + atomic_dec(&lock->val); +} + #endif /* !CONFIG_X86_OOSTORE && !CONFIG_X86_PPRO_FENCE */ +#define queue_spin_unlock queue_spin_unlock + +#ifdef CONFIG_PARAVIRT_SPINLOCKS +static inline void queue_spin_unlock(struct qspinlock *lock) +{ + pv_queue_unlock(lock); +} +#else +static inline void queue_spin_unlock(struct qspinlock *lock) +{ + native_queue_unlock(lock); +} +#endif + #define virt_queue_spin_lock virt_queue_spin_lock static inline bool virt_queue_spin_lock(struct qspinlock *lock) Index: linux-2.6/arch/x86/kernel/paravirt-spinlocks.c ==================================================================--- linux-2.6.orig/arch/x86/kernel/paravirt-spinlocks.c +++ linux-2.6/arch/x86/kernel/paravirt-spinlocks.c @@ -8,11 +8,33 @@ #include <asm/paravirt.h> +#ifdef CONFIG_SMP +#ifdef CONFIG_QUEUE_SPINLOCK +void __native_queue_unlock(struct qspinlock *lock) +{ + native_queue_unlock(lock); +} +PV_CALLEE_SAVE_REGS_THUNK(__native_queue_unlock); +#endif +#endif + struct pv_lock_ops pv_lock_ops = { #ifdef CONFIG_SMP +#ifdef CONFIG_QUEUE_SPINLOCK + .init_node = __PV_IS_CALLEE_SAVE(paravirt_nop), + .link_and_wait_node = __PV_IS_CALLEE_SAVE(paravirt_nop), + .kick_node = __PV_IS_CALLEE_SAVE(paravirt_nop), + + .wait_head = __PV_IS_CALLEE_SAVE(paravirt_nop), + .queue_unlock = PV_CALLEE_SAVE(__native_queue_unlock), + + .wait = paravirt_nop, + .kick = paravirt_nop, +#else .lock_spinning = __PV_IS_CALLEE_SAVE(paravirt_nop), .unlock_kick = paravirt_nop, #endif +#endif }; EXPORT_SYMBOL(pv_lock_ops); Index: linux-2.6/arch/x86/kernel/paravirt_patch_32.c ==================================================================--- linux-2.6.orig/arch/x86/kernel/paravirt_patch_32.c +++ linux-2.6/arch/x86/kernel/paravirt_patch_32.c @@ -12,6 +12,10 @@ DEF_NATIVE(pv_mmu_ops, read_cr3, "mov %c DEF_NATIVE(pv_cpu_ops, clts, "clts"); DEF_NATIVE(pv_cpu_ops, read_tsc, "rdtsc"); +#if defined(CONFIG_PARAVIRT_SPINLOCKS) && defined(CONFIG_QUEUE_SPINLOCK) +DEF_NATIVE(pv_lock_ops, queue_unlock, "movb $0, (%eax)"); +#endif + unsigned paravirt_patch_ident_32(void *insnbuf, unsigned len) { /* arg in %eax, return in %eax */ @@ -47,6 +51,9 @@ unsigned native_patch(u8 type, u16 clobb 
PATCH_SITE(pv_mmu_ops, write_cr3); PATCH_SITE(pv_cpu_ops, clts); PATCH_SITE(pv_cpu_ops, read_tsc); +#if defined(CONFIG_PARAVIRT_SPINLOCKS) && defined(CONFIG_QUEUE_SPINLOCK) + PATCH_SITE(pv_lock_ops, queue_unlock); +#endif patch_site: ret = paravirt_patch_insns(ibuf, len, start, end); Index: linux-2.6/arch/x86/kernel/paravirt_patch_64.c ==================================================================--- linux-2.6.orig/arch/x86/kernel/paravirt_patch_64.c +++ linux-2.6/arch/x86/kernel/paravirt_patch_64.c @@ -22,6 +22,10 @@ DEF_NATIVE(pv_cpu_ops, swapgs, "swapgs") DEF_NATIVE(, mov32, "mov %edi, %eax"); DEF_NATIVE(, mov64, "mov %rdi, %rax"); +#if defined(CONFIG_PARAVIRT_SPINLOCKS) && defined(CONFIG_QUEUE_SPINLOCK) +DEF_NATIVE(pv_lock_ops, queue_unlock, "movb $0, (%rdi)"); +#endif + unsigned paravirt_patch_ident_32(void *insnbuf, unsigned len) { return paravirt_patch_insns(insnbuf, len, @@ -61,6 +65,9 @@ unsigned native_patch(u8 type, u16 clobb PATCH_SITE(pv_cpu_ops, clts); PATCH_SITE(pv_mmu_ops, flush_tlb_single); PATCH_SITE(pv_cpu_ops, wbinvd); +#if defined(CONFIG_PARAVIRT_SPINLOCKS) && defined(CONFIG_QUEUE_SPINLOCK) + PATCH_SITE(pv_lock_ops, queue_unlock); +#endif patch_site: ret = paravirt_patch_insns(ibuf, len, start, end); Index: linux-2.6/include/asm-generic/qspinlock.h ==================================================================--- linux-2.6.orig/include/asm-generic/qspinlock.h +++ linux-2.6/include/asm-generic/qspinlock.h @@ -105,6 +105,17 @@ static __always_inline bool virt_queue_s } #endif +#ifdef CONFIG_PARAVIRT_SPINLOCKS +struct mcs_spinlock; + +extern void __pv_init_node(struct mcs_spinlock *node); +extern void __pv_link_and_wait_node(u32 old, struct mcs_spinlock *node); +extern void __pv_kick_node(struct mcs_spinlock *node); + +extern void __pv_wait_head(struct qspinlock *lock); +extern void __pv_queue_unlock(struct qspinlock *lock); +#endif + /* * Initializier */ Index: linux-2.6/kernel/locking/qspinlock.c ==================================================================--- linux-2.6.orig/kernel/locking/qspinlock.c +++ linux-2.6/kernel/locking/qspinlock.c @@ -56,13 +56,33 @@ #include "mcs_spinlock.h" +#ifdef CONFIG_PARAVIRT_SPINLOCKS + +#define MAX_NODES 8 + +static inline bool pv_enabled(void) +{ + return static_key_false(¶virt_spinlocks_enabled); +} +#else /* !PARAVIRT_SPINLOCKS */ + +#define MAX_NODES 4 + +static inline bool pv_enabled(void) +{ + return false; +} +#endif /* PARAVIRT_SPINLOCKS */ + /* * Per-CPU queue node structures; we can never have more than 4 nested * contexts: task, softirq, hardirq, nmi. * * Exactly fits one cacheline. + * + * PV doubles the storage and uses the second cacheline for PV state. */ -static DEFINE_PER_CPU_ALIGNED(struct mcs_spinlock, mcs_nodes[4]); +static DEFINE_PER_CPU_ALIGNED(struct mcs_spinlock, mcs_nodes[MAX_NODES]); /* * We must be able to distinguish between no-tail and the tail at 0:0, @@ -218,6 +238,156 @@ static __always_inline void set_locked(s ACCESS_ONCE(l->locked) = _Q_LOCKED_VAL; } +#ifdef CONFIG_PARAVIRT_SPINLOCKS + +/* + * Write a comment about how all this works... 
+ */ + +#define _Q_LOCKED_SLOW (2U << _Q_LOCKED_OFFSET) + +struct pv_node { + struct mcs_spinlock mcs; + struct mcs_spinlock __offset[3]; + int cpu, head; +}; + +#define INVALID_HEAD -1 +#define NO_HEAD nr_cpu_ids + +void __pv_init_node(struct mcs_spinlock *node) +{ + struct pv_node *pn = (struct pv_node *)node; + + BUILD_BUG_ON(sizeof(struct pv_node) > 5*sizeof(struct mcs_spinlock)); + + pn->cpu = smp_processor_id(); + pn->head = INVALID_HEAD; +} + +static inline struct pv_node *pv_decode_tail(u32 tail) +{ + return (struct pv_node *)decode_tail(tail); +} + +void __pv_link_and_wait_node(u32 old, struct mcs_spinlock *node) +{ + struct pv_node *ppn, *pn = (struct pv_node *)node; + unsigned int count; + + if (!(old & _Q_TAIL_MASK)) { + pn->head = NO_HEAD; + return; + } + + ppn = pv_decode_tail(old); + ACCESS_ONCE(ppn->mcs.next) = node; + + while (ppn->head == INVALID_HEAD) + cpu_relax(); + + pn->head = ppn->head; + + for (;;) { + count = SPIN_THRESHOLD; + + do { + if (smp_load_acquire(&node->locked)) + return; + + cpu_relax(); + } while (--count); + + pv_wait(&node->locked, 1); + } +} + +void __pv_kick_node(struct mcs_spinlock *node) +{ + struct pv_node *pn = (struct pv_node *)node; + + pv_kick(pn->cpu); +} + +void __pv_wait_head(struct qspinlock *lock) +{ + unsigned int count; + struct pv_node *pn; + int val, old, new; + + for (;;) { + count = SPIN_THRESHOLD; + + do { + val = smp_load_acquire(&lock->val.counter); + if (!(val & _Q_LOCKED_PENDING_MASK)) + return; + } while (--count); + + do { + pn = pv_decode_tail(atomic_read(&lock->val)); + + while (pn->head == INVALID_HEAD) + cpu_relax(); + + pn->head = smp_processor_id(); + + } while (pn != pv_decode_tail(atomic_read(&lock->val))); + + /* + * Set _Q_LOCKED_SLOW; bail when the lock is free. + */ + val = atomic_read(&lock->val); + for (;;) { + if (!(val & _Q_LOCKED_PENDING_MASK)) + return; + new = val | _Q_LOCKED_SLOW; + old = atomic_cmpxchg(&lock->val, val, new); + if (old == val) + break; + val = old; + } + + /* XXX 16bit would be better */ + pv_wait(&lock->val.counter, new); + } +} + +static void ___pv_kick_head(struct qspinlock *lock) +{ + struct pv_node *pn; + + pn = pv_decode_tail(atomic_read(&lock->val)); + + while (pn->head == INVALID_HEAD) + cpu_relax(); + + if (WARN_ON_ONCE(pn->head == NO_HEAD)) + return; + + pv_kick(pn->head); +} + +void __pv_queue_unlock(struct qspinlock *lock) +{ + int val = atomic_read(&lock->val); + + native_queue_unlock(lock); + + if (val & _Q_LOCKED_SLOW) + ___pv_kick_head(lock); +} + +#else + +static inline void pv_init_node(struct mcs_spinlock *node) { } +static inline void pv_link_and_wait_node(u32 old, struct mcs_spinlock *node) { } +static inline void pv_kick_node(struct mcs_spinlock *node) { } + +static inline void pv_wait_head(struct qspinlock *lock) { } + +#endif + /** * queue_spin_lock_slowpath - acquire the queue spinlock * @lock: Pointer to queue spinlock structure @@ -247,6 +417,9 @@ void queue_spin_lock_slowpath(struct qsp BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS)); + if (pv_enabled()) + goto queue; + if (virt_queue_spin_lock(lock)) return; @@ -323,6 +496,7 @@ void queue_spin_lock_slowpath(struct qsp node += idx; node->locked = 0; node->next = NULL; + pv_init_node(node); /* * We touched a (possibly) cold cacheline in the per-cpu queue node; @@ -343,6 +517,7 @@ void queue_spin_lock_slowpath(struct qsp /* * if there was a previous node; link it and wait. 
*/ + pv_link_and_wait_node(old, node); if (old & _Q_TAIL_MASK) { prev = decode_tail(old); ACCESS_ONCE(prev->next) = node; @@ -358,6 +533,7 @@ void queue_spin_lock_slowpath(struct qsp * * *,x,y -> *,0,0 */ + pv_wait_head(lock); while ((val = smp_load_acquire(&lock->val.counter)) & _Q_LOCKED_PENDING_MASK) cpu_relax(); @@ -391,6 +567,7 @@ void queue_spin_lock_slowpath(struct qsp cpu_relax(); arch_mcs_spin_unlock_contended(&next->locked); + pv_kick_node(next); release: /*
Signed-off-by: Peter Zijlstra <peterz at infradead.org> --- arch/x86/kernel/kvm.c | 58 ++++++++++++++++++++++++++++++++++++++++++++++++++ kernel/Kconfig.locks | 2 - 2 files changed, 59 insertions(+), 1 deletion(-) Index: linux-2.6/arch/x86/kernel/kvm.c ==================================================================--- linux-2.6.orig/arch/x86/kernel/kvm.c +++ linux-2.6/arch/x86/kernel/kvm.c @@ -569,6 +569,7 @@ static void kvm_kick_cpu(int cpu) kvm_hypercall2(KVM_HC_KICK_CPU, flags, apicid); } +#ifndef CONFIG_QUEUE_SPINLOCK enum kvm_contention_stat { TAKEN_SLOW, TAKEN_SLOW_PICKUP, @@ -796,6 +797,51 @@ static void kvm_unlock_kick(struct arch_ } } } +#else /* QUEUE_SPINLOCK */ + +#include <asm-generic/qspinlock.h> + +PV_CALLEE_SAVE_REGS_THUNK(__pv_init_node); +PV_CALLEE_SAVE_REGS_THUNK(__pv_link_and_wait_node); +PV_CALLEE_SAVE_REGS_THUNK(__pv_kick_node); + +PV_CALLEE_SAVE_REGS_THUNK(__pv_wait_head); +PV_CALLEE_SAVE_REGS_THUNK(__pv_queue_unlock); + +void kvm_wait(int *ptr, int val) +{ + unsigned long flags; + + if (in_nmi()) + return; + + /* + * Make sure an interrupt handler can't upset things in a + * partially setup state. + */ + local_irq_save(flags); + + /* + * check again make sure it didn't become free while + * we weren't looking. + */ + if (ACCESS_ONCE(*ptr) != val) + goto out; + + /* + * halt until it's our turn and kicked. Note that we do safe halt + * for irq enabled case to avoid hang when lock info is overwritten + * in irq spinlock slowpath and no spurious interrupt occur to save us. + */ + if (arch_irqs_disabled_flags(flags)) + halt(); + else + safe_halt(); + +out: + local_irq_restore(flags); +} +#endif /* QUEUE_SPINLOCK */ /* * Setup pv_lock_ops to exploit KVM_FEATURE_PV_UNHALT if present. @@ -808,8 +854,20 @@ void __init kvm_spinlock_init(void) if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT)) return; +#ifdef CONFIG_QUEUE_SPINLOCK + pv_lock_ops.init_node = PV_CALLEE_SAVE(__pv_init_node); + pv_lock_ops.link_and_wait_node = PV_CALLEE_SAVE(__pv_link_and_wait_node); + pv_lock_ops.kick_node = PV_CALLEE_SAVE(__pv_kick_node); + + pv_lock_ops.wait_head = PV_CALLEE_SAVE(__pv_wait_head); + pv_lock_ops.queue_unlock = PV_CALLEE_SAVE(__pv_queue_unlock); + + pv_lock_ops.wait = kvm_wait; + pv_lock_ops.kick = kvm_kick_cpu; +#else pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(kvm_lock_spinning); pv_lock_ops.unlock_kick = kvm_unlock_kick; +#endif } static __init int kvm_spinlock_init_jump(void) Index: linux-2.6/kernel/Kconfig.locks ==================================================================--- linux-2.6.orig/kernel/Kconfig.locks +++ linux-2.6/kernel/Kconfig.locks @@ -229,7 +229,7 @@ config ARCH_USE_QUEUE_SPINLOCK config QUEUE_SPINLOCK def_bool y if ARCH_USE_QUEUE_SPINLOCK - depends on SMP && !PARAVIRT_SPINLOCKS + depends on SMP && !(PARAVIRT_SPINLOCKS && XEN) config ARCH_USE_QUEUE_RWLOCK bool
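For readers trying to keep the two halves straight, here is a compressed map of where the new hooks end up in the lock slow path, paraphrased from the diffs above (an abridged, illustrative sketch only, not part of the series):

	/* queue_spin_lock_slowpath(), abridged, with the pv hooks marked */
	node += idx;
	node->locked = 0;
	node->next = NULL;
	pv_init_node(node);		/* record our cpu, head = INVALID_HEAD */

	old = xchg_tail(lock, tail);

	pv_link_and_wait_node(old, node); /* inherit head from the old tail, pv_wait() on node->locked */
	if (old & _Q_TAIL_MASK) {
		prev = decode_tail(old);
		ACCESS_ONCE(prev->next) = node;
		arch_mcs_spin_lock_contended(&node->locked);
	}

	pv_wait_head(lock);		/* publish our cpu as head, set _Q_LOCKED_SLOW, pv_wait() */
	while ((val = smp_load_acquire(&lock->val.counter)) & _Q_LOCKED_PENDING_MASK)
		cpu_relax();

	/* ... claim the lock ... */

	arch_mcs_spin_unlock_contended(&next->locked);
	pv_kick_node(next);		/* wake the MCS successor if it went to sleep */

On the unlock side, __pv_queue_unlock() performs the normal byte store and, only if a sleeping head waiter set _Q_LOCKED_SLOW, calls ___pv_kick_head() to wake it.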
Konrad Rzeszutek Wilk
2014-Jun-16 20:49 UTC
[PATCH 01/11] qspinlock: A simple generic 4-byte queue spinlock
On Sun, Jun 15, 2014 at 02:46:58PM +0200, Peter Zijlstra wrote:> From: Waiman Long <Waiman.Long at hp.com> > > This patch introduces a new generic queue spinlock implementation that > can serve as an alternative to the default ticket spinlock. Compared > with the ticket spinlock, this queue spinlock should be almost as fair > as the ticket spinlock. It has about the same speed in single-thread > and it can be much faster in high contention situations especially when > the spinlock is embedded within the data structure to be protected. > > Only in light to moderate contention where the average queue depth > is around 1-3 will this queue spinlock be potentially a bit slower > due to the higher slowpath overhead. > > This queue spinlock is especially suit to NUMA machines with a large > number of cores as the chance of spinlock contention is much higher > in those machines. The cost of contention is also higher because of > slower inter-node memory traffic. > > Due to the fact that spinlocks are acquired with preemption disabled, > the process will not be migrated to another CPU while it is trying > to get a spinlock. Ignoring interrupt handling, a CPU can only be > contending in one spinlock at any one time. Counting soft IRQ, hard > IRQ and NMI, a CPU can only have a maximum of 4 concurrent lock waiting > activities. By allocating a set of per-cpu queue nodes and used them > to form a waiting queue, we can encode the queue node address into a > much smaller 24-bit size (including CPU number and queue node index) > leaving one byte for the lock. > > Please note that the queue node is only needed when waiting for the > lock. Once the lock is acquired, the queue node can be released to > be used later. > > Signed-off-by: Waiman Long <Waiman.Long at hp.com> > Signed-off-by: Peter Zijlstra <peterz at infradead.org>Thank you for the repost. I have some questions about the implementation that hopefully will be easy to answer and said answers I hope can be added in the code to enlighten other folks. See below. .. snip..> Index: linux-2.6/kernel/locking/mcs_spinlock.h > ==================================================================> --- linux-2.6.orig/kernel/locking/mcs_spinlock.h > +++ linux-2.6/kernel/locking/mcs_spinlock.h > @@ -17,6 +17,7 @@ > struct mcs_spinlock { > struct mcs_spinlock *next; > int locked; /* 1 if lock acquired */ > + int count;This could use a comment.> }; > > #ifndef arch_mcs_spin_lock_contended > Index: linux-2.6/kernel/locking/qspinlock.c > ==================================================================> --- /dev/null > +++ linux-2.6/kernel/locking/qspinlock.c > @@ -0,0 +1,197 @@ > +/* > + * Queue spinlock > + * > + * This program is free software; you can redistribute it and/or modify > + * it under the terms of the GNU General Public License as published by > + * the Free Software Foundation; either version 2 of the License, or > + * (at your option) any later version. > + * > + * This program is distributed in the hope that it will be useful, > + * but WITHOUT ANY WARRANTY; without even the implied warranty of > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the > + * GNU General Public License for more details. > + * > + * (C) Copyright 2013-2014 Hewlett-Packard Development Company, L.P. 
> + * > + * Authors: Waiman Long <waiman.long at hp.com> > + * Peter Zijlstra <pzijlstr at redhat.com> > + */ > +#include <linux/smp.h> > +#include <linux/bug.h> > +#include <linux/cpumask.h> > +#include <linux/percpu.h> > +#include <linux/hardirq.h> > +#include <linux/mutex.h> > +#include <asm/qspinlock.h> > + > +/* > + * The basic principle of a queue-based spinlock can best be understood > + * by studying a classic queue-based spinlock implementation called the > + * MCS lock. The paper below provides a good description for this kind > + * of lock. > + * > + * http://www.cise.ufl.edu/tr/DOC/REP-1992-71.pdf > + * > + * This queue spinlock implementation is based on the MCS lock, however to make > + * it fit the 4 bytes we assume spinlock_t to be, and preserve its existing > + * API, we must modify it some. > + * > + * In particular; where the traditional MCS lock consists of a tail pointer > + * (8 bytes) and needs the next pointer (another 8 bytes) of its own node to > + * unlock the next pending (next->locked), we compress both these: {tail, > + * next->locked} into a single u32 value. > + * > + * Since a spinlock disables recursion of its own context and there is a limit > + * to the contexts that can nest; namely: task, softirq, hardirq, nmi, we can > + * encode the tail as and index indicating this context and a cpu number. > + * > + * We can further change the first spinner to spin on a bit in the lock word > + * instead of its node; whereby avoiding the need to carry a node from lock to > + * unlock, and preserving API. > + */ > + > +#include "mcs_spinlock.h" > + > +/* > + * Per-CPU queue node structures; we can never have more than 4 nested > + * contexts: task, softirq, hardirq, nmi. > + * > + * Exactly fits one cacheline. > + */ > +static DEFINE_PER_CPU_ALIGNED(struct mcs_spinlock, mcs_nodes[4]); > + > +/* > + * We must be able to distinguish between no-tail and the tail at 0:0, > + * therefore increment the cpu number by one. > + */ > + > +static inline u32 encode_tail(int cpu, int idx) > +{ > + u32 tail; > + > + tail = (cpu + 1) << _Q_TAIL_CPU_OFFSET; > + tail |= idx << _Q_TAIL_IDX_OFFSET; /* assume < 4 */Should there an ASSSERT (idx < 4) just in case we screw up somehow (I can't figure out how, but that is partially why ASSERTS are added).> + > + return tail; > +} > + > +static inline struct mcs_spinlock *decode_tail(u32 tail) > +{ > + int cpu = (tail >> _Q_TAIL_CPU_OFFSET) - 1; > + int idx = (tail & _Q_TAIL_IDX_MASK) >> _Q_TAIL_IDX_OFFSET; > + > + return per_cpu_ptr(&mcs_nodes[idx], cpu); > +} > + > +/** > + * queue_spin_lock_slowpath - acquire the queue spinlock > + * @lock: Pointer to queue spinlock structure > + * @val: Current value of the queue spinlock 32-bit word > + * > + * (queue tail, lock bit)Except it is not a lock bit. It is a lock uint8_t. Is the queue tail at this point the composite of 'cpu|idx'?> + * > + * fast : slow : unlock > + * : : > + * uncontended (0,0) --:--> (0,1) --------------------------------:--> (*,0) > + * : | ^--------. / : > + * : v \ | : > + * uncontended : (n,x) --+--> (n,0) | :So many CPUn come in right? Is 'n' for the number of CPUs?> + * queue : | ^--' | : > + * : v | : > + * contended : (*,x) --+--> (*,0) -----> (*,1) ---' : > + * queue : ^--' :And here um, what are the '*' for? Are they the four different types of handlers that can be nested? 
So task, sofitrq, hardisk, and nmi?> + * > + */ > +void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val) > +{ > + struct mcs_spinlock *prev, *next, *node; > + u32 new, old, tail; > + int idx; > + > + BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS)); > + > + node = this_cpu_ptr(&mcs_nodes[0]); > + idx = node->count++;If this is the first time we enter this, wouldn't idx end up being 1?> + tail = encode_tail(smp_processor_id(), idx); > + > + node += idx;Meaning we end up skipping the 'mcs_nodes[0]' one altogether - even on the first 'level' (task, softirq, hardirq, nmi)? Won't that cause us to blow past the array when we are nested at the nmi handler?> + node->locked = 0; > + node->next = NULL; > + > + /* > + * trylock || xchg(lock, node) > + * > + * 0,0 -> 0,1 ; trylock > + * p,x -> n,x ; prev = xchg(lock, node)I looked at that for 10 seconds and I was not sure what you meant. Is this related to the MCS document you had pointed to? It would help if you mention that the comments follow the document. (But they don't seem to) I presume what you mean is that if we are the next after the lock-holder we need only to update the 'next' (or the composite value of smp_processor_idx | idx) to point to us. As in, swap the 'L' with 'I' (looking at the doc)> + */ > + for (;;) { > + new = _Q_LOCKED_VAL; > + if (val)Could you add a comment here, like this: /* * N.B. Initially 'val' will have some value (as we are called * after the _Q_LOCKED_VAL could not be set by queue_spin_lock). * But on subsequent iterations, either the lock holder will * decrement the val (queue_spin_unlock - to zero) and we * needn't to record our status in the queue as we have set the * Q_LOCKED_VAL (new) and are the lock holder. Or we are next * in line and need to record our 'next' (aka, smp_processor_id() | idx) * position. */ */> + new = tail | (val & _Q_LOCKED_MASK); > + > + old = atomic_cmpxchg(&lock->val, val, new); > + if (old == val) > + break; > + > + val = old; > + } > + > + /* > + * we won the trylock; forget about queueing. > + */ > + if (new == _Q_LOCKED_VAL) > + goto release; > + > + /* > + * if there was a previous node; link it and wait. > + */ > + if (old & ~_Q_LOCKED_MASK) { > + prev = decode_tail(old); > + ACCESS_ONCE(prev->next) = node; > + > + arch_mcs_spin_lock_contended(&node->locked); > + } > + > + /* > + * we're at the head of the waitqueue, wait for the owner to go away. > + * > + * *,x -> *,0 > + */ > + while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK) > + cpu_relax(); > + > + /* > + * claim the lock: > + * > + * n,0 -> 0,1 : lock, uncontended > + * *,0 -> *,1 : lock, contended > + */ > + for (;;) { > + new = _Q_LOCKED_VAL; > + if (val != tail) > + new |= val;You lost me here. If we are at the head of the queue, and the owner has called queue_spin_unlock (hence made us get out of the 'val = atomic_read' loop, how can val != tail? I suspect it has something to do with the comment, but I am still unsure what it means. Could you help a bit in explaining it in English please?> + > + old = atomic_cmpxchg(&lock->val, val, new); > + if (old == val) > + break; > + > + val = old; > + } > + > + /* > + * contended path; wait for next, release. > + */ > + if (new != _Q_LOCKED_VAL) {Hm, wouldn't it be just easier to do a 'goto restart' where restart label points at the first loop statement? Ah never mind - we have already inserted ourselves in the previous's node. 
But that is confusing - we have done: "prev->next = node;" And then exited out of 'val = atomic_read(&lock->val))' which suggests that queue_spin_unlock has called us. How can we be contended again? Thanks!> + while (!(next = ACCESS_ONCE(node->next))) > + cpu_relax(); > + > + arch_mcs_spin_unlock_contended(&next->locked); > + } > + > +release: > + /* > + * release the node > + */ > + this_cpu_dec(mcs_nodes[0].count); > +} > +EXPORT_SYMBOL(queue_spin_lock_slowpath); > >
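Two of the questions above have short answers that a walk-through makes clearer; the fragment below only restates what the posted code already does, it proposes no change:

	/*
	 * 1) node->count is post-incremented, so the first (task level)
	 *    acquisition on a cpu gets idx == 0 and uses mcs_nodes[0]; a
	 *    softirq hitting the slowpath on the same cpu gets idx == 1,
	 *    hardirq idx == 2, NMI idx == 3, so we never run past
	 *    mcs_nodes[4].
	 */
	idx = node->count++;			/* first entry: idx == 0, count -> 1 */
	node += idx;
	/* ... */
	this_cpu_dec(mcs_nodes[0].count);	/* release: count -> 0 again */

	/*
	 * 2) "val != tail" at the claim step does not mean the owner still
	 *    holds the lock; it means the tail code no longer encodes *our*
	 *    node, because a later cpu queued behind us and overwrote it.
	 *    We still take the lock, but we must leave the tail intact (it
	 *    now describes the new last waiter), and afterwards we hand the
	 *    MCS node lock to node->next: that is the "contended" branch.
	 */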
On Sun, Jun 15, 2014 at 02:46:57PM +0200, Peter Zijlstra wrote:> Since Waiman seems incapable of doing simple things; here's my take on the > paravirt crap. > > The first few patches are taken from Waiman's latest series, but the virt > support is completely new. Its primary aim is to not mess up the native code.OK. I finally cleared some time to look over this and are reading the code in details to make sure I have it clear in mind. I will most likely ask some questions that are naive - hopefully they will lead to the code being self-explanatory for anybody else taking a stab at understanding them when bugs appear.> > I've not stress tested it, but the virt and paravirt (kvm) cases boot on simple > smp guests. I've not done Xen, but the patch should be simple and similar.Looking forward to seeing it. Glancing over the KVM one and comparing it to the original version that Waiman posted it should be fairly simple. Perhaps even some of the code could be shared?> > I ripped out all the unfair nonsense as its not at all required for paravirt > and optimizations that make paravirt better at the cost of code clarity and/or > native performance are just not worth it. > > Also; if we were to ever add some of that unfair nonsense you do so _after_ you > got the simple things working. > > The thing I'm least sure about is the head tracking, I chose to do something > different from what Waiman did, because his is O(nr_cpus) and had the > assumption that guests have small nr_cpus. AFAIK this is not at all true. The > biggest problem I have with what I did is that it contains wait loops itself. > > >
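On the Xen side, one plausible shape for the missing patch is a wait/kick pair that reuses the per-cpu lock_kicker_irq plumbing the existing ticketlock code in arch/x86/xen/spinlock.c already sets up. Strictly an untested sketch; whether the ticketlock plumbing (XEN_SPIN_UNLOCK_VECTOR, the dummy IPI handler binding) carries over unchanged is an assumption here:

	static void xen_qlock_kick(int cpu)
	{
		xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
	}

	static void xen_qlock_wait(int *ptr, int val)
	{
		int irq = __this_cpu_read(lock_kicker_irq);

		/* irq not yet bound, or we're in an NMI: just keep spinning */
		if (irq == -1 || in_nmi())
			return;

		/* clear any stale kick, then re-check before blocking */
		xen_clear_irq_pending(irq);
		barrier();

		if (ACCESS_ONCE(*ptr) == val)
			xen_poll_irq(irq);	/* block until kicked */
	}

These would then be wired up in the Xen spinlock init code the same way the KVM patch fills in pv_lock_ops.wait and pv_lock_ops.kick.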
Waiman Long
2014-Jun-16 21:57 UTC
[PATCH 08/11] qspinlock: Revert to test-and-set on hypervisors
On 06/15/2014 08:47 AM, Peter Zijlstra wrote:> When we detect a hypervisor (!paravirt, see later patches), revert to > a simple test-and-set lock to avoid the horrors of queue preemption. > > Signed-off-by: Peter Zijlstra<peterz at infradead.org> > --- > arch/x86/include/asm/qspinlock.h | 14 ++++++++++++++ > include/asm-generic/qspinlock.h | 7 +++++++ > kernel/locking/qspinlock.c | 3 +++ > 3 files changed, 24 insertions(+) > > --- a/arch/x86/include/asm/qspinlock.h > +++ b/arch/x86/include/asm/qspinlock.h > @@ -1,6 +1,7 @@ > #ifndef _ASM_X86_QSPINLOCK_H > #define _ASM_X86_QSPINLOCK_H > > +#include<asm/cpufeature.h> > #include<asm-generic/qspinlock_types.h> > > #if !defined(CONFIG_X86_OOSTORE)&& !defined(CONFIG_X86_PPRO_FENCE) > @@ -20,6 +21,19 @@ static inline void queue_spin_unlock(str > > #endif /* !CONFIG_X86_OOSTORE&& !CONFIG_X86_PPRO_FENCE */ > > +#define virt_queue_spin_lock virt_queue_spin_lock > + > +static inline bool virt_queue_spin_lock(struct qspinlock *lock) > +{ > + if (!static_cpu_has(X86_FEATURE_HYPERVISOR)) > + return false; > + > + while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0) > + cpu_relax(); > + > + return true; > +} > + > #include<asm-generic/qspinlock.h> > > #endif /* _ASM_X86_QSPINLOCK_H */ > --- a/include/asm-generic/qspinlock.h > +++ b/include/asm-generic/qspinlock.h > @@ -98,6 +98,13 @@ static __always_inline void queue_spin_u > } > #endif > > +#ifndef virt_queue_spin_lock > +static __always_inline bool virt_queue_spin_lock(struct qspinlock *lock) > +{ > + return false; > +} > +#endif > + > /* > * Initializier > */ > --- a/kernel/locking/qspinlock.c > +++ b/kernel/locking/qspinlock.c > @@ -247,6 +247,9 @@ void queue_spin_lock_slowpath(struct qsp > > BUILD_BUG_ON(CONFIG_NR_CPUS>= (1U<< _Q_TAIL_CPU_BITS)); > > + if (virt_queue_spin_lock(lock)) > + return; > + > /* > * wait for in-progress pending->locked hand-overs > *I just wonder if it is better to allow the kernel distributors to decide if unfair lock should be the default for virtual guest. Anyway, I have no objection to that myself. -Longman
I am resending it as my original reply has some HTML code & hence rejected by the mailing lists. On 06/15/2014 08:47 AM, Peter Zijlstra wrote:> > > > +#ifdef CONFIG_PARAVIRT_SPINLOCKS > + > +/* > + * Write a comment about how all this works... > + */ > + > +#define _Q_LOCKED_SLOW (2U<< _Q_LOCKED_OFFSET) > + > +struct pv_node { > + struct mcs_spinlock mcs; > + struct mcs_spinlock __offset[3]; > + int cpu, head; > +};I am wondering why you need the separate cpu and head variables. I thought one will be enough here. The wait code put the cpu number in head, the the kick_cpu code kick the one in cpu which is just the cpu # of the tail.> + > +#define INVALID_HEAD -1 > +#define NO_HEAD nr_cpu_ids > +I think it is better to use a constant like -2 for NO_HEAD instead of an external variable.> +void __pv_init_node(struct mcs_spinlock *node) > +{ > + struct pv_node *pn = (struct pv_node *)node; > + > + BUILD_BUG_ON(sizeof(struct pv_node)> 5*sizeof(struct mcs_spinlock)); > + > + pn->cpu = smp_processor_id(); > + pn->head = INVALID_HEAD; > +} > + > +static inline struct pv_node *pv_decode_tail(u32 tail) > +{ > + return (struct pv_node *)decode_tail(tail); > +} > + > +void __pv_link_and_wait_node(u32 old, struct mcs_spinlock *node) > +{ > + struct pv_node *ppn, *pn = (struct pv_node *)node; > + unsigned int count; > + > + if (!(old& _Q_TAIL_MASK)) { > + pn->head = NO_HEAD; > + return; > + } > + > + ppn = pv_decode_tail(old); > + ACCESS_ONCE(ppn->mcs.next) = node; > + > + while (ppn->head == INVALID_HEAD) > + cpu_relax(); > + > + pn->head = ppn->head;A race can happen here as pn->head can be changed to the head cpu by the head waiter while being changed by this function at the same time. It is safer to use cmpxchg to make sure that there is no accidental overwriting of the head CPU number.> + > + for (;;) { > + count = SPIN_THRESHOLD; > + > + do { > + if (smp_load_acquire(&node->locked)) > + return; > + > + cpu_relax(); > + } while (--count); > + > + pv_wait(&node->locked, 1); > + } > +} > + > +void __pv_kick_node(struct mcs_spinlock *node) > +{ > + struct pv_node *pn = (struct pv_node *)node; > + > + pv_kick(pn->cpu); > +} > + > +void __pv_wait_head(struct qspinlock *lock) > +{ > + unsigned int count; > + struct pv_node *pn; > + int val, old, new; > + > + for (;;) { > + count = SPIN_THRESHOLD; > + > + do { > + val = smp_load_acquire(&lock->val.counter); > + if (!(val& _Q_LOCKED_PENDING_MASK)) > + return; > + } while (--count); > + > + do { > + pn = pv_decode_tail(atomic_read(&lock->val)); > + > + while (pn->head == INVALID_HEAD) > + cpu_relax(); > + > + pn->head = smp_processor_id(); > + > + } while (pn != pv_decode_tail(atomic_read(&lock->val))); > + > + /* > + * Set _Q_LOCKED_SLOW; bail when the lock is free. 
> + */ > + val = atomic_read(&lock->val); > + for (;;) { > + if (!(val& _Q_LOCKED_PENDING_MASK)) > + return; > + new = val | _Q_LOCKED_SLOW; > + old = atomic_cmpxchg(&lock->val, val, new); > + if (old == val) > + break; > + val = old; > + } > + > + /* XXX 16bit would be better */ > + pv_wait(&lock->val.counter, new); > + } > +} > + > +static void ___pv_kick_head(struct qspinlock *lock) > +{ > + struct pv_node *pn; > + > + pn = pv_decode_tail(atomic_read(&lock->val)); > + > + while (pn->head == INVALID_HEAD) > + cpu_relax(); > + > + if (WARN_ON_ONCE(pn->head == NO_HEAD)) > + return; > + > + pv_kick(pn->head); > +} > + > +void __pv_queue_unlock(struct qspinlock *lock) > +{ > + int val = atomic_read(&lock->val); > + > + native_queue_unlock(lock); > + > + if (val& _Q_LOCKED_SLOW) > + ___pv_kick_head(lock); > +} > +Again a race can happen here between the reading and writing of the lock value. I can't think of a good way to do that without using cmpxchg.> +#else > + > +static inline void pv_init_node(struct mcs_spinlock *node) { } > +static inline void pv_link_and_wait_node(u32 old, struct mcs_spinlock *node) { } > +static inline void pv_kick_node(struct mcs_spinlock *node) { } > + > +static inline void pv_wait_head(struct qspinlock *lock) { } > + > +#endif > + > /** > * queue_spin_lock_slowpath - acquire the queue spinlock > * @lock: Pointer to queue spinlock structure > @@ -247,6 +417,9 @@ void queue_spin_lock_slowpath(struct qsp > > BUILD_BUG_ON(CONFIG_NR_CPUS>= (1U<< _Q_TAIL_CPU_BITS)); > > + if (pv_enabled()) > + goto queue; > + > if (virt_queue_spin_lock(lock)) > return; > > @@ -323,6 +496,7 @@ void queue_spin_lock_slowpath(struct qsp > node += idx; > node->locked = 0; > node->next = NULL; > + pv_init_node(node); > > /* > * We touched a (possibly) cold cacheline in the per-cpu queue node; > @@ -343,6 +517,7 @@ void queue_spin_lock_slowpath(struct qsp > /* > * if there was a previous node; link it and wait. > */ > + pv_link_and_wait_node(old, node); > if (old& _Q_TAIL_MASK) { > prev = decode_tail(old); > ACCESS_ONCE(prev->next) = node; > @@ -358,6 +533,7 @@ void queue_spin_lock_slowpath(struct qsp > * > * *,x,y -> *,0,0 > */ > + pv_wait_head(lock); > while ((val = smp_load_acquire(&lock->val.counter))& > _Q_LOCKED_PENDING_MASK) > cpu_relax(); > @@ -391,6 +567,7 @@ void queue_spin_lock_slowpath(struct qsp > cpu_relax(); > > arch_mcs_spin_unlock_contended(&next->locked); > + pv_kick_node(next); >pv_kick_node is an expensive operation and it can significantly slow down the locking operation if we have to do it for every subsequent task in the queue. -Longman
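For the head-propagation race called out above, the cmpxchg variant Waiman is hinting at could look roughly like this (a sketch only, keeping the patch's INVALID_HEAD convention; not a tested fix):

	/* in __pv_link_and_wait_node(), replacing the plain store: */
	int head;

	while ((head = ACCESS_ONCE(ppn->head)) == INVALID_HEAD)
		cpu_relax();

	/*
	 * Only install the inherited value if pv_wait_head() has not already
	 * claimed this node by writing its cpu number into pn->head; if the
	 * cmpxchg fails, the head waiter's value is newer and must stand.
	 */
	(void)cmpxchg(&pn->head, INVALID_HEAD, head);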
Konrad Rzeszutek Wilk
2014-Jun-17 20:05 UTC
[PATCH 01/11] qspinlock: A simple generic 4-byte queue spinlock
> + * The basic principle of a queue-based spinlock can best be understood > + * by studying a classic queue-based spinlock implementation called the > + * MCS lock. The paper below provides a good description for this kind > + * of lock. > + * > + * http://www.cise.ufl.edu/tr/DOC/REP-1992-71.pdf > + * > + * This queue spinlock implementation is based on the MCS lock, however to make > + * it fit the 4 bytes we assume spinlock_t to be, and preserve its existing > + * API, we must modify it some. > + * > + * In particular; where the traditional MCS lock consists of a tail pointer > + * (8 bytes) and needs the next pointer (another 8 bytes) of its own node to > + * unlock the next pending (next->locked), we compress both these: {tail, > + * next->locked} into a single u32 value. > + * > + * Since a spinlock disables recursion of its own context and there is a limit > + * to the contexts that can nest; namely: task, softirq, hardirq, nmi, we can > + * encode the tail as and index indicating this context and a cpu number. > + * > + * We can further change the first spinner to spin on a bit in the lock word > + * instead of its node; whereby avoiding the need to carry a node from lock to > + * unlock, and preserving API.You also made changes (compared to the MCS) in that the unlock path is not spinning waiting for the successor and that the job of passing the lock is not done in the unlock path either. Instead all of that is now done in the path of the lock acquirer logic. Could you update the comment to say that please? Thanks.
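A possible wording for the extra paragraph being asked for, offered only as a starting point:

	 * Another departure from the classic MCS lock: the unlock path does
	 * not spin waiting for its successor, nor does it hand the lock over.
	 * Unlock is a plain release of the locked byte; the waiter at the
	 * head of the queue observes that release, takes the lock itself,
	 * and only then, from the lock slowpath, passes MCS node ownership
	 * to the next waiter via arch_mcs_spin_unlock_contended().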
On Sun, Jun 15, 2014 at 02:47:00PM +0200, Peter Zijlstra wrote:> Because the qspinlock needs to touch a second cacheline; add a pending > bit and allow a single in-word spinner before we punt to the second > cacheline.Could you add this in the description please: And by second cacheline we mean the local 'node'. That is the: mcs_nodes[0] and mcs_nodes[idx] Perhaps it might be better then to split this in the header file as this is trying to not be a slowpath code - but rather - a pre-slow-path-lets-try-if-we can do another cmpxchg in case the unlocker has just unlocked itself. So something like: diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h index e8a7ae8..29cc9c7 100644 --- a/include/asm-generic/qspinlock.h +++ b/include/asm-generic/qspinlock.h @@ -75,11 +75,21 @@ extern void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val); */ static __always_inline void queue_spin_lock(struct qspinlock *lock) { - u32 val; + u32 val, new; val = atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL); if (likely(val == 0)) return; + + /* One more attempt - but if we fail mark it as pending. */ + if (val == _Q_LOCKED_VAL) { + new = Q_LOCKED_VAL |_Q_PENDING_VAL; + + old = atomic_cmpxchg(&lock->val, val, new); + if (old == _Q_LOCKED_VAL) /* YEEY! */ + return; + val = old; + } queue_spin_lock_slowpath(lock, val); } and then the slowpath preserves most of the old logic path (with the pending bit stuff)?> > Signed-off-by: Peter Zijlstra <peterz at infradead.org> > --- > include/asm-generic/qspinlock_types.h | 12 ++- > kernel/locking/qspinlock.c | 109 +++++++++++++++++++++++++++------- > 2 files changed, 97 insertions(+), 24 deletions(-) > > --- a/include/asm-generic/qspinlock_types.h > +++ b/include/asm-generic/qspinlock_types.h > @@ -39,8 +39,9 @@ typedef struct qspinlock { > * Bitfields in the atomic value: > * > * 0- 7: locked byte > - * 8- 9: tail index > - * 10-31: tail cpu (+1) > + * 8: pending > + * 9-10: tail index > + * 11-31: tail cpu (+1) > */ > #define _Q_SET_MASK(type) (((1U << _Q_ ## type ## _BITS) - 1)\ > << _Q_ ## type ## _OFFSET) > @@ -48,7 +49,11 @@ typedef struct qspinlock { > #define _Q_LOCKED_BITS 8 > #define _Q_LOCKED_MASK _Q_SET_MASK(LOCKED) > > -#define _Q_TAIL_IDX_OFFSET (_Q_LOCKED_OFFSET + _Q_LOCKED_BITS) > +#define _Q_PENDING_OFFSET (_Q_LOCKED_OFFSET + _Q_LOCKED_BITS) > +#define _Q_PENDING_BITS 1 > +#define _Q_PENDING_MASK _Q_SET_MASK(PENDING) > + > +#define _Q_TAIL_IDX_OFFSET (_Q_PENDING_OFFSET + _Q_PENDING_BITS) > #define _Q_TAIL_IDX_BITS 2 > #define _Q_TAIL_IDX_MASK _Q_SET_MASK(TAIL_IDX) > > @@ -57,5 +62,6 @@ typedef struct qspinlock { > #define _Q_TAIL_CPU_MASK _Q_SET_MASK(TAIL_CPU) > > #define _Q_LOCKED_VAL (1U << _Q_LOCKED_OFFSET) > +#define _Q_PENDING_VAL (1U << _Q_PENDING_OFFSET) > > #endif /* __ASM_GENERIC_QSPINLOCK_TYPES_H */ > --- a/kernel/locking/qspinlock.c > +++ b/kernel/locking/qspinlock.c > @@ -83,24 +83,28 @@ static inline struct mcs_spinlock *decod > return per_cpu_ptr(&mcs_nodes[idx], cpu); > } > > +#define _Q_LOCKED_PENDING_MASK (_Q_LOCKED_MASK | _Q_PENDING_MASK) > + > /** > * queue_spin_lock_slowpath - acquire the queue spinlock > * @lock: Pointer to queue spinlock structure > * @val: Current value of the queue spinlock 32-bit word > * > - * (queue tail, lock bit) > - * > - * fast : slow : unlock > - * : : > - * uncontended (0,0) --:--> (0,1) --------------------------------:--> (*,0) > - * : | ^--------. 
/ : > - * : v \ | : > - * uncontended : (n,x) --+--> (n,0) | : > - * queue : | ^--' | : > - * : v | : > - * contended : (*,x) --+--> (*,0) -----> (*,1) ---' : > - * queue : ^--' : > + * (queue tail, pending bit, lock bit) > * > + * fast : slow : unlock > + * : : > + * uncontended (0,0,0) -:--> (0,0,1) ------------------------------:--> (*,*,0) > + * : | ^--------.------. / : > + * : v \ \ | : > + * pending : (0,1,1) +--> (0,1,0) \ | : > + * : | ^--' | | : > + * : v | | : > + * uncontended : (n,x,y) +--> (n,0,0) --' | : > + * queue : | ^--' | : > + * : v | : > + * contended : (*,x,y) +--> (*,0,0) ---> (*,0,1) -' : > + * queue : ^--' : > */ > void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val) > { > @@ -110,6 +114,65 @@ void queue_spin_lock_slowpath(struct qsp > > BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS)); > > + /* > + * trylock || pending > + * > + * 0,0,0 -> 0,0,1 ; trylock > + * 0,0,1 -> 0,1,1 ; pending > + */ > + for (;;) { > + /* > + * If we observe any contention; queue. > + */ > + if (val & ~_Q_LOCKED_MASK) > + goto queue; > + > + new = _Q_LOCKED_VAL; > + if (val == new) > + new |= _Q_PENDING_VAL; > + > + old = atomic_cmpxchg(&lock->val, val, new); > + if (old == val) > + break; > + > + val = old; > + } > + > + /* > + * we won the trylock > + */ > + if (new == _Q_LOCKED_VAL) > + return; > + > + /* > + * we're pending, wait for the owner to go away. > + * > + * *,1,1 -> *,1,0 > + */ > + while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK) > + cpu_relax(); > + > + /* > + * take ownership and clear the pending bit. > + * > + * *,1,0 -> *,0,1 > + */ > + for (;;) { > + new = (val & ~_Q_PENDING_MASK) | _Q_LOCKED_VAL; > + > + old = atomic_cmpxchg(&lock->val, val, new); > + if (old == val) > + break; > + > + val = old; > + } > + return; > + > + /* > + * End of pending bit optimistic spinning and beginning of MCS > + * queuing. > + */ > +queue: > node = this_cpu_ptr(&mcs_nodes[0]); > idx = node->count++; > tail = encode_tail(smp_processor_id(), idx); > @@ -119,15 +182,18 @@ void queue_spin_lock_slowpath(struct qsp > node->next = NULL; > > /* > + * we already touched the queueing cacheline; don't bother with pending > + * stuff. > + * > * trylock || xchg(lock, node) > * > - * 0,0 -> 0,1 ; trylock > - * p,x -> n,x ; prev = xchg(lock, node) > + * 0,0,0 -> 0,0,1 ; trylock > + * p,y,x -> n,y,x ; prev = xchg(lock, node) > */ > for (;;) { > new = _Q_LOCKED_VAL; > if (val) > - new = tail | (val & _Q_LOCKED_MASK); > + new = tail | (val & _Q_LOCKED_PENDING_MASK); > > old = atomic_cmpxchg(&lock->val, val, new); > if (old == val) > @@ -145,7 +211,7 @@ void queue_spin_lock_slowpath(struct qsp > /* > * if there was a previous node; link it and wait. > */ > - if (old & ~_Q_LOCKED_MASK) { > + if (old & ~_Q_LOCKED_PENDING_MASK) { > prev = decode_tail(old); > ACCESS_ONCE(prev->next) = node; > > @@ -153,18 +219,19 @@ void queue_spin_lock_slowpath(struct qsp > } > > /* > - * we're at the head of the waitqueue, wait for the owner to go away. > + * we're at the head of the waitqueue, wait for the owner & pending to > + * go away. > * > - * *,x -> *,0 > + * *,x,y -> *,0,0 > */ > - while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK) > + while ((val = atomic_read(&lock->val)) & _Q_LOCKED_PENDING_MASK) > cpu_relax(); > > /* > * claim the lock: > * > - * n,0 -> 0,1 : lock, uncontended > - * *,0 -> *,1 : lock, contended > + * n,0,0 -> 0,0,1 : lock, uncontended > + * *,0,0 -> *,0,1 : lock, contended > */ > for (;;) { > new = _Q_LOCKED_VAL; > >
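The inline pending-bit attempt sketched above does not quite build as posted: Q_LOCKED_VAL is missing its leading underscore, old is undeclared, and simply returning after winning the pending bit would skip the wait-for-owner and clear-pending steps. A buildable rendering of the same idea, with those steps pulled inline, might look like the following; note that it puts a wait loop into the header fastpath, which is presumably what confining this logic to the slowpath is meant to avoid:

	static __always_inline void queue_spin_lock(struct qspinlock *lock)
	{
		u32 val, old;

		val = atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL);
		if (likely(val == 0))
			return;

		/* One more inline attempt: become the pending waiter. */
		if (val == _Q_LOCKED_VAL &&
		    atomic_cmpxchg(&lock->val, val,
				   _Q_LOCKED_VAL | _Q_PENDING_VAL) == val) {
			/* Wait for the owner to go away: *,1,1 -> *,1,0 */
			while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK)
				cpu_relax();

			/* Take ownership, clear pending: *,1,0 -> *,0,1 */
			for (;;) {
				old = atomic_cmpxchg(&lock->val, val,
					(val & ~_Q_PENDING_MASK) | _Q_LOCKED_VAL);
				if (old == val)
					return;
				val = old;
			}
		}

		queue_spin_lock_slowpath(lock, val);
	}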
Konrad Rzeszutek Wilk
2014-Jun-17 20:55 UTC
[PATCH 04/11] qspinlock: Extract out the exchange of tail code word
On Sun, Jun 15, 2014 at 02:47:01PM +0200, Peter Zijlstra wrote:> From: Waiman Long <Waiman.Long at hp.com> > > This patch extracts the logic for the exchange of new and previous tail > code words into a new xchg_tail() function which can be optimized in a > later patch.And also adds a third try on acquiring the lock. That I think should be a seperate patch. And instead of saying 'later patch' you should spell out the name of the patch. Especially as this might not be obvious from somebody doing git bisection.> > Signed-off-by: Waiman Long <Waiman.Long at hp.com> > Signed-off-by: Peter Zijlstra <peterz at infradead.org> > --- > include/asm-generic/qspinlock_types.h | 2 + > kernel/locking/qspinlock.c | 58 +++++++++++++++++++++------------- > 2 files changed, 38 insertions(+), 22 deletions(-) > > --- a/include/asm-generic/qspinlock_types.h > +++ b/include/asm-generic/qspinlock_types.h > @@ -61,6 +61,8 @@ typedef struct qspinlock { > #define _Q_TAIL_CPU_BITS (32 - _Q_TAIL_CPU_OFFSET) > #define _Q_TAIL_CPU_MASK _Q_SET_MASK(TAIL_CPU) > > +#define _Q_TAIL_MASK (_Q_TAIL_IDX_MASK | _Q_TAIL_CPU_MASK) > + > #define _Q_LOCKED_VAL (1U << _Q_LOCKED_OFFSET) > #define _Q_PENDING_VAL (1U << _Q_PENDING_OFFSET) > > --- a/kernel/locking/qspinlock.c > +++ b/kernel/locking/qspinlock.c > @@ -86,6 +86,31 @@ static inline struct mcs_spinlock *decod > #define _Q_LOCKED_PENDING_MASK (_Q_LOCKED_MASK | _Q_PENDING_MASK) > > /** > + * xchg_tail - Put in the new queue tail code word & retrieve previous one > + * @lock : Pointer to queue spinlock structure > + * @tail : The new queue tail code word > + * Return: The previous queue tail code word > + * > + * xchg(lock, tail) > + * > + * p,*,* -> n,*,* ; prev = xchg(lock, node) > + */ > +static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail) > +{ > + u32 old, new, val = atomic_read(&lock->val); > + > + for (;;) { > + new = (val & _Q_LOCKED_PENDING_MASK) | tail; > + old = atomic_cmpxchg(&lock->val, val, new); > + if (old == val) > + break; > + > + val = old; > + } > + return old; > +} > + > +/** > * queue_spin_lock_slowpath - acquire the queue spinlock > * @lock: Pointer to queue spinlock structure > * @val: Current value of the queue spinlock 32-bit word > @@ -182,36 +207,25 @@ void queue_spin_lock_slowpath(struct qsp > node->next = NULL; > > /* > - * we already touched the queueing cacheline; don't bother with pending > - * stuff. > - * > - * trylock || xchg(lock, node) > - * > - * 0,0,0 -> 0,0,1 ; trylock > - * p,y,x -> n,y,x ; prev = xchg(lock, node) > + * We touched a (possibly) cold cacheline in the per-cpu queue node; > + * attempt the trylock once more in the hope someone let go while we > + * weren't watching. > */ > - for (;;) { > - new = _Q_LOCKED_VAL; > - if (val) > - new = tail | (val & _Q_LOCKED_PENDING_MASK); > - > - old = atomic_cmpxchg(&lock->val, val, new); > - if (old == val) > - break; > - > - val = old; > - } > + if (queue_spin_trylock(lock)) > + goto release;So now are three of them? One in queue_spin_lock, then at the start of this function when checking for the pending bit, and the once more here. And that is because the local cache line might be cold for the 'mcs_index' struct? That all seems to be a bit of experimental. But then we are already in the slowpath so we could as well do: for (i = 0; i < 10; i++) if (queue_spin_trylock(lock)) goto release; And would have the same effect.> > /* > - * we won the trylock; forget about queueing. 
> + * we already touched the queueing cacheline; don't bother with pending > + * stuff.
I guess we could also just erase the pending bit if we wanted to. The optimistic spinning will still go to the queue label as lock->val will have the tail value.> + * > + * p,*,* -> n,*,* > */ > - if (new == _Q_LOCKED_VAL) > - goto release; > + old = xchg_tail(lock, tail); > > /* > * if there was a previous node; link it and wait. > */ > - if (old & ~_Q_LOCKED_PENDING_MASK) { > + if (old & _Q_TAIL_MASK) { > prev = decode_tail(old); > ACCESS_ONCE(prev->next) = node; > > >
On 15/06/2014 14:47, Peter Zijlstra wrote:> > - for (;;) { > - new = (val & ~_Q_PENDING_MASK) | _Q_LOCKED_VAL; > - > - old = atomic_cmpxchg(&lock->val, val, new); > - if (old == val) > - break; > - > - val = old; > - } > + clear_pending_set_locked(lock, val); > return;
Might as well add clear_pending_set_locked already in patch 3. Paolo
On 15/06/2014 14:47, Peter Zijlstra wrote:> XXX: merge into the pending bit patch..
Agree, or if not, move it right after the pending bit patch, before the NR_CPUS optimization. Paolo
> It is possible so observe the pending bit without the locked bit when > the last owner has just released but the pending owner has not yet > taken ownership. > > In this case we would normally queue -- because the pending bit is > already taken. However, in this case the pending bit is guaranteed to > be released 'soon', therefore wait for it and avoid queueing. > > Signed-off-by: Peter Zijlstra <peterz at infradead.org>
On 15/06/2014 14:47, Peter Zijlstra wrote:> > > #if !defined(CONFIG_X86_OOSTORE) && !defined(CONFIG_X86_PPRO_FENCE) > > -#define queue_spin_unlock queue_spin_unlock > /** > * queue_spin_unlock - release a queue spinlock > * @lock : Pointer to queue spinlock structure > * > * An effective smp_store_release() on the least-significant byte. > */ > -static inline void queue_spin_unlock(struct qspinlock *lock) > +static inline void native_queue_unlock(struct qspinlock *lock) > { > barrier(); > ACCESS_ONCE(*(u8 *)lock) = 0; > } > > +#else > + > +static inline void native_queue_unlock(struct qspinlock *lock) > +{ > + atomic_dec(&lock->val); > +} > + > #endif /* !CONFIG_X86_OOSTORE && !CONFIG_X86_PPRO_FENCE */
Should this be (part of) an earlier patch? Also, does it go wrong if (CONFIG_X86_OOSTORE || CONFIG_X86_PPRO_FENCE) && paravirt patches the unlock to a single movb? Of course the paravirt spinlocks could simply depend on !CONFIG_X86_OOSTORE && !CONFIG_X86_PPRO_FENCE.> + > +#define INVALID_HEAD -1 > +#define NO_HEAD nr_cpu_ids > +
-2, like Waiman said. Paolo
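Taking Waiman's and Paolo's point together, the two sentinels could simply become compile-time constants rather than depending on nr_cpu_ids, e.g. (sketch):

	#define INVALID_HEAD	-1	/* head not yet known/propagated */
	#define NO_HEAD		-2	/* queued without a predecessor to inherit from */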
Konrad Rzeszutek Wilk
2014-Jun-18 15:57 UTC
[PATCH 05/11] qspinlock: Optimize for smaller NR_CPUS
On Sun, Jun 15, 2014 at 02:47:02PM +0200, Peter Zijlstra wrote:> From: Peter Zijlstra <peterz at infradead.org> > > When we allow for a max NR_CPUS < 2^14 we can optimize the pending > wait-acquire and the xchg_tail() operations. > > By growing the pending bit to a byte, we reduce the tail to 16bit. > This means we can use xchg16 for the tail part and do away with all > the repeated compxchg() operations. > > This in turn allows us to unconditionally acquire; the locked state > as observed by the wait loops cannot change. And because both locked > and pending are now a full byte we can use simple stores for the > state transition, obviating one atomic operation entirely.I have to ask - how much more performance do you get from this? Is this extra atomic operation hurting that much?> > All this is horribly broken on Alpha pre EV56 (and any other arch that > cannot do single-copy atomic byte stores). > > Signed-off-by: Peter Zijlstra <peterz at infradead.org> > --- > include/asm-generic/qspinlock_types.h | 13 ++++ > kernel/locking/qspinlock.c | 103 ++++++++++++++++++++++++++++++---- > 2 files changed, 106 insertions(+), 10 deletions(-) > > --- a/include/asm-generic/qspinlock_types.h > +++ b/include/asm-generic/qspinlock_types.h > @@ -38,6 +38,14 @@ typedef struct qspinlock { > /* > * Bitfields in the atomic value: > * > + * When NR_CPUS < 16K > + * 0- 7: locked byte > + * 8: pending > + * 9-15: not used > + * 16-17: tail index > + * 18-31: tail cpu (+1) > + * > + * When NR_CPUS >= 16K > * 0- 7: locked byte > * 8: pending > * 9-10: tail index > @@ -50,7 +58,11 @@ typedef struct qspinlock { > #define _Q_LOCKED_MASK _Q_SET_MASK(LOCKED) > > #define _Q_PENDING_OFFSET (_Q_LOCKED_OFFSET + _Q_LOCKED_BITS) > +#if CONFIG_NR_CPUS < (1U << 14) > +#define _Q_PENDING_BITS 8 > +#else > #define _Q_PENDING_BITS 1 > +#endif > #define _Q_PENDING_MASK _Q_SET_MASK(PENDING) > > #define _Q_TAIL_IDX_OFFSET (_Q_PENDING_OFFSET + _Q_PENDING_BITS) > @@ -61,6 +73,7 @@ typedef struct qspinlock { > #define _Q_TAIL_CPU_BITS (32 - _Q_TAIL_CPU_OFFSET) > #define _Q_TAIL_CPU_MASK _Q_SET_MASK(TAIL_CPU) > > +#define _Q_TAIL_OFFSET _Q_TAIL_IDX_OFFSET > #define _Q_TAIL_MASK (_Q_TAIL_IDX_MASK | _Q_TAIL_CPU_MASK) > > #define _Q_LOCKED_VAL (1U << _Q_LOCKED_OFFSET) > --- a/kernel/locking/qspinlock.c > +++ b/kernel/locking/qspinlock.c > @@ -22,6 +22,7 @@ > #include <linux/percpu.h> > #include <linux/hardirq.h> > #include <linux/mutex.h> > +#include <asm/byteorder.h> > #include <asm/qspinlock.h> > > /* > @@ -48,6 +49,9 @@ > * We can further change the first spinner to spin on a bit in the lock word > * instead of its node; whereby avoiding the need to carry a node from lock to > * unlock, and preserving API. > + * > + * N.B. The current implementation only supports architectures that allow > + * atomic operations on smaller 8-bit and 16-bit data types. > */ > > #include "mcs_spinlock.h" > @@ -85,6 +89,87 @@ static inline struct mcs_spinlock *decod > > #define _Q_LOCKED_PENDING_MASK (_Q_LOCKED_MASK | _Q_PENDING_MASK) > > +/* > + * By using the whole 2nd least significant byte for the pending bit, we > + * can allow better optimization of the lock acquisition for the pending > + * bit holder. > + */ > +#if _Q_PENDING_BITS == 8 > + > +struct __qspinlock { > + union { > + atomic_t val; > + struct { > +#ifdef __LITTLE_ENDIAN > + u16 locked_pending; > + u16 tail; > +#else > + u16 tail; > + u16 locked_pending; > +#endif > + }; > + }; > +}; > + > +/** > + * clear_pending_set_locked - take ownership and clear the pending bit. 
> + * @lock: Pointer to queue spinlock structure > + * @val : Current value of the queue spinlock 32-bit word > + * > + * *,1,0 -> *,0,1 > + * > + * Lock stealing is not allowed if this function is used. > + */ > +static __always_inline void > +clear_pending_set_locked(struct qspinlock *lock, u32 val) > +{ > + struct __qspinlock *l = (void *)lock; > + > + ACCESS_ONCE(l->locked_pending) = _Q_LOCKED_VAL; > +} > + > +/* > + * xchg_tail - Put in the new queue tail code word & retrieve previous oneMissing full stop.> + * @lock : Pointer to queue spinlock structure > + * @tail : The new queue tail code word > + * Return: The previous queue tail code word > + * > + * xchg(lock, tail) > + * > + * p,*,* -> n,*,* ; prev = xchg(lock, node) > + */ > +static __always_inline u32 xchg_tail(struct qspinlock *lock, u32 tail) > +{ > + struct __qspinlock *l = (void *)lock; > + > + return (u32)xchg(&l->tail, tail >> _Q_TAIL_OFFSET) << _Q_TAIL_OFFSET; > +} > + > +#else /* _Q_PENDING_BITS == 8 */ > + > +/** > + * clear_pending_set_locked - take ownership and clear the pending bit. > + * @lock: Pointer to queue spinlock structure > + * @val : Current value of the queue spinlock 32-bit word > + * > + * *,1,0 -> *,0,1 > + */ > +static __always_inline void > +clear_pending_set_locked(struct qspinlock *lock, u32 val) > +{ > + u32 new, old; > + > + for (;;) { > + new = (val & ~_Q_PENDING_MASK) | _Q_LOCKED_VAL; > + > + old = atomic_cmpxchg(&lock->val, val, new); > + if (old == val) > + break; > + > + val = old; > + } > +} > + > /** > * xchg_tail - Put in the new queue tail code word & retrieve previous one > * @lock : Pointer to queue spinlock structure > @@ -109,6 +194,7 @@ static __always_inline u32 xchg_tail(str > } > return old; > } > +#endif /* _Q_PENDING_BITS == 8 */ > > /** > * queue_spin_lock_slowpath - acquire the queue spinlock > @@ -173,8 +259,13 @@ void queue_spin_lock_slowpath(struct qsp > * we're pending, wait for the owner to go away. > * > * *,1,1 -> *,1,0 > + * > + * this wait loop must be a load-acquire such that we match the > + * store-release that clears the locked bit and create lock > + * sequentiality; this because not all clear_pending_set_locked() > + * implementations imply full barriers. > */ > - while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK) > + while ((val = smp_load_acquire(&lock->val.counter)) & _Q_LOCKED_MASK)lock->val.counter? Ugh, all to deal with the 'int' -> 'u32' (or 'u64') Could you introduce a macro in atomic.h called 'atomic_read_raw' which would do the this? Like this: diff --git a/include/linux/atomic.h b/include/linux/atomic.h index fef3a80..5a83750 100644 --- a/include/linux/atomic.h +++ b/include/linux/atomic.h @@ -160,6 +160,8 @@ static inline void atomic_or(int i, atomic_t *v) } #endif /* #ifndef CONFIG_ARCH_HAS_ATOMIC_OR */ +#define atomic_read_raw(v) (v.counter) + #include <asm-generic/atomic-long.h> #ifdef CONFIG_GENERIC_ATOMIC64 #include <asm-generic/atomic64.h> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c index fc7fd8c..2833fe1 100644 --- a/kernel/locking/qspinlock.c +++ b/kernel/locking/qspinlock.c @@ -265,7 +265,7 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val) * sequentiality; this because not all clear_pending_set_locked() * implementations imply full barriers. 
*/ - while ((val = smp_load_acquire(&lock->val.counter)) & _Q_LOCKED_MASK) + while ((val = smp_load_acquire(atomic_read_raw(&lock->val))) & _Q_LOCKED_MASK) arch_mutex_cpu_relax(); /* ?> cpu_relax(); > > /* > @@ -182,15 +273,7 @@ void queue_spin_lock_slowpath(struct qsp > * > * *,1,0 -> *,0,1 > */ > - for (;;) { > - new = (val & ~_Q_PENDING_MASK) | _Q_LOCKED_VAL; > - > - old = atomic_cmpxchg(&lock->val, val, new); > - if (old == val) > - break; > - > - val = old; > - } > + clear_pending_set_locked(lock, val); > return; > > /* > >
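If the bare ->counter access is the main objection, a pointer-flavoured variant of that helper avoids relying on how the macro argument expands textually (the name here is invented, illustrative only):

	#define atomic_raw_ptr(v)	(&(v)->counter)

	while ((val = smp_load_acquire(atomic_raw_ptr(&lock->val))) &
			_Q_LOCKED_MASK)
		cpu_relax();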
Konrad Rzeszutek Wilk
2014-Jun-18 16:36 UTC
[PATCH 07/11] qspinlock: Use a simple write to grab the lock, if applicable
On Sun, Jun 15, 2014 at 02:47:04PM +0200, Peter Zijlstra wrote:> From: Waiman Long <Waiman.Long at hp.com> > > Currently, atomic_cmpxchg() is used to get the lock. However, this is > not really necessary if there is more than one task in the queue and > the queue head don't need to reset the queue code word. For that case,s/queue code word/tail {number,value}/ ?> a simple write to set the lock bit is enough as the queue head will > be the only one eligible to get the lock as long as it checks that > both the lock and pending bits are not set. The current pending bit > waiting code will ensure that the bit will not be set as soon as the > queue code word (tail) in the lock is set.Just use the same word as above.> > With that change, the are some slight improvement in the performance > of the queue spinlock in the 5M loop micro-benchmark run on a 4-socket > Westere-EX machine as shown in the tables below. > > [Standalone/Embedded - same node] > # of tasks Before patch After patch %Change > ---------- ----------- ---------- ------- > 3 2324/2321 2248/2265 -3%/-2% > 4 2890/2896 2819/2831 -2%/-2% > 5 3611/3595 3522/3512 -2%/-2% > 6 4281/4276 4173/4160 -3%/-3% > 7 5018/5001 4875/4861 -3%/-3% > 8 5759/5750 5563/5568 -3%/-3% > > [Standalone/Embedded - different nodes] > # of tasks Before patch After patch %Change > ---------- ----------- ---------- ------- > 3 12242/12237 12087/12093 -1%/-1% > 4 10688/10696 10507/10521 -2%/-2% > > It was also found that this change produced a much bigger performance > improvement in the newer IvyBridge-EX chip and was essentially to close > the performance gap between the ticket spinlock and queue spinlock. > > The disk workload of the AIM7 benchmark was run on a 4-socket > Westmere-EX machine with both ext4 and xfs RAM disks at 3000 users > on a 3.14 based kernel. The results of the test runs were: > > AIM7 XFS Disk Test > kernel JPM Real Time Sys Time Usr Time > ----- --- --------- -------- -------- > ticketlock 5678233 3.17 96.61 5.81 > qspinlock 5750799 3.13 94.83 5.97 > > AIM7 EXT4 Disk Test > kernel JPM Real Time Sys Time Usr Time > ----- --- --------- -------- -------- > ticketlock 1114551 16.15 509.72 7.11 > qspinlock 2184466 8.24 232.99 6.01 > > The ext4 filesystem run had a much higher spinlock contention than > the xfs filesystem run. > > The "ebizzy -m" test was also run with the following results: > > kernel records/s Real Time Sys Time Usr Time > ----- --------- --------- -------- -------- > ticketlock 2075 10.00 216.35 3.49 > qspinlock 3023 10.00 198.20 4.80 > > Signed-off-by: Waiman Long <Waiman.Long at hp.com> > Signed-off-by: Peter Zijlstra <peterz at infradead.org> > --- > kernel/locking/qspinlock.c | 59 ++++++++++++++++++++++++++++++++------------- > 1 file changed, 43 insertions(+), 16 deletions(-) > > --- a/kernel/locking/qspinlock.c > +++ b/kernel/locking/qspinlock.c > @@ -93,24 +93,33 @@ static inline struct mcs_spinlock *decod > * By using the whole 2nd least significant byte for the pending bit, we > * can allow better optimization of the lock acquisition for the pending > * bit holder. > + * > + * This internal structure is also used by the set_locked function which > + * is not restricted to _Q_PENDING_BITS == 8. 
> */ > -#if _Q_PENDING_BITS == 8 > - > struct __qspinlock { > union { > atomic_t val; > - struct { > #ifdef __LITTLE_ENDIAN > + u8 locked; > + struct { > u16 locked_pending; > u16 tail; > + }; > #else > + struct { > u16 tail; > u16 locked_pending; > -#endif > }; > + struct { > + u8 reserved[3]; > + u8 locked; > + }; > +#endif > }; > }; > > +#if _Q_PENDING_BITS == 8 > /** > * clear_pending_set_locked - take ownership and clear the pending bit. > * @lock: Pointer to queue spinlock structure > @@ -197,6 +206,19 @@ static __always_inline u32 xchg_tail(str > #endif /* _Q_PENDING_BITS == 8 */ > > /** > + * set_locked - Set the lock bit and own the lockFull stop missing.> + * @lock: Pointer to queue spinlock structureDitto.> + * > + * *,*,0 -> *,0,1 > + */ > +static __always_inline void set_locked(struct qspinlock *lock) > +{ > + struct __qspinlock *l = (void *)lock; > + > + ACCESS_ONCE(l->locked) = _Q_LOCKED_VAL; > +} > + > +/** > * queue_spin_lock_slowpath - acquire the queue spinlock > * @lock: Pointer to queue spinlock structure > * @val: Current value of the queue spinlock 32-bit word > @@ -328,10 +350,13 @@ void queue_spin_lock_slowpath(struct qsp > /* > * we're at the head of the waitqueue, wait for the owner & pending to > * go away. > + * Load-acquired is used here because the set_locked() > + * function below may not be a full memory barrier. > * > * *,x,y -> *,0,0 > */ > - while ((val = atomic_read(&lock->val)) & _Q_LOCKED_PENDING_MASK) > + while ((val = smp_load_acquire(&lock->val.counter)) & > + _Q_LOCKED_PENDING_MASK) > cpu_relax(); > > /* > @@ -339,15 +364,19 @@ void queue_spin_lock_slowpath(struct qsp > * > * n,0,0 -> 0,0,1 : lock, uncontended > * *,0,0 -> *,0,1 : lock, contended > + * > + * If the queue head is the only one in the queue (lock value == tail), > + * clear the tail code and grab the lock. Otherwise, we only need > + * to grab the lock. > */ > for (;;) { > - new = _Q_LOCKED_VAL; > - if (val != tail) > - new |= val; > - > - old = atomic_cmpxchg(&lock->val, val, new); > - if (old == val) > + if (val != tail) { > + set_locked(lock); > break; > + } > + old = atomic_cmpxchg(&lock->val, val, _Q_LOCKED_VAL); > + if (old == val) > + goto release; /* No contention */ > > val = old; > } > @@ -355,12 +384,10 @@ void queue_spin_lock_slowpath(struct qsp > /* > * contended path; wait for next, release. > */ > - if (new != _Q_LOCKED_VAL) { > - while (!(next = ACCESS_ONCE(node->next))) > - cpu_relax(); > + while (!(next = ACCESS_ONCE(node->next))) > + cpu_relax(); > > - arch_mcs_spin_unlock_contended(&next->locked); > - } > + arch_mcs_spin_unlock_contended(&next->locked); > > release: > /* > >
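To make the endian gymnastics in the union above easier to picture, here is the byte view of the 32-bit lock word (illustrative, assuming _Q_PENDING_BITS == 8, which is what the union layout requires):

	/*
	 * little endian:  byte 0: locked   byte 1: pending   bytes 2-3: tail
	 *                 so l->locked aliases byte 0 and l->tail bytes 2-3.
	 *
	 * big endian:     bytes 0-1: tail  byte 2: pending   byte 3: locked
	 *                 hence the u8 reserved[3] padding, so that l->locked
	 *                 still aliases the low 8 bits of the lock value.
	 */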
Konrad Rzeszutek Wilk
2014-Jun-18 16:40 UTC
[PATCH 08/11] qspinlock: Revert to test-and-set on hypervisors
On Sun, Jun 15, 2014 at 02:47:05PM +0200, Peter Zijlstra wrote:> When we detect a hypervisor (!paravirt, see later patches), revert toPlease spell out the name of the patches.> a simple test-and-set lock to avoid the horrors of queue preemption.Heheh.> > Signed-off-by: Peter Zijlstra <peterz at infradead.org> > --- > arch/x86/include/asm/qspinlock.h | 14 ++++++++++++++ > include/asm-generic/qspinlock.h | 7 +++++++ > kernel/locking/qspinlock.c | 3 +++ > 3 files changed, 24 insertions(+) > > --- a/arch/x86/include/asm/qspinlock.h > +++ b/arch/x86/include/asm/qspinlock.h > @@ -1,6 +1,7 @@ > #ifndef _ASM_X86_QSPINLOCK_H > #define _ASM_X86_QSPINLOCK_H > > +#include <asm/cpufeature.h> > #include <asm-generic/qspinlock_types.h> > > #if !defined(CONFIG_X86_OOSTORE) && !defined(CONFIG_X86_PPRO_FENCE) > @@ -20,6 +21,19 @@ static inline void queue_spin_unlock(str > > #endif /* !CONFIG_X86_OOSTORE && !CONFIG_X86_PPRO_FENCE */ > > +#define virt_queue_spin_lock virt_queue_spin_lock > + > +static inline bool virt_queue_spin_lock(struct qspinlock *lock) > +{ > + if (!static_cpu_has(X86_FEATURE_HYPERVISOR)) > + return false; > + > + while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0) > + cpu_relax(); > + > + return true; > +} > + > #include <asm-generic/qspinlock.h> > > #endif /* _ASM_X86_QSPINLOCK_H */ > --- a/include/asm-generic/qspinlock.h > +++ b/include/asm-generic/qspinlock.h > @@ -98,6 +98,13 @@ static __always_inline void queue_spin_u > } > #endif > > +#ifndef virt_queue_spin_lock > +static __always_inline bool virt_queue_spin_lock(struct qspinlock *lock) > +{ > + return false; > +} > +#endif > + > /* > * Initializier > */ > --- a/kernel/locking/qspinlock.c > +++ b/kernel/locking/qspinlock.c > @@ -247,6 +247,9 @@ void queue_spin_lock_slowpath(struct qsp > > BUILD_BUG_ON(CONFIG_NR_CPUS >= (1U << _Q_TAIL_CPU_BITS)); > > + if (virt_queue_spin_lock(lock)) > + return; > + > /* > * wait for in-progress pending->locked hand-overs > * > >
Konrad Rzeszutek Wilk
2014-Jun-18 16:43 UTC
[PATCH 09/11] pvqspinlock, x86: Rename paravirt_ticketlocks_enabled
On Sun, Jun 15, 2014 at 02:47:06PM +0200, Peter Zijlstra wrote:
> From: Waiman Long <Waiman.Long at hp.com>
>
> This patch renames the paravirt_ticketlocks_enabled static key to a
> more generic paravirt_spinlocks_enabled name.
>
> Signed-off-by: Waiman Long <Waiman.Long at hp.com>
> Signed-off-by: Peter Zijlstra <peterz at infradead.org>

Acked-by: Konrad Rzeszutek Wilk <konrad.wilk at oracle.com>

> ---
>  arch/x86/include/asm/spinlock.h      |    4 ++--
>  arch/x86/kernel/kvm.c                |    2 +-
>  arch/x86/kernel/paravirt-spinlocks.c |    4 ++--
>  arch/x86/xen/spinlock.c              |    2 +-
>  4 files changed, 6 insertions(+), 6 deletions(-)
>
> --- a/arch/x86/include/asm/spinlock.h
> +++ b/arch/x86/include/asm/spinlock.h
> @@ -39,7 +39,7 @@
>  /* How long a lock should spin before we consider blocking */
>  #define SPIN_THRESHOLD	(1 << 15)
>
> -extern struct static_key paravirt_ticketlocks_enabled;
> +extern struct static_key paravirt_spinlocks_enabled;
>  static __always_inline bool static_key_false(struct static_key *key);
>
>  #ifdef CONFIG_QUEUE_SPINLOCK
> @@ -150,7 +150,7 @@ static inline void __ticket_unlock_slowp
>  static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
>  {
>  	if (TICKET_SLOWPATH_FLAG &&
> -	    static_key_false(&paravirt_ticketlocks_enabled)) {
> +	    static_key_false(&paravirt_spinlocks_enabled)) {
>  		arch_spinlock_t prev;
>
>  		prev = *lock;
> --- a/arch/x86/kernel/kvm.c
> +++ b/arch/x86/kernel/kvm.c
> @@ -819,7 +819,7 @@ static __init int kvm_spinlock_init_jump
>  	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
>  		return 0;
>
> -	static_key_slow_inc(&paravirt_ticketlocks_enabled);
> +	static_key_slow_inc(&paravirt_spinlocks_enabled);
>  	printk(KERN_INFO "KVM setup paravirtual spinlock\n");
>
>  	return 0;
> --- a/arch/x86/kernel/paravirt-spinlocks.c
> +++ b/arch/x86/kernel/paravirt-spinlocks.c
> @@ -16,5 +16,5 @@ struct pv_lock_ops pv_lock_ops = {
>  };
>  EXPORT_SYMBOL(pv_lock_ops);
>
> -struct static_key paravirt_ticketlocks_enabled = STATIC_KEY_INIT_FALSE;
> -EXPORT_SYMBOL(paravirt_ticketlocks_enabled);
> +struct static_key paravirt_spinlocks_enabled = STATIC_KEY_INIT_FALSE;
> +EXPORT_SYMBOL(paravirt_spinlocks_enabled);
> --- a/arch/x86/xen/spinlock.c
> +++ b/arch/x86/xen/spinlock.c
> @@ -293,7 +293,7 @@ static __init int xen_init_spinlocks_jum
>  	if (!xen_domain())
>  		return 0;
>
> -	static_key_slow_inc(&paravirt_ticketlocks_enabled);
> +	static_key_slow_inc(&paravirt_spinlocks_enabled);
>  	return 0;
>  }
>  early_initcall(xen_init_spinlocks_jump);
On Sun, Jun 15, 2014 at 02:47:07PM +0200, Peter Zijlstra wrote:
> Add minimal paravirt support.
>
> The code aims for minimal impact on the native case.

Woot!

>
> On the lock side we add one jump label (asm_goto) and 4 paravirt
> callee saved calls that default to NOPs. The only effects are the
> extra NOPs and some pointless MOVs to accommodate the calling
> convention. No register spills happen because of this (x86_64).
>
> On the unlock side we have one paravirt callee saved call, which
> defaults to the actual unlock sequence: "movb $0, (%rdi)" and a NOP.
>
> The actual paravirt code comes in 3 parts;
>
>  - init_node; this initializes the extra data members required for PV
>    state. PV state data is kept 1 cacheline ahead of the regular data.
>
>  - link_and_wait_node/kick_node; these are paired with the regular MCS
>    queueing and are placed resp. before/after the paired MCS ops.
>
>  - wait_head/queue_unlock; the interesting part here is finding the
>    head node to kick.
>
> Tracking the head is done in two parts, firstly the pv_wait_head will
> store its cpu number in whichever node is pointed to by the tail part
> of the lock word. Secondly, pv_link_and_wait_node() will propagate the
> existing head from the old to the new tail node.

I dug into the code and I have some comments about it, but before I post
them I was wondering whether you have any plans to run performance tests
against the PV ticketlock in both normal and over-committed scenarios?

Looking at this with pen and paper, I see that, compared to the PV
ticketlock, the CPUs that are contending on the queue (so they go
through pv_link_and_wait_node() and then progress to pv_wait_head())
go to sleep twice and get woken up twice. With the PV ticketlock the
contending CPUs would only go to sleep once and be woken up once it was
their turn.

That of course is the worst case scenario - where the CPU that has the
lock is taking forever to do its job and the host is quite
overcommitted.

Thanks!
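To make the "sleeps twice" point concrete, here is a rough but compilable
sketch of the two places a queued waiter can block in the PV case, as I read
the series. struct toy_qspinlock, struct mcs_node and pv_wait_stub() are
placeholder names invented for this example; they are not the interfaces from
the patches.

#include <stdatomic.h>

struct mcs_node {
	_Atomic int locked;		/* set by the previous waiter on hand-over */
};

struct toy_qspinlock {
	_Atomic unsigned int val;	/* tail | pending | locked, as in the series */
};

#define TOY_LOCKED_PENDING_MASK	0xffffU

/* stands in for halting the vCPU until the corresponding kick arrives */
static void pv_wait_stub(void *addr)
{
	(void)addr;
}

static void toy_pv_queued_wait(struct toy_qspinlock *lock, struct mcs_node *node)
{
	/* 1st possible halt: parked on our own MCS node, waiting for the predecessor */
	while (!atomic_load(&node->locked))
		pv_wait_stub(&node->locked);

	/* 2nd possible halt: now queue head, waiting for owner + pending to clear */
	while (atomic_load(&lock->val) & TOY_LOCKED_PENDING_MASK)
		pv_wait_stub(&lock->val);
}

A ticket-lock waiter, by contrast, only ever has the single wait loop on its
own ticket, which is the asymmetry pointed out above.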
On 06/15/2014 06:17 PM, Peter Zijlstra wrote:
> Signed-off-by: Peter Zijlstra <peterz at infradead.org>
> ---
[...]
> +
> +void kvm_wait(int *ptr, int val)
> +{
> +	unsigned long flags;
> +
> +	if (in_nmi())
> +		return;
> +
> +	/*
> +	 * Make sure an interrupt handler can't upset things in a
> +	 * partially setup state.
> +	 */

I am seeing a hang even with a 2-CPU guest (with the patches on top of
3.15-rc6). Looking further with gdb, I see one CPU stuck in native_halt()
with the slowpath flag (_Q_LOCKED_SLOW) set when kvm_wait() was called.

(gdb) bt
#0  native_halt () at /test/master/arch/x86/include/asm/irqflags.h:55
#1  0xffffffff81033118 in halt (ptr=0xffffffff81eb0e58, val=524291) at /test/master/arch/x86/include/asm/paravirt.h:116
#2  kvm_wait (ptr=0xffffffff81eb0e58, val=524291) at arch/x86/kernel/kvm.c:835
#3  kvm_wait (ptr=0xffffffff81eb0e58, val=524291) at arch/x86/kernel/kvm.c:809
#4  0xffffffff810a2d8e in pv_wait (lock=0xffffffff81eb0e58) at /test/master/arch/x86/include/asm/paravirt.h:744
#5  __pv_wait_head (lock=0xffffffff81eb0e58) at kernel/locking/qspinlock.c:352

The value of the lock seems to be 524288 (meaning it is already unlocked?).
So apart from the races Waiman mentioned, do we also need an smp_mb() here
and/or in native_queue_unlock()?

Interestingly, I see the other CPU stuck in multi_cpu_stop():

(gdb) thr 1
[Switching to thread 1 (Thread 1)]#0  multi_cpu_stop (data=0xffff8802140d1da0) at kernel/stop_machine.c:192
192             if (msdata->state != curstate) {

Or am I missing something? Please let me know if the .config needs to be
shared.

> +	local_irq_save(flags);
> +
> +	/*
> +	 * check again make sure it didn't become free while
> +	 * we weren't looking.
> +	 */
> +	if (ACCESS_ONCE(*ptr) != val)
> +		goto out;
> +
> +	/*
> +	 * halt until it's our turn and kicked. Note that we do safe halt
> +	 * for irq enabled case to avoid hang when lock info is overwritten
> +	 * in irq spinlock slowpath and no spurious interrupt occur to save us.
> +	 */
> +	if (arch_irqs_disabled_flags(flags))
> +		halt();
> +	else
> +		safe_halt();
> +
> +out:
> +	local_irq_restore(flags);
> +}
> +#endif /* QUEUE_SPINLOCK */
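For reference, a deliberately simplified, standalone sketch of the ordering
question raised above: the check-then-halt in kvm_wait() only works if the
waker's release store is visible before its kick can be consumed. All names
below are invented for the example; this illustrates the generic pattern, not
a claim about where the actual bug is.

#include <stdatomic.h>

static _Atomic int lock_word = 1;	/* non-zero: still held, from the waiter's view */

static void kick_vcpu(void)		/* placeholder for the wakeup (kick) mechanism */
{
}

static void halt_vcpu(void)		/* placeholder for halt()/safe_halt() */
{
}

/* waiter: only halt if the value we decided to wait on is still in place
 * (kvm_wait() above does this re-check with interrupts disabled) */
static void waiter_side(int val)
{
	if (atomic_load(&lock_word) == val)
		halt_vcpu();
}

/* waker: the store releasing the lock must become visible before the kick;
 * if the waiter's re-check can still see the old value after the kick has
 * already been consumed, it halts on a lock that is in fact free */
static void waker_side(void)
{
	atomic_store_explicit(&lock_word, 0, memory_order_release);
	kick_vcpu();
}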