search for: qsval

Displaying 20 results from an estimated 25 matches for "qsval".

2014 Mar 12
0
[PATCH v6 04/11] qspinlock: Optimized code path for 2 contending tasks
...<< 2 | (idx)) +#define queue_get_qcode(lock) (atomic_read(&(lock)->qlcode) >> _QCODE_OFFSET) + +#define queue_spin_trylock_quick queue_spin_trylock_quick +/** + * queue_spin_trylock_quick - quick spinning on the queue spinlock + * @lock : Pointer to queue spinlock structure + * @qsval: Old queue spinlock value + * Return: 1 if lock acquired, 0 if failed + * + * This is an optimized contention path for 2 contending tasks. It + * should only be entered if no task is waiting in the queue. + */ +static inline int queue_spin_trylock_quick(struct qspinlock *lock, int qsval) +{ + union...
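The excerpt above only captures the kernel-doc header of queue_spin_trylock_quick(). As a rough userspace model of the idea (a second contender advertises itself in a wait field of the lock word and spins on the lock bit instead of joining the queue), consider the sketch below. The bit layout, the names LOCKED/WAITING/quick_trylock, and the C11 atomics are assumptions made purely for illustration; the actual patch manipulates union arch_qspinlock inside the kernel.

```c
/*
 * Illustrative userspace model of a two-contender "quick" path: the second
 * task claims a single waiting slot in the lock word and spins on the lock
 * bit directly rather than queueing.  Bit layout and names are assumed for
 * this sketch and do not match the kernel patch.
 */
#include <stdatomic.h>
#include <stdbool.h>

#define LOCKED   0x001u   /* lock currently held */
#define WAITING  0x100u   /* one task spinning directly on the lock word */

static bool quick_trylock(atomic_uint *qlcode, unsigned int qsval)
{
    unsigned int expected = qsval;

    /* Enter only if nobody is queued: the word may hold LOCKED and nothing else. */
    if (qsval & ~LOCKED)
        return false;

    /* Claim the single waiting slot; if that fails, fall back to the queue path. */
    if (!atomic_compare_exchange_strong(qlcode, &expected, qsval | WAITING))
        return false;

    /* Spin until the holder releases, then turn WAITING into LOCKED. */
    for (;;) {
        unsigned int val = atomic_load(qlcode);
        if (val & LOCKED)
            continue;
        if (atomic_compare_exchange_weak(qlcode, &val, (val & ~WAITING) | LOCKED))
            return true;   /* lock acquired */
    }
}
```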
2014 Feb 26
2
[PATCH v5 3/8] qspinlock, x86: Add x86 specific optimization for 2 contending tasks
...agram for this thing do you? I suppose I'm going to have to make one; this is all getting a bit unwieldy, and those xchg() + fixup things are hard to read. On Wed, Feb 26, 2014 at 10:14:23AM -0500, Waiman Long wrote: > +static inline int queue_spin_trylock_quick(struct qspinlock *lock, int qsval) > +{ > + union arch_qspinlock *qlock = (union arch_qspinlock *)lock; > + u16 old; > + > + /* > + * Fall into the quick spinning code path only if no one is waiting > + * or the lock is available. > + */ > + if (unlikely((qsval != _QSPINLOCK_LOCKED) && &g...
2014 Mar 02
1
[PATCH v5 1/8] qspinlock: Introducing a 4-byte queue spinlock implementation
On 02/26, Waiman Long wrote: > > +void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval) > +{ > + unsigned int cpu_nr, qn_idx; > + struct qnode *node, *next; > + u32 prev_qcode, my_qcode; > + > + /* > + * Get the queue node > + */ > + cpu_nr = smp_processor_id(); > + node = get_qnode(&qn_idx); > + > + /* > + * It should never happen that...
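The slowpath excerpt stops right after the per-CPU queue node has been picked up. To make the queueing pattern concrete, here is a plain userspace MCS-style lock showing the same structure (exchange yourself in as the tail, then spin on your own node until your predecessor hands the lock over). It is a generic sketch with assumed names, not the kernel slowpath: there are no per-CPU node arrays, no encoded qcodes, and no quick path.

```c
/*
 * Generic MCS-style queue lock, sketched with C11 atomics.  The kernel
 * slowpath quoted above follows the same enqueue/hand-off pattern but uses
 * per-CPU qnodes and packs an encoded queue code into the 32-bit lock word.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct qnode {
    _Atomic(struct qnode *) next;
    atomic_bool locked;                 /* true while this waiter must spin */
};

struct mcs_lock {
    _Atomic(struct qnode *) tail;
};

static void mcs_acquire(struct mcs_lock *lock, struct qnode *node)
{
    atomic_store_explicit(&node->next, NULL, memory_order_relaxed);
    atomic_store_explicit(&node->locked, true, memory_order_relaxed);

    /* Swap ourselves in as the new tail; the old tail is our predecessor. */
    struct qnode *prev = atomic_exchange_explicit(&lock->tail, node,
                                                  memory_order_acq_rel);
    if (prev) {
        /* Queue was non-empty: link in and spin on our own flag. */
        atomic_store_explicit(&prev->next, node, memory_order_release);
        while (atomic_load_explicit(&node->locked, memory_order_acquire))
            ;
    }
    /* prev == NULL: the queue was empty and the lock is ours immediately. */
}

static void mcs_release(struct mcs_lock *lock, struct qnode *node)
{
    struct qnode *next = atomic_load_explicit(&node->next, memory_order_acquire);

    if (!next) {
        /* No visible successor: try to reset the tail to "empty". */
        struct qnode *expected = node;
        if (atomic_compare_exchange_strong_explicit(&lock->tail, &expected, NULL,
                                                    memory_order_acq_rel,
                                                    memory_order_acquire))
            return;
        /* A successor is mid-enqueue; wait until it links itself in. */
        while (!(next = atomic_load_explicit(&node->next, memory_order_acquire)))
            ;
    }
    atomic_store_explicit(&next->locked, false, memory_order_release);
}
```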
2014 Apr 02
0
[PATCH v8 01/10] qspinlock: A generic 4-byte queue spinlock implementation
....P. + * + * Authors: Waiman Long <waiman.long at hp.com> + */ +#ifndef __ASM_GENERIC_QSPINLOCK_H +#define __ASM_GENERIC_QSPINLOCK_H + +#include <asm-generic/qspinlock_types.h> + +/* + * External function declarations + */ +extern void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval); + +/** + * queue_spin_is_locked - is the spinlock locked? + * @lock: Pointer to queue spinlock structure + * Return: 1 if it is locked, 0 otherwise + */ +static __always_inline int queue_spin_is_locked(struct qspinlock *lock) +{ + return atomic_read(&lock->qlcode) & _QLOCK_LOCK_MASK; +...
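The header excerpt declares the out-of-line slowpath and the lock-bit test. A typical fast-path wrapper, which is not shown in the snippet and is assumed here purely for illustration, would try to grab an uncontended lock with a single compare-and-swap and call the slowpath only when that fails:

```c
/*
 * Userspace model of the fastpath/slowpath split implied by the extern
 * declaration above.  The mask value, the 0 -> 1 locked transition and the
 * queue_spin_lock() wrapper are assumptions for this sketch; the real header
 * is kernel code built on atomic_t.
 */
#include <stdatomic.h>

#define _QLOCK_LOCK_MASK 0xff            /* assumed value of the lock-byte mask */

struct qspinlock { atomic_uint qlcode; };

void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval);   /* out of line */

static inline int queue_spin_is_locked(struct qspinlock *lock)
{
    return atomic_load(&lock->qlcode) & _QLOCK_LOCK_MASK;
}

static inline void queue_spin_lock(struct qspinlock *lock)
{
    unsigned int expected = 0;

    /* Fast path: uncontended 0 -> locked transition, one atomic operation. */
    if (atomic_compare_exchange_strong(&lock->qlcode, &expected, 1))
        return;

    /* Contended: pass the value we observed down to the queueing slowpath. */
    queue_spin_lock_slowpath(lock, (int)expected);
}
```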
2014 Feb 26
0
[PATCH v5 3/8] qspinlock, x86: Add x86 specific optimization for 2 contending tasks
...ffff /* Mask for lock & wait bits */ + +#define queue_encode_qcode(cpu, idx) (((cpu) + 1) << 2 | (idx)) + +#define queue_spin_trylock_quick queue_spin_trylock_quick +/** + * queue_spin_trylock_quick - fast spinning on the queue spinlock + * @lock : Pointer to queue spinlock structure + * @qsval: Old queue spinlock value + * Return: 1 if lock acquired, 0 if failed + * + * This is an optimized contention path for 2 contending tasks. It + * should only be entered if no task is waiting in the queue. This + * optimized path is not as fair as the ticket spinlock, but it offers + * slightly bett...
2014 Feb 27
0
[PATCH v5 3/8] qspinlock, x86: Add x86 specific optimization for 2 contending tasks
...ffff /* Mask for lock & wait bits */ + +#define queue_encode_qcode(cpu, idx) (((cpu) + 1) << 2 | (idx)) + +#define queue_spin_trylock_quick queue_spin_trylock_quick +/** + * queue_spin_trylock_quick - fast spinning on the queue spinlock + * @lock : Pointer to queue spinlock structure + * @qsval: Old queue spinlock value + * Return: 1 if lock acquired, 0 if failed + * + * This is an optimized contention path for 2 contending tasks. It + * should only be entered if no task is waiting in the queue. This + * optimized path is not as fair as the ticket spinlock, but it offers + * slightly bett...
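The queue_encode_qcode() macro quoted in the two excerpts above packs a CPU number (offset by one so that zero can mean "no queue") and a per-CPU node index into a single queue code. A tiny standalone demonstration follows; the decode macros are the assumed inverse, added here only for illustration:

```c
#include <stdio.h>

/* Encode macro as quoted in the patch excerpt; the decode macros are an
 * assumed inverse for demonstration purposes only. */
#define queue_encode_qcode(cpu, idx)  ((((cpu) + 1) << 2) | (idx))
#define queue_decode_cpu(qcode)       (((qcode) >> 2) - 1)   /* assumed */
#define queue_decode_idx(qcode)       ((qcode) & 3)          /* assumed */

int main(void)
{
    unsigned int qcode = queue_encode_qcode(5, 2);

    /* Prints: qcode=26 cpu=5 idx=2 */
    printf("qcode=%u cpu=%u idx=%u\n", qcode,
           queue_decode_cpu(qcode), queue_decode_idx(qcode));
    return 0;
}
```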
2014 Feb 27
0
[PATCH v5 3/8] qspinlock, x86: Add x86 specific optimization for 2 contending tasks
...re hard to read. I don't have a state diagram on hand, but I will add more comments to describe the 4 possible cases and how to handle them. > > On Wed, Feb 26, 2014 at 10:14:23AM -0500, Waiman Long wrote: >> +static inline int queue_spin_trylock_quick(struct qspinlock *lock, int qsval) >> +{ >> + union arch_qspinlock *qlock = (union arch_qspinlock *)lock; >> + u16 old; >> + >> + /* >> + * Fall into the quick spinning code path only if no one is waiting >> + * or the lock is available. >> + */ >> + if (unlikely((qsval...
2014 Apr 01
10
[PATCH v8 00/10] qspinlock: a 4-byte queue spinlock with PV support
v7->v8: - Remove one unneeded atomic operation from the slowpath, thus improving performance. - Simplify some of the code and add more comments. - Test for the X86_FEATURE_HYPERVISOR CPU feature bit to enable/disable the unfair lock. - Reduce the unfair lock slowpath's lock-stealing frequency depending on its distance from the queue head. - Add performance data for the IvyBridge-EX CPU.
2014 Apr 02
17
[PATCH v8 00/10] qspinlock: a 4-byte queue spinlock with PV support
N.B. Sorry for the duplicate. This patch series was resent because the original was rejected by the vger.kernel.org list server due to an overly long header. There is no change in content. v7->v8: - Remove one unneeded atomic operation from the slowpath, thus improving performance. - Simplify some of the code and add more comments. - Test for the X86_FEATURE_HYPERVISOR CPU feature bit
2014 Mar 19
15
[PATCH v7 00/11] qspinlock: a 4-byte queue spinlock with PV support
v6->v7: - Remove an atomic operation from the 2-task contending code - Shorten the names of some macros - Make the queue waiter attempt to steal the lock when the unfair lock is enabled. - Remove the lock-holder kick from the PV code and fix a race condition - Run the unfair lock & PV code on overcommitted KVM guests to collect performance data. v5->v6: - Change the optimized
2014 Feb 27
14
[PATCH v5 0/8] qspinlock: a 4-byte queue spinlock with PV support
v4->v5: - Move the optimized 2-task contending code to the generic file to enable more architectures to use it without code duplication. - Address some of the style-related comments by PeterZ. - Allow the use of an unfair queue spinlock in a real para-virtualized execution environment. - Add para-virtualization support to the qspinlock code by ensuring that the lock holder and queue
2014 Feb 26
0
[PATCH v5 1/8] qspinlock: Introducing a 4-byte queue spinlock implementation
....P. + * + * Authors: Waiman Long <waiman.long at hp.com> + */ +#ifndef __ASM_GENERIC_QSPINLOCK_H +#define __ASM_GENERIC_QSPINLOCK_H + +#include <asm-generic/qspinlock_types.h> + +/* + * External function declarations + */ +extern void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval); + +/** + * queue_spin_is_locked - is the spinlock locked? + * @lock: Pointer to queue spinlock structure + * Return: 1 if it is locked, 0 otherwise + */ +static __always_inline int queue_spin_is_locked(struct qspinlock *lock) +{ + return atomic_read(&lock->qlcode) & _QSPINLOCK_LOCKED;...
2014 Feb 27
0
[PATCH v5 1/8] qspinlock: Introducing a 4-byte queue spinlock implementation
....P. + * + * Authors: Waiman Long <waiman.long at hp.com> + */ +#ifndef __ASM_GENERIC_QSPINLOCK_H +#define __ASM_GENERIC_QSPINLOCK_H + +#include <asm-generic/qspinlock_types.h> + +/* + * External function declarations + */ +extern void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval); + +/** + * queue_spin_is_locked - is the spinlock locked? + * @lock: Pointer to queue spinlock structure + * Return: 1 if it is locked, 0 otherwise + */ +static __always_inline int queue_spin_is_locked(struct qspinlock *lock) +{ + return atomic_read(&lock->qlcode) & _QSPINLOCK_LOCKED;...
2014 Feb 26
22
[PATCH v5 0/8] qspinlock: a 4-byte queue spinlock with PV support
v4->v5: - Move the optimized 2-task contending code to the generic file to enable more architectures to use it without code duplication. - Address some of the style-related comments by PeterZ. - Allow the use of an unfair queue spinlock in a real para-virtualized execution environment. - Add para-virtualization support to the qspinlock code by ensuring that the lock holder and queue