Search results for "_qspinlock_waiting"

Displaying 20 results from an estimated 21 matches for "_qspinlock_waiting".

2014 Feb 26
2
[PATCH v5 3/8] qspinlock, x86: Add x86 specific optimization for 2 contending tasks
...n arch_qspinlock *qlock = (union arch_qspinlock *)lock;
> +	u16 old;
> +
> +	/*
> +	 * Fall into the quick spinning code path only if no one is waiting
> +	 * or the lock is available.
> +	 */
> +	if (unlikely((qsval != _QSPINLOCK_LOCKED) &&
> +		     (qsval != _QSPINLOCK_WAITING)))
> +		return 0;
> +
> +	old = xchg(&qlock->lock_wait, _QSPINLOCK_WAITING|_QSPINLOCK_LOCKED);
> +
> +	if (old == 0) {
> +		/*
> +		 * Got the lock, can clear the waiting bit now
> +		 */
> +		smp_u8_store_release(&qlock->wait, 0);

So we just did an atomic...
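The hunk above is the heart of the 2-task quick path: a single xchg() publishes both the lock and waiting bits at once, and the old value tells the caller which role it won. Below is a minimal user-space sketch of that protocol using C11 atomics in place of the kernel's xchg()/smp_u8_store_release(); the qlock type, the QLOCK_* constant values, and the strict two-contender restriction are illustrative assumptions, not the patch's actual code.

#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

#define QLOCK_LOCKED  0x001U   /* lock bit in byte 0 (assumed value) */
#define QLOCK_WAITING 0x100U   /* waiting bit in byte 1, as in the patch */

struct qlock {
	_Atomic uint16_t lock_wait;  /* low byte: lock, high byte: waiting */
};

/* Quick path for the "lock held, nobody waiting yet" case.
 * Returns 1 once the lock is owned, 0 if the caller must fall back. */
static int qlock_trylock_quick(struct qlock *l, uint16_t qsval)
{
	uint16_t old;

	/* Only enter when exactly one other task holds the lock. */
	if (qsval != QLOCK_LOCKED)
		return 0;

	/* One swap sets both bits; the old value reveals the race winner. */
	old = atomic_exchange(&l->lock_wait, QLOCK_WAITING | QLOCK_LOCKED);
	if (old == 0) {
		/* Holder released just before the swap: the lock is ours,
		 * so drop the waiting bit we set spuriously. */
		atomic_store_explicit(&l->lock_wait, QLOCK_LOCKED,
				      memory_order_release);
		return 1;
	}

	/* Still held: we are the single waiter; spin on the lock bit. */
	while (atomic_load_explicit(&l->lock_wait,
				    memory_order_acquire) & QLOCK_LOCKED)
		;

	/* Take over.  An atomic swap rather than a plain store mirrors
	 * the workaround discussed in the v6 thread further down. */
	old = atomic_exchange(&l->lock_wait, QLOCK_LOCKED);
	assert(old == QLOCK_WAITING);
	return 1;
}

static void qlock_unlock(struct qlock *l)
{
	/* Clear only the lock bit; the waiting bit (if any) survives. */
	atomic_fetch_and_explicit(&l->lock_wait, (uint16_t)~QLOCK_LOCKED,
				  memory_order_release);
}

Under the quick path's premise that at most two tasks ever reach it, the swap can only observe 0 or QLOCK_LOCKED, which is what keeps the final takeover step safe in this sketch.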
2014 Feb 26
0
[PATCH v5 3/8] qspinlock, x86: Add x86 specific optimization for 2 contending tasks
...ing bit
+ *    for waiting lock acquirer so that it won't need to go through the
+ *    MCS style locking queuing which has a higher overhead.
+ * 2) The 16-bit queue code can be accessed or modified directly as a
+ *    16-bit short value without disturbing the first 2 bytes.
+ */
+#define _QSPINLOCK_WAITING	0x100U	/* Waiting bit in 2nd byte */
+#define _QSPINLOCK_LWMASK	0xffff	/* Mask for lock & wait bits */
+
+#define queue_encode_qcode(cpu, idx)	(((cpu) + 1) << 2 | (idx))
+
+#define queue_spin_trylock_quick queue_spin_trylock_quick
+/**
+ * queue_spin_trylock_quick - fast spinning on the...
2014 Feb 27
0
[PATCH v5 3/8] qspinlock, x86: Add x86 specific optimization for 2 contending tasks
...ing bit
+ *    for waiting lock acquirer so that it won't need to go through the
+ *    MCS style locking queuing which has a higher overhead.
+ * 2) The 16-bit queue code can be accessed or modified directly as a
+ *    16-bit short value without disturbing the first 2 bytes.
+ */
+#define _QSPINLOCK_WAITING	0x100U	/* Waiting bit in 2nd byte */
+#define _QSPINLOCK_LWMASK	0xffff	/* Mask for lock & wait bits */
+
+#define queue_encode_qcode(cpu, idx)	(((cpu) + 1) << 2 | (idx))
+
+#define queue_spin_trylock_quick queue_spin_trylock_quick
+/**
+ * queue_spin_trylock_quick - fast spinning on the...
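The Feb 26 and Feb 27 copies of this hunk define the same 32-bit lock-word layout: lock bit in byte 0, waiting bit in byte 1, and a 16-bit queue code in the upper half. The sketch below makes that layout concrete; the union and the names here are assumptions modeled on, not copied from, arch_qspinlock, and little-endian byte order is assumed as on x86.

#include <stdint.h>
#include <stdio.h>

#define QLOCK_LOCKED   0x001U
#define QLOCK_WAITING  0x100U   /* waiting bit in 2nd byte */
#define QLOCK_LWMASK   0xffffU  /* mask for lock & wait bits */

/* (cpu, idx) -> queue code, as in queue_encode_qcode() above */
#define ENCODE_QCODE(cpu, idx) ((((cpu) + 1) << 2) | (idx))

union qlock_word {
	uint32_t qlcode;        /* whole 32-bit lock word */
	struct {
		uint8_t  lock;  /* byte 0: lock bit */
		uint8_t  wait;  /* byte 1: waiting bit */
		uint16_t qcode; /* bytes 2-3: MCS queue code */
	};                      /* little-endian layout assumed */
};

int main(void)
{
	union qlock_word w = { .qlcode = 0 };

	w.lock  = 1;                   /* holder takes the lock */
	w.qcode = ENCODE_QCODE(5, 1);  /* cpu 5, per-cpu node index 1 */

	/* The low 16 bits can be tested in one go with the mask,
	 * without disturbing the queue code in the upper half. */
	printf("lock+wait bits: 0x%04x\n",
	       (unsigned)(w.qlcode & QLOCK_LWMASK));
	printf("qcode: 0x%04x\n", (unsigned)(w.qlcode >> 16));
	return 0;
}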
2014 Feb 27
0
[PATCH v5 3/8] qspinlock, x86: Add x86 specific optimization for 2 contending tasks
...n arch_qspinlock *)lock;
>> +	u16 old;
>> +
>> +	/*
>> +	 * Fall into the quick spinning code path only if no one is waiting
>> +	 * or the lock is available.
>> +	 */
>> +	if (unlikely((qsval != _QSPINLOCK_LOCKED) &&
>> +		     (qsval != _QSPINLOCK_WAITING)))
>> +		return 0;
>> +
>> +	old = xchg(&qlock->lock_wait, _QSPINLOCK_WAITING|_QSPINLOCK_LOCKED);
>> +
>> +	if (old == 0) {
>> +		/*
>> +		 * Got the lock, can clear the waiting bit now
>> +		 */
>> +		smp_u8_store_release(&qlock->...
2014 Mar 12
0
[PATCH v6 04/11] qspinlock: Optimized code path for 2 contending tasks
...t 2 bytes.
+ * 2) The 2nd byte of the 32-bit lock word can be used as a pending bit
+ *    for waiting lock acquirer so that it won't need to go through the
+ *    MCS style locking queuing which has a higher overhead.
  */
+#define _QSPINLOCK_WAIT_SHIFT	8	/* Waiting bit position */
+#define _QSPINLOCK_WAITING	(1 << _QSPINLOCK_WAIT_SHIFT)
+/* Masks for lock & wait bits */
+#define _QSPINLOCK_LWMASK	(_QSPINLOCK_WAITING | _QSPINLOCK_LOCKED)
+
 #define queue_encode_qcode(cpu, idx)	(((cpu) + 1) << 2 | (idx))
+#define queue_get_qcode(lock)	(atomic_read(&(lock)->qlcode) >> _QCODE...
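The v6 revision derives the waiting bit from a shift and narrows _QSPINLOCK_LWMASK from the v5 value of 0xffff down to just the two flag bits. A tiny compile-time check of how those definitions compose, with _QSPINLOCK_LOCKED assumed to be 0x1:

#include <assert.h>

#define _QSPINLOCK_LOCKED      0x1U                      /* assumed */
#define _QSPINLOCK_WAIT_SHIFT  8
#define _QSPINLOCK_WAITING     (1U << _QSPINLOCK_WAIT_SHIFT)
#define _QSPINLOCK_LWMASK      (_QSPINLOCK_WAITING | _QSPINLOCK_LOCKED)

int main(void)
{
	/* Unlike the v5 0xffff mask, the v6 mask covers only the two
	 * flag bits, leaving the rest of the word to the queue code. */
	assert(_QSPINLOCK_WAITING == 0x100U);
	assert(_QSPINLOCK_LWMASK  == 0x101U);
	return 0;
}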
2014 Mar 12
2
[PATCH v6 04/11] qspinlock: Optimized code path for 2 contending tasks
...E(qlock->lock_wait) = _QSPINLOCK_LOCKED;
> +	 *
> +	 * It is not currently clear why this happens. A workaround
> +	 * is to use atomic instruction to store the new value.
> +	 */
> +	{
> +		u16 lw = xchg(&qlock->lock_wait, _QSPINLOCK_LOCKED);
> +		BUG_ON(lw != _QSPINLOCK_WAITING);
> +	}
> +	return 1;
>

It was found that when I used a direct memory store instead of an atomic op, the following kernel crash might happen at filesystem dismount time:

Red Hat Enterprise Linux Server 7.0 (Maipo)
Kernel 3.14.0-rc6-qlock on an x86_64

h11-kvm20 login: [ 1529.934047] B...
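The workaround quoted here, and the crash report that motivated it, boil down to replacing a plain halfword store with an atomic swap when the waiter takes over the lock. A stand-alone rendering with C11 atomics standing in for the kernel's xchg()/BUG_ON(); the qlock type and the constant values are illustrative:

#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

#define QLOCK_LOCKED  0x001U
#define QLOCK_WAITING 0x100U

struct qlock { _Atomic uint16_t lock_wait; };

static void waiter_takes_lock(struct qlock *l)
{
	/* The problematic version was roughly
	 *     ACCESS_ONCE(qlock->lock_wait) = _QSPINLOCK_LOCKED;
	 * i.e. a plain store of the whole halfword.  Swapping atomically
	 * both publishes the new value and lets the waiter verify that
	 * the old value was exactly the waiting bit. */
	uint16_t lw = atomic_exchange(&l->lock_wait, QLOCK_LOCKED);
	assert(lw == QLOCK_WAITING);   /* mirrors the BUG_ON() above */
}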
2014 Feb 26
22
[PATCH v5 0/8] qspinlock: a 4-byte queue spinlock with PV support
v4->v5:
 - Move the optimized 2-task contending code to the generic file to
   enable more architectures to use it without code duplication.
 - Address some of the style-related comments by PeterZ.
 - Allow the use of unfair queue spinlock in a real para-virtualized
   execution environment.
 - Add para-virtualization support to the qspinlock code by ensuring
   that the lock holder and queue
2014 Feb 27
14
[PATCH v5 0/8] qspinlock: a 4-byte queue spinlock with PV support
v4->v5:
 - Move the optimized 2-task contending code to the generic file to
   enable more architectures to use it without code duplication.
 - Address some of the style-related comments by PeterZ.
 - Allow the use of unfair queue spinlock in a real para-virtualized
   execution environment.
 - Add para-virtualization support to the qspinlock code by ensuring
   that the lock holder and queue
2014 Feb 27
0
[PATCH RFC v5 7/8] pvqspinlock, x86: Add qspinlock para-virtualization support
...> But this may result in some ping ponging?

Actually, I think the qspinlock can work roughly the same as the pvticketlock, using the same lock_spinning and unlock_lock hooks.

The x86-specific codepath can use bit 1 in the ->wait byte as "I have halted, please kick me".

	value = _QSPINLOCK_WAITING;
	i = 0;
	do
		cpu_relax();
	while (ACCESS_ONCE(slock->lock) && i++ < BUSY_WAIT);
	if (ACCESS_ONCE(slock->lock)) {
		value |= _QSPINLOCK_HALTED;
		xchg(&slock->wait, value >> 8);
		if (ACCESS_ONCE(slock->lock)) {
			... call lock_spinning hook ...
		}
	}

	/*
	 * Se...
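The fragment above is pseudo-code; below is a compilable rendering of the same halt protocol. _QSPINLOCK_HALTED, BUSY_WAIT, and the lock_spinning() hook come from the message itself, while the slock layout, the constant values, and the C11 atomics are assumptions for illustration. The key ordering point survives intact: set the halted flag with an atomic op, then re-check the lock before sleeping so a racing unlock cannot strand the waiter.

#include <stdatomic.h>
#include <stdint.h>

#define QSPINLOCK_WAITING 0x100U
#define QSPINLOCK_HALTED  0x200U   /* "I have halted, please kick me" */
#define BUSY_WAIT         4096     /* spin budget before halting */

struct slock {
	_Atomic uint8_t lock;  /* byte 0: lock bit */
	_Atomic uint8_t wait;  /* byte 1: waiting/halted flags (>> 8) */
};

/* Placeholder for the paravirt hook that puts the vCPU to sleep. */
static void lock_spinning(struct slock *l) { (void)l; }

static void wait_for_lock(struct slock *l)
{
	uint16_t value = QSPINLOCK_WAITING;
	int i = 0;

	/* Spin for a bounded number of iterations first. */
	while (atomic_load(&l->lock) && i++ < BUSY_WAIT)
		;

	if (atomic_load(&l->lock)) {
		/* Still locked: advertise that we are about to halt ... */
		value |= QSPINLOCK_HALTED;
		atomic_exchange(&l->wait, (uint8_t)(value >> 8));
		/* ... and re-check before actually sleeping, so an unlock
		 * that raced with the flag cannot leave us asleep. */
		if (atomic_load(&l->lock))
			lock_spinning(l);
	}
}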
2014 Mar 13
0
[PATCH v6 04/11] qspinlock: Optimized code path for 2 contending tasks
...PINLOCK_LOCKED;
> >+	 *
> >+	 * It is not currently clear why this happens. A workaround
> >+	 * is to use atomic instruction to store the new value.
> >+	 */
> >+	{
> >+		u16 lw = xchg(&qlock->lock_wait, _QSPINLOCK_LOCKED);
> >+		BUG_ON(lw != _QSPINLOCK_WAITING);
> >+	}
>
> It was found that when I used a direct memory store instead of an atomic op,
> the following kernel crash might happen at filesystem dismount time:
>
> [ 1529.936714] Call Trace:
> [ 1529.936714]  [<ffffffff811c2d03>] d_walk+0xc3/0x260
> [ 1529.936714]  [...
2014 Feb 28
0
[PATCH v5 3/8] qspinlock, x86: Add x86 specific optimization for 2 contending tasks
On 02/28/2014 04:29 AM, Peter Zijlstra wrote:
> On Thu, Feb 27, 2014 at 03:42:19PM -0500, Waiman Long wrote:
>>>> +	old = xchg(&qlock->lock_wait, _QSPINLOCK_WAITING|_QSPINLOCK_LOCKED);
>>>> +
>>>> +	if (old == 0) {
>>>> +		/*
>>>> +		 * Got the lock, can clear the waiting bit now
>>>> +		 */
>>>> +		smp_u8_store_release(&qlock->wait, 0);
>>> So we just did an atomic op, and...
2014 Feb 28
5
[PATCH v5 3/8] qspinlock, x86: Add x86 specific optimization for 2 contending tasks
On Thu, Feb 27, 2014 at 03:42:19PM -0500, Waiman Long wrote:
> >>+	old = xchg(&qlock->lock_wait, _QSPINLOCK_WAITING|_QSPINLOCK_LOCKED);
> >>+
> >>+	if (old == 0) {
> >>+		/*
> >>+		 * Got the lock, can clear the waiting bit now
> >>+		 */
> >>+		smp_u8_store_release(&qlock->wait, 0);
> >
> >So we just did an atomic op, and now you're tr...
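The objection in this reply is that the code performs an atomic xchg() and then immediately a second store just to clear the waiting bit. One way to avoid the double write in the lock-was-free case, sketched here as an assumption rather than the patch's eventual fix, is a compare-and-swap that moves the word straight to its final state:

#include <stdatomic.h>
#include <stdint.h>

#define QLOCK_LOCKED  0x001U
#define QLOCK_WAITING 0x100U

struct qlock { _Atomic uint16_t lock_wait; };

static int trylock_free_case(struct qlock *l)
{
	uint16_t expected = 0;

	/* 0 -> LOCKED in one shot: no transient WAITING|LOCKED state and
	 * no follow-up store, at the cost of failing (and falling back
	 * to the slower path) whenever the word is not exactly 0. */
	return atomic_compare_exchange_strong_explicit(
			&l->lock_wait, &expected, QLOCK_LOCKED,
			memory_order_acquire, memory_order_relaxed);
}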
2014 Feb 28
5
[PATCH v5 3/8] qspinlock, x86: Add x86 specific optimization for 2 contending tasks
On Thu, Feb 27, 2014 at 03:42:19PM -0500, Waiman Long wrote: > >>+ old = xchg(&qlock->lock_wait, _QSPINLOCK_WAITING|_QSPINLOCK_LOCKED); > >>+ > >>+ if (old == 0) { > >>+ /* > >>+ * Got the lock, can clear the waiting bit now > >>+ */ > >>+ smp_u8_store_release(&qlock->wait, 0); > > > >So we just did an atomic op, and now you're tr...
2014 Feb 27
3
[PATCH RFC v5 7/8] pvqspinlock, x86: Add qspinlock para-virtualization support
On 02/27/2014 08:15 PM, Paolo Bonzini wrote:
[...]
>> But neither of the VCPUs being kicked here are halted -- they're either
>> running or runnable (descheduled by the hypervisor).
>
> /me actually looks at Waiman's code...
>
> Right, this is really different from pvticketlocks, where the *unlock*
> primitive wakes up a sleeping VCPU. It is more similar to PLE
2014 Feb 27
3
[PATCH RFC v5 7/8] pvqspinlock, x86: Add qspinlock para-virtualization support
On 02/27/2014 08:15 PM, Paolo Bonzini wrote: [...] >> But neither of the VCPUs being kicked here are halted -- they're either >> running or runnable (descheduled by the hypervisor). > > /me actually looks at Waiman's code... > > Right, this is really different from pvticketlocks, where the *unlock* > primitive wakes up a sleeping VCPU. It is more similar to PLE
2014 Mar 12
17
[PATCH v6 00/11] qspinlock: a 4-byte queue spinlock with PV support
v5->v6:
 - Change the optimized 2-task contending code to make it fairer at the
   expense of a bit of performance.
 - Add a patch to support unfair queue spinlock for Xen.
 - Modify the PV qspinlock code to follow what was done in the PV
   ticketlock.
 - Add performance data for the unfair lock as well as the PV support
   code.

v4->v5:
 - Move the optimized 2-task contending code to the