Peter Zijlstra
2014-May-08 19:06 UTC
[PATCH v10 09/19] qspinlock: Prepare for unfair lock support
On Wed, May 07, 2014 at 11:01:37AM -0400, Waiman Long wrote:
> If unfair lock is supported, the lock acquisition loop at the end of
> the queue_spin_lock_slowpath() function may need to detect the fact
> the lock can be stolen. Code are added for the stolen lock detection.
>
> A new qhead macro is also defined as a shorthand for mcs.locked.

NAK, unfair should be a pure test-and-set lock.

>  /**
>   * get_qlock - Set the lock bit and own the lock
> - * @lock: Pointer to queue spinlock structure
> + * @lock : Pointer to queue spinlock structure
> + * Return: 1 if lock acquired, 0 otherwise
>   *
>   * This routine should only be called when the caller is the only one
>   * entitled to acquire the lock.
>   */
> -static __always_inline void get_qlock(struct qspinlock *lock)
> +static __always_inline int get_qlock(struct qspinlock *lock)
>  {
>  	struct __qspinlock *l = (void *)lock;
>
>  	barrier();
>  	ACCESS_ONCE(l->locked) = _Q_LOCKED_VAL;
>  	barrier();
> +	return 1;
>  }

and here you make a horribly named function more horrible;
try_set_locked() is what it is now.
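For readers outside the kernel tree, the function under discussion can be sketched in portable C11. This is a hedged analogue, not the kernel code: `struct qspinlock`'s layout here and the `_Q_LOCKED_VAL` definition are illustrative stand-ins, and C11 atomics replace the kernel's `barrier()`/`ACCESS_ONCE()` idiom. The name follows the review's suggested `try_set_locked()`.

```c
#include <stdatomic.h>

#define _Q_LOCKED_VAL 1

/* Illustrative stand-in for the kernel's lock word layout. */
struct qspinlock {
	atomic_uchar locked;	/* the lock byte */
};

/*
 * Only called when the caller is already entitled to the lock (it is
 * the queue head), so a plain store of the locked byte suffices.  The
 * kernel version brackets the store with barrier() to keep the
 * compiler from reordering around it; a release store plays a similar
 * role here.
 */
static inline int try_set_locked(struct qspinlock *lock)
{
	atomic_store_explicit(&lock->locked, _Q_LOCKED_VAL,
			      memory_order_release);
	return 1;	/* always succeeds in the fair configuration */
}
```

The `int` return (rather than `void`) is the hook the patch adds so an unfair variant can later report failure when the lock was stolen.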
Waiman Long
2014-May-10 01:19 UTC
[PATCH v10 09/19] qspinlock: Prepare for unfair lock support
On 05/08/2014 03:06 PM, Peter Zijlstra wrote:
> On Wed, May 07, 2014 at 11:01:37AM -0400, Waiman Long wrote:
>> If unfair lock is supported, the lock acquisition loop at the end of
>> the queue_spin_lock_slowpath() function may need to detect the fact
>> the lock can be stolen. Code are added for the stolen lock detection.
>>
>> A new qhead macro is also defined as a shorthand for mcs.locked.
> NAK, unfair should be a pure test-and-set lock.

I have performance data showing that a simple test-and-set lock does not
scale well. That is the primary reason for ditching the test-and-set lock
and using a more complicated scheme which scales better. Also, it will be
hard to make the unfair test-and-set lock code coexist nicely with the PV
spinlock code.

>>  /**
>>   * get_qlock - Set the lock bit and own the lock
>> - * @lock: Pointer to queue spinlock structure
>> + * @lock : Pointer to queue spinlock structure
>> + * Return: 1 if lock acquired, 0 otherwise
>>   *
>>   * This routine should only be called when the caller is the only one
>>   * entitled to acquire the lock.
>>   */
>> -static __always_inline void get_qlock(struct qspinlock *lock)
>> +static __always_inline int get_qlock(struct qspinlock *lock)
>>  {
>>  	struct __qspinlock *l = (void *)lock;
>>
>>  	barrier();
>>  	ACCESS_ONCE(l->locked) = _Q_LOCKED_VAL;
>>  	barrier();
>> +	return 1;
>>  }
> and here you make a horribly named function more horrible;
> try_set_locked() is what it is now.

Will do.

-Longman
Peter Zijlstra
2014-May-10 14:13 UTC
[PATCH v10 09/19] qspinlock: Prepare for unfair lock support
On Fri, May 09, 2014 at 09:19:32PM -0400, Waiman Long wrote:
> On 05/08/2014 03:06 PM, Peter Zijlstra wrote:
> > On Wed, May 07, 2014 at 11:01:37AM -0400, Waiman Long wrote:
> >> If unfair lock is supported, the lock acquisition loop at the end of
> >> the queue_spin_lock_slowpath() function may need to detect the fact
> >> the lock can be stolen. Code are added for the stolen lock detection.
> >>
> >> A new qhead macro is also defined as a shorthand for mcs.locked.
> > NAK, unfair should be a pure test-and-set lock.
>
> I have performance data showing that a simple test-and-set lock does not
> scale well. That is the primary reason for ditching the test-and-set
> lock and using a more complicated scheme which scales better.

Nobody should give a fuck about scalability in this case anyway.

Also, as I explained/asked earlier:

  lkml.kernel.org/r/20140314083001.GN27965 at twins.programming.kicks-ass.net

Lock holder preemption is _way_ worse with any kind of queueing. You've
not explained how the simple 3 cpu example in that email gets better
performance than a test-and-set lock.

> Also, it will be hard to make the unfair test-and-set lock code coexist
> nicely with the PV spinlock code.

That's just complete crap as the test-and-set lock is like 3 lines of code.
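For context, the "3 lines of code" pure test-and-set lock Peter is asking for looks roughly like this. This is a hedged sketch in portable C11, not the kernel's implementation: `tas_lock` and the function names are illustrative, and C11 `atomic_flag` stands in for the kernel's atomic primitives.

```c
#include <stdatomic.h>

typedef struct {
	atomic_flag locked;
} tas_lock;

static inline void tas_lock_acquire(tas_lock *l)
{
	/* Spin until we atomically flip the flag from clear to set. */
	while (atomic_flag_test_and_set_explicit(&l->locked,
						 memory_order_acquire))
		;	/* a real version would cpu_relax() here */
}

static inline void tas_lock_release(tas_lock *l)
{
	atomic_flag_clear_explicit(&l->locked, memory_order_release);
}
```

The unfairness Peter wants for virtualized guests falls out naturally: there is no queue, so whichever vCPU happens to be running when the flag clears can take the lock, and a preempted waiter blocks nobody behind it.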