search for: get_qlock

Displaying 13 results from an estimated 26 matches for "get_qlock".

2014 May 08
1
[PATCH v10 07/19] qspinlock: Use a simple write to grab the lock, if applicable
On Wed, May 07, 2014 at 11:01:35AM -0400, Waiman Long wrote: > /** > + * get_qlock - Set the lock bit and own the lock > + * @lock: Pointer to queue spinlock structure > + * > + * This routine should only be called when the caller is the only one > + * entitled to acquire the lock. > + */ > +static __always_inline void get_qlock(struct qspinlock *lock) set_lock...
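The excerpt cuts off before the function body. Pieced together from the other excerpts on this page (which show the body opening with struct __qspinlock *l = (void *)lock;), the point of the patch is that the body reduces to a plain byte store. A minimal sketch, assuming the _Q_PENDING_BITS == 8 layout where the lock byte is individually addressable:

static __always_inline void get_qlock(struct qspinlock *lock)
{
        struct __qspinlock *l = (void *)lock;

        barrier();
        ACCESS_ONCE(l->locked) = _Q_LOCKED_VAL; /* plain store, no atomic RMW */
        barrier();
}

The barriers and the l->locked field are inferred from the v9/v10 excerpts below; the store can be non-atomic because, per the kerneldoc above, only the sole entitled caller (the queue head) ever executes it.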
2014 May 07
0
[PATCH v10 09/19] qspinlock: Prepare for unfair lock support
...king/qspinlock.c @@ -64,6 +64,7 @@ struct qnode { struct mcs_spinlock mcs; }; +#define qhead mcs.locked /* The queue head flag */ /* * Per-CPU queue node structures; we can never have more than 4 nested @@ -216,18 +217,20 @@ xchg_tail(struct qspinlock *lock, u32 tail, u32 *pval) /** * get_qlock - Set the lock bit and own the lock - * @lock: Pointer to queue spinlock structure + * @lock : Pointer to queue spinlock structure + * Return: 1 if lock acquired, 0 otherwise * * This routine should only be called when the caller is the only one * entitled to acquire the lock. */ -static __...
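The diff is truncated before the new body. Given the added kerneldoc line "Return: 1 if lock acquired, 0 otherwise", a sketch of the int-returning variant in the fair-only configuration, where the plain store cannot fail:

static __always_inline int get_qlock(struct qspinlock *lock)
{
        struct __qspinlock *l = (void *)lock;

        barrier();
        ACCESS_ONCE(l->locked) = _Q_LOCKED_VAL;
        barrier();
        return 1;       /* fair path: the queue head always succeeds */
}

The 0 return only becomes reachable once lock stealing is enabled, which patch 10 of the series (excerpted further down) adds behind a static key.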
2014 May 08
2
[PATCH v10 09/19] qspinlock: Prepare for unfair lock support
...end of > the queue_spin_lock_slowpath() function may need to detect the fact > that the lock can be stolen. Code is added for stolen lock detection. > > A new qhead macro is also defined as a shorthand for mcs.locked. NAK, unfair should be a pure test-and-set lock. > /** > * get_qlock - Set the lock bit and own the lock > - * @lock: Pointer to queue spinlock structure > + * @lock : Pointer to queue spinlock structure > + * Return: 1 if lock acquired, 0 otherwise > * > * This routine should only be called when the caller is the only one > * entitled to acq...
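For contrast, the "pure test-and-set lock" being asked for is the classic test-and-test-and-set spinlock. A sketch with hypothetical names, not code from the series:

static inline void tas_spin_lock(struct qspinlock *lock)
{
        for (;;) {
                /* try to acquire with a single atomic exchange */
                if (atomic_xchg(&lock->val, _Q_LOCKED_VAL) == 0)
                        return;
                /* spin read-only until the lock looks free */
                while (atomic_read(&lock->val))
                        cpu_relax();
        }
}

static inline void tas_spin_unlock(struct qspinlock *lock)
{
        smp_store_release(&lock->val.counter, 0);
}

It is unfair because any spinner may grab the lock the instant it is released, regardless of arrival order, which is exactly the property wanted for a preemptible virtual guest.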
2014 Apr 17
0
[PATCH v9 07/19] qspinlock: Use a simple write to grab the lock, if applicable
...; + }; +#endif }; }; +#if _Q_PENDING_BITS == 8 /** * clear_pending_set_locked - take ownership and clear the pending bit. * @lock: Pointer to queue spinlock structure @@ -204,6 +210,22 @@ xchg_tail(struct qspinlock *lock, u32 tail, u32 *pval) #endif /* _Q_PENDING_BITS == 8 */ /** + * get_qlock - Set the lock bit and own the lock + * @lock: Pointer to queue spinlock structure + * + * This routine should only be called when the caller is the only one + * entitled to acquire the lock. + */ +static __always_inline void get_qlock(struct qspinlock *lock) +{ + struct __qspinlock *l = (void *)lo...
2014 May 07
0
[PATCH v10 07/19] qspinlock: Use a simple write to grab the lock, if applicable
...; + }; +#endif }; }; +#if _Q_PENDING_BITS == 8 /** * clear_pending_set_locked - take ownership and clear the pending bit. * @lock: Pointer to queue spinlock structure @@ -200,6 +206,22 @@ xchg_tail(struct qspinlock *lock, u32 tail, u32 *pval) #endif /* _Q_PENDING_BITS == 8 */ /** + * get_qlock - Set the lock bit and own the lock + * @lock: Pointer to queue spinlock structure + * + * This routine should only be called when the caller is the only one + * entitled to acquire the lock. + */ +static __always_inline void get_qlock(struct qspinlock *lock) +{ + struct __qspinlock *l = (void *)lo...
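Both the v9 and v10 excerpts above hinge on the struct __qspinlock overlay, which names the bytes of the 32-bit lock word so they can be addressed individually. A sketch of the little-endian half of the layout as the excerpts imply it (the big-endian branch would mirror the fields in reverse):

struct __qspinlock {
        union {
                atomic_t val;                   /* the whole 32-bit lock word */
#ifdef __LITTLE_ENDIAN
                struct {
                        u8      locked;         /* byte 0: lock holder flag */
                        u8      pending;        /* byte 1: pending-waiter bit */
                };
                struct {
                        u16     locked_pending; /* bytes 0-1 as one halfword */
                        u16     tail;           /* bytes 2-3: MCS queue tail */
                };
#endif
        };
};

With _Q_PENDING_BITS == 8, get_qlock() can store straight to l->locked, and clear_pending_set_locked() to l->locked_pending, with no read-modify-write cycle.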
2014 May 10
0
[PATCH v10 09/19] qspinlock: Prepare for unfair lock support
...imple test-and-set lock does not scale well. That is the primary reason for ditching the test-and-set lock and using a more complicated scheme that scales better. Also, it will be hard to make the unfair test-and-set lock code coexist nicely with PV spinlock code. >> /** >> * get_qlock - Set the lock bit and own the lock >> - * @lock: Pointer to queue spinlock structure >> + * @lock : Pointer to queue spinlock structure >> + * Return: 1 if lock acquired, 0 otherwise >> * >> * This routine should only be called when the caller is the only one ...
2014 May 08
1
[PATCH v10 10/19] qspinlock, x86: Allow unfair spinlock in a virtual guest
...ote: No, we want the unfair thing for VIRT, not PARAVIRT. > diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c > index 9e7659e..10e87e1 100644 > --- a/kernel/locking/qspinlock.c > +++ b/kernel/locking/qspinlock.c > @@ -227,6 +227,14 @@ static __always_inline int get_qlock(struct qspinlock *lock) > { > struct __qspinlock *l = (void *)lock; > > +#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS > + if (static_key_false(&paravirt_unfairlocks_enabled)) > + /* > + * Need to use atomic operation to get the lock when > + * lock stealing can happen....
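The excerpt stops inside the new #ifdef branch. A plausible completion, assuming the comment's stated intent (fall back to an atomic cmpxchg when stealing is possible, keep the plain store otherwise):

static __always_inline int get_qlock(struct qspinlock *lock)
{
        struct __qspinlock *l = (void *)lock;

#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
        if (static_key_false(&paravirt_unfairlocks_enabled))
                /*
                 * Need to use atomic operation to get the lock when
                 * lock stealing can happen.
                 */
                return cmpxchg(&l->locked, 0, _Q_LOCKED_VAL) == 0;
#endif
        barrier();
        ACCESS_ONCE(l->locked) = _Q_LOCKED_VAL;
        barrier();
        return 1;
}

The cmpxchg() line is a reconstruction rather than a quote; only the #ifdef, the static-key test, and the comment appear in the excerpt.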
2014 May 07
32
[PATCH v10 00/19] qspinlock: a 4-byte queue spinlock with PV support
v9->v10: - Make some minor changes to qspinlock.c to accommodate review feedback. - Change author to PeterZ for 2 of the patches. - Include Raghavendra KT's test results in patch 18. v8->v9: - Integrate PeterZ's version of the queue spinlock patch with some modification: http://lkml.kernel.org/r/20140310154236.038181843@infradead.org - Break the more complex
2014 May 21
0
[RFC 08/07] qspinlock: integrate pending bit into queue
...till here; safe old = xchg_tail(lock, tail, &val); /* @@ -386,41 +458,45 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val) } /* - * we're at the head of the waitqueue, wait for the owner & pending to - * go away. - * Load-acquired is used here because the get_qlock() - * function below may not be a full memory barrier. - * - * *,x,y -> *,0,0 + * We are now waiting for the pending bit to get cleared. */ - while ((val = smp_load_acquire(&lock->val.counter)) - & _Q_LOCKED_PENDING_MASK) + // make a get_pending(lock, &val) helper...
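The excerpt's own comment proposes a get_pending(lock, &val) helper. One shape such a helper could take, reusing the smp_load_acquire() idiom from the code it replaces (the body is an assumption; only the name and the wait-for-pending intent come from the excerpt):

static inline u32 get_pending(struct qspinlock *lock, u32 *pval)
{
        u32 val;

        /* wait for the pending bit to clear, with acquire ordering */
        while ((val = smp_load_acquire(&lock->val.counter)) & _Q_PENDING_MASK)
                cpu_relax();

        *pval = val;
        return val;
}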
2014 Apr 17
33
[PATCH v9 00/19] qspinlock: a 4-byte queue spinlock with PV support
v8->v9: - Integrate PeterZ's version of the queue spinlock patch with some modification: http://lkml.kernel.org/r/20140310154236.038181843@infradead.org - Break the more complex patches into smaller ones to ease review effort. - Fix a race condition in the PV qspinlock code. v7->v8: - Remove one unneeded atomic operation from the slowpath, thus improving
2014 May 08
1
[PATCH v10 07/19] qspinlock: Use a simple write to grab the lock, if applicable
On Wed, May 07, 2014 at 11:01:35AM -0400, Waiman Long wrote: > @@ -94,23 +94,29 @@ static inline struct mcs_spinlock *decode_tail(u32 tail) > * can allow better optimization of the lock acquisition for the pending > * bit holder. > */ > -#if _Q_PENDING_BITS == 8 > - > struct __qspinlock { > union { > atomic_t val; > - struct { > #ifdef __LITTLE_ENDIAN
2014 May 14
2
[PATCH v10 03/19] qspinlock: Add pending bit
2014-05-14 19:00+0200, Peter Zijlstra: > On Wed, May 14, 2014 at 06:51:24PM +0200, Radim Krčmář wrote: > > Ok. > > I've seen merit in pvqspinlock even with slightly slower first-waiter, > > so I would have happily sacrificed those horrible branches. > > (I prefer elegant to optimized code, but I can see why we want to be > > strictly better than ticketlock.)
2014 Apr 23
0
[PATCH v9 05/19] qspinlock: Optimize for smaller NR_CPUS
...clear_pending_set_locked(struct qspinlock *lock, u32 val) > { > struct __qspinlock *l = (void *)lock; > > - ACCESS_ONCE(l->locked_pending) = 1; > + ACCESS_ONCE(l->locked_pending) = _Q_LOCKED_VAL; > } > > /* > @@ -567,16 +563,16 @@ static __always_inline int get_qlock(struct qspinlock *lock) > /** > * trylock_pending - try to acquire queue spinlock using the pending bit > * @lock : Pointer to queue spinlock structure > - * @pval : Pointer to value of the queue spinlock 32-bit word > + * @val : Current value of the queue spinlock 32-bit word...
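The one-line body shown in the diff is essentially the whole function. Completed, and assuming the _Q_PENDING_BITS == 8 layout that provides the locked_pending halfword, it reads:

static __always_inline void clear_pending_set_locked(struct qspinlock *lock, u32 val)
{
        struct __qspinlock *l = (void *)lock;

        /* one halfword store: pending byte -> 0, locked byte -> _Q_LOCKED_VAL */
        ACCESS_ONCE(l->locked_pending) = _Q_LOCKED_VAL;
}

Writing _Q_LOCKED_VAL instead of the literal 1 is the fix in the diff: the named constant makes it explicit that the single 16-bit store clears the pending byte and sets the locked byte at once.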