search for: try_clear_pending_set_lock


2014 Apr 18 (2 replies): [PATCH v9 05/19] qspinlock: Optimize for smaller NR_CPUS

...wner to go away.
> >> *
> >> * *,1,1 -> *,1,0
> >>+ *
> >>+ * this wait loop must be a load-acquire such that we match the
> >>+ * store-release that clears the locked bit and create lock
> >>+ * sequentiality; this because not all try_clear_pending_set_locked()
> >>+ * implementations imply full barriers.
> >You renamed the function referred in the above comment.
> >
>
> Sorry, will fix the comments.

I suggest not renaming the function instead. try_clear_pending_set_locked() tells the intent in a clearer fashion. Thanks...
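The pairing quoted above can be shown as a minimal userspace C11 sketch. This is an illustration, not the kernel code: wait_for_owner and clear_locked are hypothetical names, and the 0xff mask assumes the locked byte occupies bits 0-7 of the 4-byte lock word, per the layout discussed in the series.

/* Waiter's load-acquire synchronizes with the owner's store-release
 * that clears the locked byte, so the next owner observes every write
 * made inside the previous critical section. */
#include <stdatomic.h>
#include <stdint.h>

#define _Q_LOCKED_MASK 0xffu            /* bits 0-7: locked byte */

struct qspinlock { _Atomic uint32_t val; };

/* Waiter (*,1,1 -> *,1,0): spin until the owner drops the lock.
 * The acquire load pairs with the release store below; the waiter
 * cannot see "unlocked" without also seeing the owner's writes. */
static uint32_t wait_for_owner(struct qspinlock *lock)
{
    uint32_t val;
    while ((val = atomic_load_explicit(&lock->val,
                                       memory_order_acquire)) & _Q_LOCKED_MASK)
        ;                               /* cpu_relax() would go here */
    return val;
}

/* Owner: clearing the locked byte with release semantics publishes
 * everything done while holding the lock. */
static void clear_locked(struct qspinlock *lock)
{
    atomic_fetch_and_explicit(&lock->val, ~(uint32_t)_Q_LOCKED_MASK,
                              memory_order_release);
}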
2014 Apr 17 (2 replies): [PATCH v9 05/19] qspinlock: Optimize for smaller NR_CPUS

...*pval)
> * we're pending, wait for the owner to go away.
> *
> * *,1,1 -> *,1,0
> + *
> + * this wait loop must be a load-acquire such that we match the
> + * store-release that clears the locked bit and create lock
> + * sequentiality; this because not all try_clear_pending_set_locked()
> + * implementations imply full barriers.

You renamed the function referred in the above comment.

> */
> - while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK)
> + while ((val = smp_load_acquire(&lock->val.counter)) & _Q_LOCKED_MASK)
> arch_mut...
2014 Apr 18 (0 replies): [PATCH v9 05/19] qspinlock: Optimize for smaller NR_CPUS

...>> *
>>>> * *,1,1 -> *,1,0
>>>> + *
>>>> + * this wait loop must be a load-acquire such that we match the
>>>> + * store-release that clears the locked bit and create lock
>>>> + * sequentiality; this because not all try_clear_pending_set_locked()
>>>> + * implementations imply full barriers.
>>> You renamed the function referred in the above comment.
>>>
>> Sorry, will fix the comments.
> I suggest not renaming the function instead.
> try_clear_pending_set_locked() tells the intent in a clearer...
2014 Apr 17 (0 replies): [PATCH v9 05/19] qspinlock: Optimize for smaller NR_CPUS

...pending, wait for the owner to go away.
>> *
>> * *,1,1 -> *,1,0
>> + *
>> + * this wait loop must be a load-acquire such that we match the
>> + * store-release that clears the locked bit and create lock
>> + * sequentiality; this because not all try_clear_pending_set_locked()
>> + * implementations imply full barriers.
> You renamed the function referred in the above comment.
>
Sorry, will fix the comments.

-Longman
2014 Apr 23 (0 replies): [PATCH v9 05/19] qspinlock: Optimize for smaller NR_CPUS

...> /*
> @@ -643,7 +638,7 @@ static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
> *
> * this wait loop must be a load-acquire such that we match the
> * store-release that clears the locked bit and create lock
> - * sequentiality; this because not all try_clear_pending_set_locked()
> + * sequentiality; this because not all clear_pending_set_locked()
> * implementations imply full barriers.
> *
> * When PV qspinlock is enabled, exit the pending bit code path and
> @@ -835,6 +830,10 @@ notify_next:
> * contended : (*,x,y) +-->...
2014 Apr 17 (0 replies): [PATCH v9 05/19] qspinlock: Optimize for smaller NR_CPUS

...pending(struct qspinlock *lock, u32 *pval)
 * we're pending, wait for the owner to go away.
 *
 * *,1,1 -> *,1,0
+ *
+ * this wait loop must be a load-acquire such that we match the
+ * store-release that clears the locked bit and create lock
+ * sequentiality; this because not all try_clear_pending_set_locked()
+ * implementations imply full barriers.
  */
- while ((val = atomic_read(&lock->val)) & _Q_LOCKED_MASK)
+ while ((val = smp_load_acquire(&lock->val.counter)) & _Q_LOCKED_MASK)
  arch_mutex_cpu_relax();

 /*
@@ -166,15 +265,7 @@ static inline int trylock_pending(struc...
2014 Apr 23 (2 replies): [PATCH v9 05/19] qspinlock: Optimize for smaller NR_CPUS

On 04/18/2014 05:40 PM, Waiman Long wrote:
> On 04/18/2014 03:05 PM, Peter Zijlstra wrote:
>> On Fri, Apr 18, 2014 at 01:52:50PM -0400, Waiman Long wrote:
>>> I am confused by your notation.
>> Nah, I think I was confused :-) Make the 1 _Q_LOCKED_VAL though, as
>> that's the proper constant to use.
>
> Everyone gets confused once in a while :-) I have plenty
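The remark is that the literal 1 in the state notation should be spelled with the named constant. For reference, the constants in question look roughly like this; the values are sketched in the style of the series' qspinlock_types.h (8-bit locked byte at bits 0-7, pending above it), not copied from the patch:

/* Lock word layout: locked byte, then pending, then the queue tail. */
#define _Q_LOCKED_OFFSET   0
#define _Q_LOCKED_BITS     8
#define _Q_LOCKED_VAL      (1u << _Q_LOCKED_OFFSET)    /* == 1   */
#define _Q_LOCKED_MASK     (((1u << _Q_LOCKED_BITS) - 1) << _Q_LOCKED_OFFSET)

#define _Q_PENDING_OFFSET  (_Q_LOCKED_OFFSET + _Q_LOCKED_BITS)
#define _Q_PENDING_VAL     (1u << _Q_PENDING_OFFSET)   /* == 256 */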
2014 Apr 17 (33 replies): [PATCH v9 00/19] qspinlock: a 4-byte queue spinlock with PV support

v8->v9:
 - Integrate PeterZ's version of the queue spinlock patch with some
   modifications: http://lkml.kernel.org/r/20140310154236.038181843@infradead.org
 - Break the more complex patches into smaller ones to ease review effort.
 - Fix a race condition in the PV qspinlock code.

v7->v8:
 - Remove one unneeded atomic operation from the slowpath, thus improving