search for: queue_spin_trylock


2014 Mar 13
2
[PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
...ch_spin_lock(struct qspinlock *lock) > +{ > + if (static_key_false(&paravirt_unfairlocks_enabled)) > + queue_spin_lock_unfair(lock); > + else > + queue_spin_lock(lock); > +} So I would have expected something like: if (static_key_false(&paravirt_spinlock)) { while (!queue_spin_trylock(lock)) cpu_relax(); return; } At the top of queue_spin_lock_slowpath(). > +static inline int arch_spin_trylock(struct qspinlock *lock) > +{ > + if (static_key_false(&paravirt_unfairlocks_enabled)) > + return queue_spin_trylock_unfair(lock); > + else > + return queue_...
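
The alternative being suggested, reconstructed for readability (a sketch only; the paravirt_spinlock key name is taken from the quote and the rest of the slowpath is elided):

void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
{
        /*
         * Suggested PV fallback: rather than queue up behind a vCPU that
         * may have been preempted, keep retrying the trylock fast path
         * until the lock is free.
         */
        if (static_key_false(&paravirt_spinlock)) {
                while (!queue_spin_trylock(lock))
                        cpu_relax();
                return;
        }

        /* ... normal MCS-style queuing code follows ... */
}
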
2014 Apr 17
2
[PATCH v9 04/19] qspinlock: Extract out the exchange of tail code word
...2 val) > node->next = NULL; > > /* > + * We touched a (possibly) cold cacheline; attempt the trylock once > + * more in the hope someone let go while we weren't watching as long > + * as no one was queuing. > */ > + if (!(val & _Q_TAIL_MASK) && queue_spin_trylock(lock)) > + goto release; But you just did a potentially very expensive op; @val isn't representative anymore!
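
The hunk being objected to, laid out readably with the objection noted in comments (a reconstruction from the quote, not the full patch context):

        node->next = NULL;      /* touches a possibly cache-cold per-cpu line */

        /*
         * Objection: @val was read before the cold-cacheline access above,
         * so by the time this test runs it may no longer reflect the lock
         * word -- the "no one was queuing" check can act on stale state.
         */
        if (!(val & _Q_TAIL_MASK) && queue_spin_trylock(lock))
                goto release;
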
2014 Apr 18
2
[PATCH v9 04/19] qspinlock: Extract out the exchange of tail code word
...t;> /* > >>+ * We touched a (possibly) cold cacheline; attempt the trylock once > >>+ * more in the hope someone let go while we weren't watching as long > >>+ * as no one was queuing. > >> */ > >>+ if (!(val& _Q_TAIL_MASK)&& queue_spin_trylock(lock)) > >>+ goto release; > >But you just did a potentially very expensive op; @val isn't > >representative anymore! > > That is not true. I pass in a pointer to val to trylock_pending() (the > pointer thing) so that it will store the latest value that it reads...
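
A grossly simplified, hypothetical illustration of the pointer-passing idea described in the reply (the series' real trylock_pending() does considerably more):

static bool trylock_pending(struct qspinlock *lock, u32 *pval)
{
        u32 val = *pval;

        /* One simplified acquisition attempt on an uncontended lock. */
        if (val == 0) {
                u32 old = atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL);
                if (old == 0)
                        return true;
                val = old;
        }

        /*
         * Report the last value read back through @pval, so the caller's
         * later "no one was queuing" test is not based on the value read
         * before the cold-cacheline access.
         */
        *pval = val;
        return false;
}
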
2014 Jun 17
3
[PATCH 04/11] qspinlock: Extract out the exchange of tail code word
...39;t watching. > */ > - for (;;) { > - new = _Q_LOCKED_VAL; > - if (val) > - new = tail | (val & _Q_LOCKED_PENDING_MASK); > - > - old = atomic_cmpxchg(&lock->val, val, new); > - if (old == val) > - break; > - > - val = old; > - } > + if (queue_spin_trylock(lock)) > + goto release; So now are three of them? One in queue_spin_lock, then at the start of this function when checking for the pending bit, and the once more here. And that is because the local cache line might be cold for the 'mcs_index' struct? That all seems to be a bit of exp...
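
The hunk being questioned, unflattened: the open-coded cmpxchg loop that published the lock/tail word is replaced by one more trylock attempt.

-       for (;;) {
-               new = _Q_LOCKED_VAL;
-               if (val)
-                       new = tail | (val & _Q_LOCKED_PENDING_MASK);
-
-               old = atomic_cmpxchg(&lock->val, val, new);
-               if (old == val)
-                       break;
-
-               val = old;
-       }
+       if (queue_spin_trylock(lock))
+               goto release;
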
2014 Mar 13
0
[PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
...gt; +{ >> + if (static_key_false(&paravirt_unfairlocks_enabled)) >> + queue_spin_lock_unfair(lock); >> + else >> + queue_spin_lock(lock); >> +} > So I would have expected something like: > > if (static_key_false(&paravirt_spinlock)) { > while (!queue_spin_trylock(lock)) > cpu_relax(); > return; > } > > At the top of queue_spin_lock_slowpath(). I don't like the idea of constantly spinning on the lock. That can cause all sort of performance issues. My version of the unfair lock tries to grab the lock ignoring if there are others wa...
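
A hedged sketch of the behaviour described here -- try to steal the lock once, ignoring any queued waiters, and only then fall back to the fair path (illustration only, not the actual patch; the fallback call is a guess):

static __always_inline void queue_spin_lock_unfair(struct qspinlock *lock)
{
        /* Single attempt to grab the lock ahead of any queued waiters. */
        if (likely(queue_spin_trylock_unfair(lock)))
                return;

        /* Lock is held; fall back to the regular (fair) queue spinlock. */
        queue_spin_lock(lock);
}
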
2014 May 08
1
[PATCH v10 10/19] qspinlock, x86: Allow unfair spinlock in a virtual guest
...;s missing {}. > +#endif > barrier(); > ACCESS_ONCE(l->locked) = _Q_LOCKED_VAL; > barrier(); But no, what you want is: static __always_inline bool virt_lock(struct qspinlock *lock) { #ifdef CONFIG_VIRT_MUCK if (static_key_false(&virt_unfairlocks_enabled)) { while (!queue_spin_trylock(lock)) cpu_relax(); return true; } #else return false; } void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val) { if (virt_lock(lock)) return; ... }
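
The same suggestion, unflattened; the missing #endif is added and the final return false is moved outside the conditional so both configurations return a value (CONFIG_VIRT_MUCK is the placeholder name used in the quote):

static __always_inline bool virt_lock(struct qspinlock *lock)
{
#ifdef CONFIG_VIRT_MUCK
        if (static_key_false(&virt_unfairlocks_enabled)) {
                while (!queue_spin_trylock(lock))
                        cpu_relax();
                return true;
        }
#endif
        return false;
}

void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
{
        if (virt_lock(lock))
                return;
        /* ... */
}
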
2014 Jun 11
3
[PATCH v11 09/16] qspinlock, x86: Allow unfair spinlock in a virtual guest
...t; mainly due to the use of a static key. However, uncontended lock-unlock > operations are really just a tiny percentage of a real workload. So > there should be no noticeable change in application performance. No, entirely unacceptable. > +#ifdef CONFIG_VIRT_UNFAIR_LOCKS > +/** > + * queue_spin_trylock_unfair - try to acquire the queue spinlock unfairly > + * @lock : Pointer to queue spinlock structure > + * Return: 1 if lock acquired, 0 if failed > + */ > +static __always_inline int queue_spin_trylock_unfair(struct qspinlock *lock) > +{ > + union arch_qspinlock *qlock = (union...
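
The quoted kernel-doc is cut off just as the body starts; a minimal body consistent with the description would be a single compare-and-swap on the lock byte, taken regardless of any queued waiters (a sketch, not necessarily the patch's exact code):

static __always_inline int queue_spin_trylock_unfair(struct qspinlock *lock)
{
        union arch_qspinlock *qlock = (union arch_qspinlock *)lock;

        /* Steal the lock whenever the lock byte is clear, ignoring waiters. */
        if (!qlock->lock && cmpxchg(&qlock->lock, 0, _Q_LOCKED_VAL) == 0)
                return 1;
        return 0;
}
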
2014 May 30
0
[PATCH v11 09/16] qspinlock, x86: Allow unfair spinlock in a virtual guest
...ock in a virtualized environment. In this case, a new lock acquirer can come and steal the lock if the next-in-line CPU to get the lock is scheduled out. A simple unfair queue spinlock can be implemented by allowing lock stealing in the fast path. The slowpath will also be modified to run a simple queue_spin_trylock() loop. A simple test and set lock like that does have the problem that the constant spinning on the lock word puts a lot of cacheline contention traffic on the affected cacheline, thus slowing tasks that need to access the cacheline. Unfair lock in a native environment is generally not a good...
2014 Apr 18
1
[PATCH v9 04/19] qspinlock: Extract out the exchange of tail code word
...ouched a (possibly) cold cacheline; attempt the trylock once > >>>>+ * more in the hope someone let go while we weren't watching as long > >>>>+ * as no one was queuing. > >>>> */ > >>>>+ if (!(val& _Q_TAIL_MASK)&& queue_spin_trylock(lock)) > >>>>+ goto release; > >>>But you just did a potentially very expensive op; @val isn't > >>>representative anymore! > >>That is not true. I pass in a pointer to val to trylock_pending() (the > >>pointer thing) so that it will sto...
2014 Jun 18
0
[PATCH 04/11] qspinlock: Extract out the exchange of tail code word
...>> - new = _Q_LOCKED_VAL; >> - if (val) >> - new = tail | (val & _Q_LOCKED_PENDING_MASK); >> - >> - old = atomic_cmpxchg(&lock->val, val, new); >> - if (old == val) >> - break; >> - >> - val = old; >> - } >> + if (queue_spin_trylock(lock)) >> + goto release; > > So now are three of them? One in queue_spin_lock, then at the start > of this function when checking for the pending bit, and the once more > here. And that is because the local cache line might be cold for the > 'mcs_index' struct? > &...
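
For context on the cold-cacheline argument: the queue nodes live in a per-cpu array along the lines of the sketch below (names and layout are approximate, not the series' exact declaration), so the first reference in the slowpath is often a cache miss -- which is what the extra trylock attempts try to exploit.

struct mcs_spinlock {
        struct mcs_spinlock *next;
        int locked;             /* 1 if lock acquired */
        int count;              /* nesting level: task/softirq/hardirq/nmi */
};

/* One small set of queue nodes per CPU, indexed by nesting level. */
static DEFINE_PER_CPU_ALIGNED(struct mcs_spinlock, mcs_nodes[4]);
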
2014 Mar 14
4
[PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
...false(&paravirt_unfairlocks_enabled)) > >>+ queue_spin_lock_unfair(lock); > >>+ else > >>+ queue_spin_lock(lock); > >>+} > >So I would have expected something like: > > > > if (static_key_false(&paravirt_spinlock)) { > > while (!queue_spin_trylock(lock)) > > cpu_relax(); > > return; > > } > > > >At the top of queue_spin_lock_slowpath(). > > I don't like the idea of constantly spinning on the lock. That can cause all > sort of performance issues. Its bloody virt; _that_ is a performance issue to...
2014 Apr 23
0
[PATCH v9 05/19] qspinlock: Optimize for smaller NR_CPUS
...we weren't watching as long > - * as no one was queuing. > + * We touched a (possibly) cold cacheline in the per-cpu queue node; > + * attempt the trylock once more in the hope someone let go while we > + * weren't watching. > */ > - if ((val & _Q_TAIL_MASK) || !queue_spin_trylock(lock)) > + if (!queue_spin_trylock(lock)) > queue_spin_lock_slowerpath(lock, node, tail); > > /*
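
The change under discussion, laid out readably: the caller drops its own check of the tail bits and simply attempts the trylock before punting to the slower path.

-       if ((val & _Q_TAIL_MASK) || !queue_spin_trylock(lock))
+       if (!queue_spin_trylock(lock))
                queue_spin_lock_slowerpath(lock, node, tail);
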