search for: pv_hash_find

Displaying 13 results from an estimated 26 matches for "pv_hash_find".

2015 Apr 13
1
[PATCH v15 09/15] pvqspinlock: Implement simple paravirt support for the qspinlock
...ait(&l->locked, _Q_SLOW_VAL);
> >
> >If we get a spurious wakeup (due to device interrupts or random kick)
> >we'll loop around but ->locked will remain _Q_SLOW_VAL.
>
> The purpose of the slow_set flag is not about the lock value. It is to make
> sure that pv_hash_find() will always find a match. Consider the following
> scenario:
>
>   cpu1              cpu2              cpu3
>   ----              ----              ----
>   pv_wait
>   spurious wakeup
>   loop l->locked
>
>                     read _Q_SLOW_VAL
>                     pv_ha...
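
As a hedged illustration of the spurious-wakeup loop discussed above, in
user-space C11 atomics: pv_wait_model() is a hypothetical stand-in for the
patch's pv_wait(), and the loop shape is an assumption, not the posted code.
The point is that a spurious return finds ->locked still at _Q_SLOW_VAL and
simply loops, so pv_hash_find() must keep finding the entry for as long as
that can happen.

#include <stdatomic.h>

#define _Q_SLOW_VAL 3

/* hypothetical stand-in for the patch's pv_wait(): halt until kicked */
extern void pv_wait_model(_Atomic int *ptr, int val);

/* The waiter rechecks after every wakeup; a spurious return from
 * pv_wait_model() sees ->locked still at _Q_SLOW_VAL and loops again. */
static void wait_loop(_Atomic int *locked)
{
	while (atomic_load_explicit(locked, memory_order_acquire) == _Q_SLOW_VAL)
		pv_wait_model(locked, _Q_SLOW_VAL);	/* may return spuriously */
}
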
2015 Apr 09
6
[PATCH v15 09/15] pvqspinlock: Implement simple paravirt support for the qspinlock
...INE; hb < end; hb++) {
> +		if (!cmpxchg(&hb->lock, NULL, lock)) {
> +			WRITE_ONCE(hb->node, node);
> +			/*
> +			 * We haven't set the _Q_SLOW_VAL yet. So
> +			 * the order of writing doesn't matter.
> +			 */
> +			smp_wmb(); /* matches rmb from pv_hash_find */

This doesn't make sense. Both sites do ->lock first and ->node second.
No amount of ordering can 'fix' that.

I think we can safely remove this wmb and the rmb below, because the
required ordering is already provided by setting/observing l->locked == SLOW.

> +			goto d...
2015 Apr 02
3
[PATCH 8/9] qspinlock: Generic paravirt support
...is a
> good time to look up.

No, it's all already ordered and working.

pv_wait_head():

	pv_hash()
	/* MB as per cmpxchg */
	cmpxchg(&l->locked, _Q_LOCKED_VAL, _Q_SLOW_VAL);

VS

__pv_queue_spin_unlock():

	if (xchg(&l->locked, 0) != _Q_SLOW_VAL)
		return;

	/* MB as per xchg */
	pv_hash_find(lock);
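
To make that pairing concrete, here is a hedged user-space model of the two
sequences quoted above, in C11 atomics. pv_hash_model() and
pv_hash_find_model() are hypothetical stand-ins for the patch's pv_hash() and
pv_hash_find(), and the seq_cst exchange/compare-exchange stand in for the
kernel's full-barrier xchg()/cmpxchg(). A sketch of the ordering argument,
not the posted code: the entry is published before _Q_SLOW_VAL can be
observed, so observing _Q_SLOW_VAL guarantees the entry is visible.

#include <stdatomic.h>

#define _Q_LOCKED_VAL	1
#define _Q_SLOW_VAL	3

/* hypothetical stand-ins for the kernel helpers */
extern void pv_hash_model(void *lock);		/* publish <lock, node>  */
extern void *pv_hash_find_model(void *lock);	/* look the entry up     */

/* waiter side: publish the entry first, then flag SLOW; the seq_cst
 * compare-exchange models the full barrier of the kernel cmpxchg() */
static void wait_side(_Atomic int *locked, void *lock)
{
	int old = _Q_LOCKED_VAL;

	pv_hash_model(lock);
	atomic_compare_exchange_strong(locked, &old, _Q_SLOW_VAL);
}

/* unlock side: the seq_cst exchange models the full barrier of xchg();
 * having seen _Q_SLOW_VAL, the entry published before it is visible */
static void unlock_side(_Atomic int *locked, void *lock)
{
	if (atomic_exchange(locked, 0) != _Q_SLOW_VAL)
		return;
	(void)pv_hash_find_model(lock);
}
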
2015 Apr 09
0
[PATCH v15 09/15] pvqspinlock: Implement simple paravirt support for the qspinlock
...>> +		if (!cmpxchg(&hb->lock, NULL, lock)) {
>> +			WRITE_ONCE(hb->node, node);
>> +			/*
>> +			 * We haven't set the _Q_SLOW_VAL yet. So
>> +			 * the order of writing doesn't matter.
>> +			 */
>> +			smp_wmb(); /* matches rmb from pv_hash_find */
> This doesn't make sense. Both sites do ->lock first and ->node second.
> No amount of ordering can 'fix' that.
>
> I think we can safely remove this wmb and the rmb below, because the
> required ordering is already provided by setting/observing l->locked ==...
2015 Mar 19
4
[PATCH 8/9] qspinlock: Generic paravirt support
...es.
+				 *
+				 * This can cause hb_hash_find() to not find a
+				 * cpu even though _Q_SLOW_VAL, this is not a
+				 * problem since we re-check l->locked before
+				 * going to sleep and the unlock will have
+				 * cleared l->locked already.
+				 */
+				smp_wmb(); /* matches rmb from pv_hash_find */
+				WRITE_ONCE(hb->lock, lock);
+				goto done;
+			}
+		}
+
+		hash = lfsr(hash, PV_LOCK_HASH_BITS);
+		hb = &__pv_lock_hash[hash_align(hash)];
+	}
+
+done:
+	return &hb->lock;
+}
+
+static int pv_hash_find(struct qspinlock *lock)
+{
+	u64 hash = hash_ptr(lock, PV_LOCK_HASH_BITS)...
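
For illustration, one step of a Galois LFSR of the kind used for the rehash
above. The 16-bit width and the 0xB400 tap mask (a standard maximal-length
16-bit polynomial) are assumptions for this sketch, not the kernel's lfsr()
helper, which takes the register width as a parameter.

#include <stdint.h>

/* One Galois-LFSR step: shift right, and fold the tap mask back in
 * whenever the low bit falls out; with a maximal-length tap mask this
 * cycles through all 2^16 - 1 non-zero states. */
static inline uint16_t lfsr16(uint16_t x)
{
	return (uint16_t)((x >> 1) ^ ((unsigned)-(x & 1) & 0xB400u));
}
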
2015 Apr 02
0
[PATCH 8/9] qspinlock: Generic paravirt support
...> pv_wait_head():
>
>	pv_hash()
>	/* MB as per cmpxchg */
>	cmpxchg(&l->locked, _Q_LOCKED_VAL, _Q_SLOW_VAL);
>
> VS
>
> __pv_queue_spin_unlock():
>
>	if (xchg(&l->locked, 0) != _Q_SLOW_VAL)
>		return;
>
>	/* MB as per xchg */
>	pv_hash_find(lock);

Something like so.. compile tested only. I took out the LFSR because that
was likely over-engineering from my side :-)

--- a/kernel/locking/qspinlock_paravirt.h
+++ b/kernel/locking/qspinlock_paravirt.h
@@ -2,6 +2,8 @@
 #error "do not include this file"
 #endif

+#...
2015 Apr 13
1
[PATCH v15 09/15] pvqspinlock: Implement simple paravirt support for the qspinlock
..._node *node;
> >>+
> >>+	if (likely(cmpxchg(&l->locked, _Q_LOCKED_VAL, 0) == _Q_LOCKED_VAL))
> >>+		return;
> >>+
> >>+	/*
> >>+	 * The queue head has been halted. Need to locate it and wake it up.
> >>+	 */
> >>+	node = pv_hash_find(lock);
> >>+	smp_store_release(&l->locked, 0);
> >Ah yes, clever that.
>
> >>+	/*
> >>+	 * At this point the memory pointed at by lock can be freed/reused,
> >>+	 * however we can still use the PV node to kick the CPU.
> >>+	 */
>...
2015 Apr 09
2
[PATCH v15 13/15] pvqspinlock: Only kick CPU at unlock time
...* needed.
> +	 */
> +	WRITE_ONCE(l->locked, _Q_SLOW_VAL);
> +	(void)pv_hash(lock, pn);
> }

This is broken. The unlock path relies on:

	pv_hash()
	MB
	l->locked = SLOW

such that when it observes SLOW, it must then also observe a consistent
bucket. The above can have us do pv_hash_find() _before_ we actually hash
the lock, which will result in us triggering that BUG_ON() in there.
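
The breakage can be made concrete with a hedged C11 sketch contrasting the
quoted (broken) order with the order the unlock path relies on.
pv_hash_model() is a hypothetical stand-in for pv_hash(), the relaxed atomic
store models WRITE_ONCE(), and the seq_cst exchange models the kernel's
full-barrier cmpxchg(); this is an illustration, not the posted code.

#include <stdatomic.h>

#define _Q_SLOW_VAL 3

extern void pv_hash_model(void *lock);	/* hypothetical: publish hash entry */

/* Broken order (what the quoted hunk does): the SLOW store can become
 * visible before the hash entry exists, so an unlocker that sees SLOW
 * may look up an empty bucket -- the BUG_ON() case described above. */
static void broken(_Atomic int *locked, void *lock)
{
	atomic_store_explicit(locked, _Q_SLOW_VAL, memory_order_relaxed);
	pv_hash_model(lock);
}

/* Required order: publish the entry first, then flip the lock word,
 * with a full barrier (seq_cst exchange here) between the two. */
static void fixed(_Atomic int *locked, void *lock)
{
	pv_hash_model(lock);
	(void)atomic_exchange(locked, _Q_SLOW_VAL);
}
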
2015 Apr 07
0
[PATCH v15 09/15] pvqspinlock: Implement simple paravirt support for the qspinlock
...;) {
+		for (end = hb + PV_HB_PER_LINE; hb < end; hb++) {
+			if (!cmpxchg(&hb->lock, NULL, lock)) {
+				WRITE_ONCE(hb->node, node);
+				/*
+				 * We haven't set the _Q_SLOW_VAL yet. So
+				 * the order of writing doesn't matter.
+				 */
+				smp_wmb(); /* matches rmb from pv_hash_find */
+				goto done;
+			}
+		}
+
+		hash = lfsr(hash, pv_lock_hash_bits, 0);
+		hb = &pv_lock_hash[hash_align(hash)];
+		BUG_ON(hash == init_hash);
+	}
+
+done:
+	return &hb->lock;
+}
+
+static struct pv_node *pv_hash_find(struct qspinlock *lock)
+{
+	unsigned long init_hash, hash = hash_...
2015 Apr 24
0
[PATCH v16 08/14] pvqspinlock: Implement simple paravirt support for the qspinlock
...et. So
+				 * the order of writing doesn't matter.
+				 */
+				WRITE_ONCE(he->node, node);
+				goto done;
+			}
+		}
+		if (++hash >= (1 << pv_lock_hash_bits))
+			hash = 0;
+		BUG_ON(hash == init_hash);
+	}
+
+done:
+	return &he->lock;
+}
+
+static inline struct pv_node *pv_hash_find(struct qspinlock *lock)
+{
+	unsigned long init_hash, hash = hash_ptr(lock, pv_lock_hash_bits);
+	struct pv_hash_entry *he, *end;
+	struct pv_node *node = NULL;
+
+	init_hash = hash;
+	for (;;) {
+		he = pv_lock_hash[hash].ent;
+		for (end = he + PV_HE_PER_LINE; he < end; he++) {
+			struct qspi...
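
For readers without the tree at hand, a tiny single-threaded user-space model
of the v16 probing scheme above: a fixed table, linear probing with
wraparound, and an assert() mirroring the BUG_ON(). All names and sizes here
are invented for the illustration, and unlike the real code this model is not
concurrency-safe.

#include <assert.h>
#include <stdint.h>

#define MODEL_HASH_BITS	6
#define MODEL_HASH_SIZE	(1u << MODEL_HASH_BITS)

struct model_entry {
	void *lock;	/* key: lock address, 0 when the slot is free */
	void *node;	/* value: the waiter's PV node                */
};

static struct model_entry model_table[MODEL_HASH_SIZE];

static unsigned int model_hash(void *p)
{
	/* crude pointer hash for the model; the kernel uses hash_ptr() */
	return (unsigned int)((uintptr_t)p >> 4) & (MODEL_HASH_SIZE - 1);
}

static void *model_find(void *lock)
{
	unsigned int init_hash = model_hash(lock), hash = init_hash;

	for (;;) {
		if (model_table[hash].lock == lock)
			return model_table[hash].node;
		if (++hash >= MODEL_HASH_SIZE)
			hash = 0;		/* wrap, like the v16 code */
		assert(hash != init_hash);	/* mirrors the BUG_ON()    */
	}
}
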
2015 Apr 01
0
[PATCH 8/9] qspinlock: Generic paravirt support
...long rehashing chains.
>
> ---
>  include/linux/lfsr.h                |   49 ++++++++++++
>  kernel/locking/qspinlock_paravirt.h |  143 ++++++++++++++++++++++++++++++++----
>  2 files changed, 178 insertions(+), 14 deletions(-)
>
> --- /dev/null
>
> +
> +static int pv_hash_find(struct qspinlock *lock)
> +{
> +	u64 hash = hash_ptr(lock, PV_LOCK_HASH_BITS);
> +	struct pv_hash_bucket *hb, *end;
> +	int cpu = -1;
> +
> +	if (!hash)
> +		hash = 1;
> +
> +	hb = &__pv_lock_hash[hash_align(hash)];
> +	for (;;) {
> +		for (end = hb + PV_HB_PER_L...
2015 Apr 09
0
[PATCH v15 09/15] pvqspinlock: Implement simple paravirt support for the qspinlock
...+	if (!cmpxchg(&hb->lock, NULL, lock)) {
> > +		WRITE_ONCE(hb->node, node);
> > +		/*
> > +		 * We haven't set the _Q_SLOW_VAL yet. So
> > +		 * the order of writing doesn't matter.
> > +		 */
> > +		smp_wmb(); /* matches rmb from pv_hash_find */
> > +		goto done;
> > +	}
> > +	}
> > +
> > +	hash = lfsr(hash, pv_lock_hash_bits, 0);

Since pv_lock_hash_bits is a variable, you end up running through that
massive if() forest to find the corresponding tap every single time. It
cannot co...
2015 Mar 18
2
[PATCH 8/9] qspinlock: Generic paravirt support
On 03/16/2015 09:16 AM, Peter Zijlstra wrote:
> Implement simple paravirt support for the qspinlock.
>
> Provide a separate (second) version of the spin_lock_slowpath for
> paravirt along with a special unlock path.
>
> The second slowpath is generated by adding a few pv hooks to the
> normal slowpath, but where those will compile away for the native
> case, they expand