search for: _q_tail_mask

Displaying results from an estimated 77 matches for "_q_tail_mask".

2014 Apr 17
2
[PATCH v9 04/19] qspinlock: Extract out the exchange of tail code word
...truct qspinlock *lock, u32 val) > node->next = NULL; > > /* > + * We touched a (possibly) cold cacheline; attempt the trylock once > + * more in the hope someone let go while we weren't watching as long > + * as no one was queuing. > */ > + if (!(val & _Q_TAIL_MASK) && queue_spin_trylock(lock)) > + goto release; But you just did a potentially very expensive op; @val isn't representative anymore!
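
For readers skimming the thread: the hunk retries the trylock right after the per-CPU MCS node has been initialised, but only if the earlier snapshot of the lock word showed nobody queued. A minimal user-space sketch of that shape, using C11 atomics with assumed mask values and a hypothetical try_lock() helper (not the kernel code itself):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define _Q_LOCKED_VAL	1U		/* assumed: locked byte value */
#define _Q_TAIL_MASK	0xffff0000U	/* assumed: tail idx + tail CPU fields */

/* Stand-in for queue_spin_trylock(): succeed only on a fully free word. */
static bool try_lock(_Atomic uint32_t *lock)
{
	uint32_t expected = 0;

	return atomic_compare_exchange_strong(lock, &expected, _Q_LOCKED_VAL);
}

/*
 * The hunk above retries the trylock after the (possibly cold) MCS node
 * cacheline has been touched, but only if the earlier snapshot 'val'
 * showed an empty queue.  Peter's objection: that snapshot predates the
 * node setup, so it may no longer describe the lock.
 */
static bool opportunistic_retry(_Atomic uint32_t *lock, uint32_t val)
{
	return !(val & _Q_TAIL_MASK) && try_lock(lock);
}
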
2014 Apr 17
0
[PATCH v9 04/19] qspinlock: Extract out the exchange of tail code word
...generic/qspinlock_types.h index bd25081..ed5d89a 100644 --- a/include/asm-generic/qspinlock_types.h +++ b/include/asm-generic/qspinlock_types.h @@ -61,6 +61,8 @@ typedef struct qspinlock { #define _Q_TAIL_CPU_BITS (32 - _Q_TAIL_CPU_OFFSET) #define _Q_TAIL_CPU_MASK _Q_SET_MASK(TAIL_CPU) +#define _Q_TAIL_MASK (_Q_TAIL_IDX_MASK | _Q_TAIL_CPU_MASK) + #define _Q_LOCKED_VAL (1U << _Q_LOCKED_OFFSET) #define _Q_PENDING_VAL (1U << _Q_PENDING_OFFSET) diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c index d35362a..fcf06cb 100644 --- a/kernel/locking/qspinlock.c +++ b/kernel...
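
The only functional addition in this hunk is _Q_TAIL_MASK, the union of the two tail fields. A self-contained sketch of the word layout it belongs to, with offsets assumed from the upstream 8-bit-pending configuration rather than taken from this excerpt:

#include <stdint.h>
#include <stdio.h>

/*
 * Model of the qspinlock 32-bit word: locked byte, pending field,
 * tail index, tail CPU.  Offsets are assumptions, not quoted text.
 */
#define _Q_SET_MASK(bits, off)	((((uint32_t)1 << (bits)) - 1) << (off))

#define _Q_LOCKED_OFFSET	0
#define _Q_LOCKED_BITS		8
#define _Q_PENDING_OFFSET	(_Q_LOCKED_OFFSET + _Q_LOCKED_BITS)
#define _Q_PENDING_BITS		8
#define _Q_TAIL_IDX_OFFSET	(_Q_PENDING_OFFSET + _Q_PENDING_BITS)
#define _Q_TAIL_IDX_BITS	2
#define _Q_TAIL_CPU_OFFSET	(_Q_TAIL_IDX_OFFSET + _Q_TAIL_IDX_BITS)
#define _Q_TAIL_CPU_BITS	(32 - _Q_TAIL_CPU_OFFSET)

#define _Q_LOCKED_MASK		_Q_SET_MASK(_Q_LOCKED_BITS, _Q_LOCKED_OFFSET)
#define _Q_PENDING_MASK		_Q_SET_MASK(_Q_PENDING_BITS, _Q_PENDING_OFFSET)
#define _Q_TAIL_IDX_MASK	_Q_SET_MASK(_Q_TAIL_IDX_BITS, _Q_TAIL_IDX_OFFSET)
#define _Q_TAIL_CPU_MASK	_Q_SET_MASK(_Q_TAIL_CPU_BITS, _Q_TAIL_CPU_OFFSET)
#define _Q_TAIL_MASK		(_Q_TAIL_IDX_MASK | _Q_TAIL_CPU_MASK)	/* the new define */

int main(void)
{
	printf("locked  mask: 0x%08x\n", _Q_LOCKED_MASK);	/* 0x000000ff */
	printf("pending mask: 0x%08x\n", _Q_PENDING_MASK);	/* 0x0000ff00 */
	printf("tail    mask: 0x%08x\n", _Q_TAIL_MASK);		/* 0xffff0000 */
	return 0;
}
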
2014 Apr 18
2
[PATCH v9 04/19] qspinlock: Extract out the exchange of tail code word
...LL; > >> > >> /* > >>+ * We touched a (possibly) cold cacheline; attempt the trylock once > >>+ * more in the hope someone let go while we weren't watching as long > >>+ * as no one was queuing. > >> */ > >>+ if (!(val & _Q_TAIL_MASK) && queue_spin_trylock(lock)) > >>+ goto release; > >But you just did a potentially very expensive op; @val isn't > >representative anymore! > > That is not true. I pass in a pointer to val to trylock_pending() (the > pointer thing) so that it will store...
2014 May 21
0
[RFC 08/07] qspinlock: integrate pending bit into queue
...might produce the same code if we use *pval directly + + // we could use 'if' and a xchg that touches only the pending bit to + // save some cycles at the price of a longer line cutting window + // (and I think it would bug without changing the rest) + while (!(val & (_Q_PENDING_MASK | _Q_TAIL_MASK))) { + old = atomic_cmpxchg(&lock->val, val, val | _Q_PENDING_MASK); + if (old == val) { + *pval = val | _Q_PENDING_MASK; + return 1; + } + val = old; + } + *pval = val; + return 0; +} + +// here +static inline void set_pending(struct qspinlock *lock, u8 pending) +{ + struct __qspinl...
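
The quoted loop is the pending-bit handshake itself: set the pending bit only while neither it nor the tail is occupied, and hand the freshest value back through *pval. A user-space C11 sketch of the same shape (mask values assumed, function name hypothetical):

#include <stdatomic.h>
#include <stdint.h>

#define _Q_PENDING_MASK	0x0000ff00U	/* assumed 8-bit pending field */
#define _Q_TAIL_MASK	0xffff0000U	/* assumed tail fields */

/*
 * Claim the pending bit only while neither the pending bit nor the tail
 * is set; report the freshest observed value back through *pval so the
 * caller never has to re-read the lock word.
 */
static int try_set_pending(_Atomic uint32_t *lock, uint32_t *pval)
{
	uint32_t val = *pval;

	while (!(val & (_Q_PENDING_MASK | _Q_TAIL_MASK))) {
		/* On failure the current value is written back into 'val',
		 * playing the role of 'val = old' in the patch. */
		if (atomic_compare_exchange_weak(lock, &val,
						 val | _Q_PENDING_MASK)) {
			*pval = val | _Q_PENDING_MASK;
			return 1;
		}
	}
	*pval = val;
	return 0;
}
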
2014 May 08
2
[PATCH v10 06/19] qspinlock: prolong the stay in the pending bit path
...trylock_pending(struct qspinlock *lock, u32 *pval) > */ > for (;;) { > /* > - * If we observe any contention; queue. > + * If we observe that the queue is not empty, > + * return and be queued. > */ > - if (val & ~_Q_LOCKED_MASK) > + if (val & _Q_TAIL_MASK) > return 0; > > + if (val == (_Q_LOCKED_VAL|_Q_PENDING_VAL)) { > + /* > + * If both the lock and pending bits are set, we wait > + * a while to see if that either bit will be cleared. > + * If that is no change, we return and be queued. > + */ > +...
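
The change here is when the pending path gives up: only a non-empty queue forces immediate queueing, while the locked-and-pending state is tolerated for a bounded spin. A sketch of that decision with a hypothetical retry bound and an illustrative return convention (1 = queue); the patch's constant and exact structure are not visible in the excerpt:

#include <stdatomic.h>
#include <stdint.h>

#define _Q_LOCKED_VAL	(1U << 0)	/* assumed layout */
#define _Q_PENDING_VAL	(1U << 8)
#define _Q_TAIL_MASK	0xffff0000U

#define PENDING_RETRY	128		/* hypothetical bound */

/* Return 1 when the caller should fall back to queueing, 0 when it may
 * keep contending for the pending bit. */
static int should_queue(_Atomic uint32_t *lock, uint32_t val)
{
	int retry = PENDING_RETRY;

	for (;;) {
		if (val & _Q_TAIL_MASK)
			return 1;	/* queue is not empty: go queue */

		if (val != (_Q_LOCKED_VAL | _Q_PENDING_VAL))
			return 0;	/* room to try the pending bit */

		if (--retry == 0)
			return 1;	/* both bits stayed set: give up and queue */

		val = atomic_load(lock);
	}
}
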
2014 Jun 11
2
[PATCH v11 06/16] qspinlock: prolong the stay in the pending bit path
...spinlock *lock, u32 val) > */ > for (;;) { > /* > - * If we observe any contention; queue. > + * If we observe that the queue is not empty or both > + * the pending and lock bits are set, queue > */ > - if (val & ~_Q_LOCKED_MASK) > + if ((val & _Q_TAIL_MASK) || > + (val == (_Q_LOCKED_VAL|_Q_PENDING_VAL))) > goto queue; > > + if (val == _Q_PENDING_VAL) { > + /* > + * Pending bit is set, but not the lock bit. > + * Assuming that the pending bit holder is going to > + * set the lock bit and clear the pending...
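
v11 tightens this further: queue at once if the queue is non-empty or both bits are set, but if only the pending bit is set, wait briefly for its holder to finish the handover into the locked state. A sketch of that wait under the same assumed layout, with a hypothetical bound:

#include <stdatomic.h>
#include <stdint.h>

#define _Q_PENDING_VAL	(1U << 8)	/* assumed: pending set, lock free */
#define HANDOVER_RETRY	64		/* hypothetical bound */

/*
 * When only the pending bit is set, its holder is about to take the lock
 * and clear pending; spinning briefly here keeps this CPU out of the
 * queue.  Returns the last value observed.
 */
static uint32_t wait_for_pending_handover(_Atomic uint32_t *lock, uint32_t val)
{
	int retry = HANDOVER_RETRY;

	while (val == _Q_PENDING_VAL && retry--)
		val = atomic_load_explicit(lock, memory_order_relaxed);

	return val;
}
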
2014 Jun 17
3
[PATCH 04/11] qspinlock: Extract out the exchange of tail code word
...2 deletions(-) > > --- a/include/asm-generic/qspinlock_types.h > +++ b/include/asm-generic/qspinlock_types.h > @@ -61,6 +61,8 @@ typedef struct qspinlock { > #define _Q_TAIL_CPU_BITS (32 - _Q_TAIL_CPU_OFFSET) > #define _Q_TAIL_CPU_MASK _Q_SET_MASK(TAIL_CPU) > > +#define _Q_TAIL_MASK (_Q_TAIL_IDX_MASK | _Q_TAIL_CPU_MASK) > + > #define _Q_LOCKED_VAL (1U << _Q_LOCKED_OFFSET) > #define _Q_PENDING_VAL (1U << _Q_PENDING_OFFSET) > > --- a/kernel/locking/qspinlock.c > +++ b/kernel/locking/qspinlock.c > @@ -86,6 +86,31 @@ static inline struct mcs...
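
The refactoring being reviewed pulls the tail publication into one helper: atomically swap in this CPU's tail encoding, keep the locked and pending bits intact, and return the previous tail for linking. A cmpxchg-loop sketch of that operation (the kernel may use a narrower xchg when the layout allows; mask assumed):

#include <stdatomic.h>
#include <stdint.h>

#define _Q_TAIL_MASK	0xffff0000U	/* assumed tail fields */

/*
 * "Exchange of the tail code word": atomically install this CPU's tail
 * encoding while preserving the locked and pending bits, and hand back
 * the previous tail so the new node can be linked behind it.
 */
static uint32_t xchg_tail_sketch(_Atomic uint32_t *lock, uint32_t tail)
{
	uint32_t old = atomic_load(lock);

	while (!atomic_compare_exchange_weak(lock, &old,
					     (old & ~_Q_TAIL_MASK) | tail))
		;	/* 'old' is refreshed by each failed attempt */

	return old & _Q_TAIL_MASK;
}
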
2014 Jun 15
0
[PATCH 04/11] qspinlock: Extract out the exchange of tail code word
...-- 2 files changed, 38 insertions(+), 22 deletions(-) --- a/include/asm-generic/qspinlock_types.h +++ b/include/asm-generic/qspinlock_types.h @@ -61,6 +61,8 @@ typedef struct qspinlock { #define _Q_TAIL_CPU_BITS (32 - _Q_TAIL_CPU_OFFSET) #define _Q_TAIL_CPU_MASK _Q_SET_MASK(TAIL_CPU) +#define _Q_TAIL_MASK (_Q_TAIL_IDX_MASK | _Q_TAIL_CPU_MASK) + #define _Q_LOCKED_VAL (1U << _Q_LOCKED_OFFSET) #define _Q_PENDING_VAL (1U << _Q_PENDING_OFFSET) --- a/kernel/locking/qspinlock.c +++ b/kernel/locking/qspinlock.c @@ -86,6 +86,31 @@ static inline struct mcs_spinlock *decod #define _Q_LOCKED...
2014 Apr 18
1
[PATCH v9 04/19] qspinlock: Extract out the exchange of tail code word
...>>>>+ * We touched a (possibly) cold cacheline; attempt the trylock once > >>>>+ * more in the hope someone let go while we weren't watching as long > >>>>+ * as no one was queuing. > >>>> */ > >>>>+ if (!(val & _Q_TAIL_MASK) && queue_spin_trylock(lock)) > >>>>+ goto release; > >>>But you just did a potentially very expensive op; @val isn't > >>>representative anymore! > >>That is not true. I pass in a pointer to val to trylock_pending() (the > >>po...
2014 Nov 03
0
[PATCH v13 09/11] pvqspinlock, x86: Add para-virtualization support
...queue_spin_unlock_slowpath(lock); > +} Idem, that static key stuff is wrong, use PV ops to switch between unlock paths. > @@ -354,7 +394,7 @@ queue: > * if there was a previous node; link it and wait until reaching the > * head of the waitqueue. > */ > - if (old & _Q_TAIL_MASK) { > + if (!pv_link_and_wait_node(old, node) && (old & _Q_TAIL_MASK)) { > prev = decode_tail(old); > ACCESS_ONCE(prev->next) = node; > @@ -369,9 +409,11 @@ queue: > * > * *,x,y -> *,0,0 > */ > - while ((val = smp_load_acquire(&lock->va...
2014 May 14
2
[PATCH v10 03/19] qspinlock: Add pending bit
2014-05-14 19:00+0200, Peter Zijlstra: > On Wed, May 14, 2014 at 06:51:24PM +0200, Radim Krčmář wrote: > > Ok. > > I've seen merit in pvqspinlock even with slightly slower first-waiter, > > so I would have happily sacrificed those horrible branches. > > (I prefer elegant to optimized code, but I can see why we want to be > > strictly better than ticketlock.)
2014 Jun 18
0
[PATCH 04/11] qspinlock: Extract out the exchange of tail code word
...>> --- a/include/asm-generic/qspinlock_types.h >> +++ b/include/asm-generic/qspinlock_types.h >> @@ -61,6 +61,8 @@ typedef struct qspinlock { >> #define _Q_TAIL_CPU_BITS (32 - _Q_TAIL_CPU_OFFSET) >> #define _Q_TAIL_CPU_MASK _Q_SET_MASK(TAIL_CPU) >> >> +#define _Q_TAIL_MASK (_Q_TAIL_IDX_MASK | _Q_TAIL_CPU_MASK) >> + >> #define _Q_LOCKED_VAL (1U << _Q_LOCKED_OFFSET) >> #define _Q_PENDING_VAL (1U << _Q_PENDING_OFFSET) >> >> --- a/kernel/locking/qspinlock.c >> +++ b/kernel/locking/qspinlock.c >> @@ -86,6 +86,31 @...
2014 Jun 16
4
[PATCH 10/11] qspinlock: Paravirt support
..._node *pv_decode_tail(u32 tail) > +{ > + return (struct pv_node *)decode_tail(tail); > +} > + > +void __pv_link_and_wait_node(u32 old, struct mcs_spinlock *node) > +{ > + struct pv_node *ppn, *pn = (struct pv_node *)node; > + unsigned int count; > + > + if (!(old & _Q_TAIL_MASK)) { > + pn->head = NO_HEAD; > + return; > + } > + > + ppn = pv_decode_tail(old); > + ACCESS_ONCE(ppn->mcs.next) = node; > + > + while (ppn->head == INVALID_HEAD) > + cpu_relax(); > + > + pn->head = ppn->head; A race can happen here as pn->head...