search for: pv_scan_next

Displaying 10 results for "pv_scan_next".

2015 Apr 07
0
[PATCH v15 13/15] pvqspinlock: Only kick CPU at unlock time
...void set_locked(struct qspinlock *lock) static __always_inline void __pv_init_node(struct mcs_spinlock *node) { } static __always_inline void __pv_wait_node(struct mcs_spinlock *node) { } -static __always_inline void __pv_kick_node(struct mcs_spinlock *node) { } - +static __always_inline void __pv_scan_next(struct qspinlock *lock, + struct mcs_spinlock *node) { } static __always_inline void __pv_wait_head(struct qspinlock *lock, struct mcs_spinlock *node) { } @@ -248,7 +248,7 @@ static __always_inline void __pv_wait_head(struct qspinlock *lock, #define pv_init_node __pv_init_nod...
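The flattened diff above replaces the __pv_kick_node() stub with a __pv_scan_next() stub among the NOP PV callbacks. A minimal sketch of that stub pattern, reconstructed from the excerpt rather than copied verbatim:

struct qspinlock;
struct mcs_spinlock;

/*
 * With CONFIG_PARAVIRT_SPINLOCKS disabled, every PV hook is an empty
 * always-inline function, so the compiler eliminates the call sites
 * and the native slowpath carries no PV overhead.
 */
static __always_inline void __pv_init_node(struct mcs_spinlock *node) { }
static __always_inline void __pv_wait_node(struct mcs_spinlock *node) { }
static __always_inline void __pv_scan_next(struct qspinlock *lock,
					   struct mcs_spinlock *node) { }
static __always_inline void __pv_wait_head(struct qspinlock *lock,
					   struct mcs_spinlock *node) { }

/*
 * The slowpath body is written against the unprefixed names, so the
 * same source also compiles to the PV variant once these macros are
 * redefined.
 */
#define pv_init_node	__pv_init_node
#define pv_scan_next	__pv_scan_next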
2015 Apr 29
4
[PATCH v16 13/14] pvqspinlock: Improve slowpath performance by avoiding cmpxchg
On Fri, Apr 24, 2015 at 02:56:42PM -0400, Waiman Long wrote: > In the pv_scan_next() function, the slow cmpxchg atomic operation is > performed even if the other CPU is not even close to being halted. This > extra cmpxchg can harm slowpath performance. > > This patch introduces the new mayhalt flag to indicate if the other > spinning CPU is close to being halted o...
2015 Apr 24
0
[PATCH v16 13/14] pvqspinlock: Improve slowpath performance by avoiding cmpxchg
In the pv_scan_next() function, the slow cmpxchg atomic operation is performed even if the other CPU is not even close to being halted. This extra cmpxchg can harm slowpath performance. This patch introduces the new mayhalt flag to indicate if the other spinning CPU is close to being halted or not. The current thresh...
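A rough sketch of the optimization this describes; the field names and the pv_hash() helper follow the series' hash-table patches, but this is illustrative rather than the exact v16 code:

enum vcpu_state { vcpu_running = 0, vcpu_halted, vcpu_hashed };

struct pv_node {
	struct mcs_spinlock	mcs;
	int			cpu;
	u8			state;		/* enum vcpu_state */
	u8			mayhalt;	/* set shortly before halting */
};

static void pv_scan_next(struct qspinlock *lock, struct mcs_spinlock *node)
{
	struct pv_node *pn = (struct pv_node *)node;

	/*
	 * Cheap read first: if the waiter never signalled that it may
	 * halt, skip the expensive cmpxchg entirely.
	 */
	if (!READ_ONCE(pn->mayhalt))
		return;

	/*
	 * Otherwise fall back to the halted => hashed transition shown
	 * in the v15 review excerpt below.
	 */
	if (cmpxchg(&pn->state, vcpu_halted, vcpu_hashed) == vcpu_halted)
		pv_hash(lock, pn);
}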
2015 Apr 09
2
[PATCH v15 13/15] pvqspinlock: Only kick CPU at unlock time
...ter setting next->locked = 1 & lock acquired. > + * Check if the CPU has been halted. If so, set the _Q_SLOW_VAL flag > + * and put an entry into the lock hash table to be woken up at unlock time. > */ > -static void pv_kick_node(struct mcs_spinlock *node) > +static void pv_scan_next(struct qspinlock *lock, struct mcs_spinlock *node) I'm not too sure about that name change.. > { > struct pv_node *pn = (struct pv_node *)node; > + struct __qspinlock *l = (void *)lock; > > /* > + * Transition CPU state: halted => hashed > + * Quit if the tran...
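Pieced together from the quoted fragments, the function under review looks roughly like this (a reconstruction of the v15 patch, reusing the pv_node and vcpu_state names from the sketch above, not a verbatim copy):

static void pv_scan_next(struct qspinlock *lock, struct mcs_spinlock *node)
{
	struct pv_node *pn = (struct pv_node *)node;
	struct __qspinlock *l = (void *)lock;

	/*
	 * Transition CPU state: halted => hashed.
	 * Quit if the transition fails, i.e. the waiter was never
	 * actually halted.
	 */
	if (cmpxchg(&pn->state, vcpu_halted, vcpu_hashed) != vcpu_halted)
		return;

	/*
	 * Put the lock/node pair into the hash table and set the
	 * _Q_SLOW_VAL flag so the unlocker knows to look the entry up
	 * and kick the halted CPU at unlock time.
	 */
	pv_hash(lock, pn);
	WRITE_ONCE(l->locked, _Q_SLOW_VAL);
}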
2015 May 04
1
[PATCH v16 13/14] pvqspinlock: Improve slowpath performance by avoiding cmpxchg
On Thu, Apr 30, 2015 at 02:49:26PM -0400, Waiman Long wrote: > On 04/29/2015 02:11 PM, Peter Zijlstra wrote: > >On Fri, Apr 24, 2015 at 02:56:42PM -0400, Waiman Long wrote: > >>In the pv_scan_next() function, the slow cmpxchg atomic operation is > >>performed even if the other CPU is not even close to being halted. This > >>extra cmpxchg can harm slowpath performance. > >> > >>This patch introduces the new mayhalt flag to indicate if the other > >>...
2015 Apr 29
0
[PATCH v16 13/14] pvqspinlock: Improve slowpath performance by avoiding cmpxchg
On Wed, Apr 29, 2015 at 11:11 AM, Peter Zijlstra <peterz at infradead.org> wrote: > On Fri, Apr 24, 2015 at 02:56:42PM -0400, Waiman Long wrote: >> In the pv_scan_next() function, the slow cmpxchg atomic operation is >> performed even if the other CPU is not even close to being halted. This >> extra cmpxchg can harm slowpath performance. >> >> This patch introduces the new mayhalt flag to indicate if the other >> spinning CPU is clos...
2015 Apr 30
0
[PATCH v16 13/14] pvqspinlock: Improve slowpath performance by avoiding cmpxchg
On 04/29/2015 02:11 PM, Peter Zijlstra wrote: > On Fri, Apr 24, 2015 at 02:56:42PM -0400, Waiman Long wrote: >> In the pv_scan_next() function, the slow cmpxchg atomic operation is >> performed even if the other CPU is not even close to being halted. This >> extra cmpxchg can harm slowpath performance. >> >> This patch introduces the new mayhalt flag to indicate if the other >> spinning CPU is clos...
2015 Apr 07
18
[PATCH v15 00/15] qspinlock: a 4-byte queue spinlock with PV support
v14->v15: - Incorporate PeterZ's v15 qspinlock patch and improve upon the PV qspinlock code by dynamically allocating the hash table as well as some other performance optimization. - Simplified the Xen PV qspinlock code as suggested by David Vrabel <david.vrabel at citrix.com>. - Add benchmarking data for 3.19 kernel to compare the performance of a spinlock heavy test
2015 Apr 08
2
[PATCH v15 16/16] unfair qspinlock: a queue based unfair lock
...qspinlock *lock) /* * Generate the native code for queue_spin_unlock_slowpath(); provide NOPs for - * all the PV callbacks. + * all the PV and unfair callbacks. */ static __always_inline void __pv_init_node(struct mcs_spinlock *node) { } @@ -244,19 +244,36 @@ static __always_inline void __pv_scan_next(struct qspinlock *lock, static __always_inline void __pv_wait_head(struct qspinlock *lock, struct mcs_spinlock *node) { } +static __always_inline void __unfair_init_node(struct mcs_spinlock *node) { } +static __always_inline bool __unfair_wait_node(struct qspinlock *lock, + s...
2015 Apr 24
16
[PATCH v16 00/14] qspinlock: a 4-byte queue spinlock with PV support
v15->v16: - Remove the lfsr patch and use linear probing as lfsr is not really necessary in most cases. - Move the paravirt PV_CALLEE_SAVE_REGS_THUNK code to an asm header. - Add a patch to collect PV qspinlock statistics which also supersedes the PV lock hash debug patch. - Add PV qspinlock performance numbers. v14->v15: - Incorporate PeterZ's v15 qspinlock patch and improve
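The linear-probing replacement mentioned in the first item can be sketched as follows; the bucket layout and names are illustrative and the actual patch differs in detail:

struct pv_hash_entry {
	struct qspinlock	*lock;
	struct pv_node		*node;
};

static struct pv_hash_entry *pv_lock_hash;	/* allocated at boot */
static unsigned int pv_lock_hash_bits;		/* table size = 1 << bits */

static struct pv_hash_entry *pv_hash(struct qspinlock *lock,
				     struct pv_node *node)
{
	unsigned int mask = (1U << pv_lock_hash_bits) - 1;
	unsigned int idx  = hash_ptr(lock, pv_lock_hash_bits);

	/*
	 * Probe consecutive slots until an empty one is claimed. With a
	 * sparsely loaded table the walk nearly always ends within the
	 * first few cache-hot slots, which is why the LFSR step was
	 * judged unnecessary.
	 */
	for (;; idx = (idx + 1) & mask) {
		struct pv_hash_entry *he = &pv_lock_hash[idx];

		if (!cmpxchg(&he->lock, NULL, lock)) {
			WRITE_ONCE(he->node, node);
			return he;
		}
	}
}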