search for: qhead

Displaying 13 results from an estimated 20 matches for "qhead".

2014 May 07
0
[PATCH v10 09/19] qspinlock: Prepare for unfair lock support
If unfair locks are supported, the lock acquisition loop at the end of the queue_spin_lock_slowpath() function may need to detect that the lock can be stolen. Code is added for stolen-lock detection. A new qhead macro is also defined as a shorthand for mcs.locked. Signed-off-by: Waiman Long <Waiman.Long at hp.com> --- kernel/locking/qspinlock.c | 26 +++++++++++++++++++------- 1 files changed, 19 insertions(+), 7 deletions(-) diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c i...
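The qhead shorthand itself shows up in the later excerpts; a minimal sketch of the definitions involved, using the struct layout shown in those excerpts (simplified):

	/*
	 * Minimal sketch of the qhead shorthand described above, based on
	 * the struct layout shown in these excerpts (simplified).
	 */
	struct mcs_spinlock {
		struct mcs_spinlock *next;
		int locked;		/* 1 when this node is the queue head */
	};

	struct qnode {
		struct mcs_spinlock mcs;
	};

	#define qhead	mcs.locked	/* The queue head flag */

With the macro in place, node->qhead reads as node->mcs.locked, so the queue-head flag can be tested without spelling out the embedded mcs_spinlock member.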
2014 May 07
0
[PATCH v10 12/19] unfair qspinlock: Variable frequency lock stealing mechanism
...qspinlock.c @@ -63,6 +63,11 @@ */ struct qnode { struct mcs_spinlock mcs; +#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS + int lsteal_mask; /* Lock stealing frequency mask */ + u32 prev_tail; /* Tail code of previous node */ + struct qnode *qprev; /* Previous queue node addr */ +#endif }; #define qhead mcs.locked /* The queue head flag */ @@ -215,6 +220,139 @@ xchg_tail(struct qspinlock *lock, u32 tail, u32 *pval) } #endif /* _Q_PENDING_BITS == 8 */ +/* + ************************************************************************ + * Inline functions for supporting unfair queue lock * + ****...
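Only the struct fields appear in this excerpt; how the frequency mask actually throttles stealing is not shown. A hedged sketch of the idea, where the helper name and control flow are assumptions:

	/*
	 * Illustrative sketch only: a per-node frequency mask can throttle
	 * lock-steal attempts so that waiters farther from the queue head
	 * try less often. Only the lsteal_mask field comes from the excerpt
	 * above; this helper and its use are assumptions.
	 */
	static inline bool may_steal_lock(struct qnode *node, u32 loop_cnt)
	{
		/* attempt a steal once every (lsteal_mask + 1) spin iterations */
		return (loop_cnt & node->lsteal_mask) == 0;
	}

A waiter would call this from its spin loop and only attempt a compare-and-swap on the lock word when it returns true, making steal attempts rarer for nodes deeper in the queue.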
2014 May 30
0
[PATCH v11 14/16] pvqspinlock: Add qspinlock para-virtualization support
...ter will set a flag (_Q_LOCKED_SLOWPATH) in the lock byte to indicate that the unlock slowpath has to be invoked. In the unlock slowpath, the current lock holder will find the queue head by following the previous-node pointer links stored in the queue node structure until it finds one that has the qhead flag turned on. It then attempts to kick the CPU of the queue head. After the queue head has acquired the lock, it will also check the status of the next node and set the _Q_LOCKED_SLOWPATH flag if that node has been halted. Enabling the PV code does have a performance impact on spinlock acquisitions and rele...
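A hedged sketch of the queue-head walk this excerpt describes; the qprev field comes from the earlier v10 excerpt, while pv_kick_cpu() and the cpu field are assumed stand-ins for the patch's actual kick mechanism:

	/*
	 * Sketch of the unlock-slowpath walk: starting from the tail node,
	 * follow the previous-node links until the node with its qhead flag
	 * set is found, then kick that vCPU. pv_kick_cpu() and the cpu
	 * field are assumptions, not the patch's actual interface.
	 */
	static void pv_kick_queue_head(struct qnode *tail_node)
	{
		struct qnode *node = tail_node;

		while (!ACCESS_ONCE(node->qhead))
			node = node->qprev;	/* walk toward the queue head */

		pv_kick_cpu(node->cpu);		/* wake the (possibly halted) vCPU */
	}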
2014 Apr 01
10
[PATCH v8 00/10] qspinlock: a 4-byte queue spinlock with PV support
v7->v8: - Remove one unneeded atomic operation from the slowpath, thus improving performance. - Simplify some of the code and add more comments. - Test for the X86_FEATURE_HYPERVISOR CPU feature bit to enable/disable the unfair lock. - Reduce the unfair lock slowpath's lock-stealing frequency based on a waiter's distance from the queue head. - Add performance data for the IvyBridge-EX CPU.
2014 May 08
2
[PATCH v10 09/19] qspinlock: Prepare for unfair lock support
On Wed, May 07, 2014 at 11:01:37AM -0400, Waiman Long wrote: > If unfair lock is supported, the lock acquisition loop at the end of > the queue_spin_lock_slowpath() function may need to detect the fact > the lock can be stolen. Code are added for the stolen lock detection. > > A new qhead macro is also defined as a shorthand for mcs.locked. NAK, unfair should be a pure test-and-set lock. > /** > * get_qlock - Set the lock bit and own the lock > - * @lock: Pointer to queue spinlock structure > + * @lock : Pointer to queue spinlock structure > + * Return: 1 if lock...
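For reference, the pure test-and-set lock suggested in this NAK is only a few lines; a generic sketch, not code from the patch series:

	/* Generic test-and-set spinlock sketch (not from the patch series). */
	static inline void tas_spin_lock(atomic_t *lock)
	{
		while (atomic_xchg(lock, 1))	/* old value 0 means we own it */
			cpu_relax();
	}

	static inline void tas_spin_unlock(atomic_t *lock)
	{
		smp_mb();	/* order the critical section before releasing */
		atomic_set(lock, 0);
	}

Such a lock is simple and paravirt-friendly but unfair: there is no queue, so any spinning CPU may grab the lock next.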
2014 Mar 19
15
[PATCH v7 00/11] qspinlock: a 4-byte queue spinlock with PV support
v6->v7: - Remove an atomic operation from the 2-task contending code - Shorten the names of some macros - Make the queue waiter attempt to steal the lock when the unfair lock is enabled - Remove the lock-holder kick from the PV code and fix a race condition - Run the unfair lock & PV code on overcommitted KVM guests to collect performance data. v5->v6: - Change the optimized
2014 Apr 02
17
[PATCH v8 00/10] qspinlock: a 4-byte queue spinlock with PV support
N.B. Sorry for the duplicate. This patch series was resent because the original was rejected by the vger.kernel.org list server due to an overly long header. There is no change in content. v7->v8: - Remove one unneeded atomic operation from the slowpath, thus improving performance. - Simplify some of the code and add more comments. - Test for the X86_FEATURE_HYPERVISOR CPU feature bit
2014 May 07
32
[PATCH v10 00/19] qspinlock: a 4-byte queue spinlock with PV support
v9->v10: - Make some minor changes to qspinlock.c to accommodate review feedback. - Change author to PeterZ for 2 of the patches. - Include Raghavendra KT's test results in patch 18. v8->v9: - Integrate PeterZ's version of the queue spinlock patch with some modification: http://lkml.kernel.org/r/20140310154236.038181843 at infradead.org - Break the more complex
2014 Apr 17
33
[PATCH v9 00/19] qspinlock: a 4-byte queue spinlock with PV support
v8->v9: - Integrate PeterZ's version of the queue spinlock patch with some modifications: http://lkml.kernel.org/r/20140310154236.038181843 at infradead.org - Break the more complex patches into smaller ones to ease the review effort. - Fix a race condition in the PV qspinlock code. v7->v8: - Remove one unneeded atomic operation from the slowpath, thus improving
2014 Apr 02
0
[PATCH v8 01/10] qspinlock: A generic 4-byte queue spinlock implementation
...+ RELEASE_NODE /* Release current node directly */ +}; + +/* + * The queue node structure + * + * This structure is essentially the same as the mcs_spinlock structure + * in mcs_spinlock.h file. It is retained for future extension where new + * fields may be added. + */ +struct qnode { + u32 qhead; /* Queue head flag */ + struct qnode *next; /* Next queue node addr */ +}; + +struct qnode_set { + struct qnode nodes[MAX_QNODES]; + int node_idx; /* Current node to use */ +}; + +/* + * Per-CPU queue node structures + */ +static DEFINE_PER_CPU_ALIGNED(struct qnode_set, qnset) = { { { 0 } },...
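Given the per-CPU qnode_set above, node selection presumably works by bumping node_idx, since a CPU can re-enter the slowpath from nested contexts (task, softirq, hardirq, NMI). A sketch under that assumption:

	/*
	 * Assumed sketch: pick the next free per-CPU queue node. MAX_QNODES
	 * bounds the nesting depth (one node per context level).
	 */
	static inline struct qnode *get_qnode(void)
	{
		struct qnode_set *qset = this_cpu_ptr(&qnset);

		return qset->nodes + qset->node_idx++;
	}

The node would be handed back by decrementing node_idx again once the lock has been passed on.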
2014 May 10
0
[PATCH v10 09/19] qspinlock: Prepare for unfair lock support
...4 at 11:01:37AM -0400, Waiman Long wrote: >> If unfair lock is supported, the lock acquisition loop at the end of >> the queue_spin_lock_slowpath() function may need to detect the fact >> the lock can be stolen. Code are added for the stolen lock detection. >> >> A new qhead macro is also defined as a shorthand for mcs.locked. > NAK, unfair should be a pure test-and-set lock. I have performance data showing that a simple test-and-set lock does not scale well. That is the primary reason for ditching the test-and-set lock and using a more complicated scheme which scal...
2014 May 30
19
[PATCH v11 00/16] qspinlock: a 4-byte queue spinlock with PV support
v10->v11: - Use a simple test-and-set unfair lock to simplify the code, but performance may suffer a bit for a large guest with many CPUs. - Take out Raghavendra KT's test results, as the unfair lock changes may render some of his results invalid. - Add PV support without increasing the size of the core queue node structure. - Other minor changes to address some of the
2005 Aug 15
3
[-mm PATCH 2/32] fs: fix-up schedule_timeout() usage
...obalMid_Lock); if(list_empty(&GlobalOplock_Q)) { spin_unlock(&GlobalMid_Lock); - set_current_state(TASK_INTERRUPTIBLE); - schedule_timeout(39*HZ); + schedule_timeout_interruptible(39*HZ); } else { oplock_item = list_entry(GlobalOplock_Q.next, struct oplock_q_entry, qhead); diff -urpN 2.6.13-rc5-mm1/fs/cifs/connect.c 2.6.13-rc5-mm1-dev/fs/cifs/connect.c --- 2.6.13-rc5-mm1/fs/cifs/connect.c 2005-08-07 10:05:22.000000000 -0700 +++ 2.6.13-rc5-mm1-dev/fs/cifs/connect.c 2005-08-12 13:42:49.000000000 -0700 @@ -3215,10 +3215,8 @@ cifs_umount(struct super_block *sb, stru...
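The helper used by this fix simply folds the two-step pattern into one call; its definition in kernel/timer.c is essentially:

	signed long __sched schedule_timeout_interruptible(signed long timeout)
	{
		__set_current_state(TASK_INTERRUPTIBLE);
		return schedule_timeout(timeout);
	}

which is why the patch can drop the separate set_current_state(TASK_INTERRUPTIBLE) line.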