Displaying 20 results from an estimated 25 matches for "prev_qcode".
2014 Mar 02
1
[PATCH v5 1/8] qspinlock: Introducing a 4-byte queue spinlock implementation
On 02/26, Waiman Long wrote:
>
> +void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
> +{
> + unsigned int cpu_nr, qn_idx;
> + struct qnode *node, *next;
> + u32 prev_qcode, my_qcode;
> +
> + /*
> + * Get the queue node
> + */
> + cpu_nr = smp_processor_id();
> + node = get_qnode(&qn_idx);
> +
> + /*
> + * It should never happen that all the queue nodes are being used.
> + */
> + BUG_ON(!node);
> +
> + /*
> + * Set...
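The slowpath excerpt above starts by taking a per-CPU queue node from get_qnode() before publishing its queue code. Below is a minimal sketch, under assumed names and layout, of what such a per-CPU node pool could look like; the real patch sizes the pool so that nested contexts (softirq, hardirq, NMI) each get their own node, and its exact definitions are not reproduced here.

#include <linux/compiler.h>
#include <linux/percpu.h>
#include <linux/types.h>

#define MAX_QNODES      4               /* assumed nesting depth */

struct qnode {
        u32             wait;           /* non-zero while spinning in the queue */
        struct qnode    *next;          /* the CPU queued behind this one */
};

struct qnode_set {
        int             node_idx;       /* first free slot in nodes[] */
        struct qnode    nodes[MAX_QNODES];
};

static DEFINE_PER_CPU_ALIGNED(struct qnode_set, qnset);

/*
 * Hand out the next free node on this CPU and report its index.
 * The caller is expected to have preemption disabled.
 */
static struct qnode *get_qnode(unsigned int *qn_idx)
{
        struct qnode_set *qset = this_cpu_ptr(&qnset);

        if (unlikely(qset->node_idx >= MAX_QNODES))
                return NULL;            /* trips the BUG_ON() in the slowpath */
        *qn_idx = qset->node_idx++;
        return &qset->nodes[*qn_idx];
}

Since each node is released when its owner leaves the slowpath, a depth of a few entries per CPU is enough to cover lock nesting across interrupt contexts.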
2014 Feb 26
2
[PATCH v5 3/8] qspinlock, x86: Add x86 specific optimization for 2 contending tasks
...+ * Nothing needs to be done if the old value is
> + * (_QSPINLOCK_WAITING | _QSPINLOCK_LOCKED).
> + */
> + return 0;
> +}
> @@ -296,6 +478,9 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
> return;
> }
>
> +#ifdef queue_code_xchg
> + prev_qcode = queue_code_xchg(lock, my_qcode);
> +#else
> /*
> * Exchange current copy of the queue node code
> */
> @@ -329,6 +514,7 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
> } else
> prev_qcode &= ~_QSPINLOCK_LOCKED; /* Clear the lock bit */
&...
2014 Feb 26
0
[PATCH RFC v5 7/8] pvqspinlock, x86: Add qspinlock para-virtualization support
....h
@@ -0,0 +1,176 @@
+#ifndef _ASM_X86_PVQSPINLOCK_H
+#define _ASM_X86_PVQSPINLOCK_H
+
+/*
+ * Queue Spinlock Para-Virtualization Support
+ *
+ * +------+ +-----+ nxtcpu_p1 +----+
+ * | Lock | |Queue|----------->|Next|
+ * |Holder|<-----------|Head |<-----------|Node|
+ * +------+ prev_qcode +-----+ prev_qcode +----+
+ *
+ * As long as the current lock holder passes through the slowpath, the queue
+ * head CPU will have its CPU number stored in prev_qcode. The situation is
+ * the same for the node next to the queue head.
+ *
+ * The next node, while setting up the next pointer in the...
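The header comment above explains that the queue head learns the lock holder's CPU number through prev_qcode, and the next node learns the queue head's CPU the same way, with nxtcpu_p1 carrying the forward link. Purely as an illustration of why those CPU numbers matter in a guest, the sketch below shows a halt/kick pattern built on them; pv_halt_self() and pv_kick_cpu() are hypothetical stand-ins for hypervisor-specific primitives, and the structure layout is assumed, not taken from the header.

#include <linux/compiler.h>
#include <linux/types.h>

enum pv_cpustate { PV_CPU_ACTIVE, PV_CPU_HALTED };

struct pv_qnode {
        u8      cpustate;       /* PV_CPU_ACTIVE or PV_CPU_HALTED */
        int     prev_cpu;       /* decoded from prev_qcode: the CPU we wait on   */
        int     next_cpu;       /* decoded from nxtcpu_p1: the CPU waiting on us */
};

/* Hypothetical hypervisor primitives, e.g. halt/kick hypercalls. */
extern void pv_halt_self(void);
extern void pv_kick_cpu(int cpu);

/* Waiter side: after spinning too long without progress, halt this vCPU. */
static void pv_wait(struct pv_qnode *pv)
{
        WRITE_ONCE(pv->cpustate, PV_CPU_HALTED);
        pv_halt_self();
        WRITE_ONCE(pv->cpustate, PV_CPU_ACTIVE);
}

/* Lock holder / queue head side: kick the CPU recorded as waiting on us. */
static void pv_kick_next(struct pv_qnode *self)
{
        if (self->next_cpu >= 0)
                pv_kick_cpu(self->next_cpu);
}

A kick needs a CPU identifier, so having the numbers at hand in prev_qcode and nxtcpu_p1 lets a releasing CPU wake a preempted waiter directly.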
2014 Feb 27
14
[PATCH v5 0/8] qspinlock: a 4-byte queue spinlock with PV support
v4->v5:
- Move the optimized 2-task contending code to the generic file to
enable more architectures to use it without code duplication.
- Address some of the style-related comments by PeterZ.
- Allow the use of unfair queue spinlock in a real para-virtualized
execution environment.
- Add para-virtualization support to the qspinlock code by ensuring
that the lock holder and queue
2014 Feb 27
0
[PATCH v5 3/8] qspinlock, x86: Add x86 specific optimization for 2 contending tasks
...>> + * (_QSPINLOCK_WAITING | _QSPINLOCK_LOCKED).
>> + */
>> + return 0;
>> +}
>
>
>
>> @@ -296,6 +478,9 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
>> return;
>> }
>>
>> +#ifdef queue_code_xchg
>> + prev_qcode = queue_code_xchg(lock, my_qcode);
>> +#else
>> /*
>> * Exchange current copy of the queue node code
>> */
>> @@ -329,6 +514,7 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
>> } else
>> prev_qcode &= ~_QSPINLOCK_LO...
2014 Mar 12
0
[PATCH RFC v6 09/11] pvqspinlock, x86: Add qspinlock para-virtualization support
...-0,0 +1,232 @@
+#ifndef _ASM_X86_PVQSPINLOCK_H
+#define _ASM_X86_PVQSPINLOCK_H
+
+/*
+ * Queue Spinlock Para-Virtualization (PV) Support
+ *
+ * +------+ +-----+ nxtcpu_p1 +----+
+ * | Lock | |Queue|----------->|Next|
+ * |Holder|<-----------|Head |<-----------|Node|
+ * +------+ prev_qcode +-----+ prev_qcode +----+
+ *
+ * As long as the current lock holder passes through the slowpath, the queue
+ * head CPU will have its CPU number stored in prev_qcode. The situation is
+ * the same for the node next to the queue head.
+ *
+ * The next node, while setting up the next pointer in the...
2014 Apr 01
10
[PATCH v8 00/10] qspinlock: a 4-byte queue spinlock with PV support
v7->v8:
- Remove one unneeded atomic operation from the slowpath, thus
improving performance.
- Simplify some of the code and add more comments.
- Test for X86_FEATURE_HYPERVISOR CPU feature bit to enable/disable
unfair lock.
- Reduce unfair lock slowpath lock stealing frequency depending
on its distance from the queue head.
- Add performance data for IvyBridge-EX CPU.
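The changelog item above about X86_FEATURE_HYPERVISOR refers to the CPUID feature bit that is set when the kernel runs as a guest. A minimal sketch of gating a lock variant on that bit is shown below; the static-key name and the initcall are illustrative assumptions, while boot_cpu_has() and the feature flag are existing kernel interfaces.

#include <linux/init.h>
#include <linux/static_key.h>
#include <asm/cpufeature.h>

/* Assumed name: a key that the lock fast path can test cheaply. */
struct static_key paravirt_unfairlocks_enabled = STATIC_KEY_INIT_FALSE;

static __init int unfair_locks_init_jump(void)
{
        /* X86_FEATURE_HYPERVISOR is only set when running under a hypervisor. */
        if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
                static_key_slow_inc(&paravirt_unfairlocks_enabled);

        return 0;
}
early_initcall(unfair_locks_init_jump);

On bare metal the key stays false and the fair queue spinlock is used unchanged; in a guest the key flips once at boot, so the check costs a patched jump rather than a conditional branch on every lock acquisition.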
2014 Feb 26
22
[PATCH v5 0/8] qspinlock: a 4-byte queue spinlock with PV support
v4->v5:
- Move the optimized 2-task contending code to the generic file to
enable more architectures to use it without code duplication.
- Address some of the style-related comments by PeterZ.
- Allow the use of unfair queue spinlock in a real para-virtualized
execution environment.
- Add para-virtualization support to the qspinlock code by ensuring
that the lock holder and queue
2014 Feb 26
0
[PATCH v5 1/8] qspinlock: Introducing a 4-byte queue spinlock implementation
...eue_spin_lock_slowpath - acquire the queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ * @qsval: Current value of the queue spinlock 32-bit word
+ */
+void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
+{
+ unsigned int cpu_nr, qn_idx;
+ struct qnode *node, *next;
+ u32 prev_qcode, my_qcode;
+
+ /*
+ * Get the queue node
+ */
+ cpu_nr = smp_processor_id();
+ node = get_qnode(&qn_idx);
+
+ /*
+ * It should never happen that all the queue nodes are being used.
+ */
+ BUG_ON(!node);
+
+ /*
+ * Set up the new cpu code to be exchanged
+ */
+ my_qcode = queue_encode_qc...
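The excerpt above is cut off just as the CPU number and per-CPU node index are packed into a queue code. The helpers below sketch one plausible packing; the names, bit width, and the +1 offset (so that a qcode of zero can mean "no queued waiter") are illustrative assumptions, not a completion of the truncated identifier in the patch.

#include <linux/types.h>

#define QCODE_IDX_BITS  2       /* assumed: room for four nesting levels per CPU */

static inline u32 qcode_encode(unsigned int cpu_nr, unsigned int qn_idx)
{
        return ((cpu_nr + 1) << QCODE_IDX_BITS) | qn_idx;
}

static inline void qcode_decode(u32 qcode, unsigned int *cpu_nr,
                                unsigned int *qn_idx)
{
        *qn_idx = qcode & ((1U << QCODE_IDX_BITS) - 1);
        *cpu_nr = (qcode >> QCODE_IDX_BITS) - 1;
}

Whatever the real layout, the decode step is what lets a later waiter (or the para-virtualization code) turn prev_qcode back into the CPU and node it should link to or kick.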
2014 Feb 27
0
[PATCH v5 1/8] qspinlock: Introducing a 4-byte queue spinlock implementation
...eue_spin_lock_slowpath - acquire the queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ * @qsval: Current value of the queue spinlock 32-bit word
+ */
+void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
+{
+ unsigned int cpu_nr, qn_idx;
+ struct qnode *node, *next;
+ u32 prev_qcode, my_qcode;
+
+ /*
+ * Get the queue node
+ */
+ cpu_nr = smp_processor_id();
+ node = get_qnode(&qn_idx);
+
+ /*
+ * It should never happen that all the queue nodes are being used.
+ */
+ BUG_ON(!node);
+
+ /*
+ * Set up the new cpu code to be exchanged
+ */
+ my_qcode = queue_encode_qc...
2014 Mar 19
15
[PATCH v7 00/11] qspinlock: a 4-byte queue spinlock with PV support
v6->v7:
- Remove an atomic operation from the 2-task contending code
- Shorten the names of some macros
- Make the queue waiter attempt to steal the lock when the unfair lock is
enabled.
- Remove lock holder kick from the PV code and fix a race condition
- Run the unfair lock & PV code on overcommitted KVM guests to collect
performance data.
v5->v6:
- Change the optimized
2014 Feb 26
0
[PATCH v5 3/8] qspinlock, x86: Add x86 specific optimization for 2 contending tasks
...*****************************
*/
+#ifndef queue_spin_trylock_quick
+static inline int queue_spin_trylock_quick(struct qspinlock *lock, int qsval)
+{ return 0; }
+#endif
#ifndef queue_get_lock_qcode
/**
@@ -266,6 +443,11 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
u32 prev_qcode, my_qcode;
/*
+ * Try the quick spinning code path
+ */
+ if (queue_spin_trylock_quick(lock, qsval))
+ return;
+ /*
* Get the queue node
*/
cpu_nr = smp_processor_id();
@@ -296,6 +478,9 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
return;
}
+#ifdef queue...
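The excerpt above shows the hook pattern used throughout the series: queue_spin_trylock_quick() defaults to a stub returning 0, so architectures without a quick path fall straight through to the queue. Purely as an illustration, the sketch below shows what a two-contender quick path built on a waiting bit could look like; the bit values, the field name, and the exact flow are assumptions, not the x86 code added by this patch.

#include <linux/atomic.h>
#include <linux/types.h>
#include <asm/processor.h>      /* cpu_relax() */

#define _QSPINLOCK_LOCKED       (1U << 0)       /* assumed bit layout */
#define _QSPINLOCK_WAITING      (1U << 8)       /* assumed bit layout */

struct qspinlock {
        atomic_t        qlcode;         /* assumed: lock bit, waiting bit, qcode */
};

/*
 * Quick path for exactly two contenders: if the lock word shows a single
 * owner and nothing else, claim the waiting bit and spin on the lock bit
 * directly instead of joining the queue.
 */
static inline int queue_spin_trylock_quick(struct qspinlock *lock, int qsval)
{
        u32 old;

        if (qsval != _QSPINLOCK_LOCKED)
                return 0;       /* more than one waiter: use the queue */

        if (atomic_cmpxchg(&lock->qlcode, _QSPINLOCK_LOCKED,
                           _QSPINLOCK_LOCKED | _QSPINLOCK_WAITING)
                        != _QSPINLOCK_LOCKED)
                return 0;       /* lost the race: fall back to the queue */

        /* We own the waiting bit; wait for the holder, then take the lock. */
        for (;;) {
                old = atomic_read(&lock->qlcode);
                if (!(old & _QSPINLOCK_LOCKED) &&
                    atomic_cmpxchg(&lock->qlcode, old,
                                   (old & ~_QSPINLOCK_WAITING) |
                                   _QSPINLOCK_LOCKED) == old)
                        return 1;
                cpu_relax();
        }
}

This is the same idea that later became the "pending" bit in the mainline qspinlock: the second CPU avoids the queueing overhead entirely when there is only one lock holder ahead of it.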
2014 Feb 27
0
[PATCH v5 3/8] qspinlock, x86: Add x86 specific optimization for 2 contending tasks
...*****************************
*/
+#ifndef queue_spin_trylock_quick
+static inline int queue_spin_trylock_quick(struct qspinlock *lock, int qsval)
+{ return 0; }
+#endif
#ifndef queue_get_lock_qcode
/**
@@ -266,6 +443,11 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
u32 prev_qcode, my_qcode;
/*
+ * Try the quick spinning code path
+ */
+ if (queue_spin_trylock_quick(lock, qsval))
+ return;
+ /*
* Get the queue node
*/
cpu_nr = smp_processor_id();
@@ -296,6 +478,9 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, int qsval)
return;
}
+#ifdef queue...
2014 Apr 02
17
[PATCH v8 00/10] qspinlock: a 4-byte queue spinlock with PV support
N.B. Sorry for the duplicate. This patch series was resent because the
original one was rejected by the vger.kernel.org list server
due to an overly long header. There is no change in content.
v7->v8:
- Remove one unneeded atomic operation from the slowpath, thus
improving performance.
- Simplify some of the code and add more comments.
- Test for X86_FEATURE_HYPERVISOR CPU feature bit