Displaying 20 results from an estimated 24 matches for "_arch_supports_atomic_8_16_bits_ops".
2014 Mar 02
1
[PATCH v5 2/8] qspinlock, x86: Enable x86-64 to use queue spinlock
On 02/26, Waiman Long wrote:
>
> +#define _ARCH_SUPPORTS_ATOMIC_8_16_BITS_OPS
> +
> +/*
> + * x86-64 specific queue spinlock union structure
> + */
> +union arch_qspinlock {
> + struct qspinlock slock;
> + u8 lock; /* Lock bit */
> +};
And this enables the optimized version of queue_spin_setlock().
But why does it check ACCESS_ONCE(qlock->lock)...
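The question above concerns the optimized `queue_spin_setlock()` that `_ARCH_SUPPORTS_ATOMIC_8_16_BITS_OPS` enables: the byte-wide trylock does a `cmpxchg` on just the lock byte of the union. A minimal user-space sketch of that fast path, assuming the GCC `__atomic` builtins as a stand-in for the kernel's `cmpxchg()` (field layout follows the union quoted above; `_QSPINLOCK_LOCKED` defined here for illustration):

```c
#include <stdint.h>

struct qspinlock { unsigned int val; };

/* Per the patch: overlay the 32-bit qspinlock with its low lock byte */
union arch_qspinlock {
    struct qspinlock slock;
    uint8_t lock;               /* lock byte */
};

#define _QSPINLOCK_LOCKED 1U

/* Byte-wide trylock: succeed only if the lock byte was 0.
 * __atomic_compare_exchange_n stands in for the kernel cmpxchg(). */
static int queue_spin_setlock(struct qspinlock *lock)
{
    union arch_qspinlock *qlock = (union arch_qspinlock *)lock;
    uint8_t expected = 0;
    return __atomic_compare_exchange_n(&qlock->lock, &expected,
                                       _QSPINLOCK_LOCKED, 0,
                                       __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
}
```

Because the `cmpxchg` touches only the lock byte, a waiter's queue code in the upper bytes is neither read nor clobbered — which is exactly why the arch must guarantee atomic 8-bit operations, and why Oleg's question about the preceding `ACCESS_ONCE(qlock->lock)` read matters for ordering.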

2014 Feb 26
0
[PATCH v5 1/8] qspinlock: Introducing a 4-byte queue spinlock implementation
...that support: *
+ * 1) Atomic byte and short data write *
+ * 2) Byte and short data exchange and compare-exchange instructions *
+ * *
+ * For those architectures, their asm/qspinlock.h header file should *
+ * define the followings in order to use the optimized codes. *
+ * 1) The _ARCH_SUPPORTS_ATOMIC_8_16_BITS_OPS macro *
+ * 2) A smp_u8_store_release() macro for byte size store operation *
+ * 3) A "union arch_qspinlock" structure that include the individual *
+ * fields of the qspinlock structure, including: *
+ * o slock - the qspinlock structure *
+ * o lock - the lock b...
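The three requirements listed in that header comment can be put together into a sketch of a minimal arch-specific `asm/qspinlock.h`. The `barrier()`/`ACCESS_ONCE()` definitions and the `u8` typedef below are user-space stand-ins for the kernel's, added so the fragment is self-contained; the macro body follows the x86 version shown later in this thread:

```c
#include <stdint.h>

/* Stand-ins for kernel primitives (assumptions, not the real kernel
 * definitions): a compiler barrier and a forced volatile access. */
#define barrier()       __asm__ __volatile__("" ::: "memory")
#define ACCESS_ONCE(x)  (*(volatile __typeof__(x) *)&(x))
typedef uint8_t u8;

struct qspinlock { unsigned int val; };

/* 1) The capability flag the generic code tests for */
#define _ARCH_SUPPORTS_ATOMIC_8_16_BITS_OPS

/* 2) A byte-size store with release semantics; on a TSO arch a
 *    compiler barrier before a plain store is sufficient */
#define smp_u8_store_release(p, v) \
do {                               \
    barrier();                     \
    ACCESS_ONCE(*(p)) = (v);       \
} while (0)

/* 3) The union giving byte-level access to the qspinlock fields */
union arch_qspinlock {
    struct qspinlock slock;        /* the generic qspinlock */
    u8 lock;                       /* the lock byte */
};
```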
2014 Feb 26
0
[PATCH v5 3/8] qspinlock, x86: Add x86 specific optimization for 2 contending tasks
...+++++++++++++++++++++++++++-
3 files changed, 215 insertions(+), 5 deletions(-)
diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
index 44cefee..98db42e 100644
--- a/arch/x86/include/asm/qspinlock.h
+++ b/arch/x86/include/asm/qspinlock.h
@@ -7,12 +7,30 @@
#define _ARCH_SUPPORTS_ATOMIC_8_16_BITS_OPS
+#define smp_u8_store_release(p, v) \
+do { \
+ barrier(); \
+ ACCESS_ONCE(*p) = (v); \
+} while (0)
+
+/*
+ * As the qcode will be accessed as a 16-bit word, no offset is needed
+ */
+#define _QCODE_VAL_OFFSET 0
+
/*
* x86-64 specific queue spinlock union structure
+ * Besides the sloc...
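The `smp_u8_store_release()` definition in this patch relies on x86's TSO memory model: stores are not reordered with earlier stores, so release semantics for the byte store need only a compiler barrier (`barrier()`) plus a volatile access — no fence instruction. In portable C11 the same intent is spelled as an explicit release store; a sketch of the equivalent:

```c
#include <stdint.h>
#include <stdatomic.h>

/* Portable-C11 equivalent of the patch's smp_u8_store_release():
 * on x86 this compiles to a plain byte store, since TSO already
 * orders it after all preceding stores. */
static void u8_store_release(uint8_t *p, uint8_t v)
{
    atomic_store_explicit((_Atomic uint8_t *)p, v, memory_order_release);
}
```

This is also why the macro is guarded by `!defined(CONFIG_X86_OOSTORE) && !defined(CONFIG_X86_PPRO_FENCE)` in patch 2/8: those configs mark the rare x86 variants where stores can be visibly reordered, breaking the barrier-only assumption.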
2014 Feb 27
14
[PATCH v5 0/8] qspinlock: a 4-byte queue spinlock with PV support
v4->v5:
- Move the optimized 2-task contending code to the generic file to
enable more architectures to use it without code duplication.
- Address some of the style-related comments by PeterZ.
- Allow the use of unfair queue spinlock in a real para-virtualized
execution environment.
- Add para-virtualization support to the qspinlock code by ensuring
that the lock holder and queue
2014 Feb 26
0
[PATCH v5 2/8] qspinlock, x86: Enable x86-64 to use queue spinlock
...100644
index 0000000..44cefee
--- /dev/null
+++ b/arch/x86/include/asm/qspinlock.h
@@ -0,0 +1,41 @@
+#ifndef _ASM_X86_QSPINLOCK_H
+#define _ASM_X86_QSPINLOCK_H
+
+#include <asm-generic/qspinlock_types.h>
+
+#if !defined(CONFIG_X86_OOSTORE) && !defined(CONFIG_X86_PPRO_FENCE)
+
+#define _ARCH_SUPPORTS_ATOMIC_8_16_BITS_OPS
+
+/*
+ * x86-64 specific queue spinlock union structure
+ */
+union arch_qspinlock {
+ struct qspinlock slock;
+ u8 lock; /* Lock bit */
+};
+
+#define queue_spin_unlock queue_spin_unlock
+/**
+ * queue_spin_unlock - release a queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ *
+...
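The body of `queue_spin_unlock()` is truncated in this snippet. A hedged sketch of what a byte-wide unlock fast path looks like, following the `smp_u8_store_release()` pattern from this series (C11 release store used as a user-space stand-in; this is not the verbatim patch body):

```c
#include <stdint.h>
#include <stdatomic.h>

struct qspinlock { unsigned int val; };

union arch_qspinlock {
    struct qspinlock slock;
    uint8_t lock;               /* lock byte */
};

/* Release the lock by clearing only the lock byte, with release
 * ordering, leaving any queue-code bytes in the word untouched. */
static void queue_spin_unlock(struct qspinlock *lock)
{
    union arch_qspinlock *qlock = (union arch_qspinlock *)lock;
    atomic_store_explicit((_Atomic uint8_t *)&qlock->lock, 0,
                          memory_order_release);
}
```

The point of the byte-wide store is that unlock never needs an atomic read-modify-write: a waiter updating the upper bytes of the word cannot conflict with a plain store to the low byte on an arch with atomic 8-bit stores.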
2014 Mar 19
0
[PATCH v7 02/11] qspinlock, x86: Enable x86-64 to use queue spinlock
...100644
index 0000000..44cefee
--- /dev/null
+++ b/arch/x86/include/asm/qspinlock.h
@@ -0,0 +1,41 @@
+#ifndef _ASM_X86_QSPINLOCK_H
+#define _ASM_X86_QSPINLOCK_H
+
+#include <asm-generic/qspinlock_types.h>
+
+#if !defined(CONFIG_X86_OOSTORE) && !defined(CONFIG_X86_PPRO_FENCE)
+
+#define _ARCH_SUPPORTS_ATOMIC_8_16_BITS_OPS
+
+/*
+ * x86-64 specific queue spinlock union structure
+ */
+union arch_qspinlock {
+ struct qspinlock slock;
+ u8 lock; /* Lock bit */
+};
+
+#define queue_spin_unlock queue_spin_unlock
+/**
+ * queue_spin_unlock - release a queue spinlock
+ * @lock : Pointer to queue spinlock structure
+ *
+...
2014 Mar 19
1
[PATCH v7 02/11] qspinlock, x86: Enable x86-64 to use queue spinlock
...++ b/arch/x86/include/asm/qspinlock.h
> @@ -0,0 +1,41 @@
> +#ifndef _ASM_X86_QSPINLOCK_H
> +#define _ASM_X86_QSPINLOCK_H
> +
> +#include <asm-generic/qspinlock_types.h>
> +
> +#if !defined(CONFIG_X86_OOSTORE) && !defined(CONFIG_X86_PPRO_FENCE)
> +
> +#define _ARCH_SUPPORTS_ATOMIC_8_16_BITS_OPS
> +
> +/*
> + * x86-64 specific queue spinlock union structure
> + */
> +union arch_qspinlock {
> + struct qspinlock slock;
> + u8 lock; /* Lock bit */
> +};
> +
> +#define queue_spin_unlock queue_spin_unlock
> +/**
> + * queue_spin_unlock - release a queue spi...
2014 Mar 12
17
[PATCH v6 00/11] qspinlock: a 4-byte queue spinlock with PV support
v5->v6:
- Change the optimized 2-task contending code to make it fairer at the
expense of a bit of performance.
- Add a patch to support unfair queue spinlock for Xen.
- Modify the PV qspinlock code to follow what was done in the PV
ticketlock.
- Add performance data for the unfair lock as well as the PV
support code.
v4->v5:
- Move the optimized 2-task contending code to the
2014 Mar 19
15
[PATCH v7 00/11] qspinlock: a 4-byte queue spinlock with PV support
v6->v7:
- Remove an atomic operation from the 2-task contending code
- Shorten the names of some macros
- Make the queue waiter attempt to steal the lock when unfair lock is
enabled.
- Remove lock holder kick from the PV code and fix a race condition
- Run the unfair lock & PV code on overcommitted KVM guests to collect
performance data.
v5->v6:
- Change the optimized
2014 Mar 12
0
[PATCH v6 04/11] qspinlock: Optimized code path for 2 contending tasks
...7 +307,7 @@ static __always_inline int queue_spin_setlock(struct qspinlock *lock)
{
union arch_qspinlock *qlock = (union arch_qspinlock *)lock;
- return cmpxchg(&qlock->lock, 0, _QSPINLOCK_LOCKED) == 0;
+ return cmpxchg(&qlock->lock_wait, 0, _QSPINLOCK_LOCKED) == 0;
}
#else /* _ARCH_SUPPORTS_ATOMIC_8_16_BITS_OPS */
/*
@@ -214,6 +336,10 @@ static __always_inline int queue_spin_setlock(struct qspinlock *lock)
* that may get superseded by a more optimized version. *
************************************************************************
*/
+#ifndef queue_spin_trylock_quick
+static inline int queue_...
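The v6 diff above widens the `cmpxchg` from the lock byte to a 16-bit `lock_wait` field, so taking the lock also verifies that no waiting bit is set — this is the "fairer at the expense of a bit of performance" change from the v5→v6 changelog. A hedged sketch, assuming a field layout in which `lock_wait` overlays a lock byte and a waiting byte (the exact layout is not shown in this snippet), with GCC `__atomic` builtins standing in for `cmpxchg()`:

```c
#include <stdint.h>

#define _QSPINLOCK_LOCKED 1U

struct qspinlock { unsigned int val; };

/* Hypothetical layout matching the v6 snippet: a 16-bit lock_wait
 * word overlaying the lock byte and the waiting byte. */
union arch_qspinlock {
    struct qspinlock slock;
    struct {
        uint8_t lock;           /* lock bit */
        uint8_t wait;           /* waiting bit */
    };
    uint16_t lock_wait;         /* lock + wait as one 16-bit word */
};

/* v6 version: the 16-bit cmpxchg succeeds only when BOTH the lock
 * byte and the waiting byte are zero, so a new arrival cannot take
 * the lock ahead of a task already spinning in the wait slot. */
static int queue_spin_setlock(struct qspinlock *lock)
{
    union arch_qspinlock *qlock = (union arch_qspinlock *)lock;
    uint16_t expected = 0;
    return __atomic_compare_exchange_n(&qlock->lock_wait, &expected,
                                       _QSPINLOCK_LOCKED, 0,
                                       __ATOMIC_ACQUIRE, __ATOMIC_RELAXED);
}
```

This is also why the fallback path behind `#ifndef queue_spin_trylock_quick` exists: architectures without atomic 16-bit operations take the generic (word-wide) path instead.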
2014 Apr 01
10
[PATCH v8 00/10] qspinlock: a 4-byte queue spinlock with PV support
v7->v8:
- Remove one unneeded atomic operation from the slowpath, thus
improving performance.
- Simplify some of the code and add more comments.
- Test for X86_FEATURE_HYPERVISOR CPU feature bit to enable/disable
unfair lock.
- Reduce unfair lock slowpath lock stealing frequency depending
on its distance from the queue head.
- Add performance data for IvyBridge-EX CPU.