Displaying 5 results from an estimated 7 matches for "55601b4".
2014 Apr 17
2
[PATCH v9 06/19] qspinlock: prolong the stay in the pending bit path
...this optimization.
Also note that I don't have this cmpxchg loop anymore.
> kernel/locking/qspinlock.c | 32 ++++++++++++++++++++++++++++++--
> 1 files changed, 30 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
> index 55601b4..497da24 100644
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -216,6 +216,7 @@ xchg_tail(struct qspinlock *lock, u32 tail, u32 *pval)
> static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
> {
> u32 old, new, val = *pval;
> + int...
2014 Apr 17
0
[PATCH v9 06/19] qspinlock: prolong the stay in the pending bit path
...cket spinlock is still quite a bit faster.
Signed-off-by: Waiman Long <Waiman.Long@hp.com>
---
kernel/locking/qspinlock.c | 32 ++++++++++++++++++++++++++++++--
1 files changed, 30 insertions(+), 2 deletions(-)
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 55601b4..497da24 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -216,6 +216,7 @@ xchg_tail(struct qspinlock *lock, u32 tail, u32 *pval)
static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
{
u32 old, new, val = *pval;
+ int retry = 1;
/*
* trylock ||...
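The snippet cuts off at the new retry counter. A minimal user-space sketch of what the patch is doing, assuming the series' bit layout (bit 0 = locked, bit 8 = pending, bits 16+ = tail); the retry budget and helper names are illustrative, not the kernel's:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define Q_LOCKED_VAL  (1U << 0)   /* lock holder present        */
    #define Q_PENDING_VAL (1U << 8)   /* one waiter spinning inline */
    #define Q_TAIL_MASK   (~0U << 16) /* MCS queue tail encoding    */

    static bool trylock_pending_sketch(_Atomic uint32_t *qval)
    {
        int retry = 1;  /* extra spins before giving up on this path */
        uint32_t val = atomic_load_explicit(qval, memory_order_relaxed);
        uint32_t new;

        for (;;) {
            /* Waiters are already queued: go queue as well. */
            if (val & Q_TAIL_MASK)
                return false;

            /* Pending already taken: linger here a little instead of
             * queueing immediately -- the point of "prolong the stay
             * in the pending bit path". */
            if (val & Q_PENDING_VAL) {
                if (retry-- <= 0)
                    return false;
                val = atomic_load_explicit(qval, memory_order_relaxed);
                continue;
            }

            /* 0 -> locked (uncontended), locked -> locked|pending. */
            new = val ? (val | Q_PENDING_VAL) : Q_LOCKED_VAL;

            if (atomic_compare_exchange_weak_explicit(qval, &val, new,
                        memory_order_acquire, memory_order_relaxed))
                break;  /* on failure the CAS refreshed val for us */
        }

        if (new == Q_LOCKED_VAL)
            return true;  /* got the lock outright */

        /* We hold the pending bit: wait for the owner to release... */
        while (atomic_load_explicit(qval, memory_order_relaxed) & Q_LOCKED_VAL)
            ;

        /* ...then pending -> locked in one atomic add (+LOCKED,
         * -PENDING), leaving any tail bits set by late queuers intact. */
        atomic_fetch_add_explicit(qval, Q_LOCKED_VAL - Q_PENDING_VAL,
                                  memory_order_acquire);
        return true;
    }

The payoff of staying in this path: the pending waiter spins on the lock word itself, so a lightly contended lock never has to touch a per-cpu MCS node cache line at all.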
2014 Apr 18
0
[PATCH v9 06/19] qspinlock: prolong the stay in the pending bit path
...xpense of a bit of slowdown in the pending bit code path.
>> kernel/locking/qspinlock.c | 32 ++++++++++++++++++++++++++++++--
>> 1 files changed, 30 insertions(+), 2 deletions(-)
>>
>> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
>> index 55601b4..497da24 100644
>> --- a/kernel/locking/qspinlock.c
>> +++ b/kernel/locking/qspinlock.c
>> @@ -216,6 +216,7 @@ xchg_tail(struct qspinlock *lock, u32 tail, u32 *pval)
>> static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
>> {
>> u32 old, n...
2014 Apr 17
0
[PATCH v9 05/19] qspinlock: Optimize for smaller NR_CPUS
...ET)
#define _Q_TAIL_CPU_MASK _Q_SET_MASK(TAIL_CPU)
+#define _Q_TAIL_OFFSET _Q_TAIL_IDX_OFFSET
#define _Q_TAIL_MASK (_Q_TAIL_IDX_MASK | _Q_TAIL_CPU_MASK)
#define _Q_LOCKED_VAL (1U << _Q_LOCKED_OFFSET)
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index fcf06cb..55601b4 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -22,8 +22,13 @@
#include <linux/percpu.h>
#include <linux/hardirq.h>
#include <linux/mutex.h>
+#include <asm/byteorder.h>
#include <asm/qspinlock.h>
+#if !defined(__LITTLE_ENDIAN) &&...
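The layout trick behind this patch: with NR_CPUS < 16K the tail fits in 16 bits, so the locked and pending bytes can be overlaid as one 16-bit half and some transitions become plain stores instead of cmpxchg loops. A hedged sketch of that union, with illustrative field names and GCC-style endianness macros standing in for <asm/byteorder.h>:

    #include <stdint.h>

    union qsketch {
        uint32_t val;                /* whole word, for atomic ops  */
        struct {
    #if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__
            uint16_t locked_pending; /* bits 0-15: locked + pending */
            uint16_t tail;           /* bits 16-31: idx + cpu       */
    #else
            uint16_t tail;
            uint16_t locked_pending;
    #endif
        };
    };

    /* Only the single pending waiter performs this transition, so a
     * plain halfword store of "locked = 1, pending = 0" suffices --
     * no cmpxchg, and the concurrently-updated tail half of the word
     * is never touched. */
    static inline void clear_pending_set_locked(union qsketch *l)
    {
        l->locked_pending = 1;
    }

The store is safe precisely because queuers only ever write the other 16-bit half; that separation is what the endianness-dependent field ordering preserves.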
2014 Apr 17
33
[PATCH v9 00/19] qspinlock: a 4-byte queue spinlock with PV support
v8->v9:
- Integrate PeterZ's version of the queue spinlock patch with some
modification:
http://lkml.kernel.org/r/20140310154236.038181843@infradead.org
- Break the more complex patches into smaller ones to ease review effort.
- Fix a race condition in the PV qspinlock code.
v7->v8:
- Remove one unneeded atomic operation from the slowpath, thus
improving
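For orientation, a sketch of how the series carves up the 4-byte lock word, with offsets assuming the NR_CPUS < 16K layout from patch 05; the helper mirrors the usual MCS-tail encoding, and the names here are illustrative rather than the kernel's:

    #include <stdint.h>

    /* 4-byte lock word, NR_CPUS < 16K layout:
     *  bits  0-7  : locked byte
     *  bits  8-15 : pending
     *  bits 16-17 : tail index (which of 4 per-cpu MCS nodes,
     *               one per task/softirq/hardirq/NMI context)
     *  bits 18-31 : tail cpu (cpu number + 1; 0 = queue empty)
     */
    #define Q_TAIL_IDX_OFFSET 16
    #define Q_TAIL_CPU_OFFSET 18

    static inline uint32_t encode_tail(int cpu, int idx)
    {
        /* cpu + 1, so an all-zero word means "unlocked, no waiters". */
        return ((uint32_t)(cpu + 1) << Q_TAIL_CPU_OFFSET) |
               ((uint32_t)idx << Q_TAIL_IDX_OFFSET);
    }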