Displaying 6 results from an estimated 6 matches for "9e7659e".
2014 May 08
1
[PATCH v10 10/19] qspinlock, x86: Allow unfair spinlock in a virtual guest
On Wed, May 07, 2014 at 11:01:38AM -0400, Waiman Long wrote:
No, we want the unfair thing for VIRT, not PARAVIRT.
> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
> index 9e7659e..10e87e1 100644
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -227,6 +227,14 @@ static __always_inline int get_qlock(struct qspinlock *lock)
> {
> struct __qspinlock *l = (void *)lock;
>
> +#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
> + if (static_...
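The quoted hunk is cut off at "static_...". As a hedged, self-contained analogue of the gate under review (userspace C11 atomics standing in for the kernel's static key and cmpxchg; every name below is invented for the illustration and none of it is the actual patch body), the unfair path simply tries to steal the lock word when a runtime flag is set, ignoring its place in the queue:

#include <stdatomic.h>
#include <stdbool.h>

/* Stand-in for the static key the patch tests at runtime. */
static bool unfair_locks_enabled;

/* Lock word: 0 = free, 1 = held (mirrors _Q_LOCKED_VAL). */
static bool unfair_trylock(atomic_int *lock)
{
	int expected = 0;

	if (!unfair_locks_enabled)
		return false;	/* take the fair, queued slow path instead */

	/* Grab the lock if it happens to be free, regardless of waiters. */
	return atomic_compare_exchange_strong(lock, &expected, 1);
}

On bare metal the flag stays clear, so callers fall through to the normal queued acquisition and fairness is preserved.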
2014 May 07
0
[PATCH v10 09/19] qspinlock: Prepare for unfair lock support
...defined as a shorthand for mcs.locked.
Signed-off-by: Waiman Long <Waiman.Long at hp.com>
---
kernel/locking/qspinlock.c | 26 +++++++++++++++++++-------
1 files changed, 19 insertions(+), 7 deletions(-)
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index e98d7d4..9e7659e 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -64,6 +64,7 @@
struct qnode {
struct mcs_spinlock mcs;
};
+#define qhead mcs.locked /* The queue head flag */
/*
* Per-CPU queue node structures; we can never have more than 4 nested
@@ -216,18 +217,20 @@ xchg_tail...
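The hunk above is truncated at xchg_tail. As a standalone illustration of the aliasing trick the patch introduces (the surrounding program is invented for the example; only the #define mirrors the patch), the macro lets queue-head bookkeeping read like a field of struct qnode while writing the underlying mcs.locked word:

#include <stdio.h>

/* Minimal stand-in for the kernel's mcs_spinlock: only the aliased field. */
struct mcs_spinlock {
	int locked;
};

struct qnode {
	struct mcs_spinlock mcs;
};

/* Same shorthand as the patch: node.qhead expands to node.mcs.locked. */
#define qhead	mcs.locked

int main(void)
{
	struct qnode node = { { 0 } };

	node.qhead = 1;		/* actually writes node.mcs.locked */
	printf("mcs.locked = %d\n", node.mcs.locked);	/* prints 1 */
	return 0;
}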
2014 May 07
0
[PATCH v10 10/19] qspinlock, x86: Allow unfair spinlock in a virtual guest
...X86_FEATURE_HYPERVISOR))
+ return 0;
+
+ static_key_slow_inc(&paravirt_unfairlocks_enabled);
+ printk(KERN_INFO "Unfair spinlock enabled\n");
+
+ return 0;
+}
+early_initcall(unfair_locks_init_jump);
+
+#endif
diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
index 9e7659e..10e87e1 100644
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -227,6 +227,14 @@ static __always_inline int get_qlock(struct qspinlock *lock)
{
struct __qspinlock *l = (void *)lock;
+#ifdef CONFIG_PARAVIRT_UNFAIR_LOCKS
+ if (static_key_false(&paravirt_unfairlocks_enab...
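The top of this excerpt is cut off just before the feature test. Read together, the visible lines follow the usual boot-time static-key pattern; a sketch assembling them in one place (the exact feature-check helper and the function signature are assumptions, not quoted from the patch):

/*
 * Sketch, not the literal patch: flip the unfair-lock static key only
 * when the CPU reports it is running under a hypervisor, so bare metal
 * keeps the fair, queued behaviour.
 */
static __init int unfair_locks_init_jump(void)
{
	if (!boot_cpu_has(X86_FEATURE_HYPERVISOR))	/* assumed check */
		return 0;

	static_key_slow_inc(&paravirt_unfairlocks_enabled);
	printk(KERN_INFO "Unfair spinlock enabled\n");

	return 0;
}
early_initcall(unfair_locks_init_jump);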
2014 May 07
32
[PATCH v10 00/19] qspinlock: a 4-byte queue spinlock with PV support
v9->v10:
- Make some minor changes to qspinlock.c to accommodate review feedback.
- Change author to PeterZ for 2 of the patches.
- Include Raghavendra KT's test results in patch 18.
v8->v9:
- Integrate PeterZ's version of the queue spinlock patch with some
modification:
http://lkml.kernel.org/r/20140310154236.038181843 at infradead.org
- Break the more complex