Search results matching "__pv_queue_spin_unlock".
2015 Apr 13
1
[PATCH v15 09/15] pvqspinlock: Implement simple paravirt support for the qspinlock
On Thu, Apr 09, 2015 at 05:41:44PM -0400, Waiman Long wrote:
> >>+__visible void __pv_queue_spin_unlock(struct qspinlock *lock)
> >>+{
> >>+ struct __qspinlock *l = (void *)lock;
> >>+ struct pv_node *node;
> >>+
> >>+ if (likely(cmpxchg(&l->locked, _Q_LOCKED_VAL, 0) == _Q_LOCKED_VAL))
> >>+ return;
> >>+
> >>+ /*
> &g...
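The fast path quoted above is the whole trick: a single cmpxchg releases the lock when the byte still holds _Q_LOCKED_VAL, and only falls through to the kick path when a waiter has flagged itself. A minimal user-space sketch of that shape in C11 atomics (the function name is invented; the values mirror the patch, where _Q_LOCKED_OFFSET is 0):

#include <stdatomic.h>

#define _Q_LOCKED_VAL	1U	/* held, no halted waiter */
#define _Q_SLOW_VAL	3U	/* held, a halted waiter needs a kick */

void pv_unlock_sketch(_Atomic unsigned char *locked)
{
	unsigned char old = _Q_LOCKED_VAL;

	/* fast path: nobody halted, just drop the lock */
	if (atomic_compare_exchange_strong(locked, &old, 0))
		return;

	/* slow path: the byte held _Q_SLOW_VAL, so a waiter parked
	 * itself; look it up in the hash (pv_hash_find in the patch)
	 * and pv_kick() its vcpu */
}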
2015 May 04
1
[PATCH v16 08/14] pvqspinlock: Implement simple paravirt support for the qspinlock
...ve
rewritten to hopefully be clearer (and also hopefully not wrecked them).
I took out the cacheline sized structure which takes out that double
loop and simplifies things. I've also added some comments which
hopefully explain how/why we ended up with this exact scheme.
I've also moved the __pv_queue_spin_unlock() function to the tail, such
that we keep the 'wait'/'kick' order for both node and head.
In any case, like I just wrote on the other email, I've stuck some
things in my queue (up to and including patch 11) and if it all works
out we can continue from there.
---
Subject: pvqsp...
2015 Apr 09
6
[PATCH v15 09/15] pvqspinlock: Implement simple paravirt support for the qspinlock
...This relies on the architecture to provide two paravirt hypercalls:
> + *
> + * pv_wait(u8 *ptr, u8 val) -- suspends the vcpu if *ptr == val
> + * pv_kick(cpu) -- wakes a suspended vcpu
> + *
> + * Using these we implement __pv_queue_spin_lock_slowpath() and
> + * __pv_queue_spin_unlock() to replace native_queue_spin_lock_slowpath() and
> + * native_queue_spin_unlock().
> + */
> +
> +#define _Q_SLOW_VAL (3U << _Q_LOCKED_OFFSET)
> +
> +enum vcpu_state {
> + vcpu_running = 0,
> + vcpu_halted,
> +};
> +
> +struct pv_node {
> + struct mcs_spin...
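The pv_wait()/pv_kick() contract in the comment above maps almost one-to-one onto a futex pair. A rough Linux user-space model, with the caveats that futex words must be 32-bit (the kernel uses a u8) and that the real pv_kick() targets a cpu number rather than a word:

#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

/* pv_wait(ptr, val): sleep iff *ptr still equals val; the kernel
 * re-checks the word atomically before sleeping, which closes the
 * race against a concurrent kick */
static void pv_wait_model(int *ptr, int val)
{
	syscall(SYS_futex, ptr, FUTEX_WAIT, val, NULL, NULL, 0);
}

/* pv_kick(cpu): wake the halted vcpu; modeled here as a wake on
 * the same word instead of an IPI to a specific cpu */
static void pv_kick_model(int *ptr)
{
	syscall(SYS_futex, ptr, FUTEX_WAKE, 1, NULL, NULL, 0);
}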
2015 Apr 02
3
[PATCH 8/9] qspinlock: Generic paravirt support
...k value.
> So we need to have
> some kind of synchronization mechanism to let the lookup CPU know when is a
> good time to look up.
No, it's all already ordered and working.
pv_wait_head():
pv_hash()
/* MB as per cmpxchg */
cmpxchg(&l->locked, _Q_LOCKED_VAL, _Q_SLOW_VAL);
VS
__pv_queue_spin_unlock():
if (xchg(&l->locked, 0) != _Q_SLOW_VAL)
return;
/* MB as per xchg */
pv_hash_find(lock);
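Spelled out, the pairing is: the waiter stores its hash entry and then flips the byte with a full-barrier cmpxchg; the unlocker reads the byte back with a full-barrier xchg. If the xchg returns _Q_SLOW_VAL, the hash store is already visible, so the lookup cannot miss. A C11 sketch of the two sides (pv_hash/pv_hash_find are stand-in declarations):

#include <stdatomic.h>

extern void pv_hash(void *lock);	/* publish lock -> waiter entry */
extern void *pv_hash_find(void *lock);	/* look that entry back up */

_Atomic unsigned char locked;

void wait_head_side(void *lock)
{
	unsigned char old = 1;	/* _Q_LOCKED_VAL */

	pv_hash(lock);		/* entry stored first... */
	/* ...then the seq_cst RMW acts as the full barrier ("MB") */
	(void)atomic_compare_exchange_strong(&locked, &old,
					     3 /* _Q_SLOW_VAL */);
}

void unlock_side(void *lock)
{
	/* full-barrier RMW: reading 3 here implies the pv_hash() store
	 * above is visible, so the lookup below finds the entry */
	if (atomic_exchange(&locked, 0) != 3)
		return;
	pv_hash_find(lock);
}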
2015 Mar 16
0
[PATCH 8/9] qspinlock: Generic paravirt support
...+++++++
3 files changed, 248 insertions(+), 1 deletion(-)
--- a/include/asm-generic/qspinlock.h
+++ b/include/asm-generic/qspinlock.h
@@ -118,6 +118,9 @@ static __always_inline bool virt_queue_s
}
#endif
+extern void __pv_queue_spin_lock_slowpath(struct qspinlock *lock, u32 val);
+extern void __pv_queue_spin_unlock(struct qspinlock *lock);
+
/*
* Initializer
*/
--- a/kernel/locking/qspinlock.c
+++ b/kernel/locking/qspinlock.c
@@ -18,6 +18,9 @@
* Authors: Waiman Long <waiman.long at hp.com>
* Peter Zijlstra <peterz at infradead.org>
*/
+
+#ifndef _GEN_PV_LOCK_SLOWPATH
+
#inclu...
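The bare #ifndef _GEN_PV_LOCK_SLOWPATH visible at the top of the diff is half of a self-include trick: qspinlock.c compiles its slow path twice, first with the pv hooks stubbed out (native), then re-includes itself with the hooks redefined to generate the __pv_ variant. A stripped-down illustration of the pattern (file name and identifiers invented for the sketch):

/* slowpath.c -- one body, two functions */
#ifndef _GEN_PV_LOCK_SLOWPATH
#include <stdio.h>

/* native pass: the pv hooks compile away */
#define pv_init_node(n)	((void)(n))
#define slowpath	native_slowpath
#endif

void slowpath(void)
{
	int node;

	pv_init_node(&node);	/* no-op here, real work in the pv pass */
	puts("slow path body shared by both variants");
}

#ifndef _GEN_PV_LOCK_SLOWPATH
#define _GEN_PV_LOCK_SLOWPATH
#undef pv_init_node
#undef slowpath
#define pv_init_node(n)	((void)(n), (void)puts("pv: init node"))
#define slowpath	pv_slowpath
#include "slowpath.c"	/* second pass over the same body */
#endif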
2015 Apr 24
0
[PATCH v16 08/14] pvqspinlock: Implement simple paravirt support for the qspinlock
...+ * of spinning them.
+ *
+ * This relies on the architecture to provide two paravirt hypercalls:
+ *
+ * pv_wait(u8 *ptr, u8 val) -- suspends the vcpu if *ptr == val
+ * pv_kick(cpu) -- wakes a suspended vcpu
+ *
+ * Using these we implement __pv_queue_spin_lock_slowpath() and
+ * __pv_queue_spin_unlock() to replace native_queue_spin_lock_slowpath() and
+ * native_queue_spin_unlock().
+ */
+
+#define _Q_SLOW_VAL (3U << _Q_LOCKED_OFFSET)
+
+enum vcpu_state {
+ vcpu_running = 0,
+ vcpu_halted,
+};
+
+struct pv_node {
+ struct mcs_spinlock mcs;
+ struct mcs_spinlock __res[3];
+
+ int cpu;
+ u...
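The _Q_SLOW_VAL define is what turns the locked byte into a three-way state: 0 (free), _Q_LOCKED_VAL = 1 (held, waiters spin) and _Q_SLOW_VAL = 3 (held, and the queue head has hashed the lock and halted; note bit 0 stays set, so the byte still reads as locked). A sketch of the two transitions that use it (the hash, wait and kick helpers are stand-ins, not the patch's names):

#include <stdatomic.h>

enum {
	UNLOCKED   = 0,
	LOCKED_VAL = 1,	/* _Q_LOCKED_VAL */
	SLOW_VAL   = 3,	/* _Q_SLOW_VAL: locked bit still set */
};

extern void pv_hash(void *lock);
extern void kick_hashed_waiter(void *lock);	/* invented name */
extern void wait_until_kicked(_Atomic unsigned char *p, unsigned char v);

void head_halts(void *lock, _Atomic unsigned char *locked)
{
	unsigned char old = LOCKED_VAL;

	pv_hash(lock);	/* publish where the unlocker can find us */
	if (atomic_compare_exchange_strong(locked, &old, SLOW_VAL))
		wait_until_kicked(locked, SLOW_VAL);
	/* cmpxchg failure means the lock was released under us,
	 * so there is no need to sleep at all */
}

void release(void *lock, _Atomic unsigned char *locked)
{
	if (atomic_exchange(locked, UNLOCKED) == SLOW_VAL)
		kick_hashed_waiter(lock);	/* wake the halted vcpu */
}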
2015 Apr 09
0
[PATCH v15 09/15] pvqspinlock: Implement simple paravirt support for the qspinlock
...itecture to provide two paravirt hypercalls:
>> + *
>> + * pv_wait(u8 *ptr, u8 val) -- suspends the vcpu if *ptr == val
>> + * pv_kick(cpu) -- wakes a suspended vcpu
>> + *
>> + * Using these we implement __pv_queue_spin_lock_slowpath() and
>> + * __pv_queue_spin_unlock() to replace native_queue_spin_lock_slowpath() and
>> + * native_queue_spin_unlock().
>> + */
>> +
>> +#define _Q_SLOW_VAL (3U << _Q_LOCKED_OFFSET)
>> +
>> +enum vcpu_state {
>> + vcpu_running = 0,
>> + vcpu_halted,
>> +};
>> +
>&...
2015 Mar 18
2
[PATCH 8/9] qspinlock: Generic paravirt support
...1 deletion(-)
>
> --- a/include/asm-generic/qspinlock.h
> +++ b/include/asm-generic/qspinlock.h
> @@ -118,6 +118,9 @@ static __always_inline bool virt_queue_s
> }
> #endif
>
> +extern void __pv_queue_spin_lock_slowpath(struct qspinlock *lock, u32 val);
> +extern void __pv_queue_spin_unlock(struct qspinlock *lock);
> +
> /*
> * Initializer
> */
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -18,6 +18,9 @@
> * Authors: Waiman Long <waiman.long at hp.com>
> * Peter Zijlstra <peterz at infradead.org>...
2015 Apr 07
0
[PATCH v15 09/15] pvqspinlock: Implement simple paravirt support for the qspinlock
...+ * of spinning them.
+ *
+ * This relies on the architecture to provide two paravirt hypercalls:
+ *
+ * pv_wait(u8 *ptr, u8 val) -- suspends the vcpu if *ptr == val
+ * pv_kick(cpu) -- wakes a suspended vcpu
+ *
+ * Using these we implement __pv_queue_spin_lock_slowpath() and
+ * __pv_queue_spin_unlock() to replace native_queue_spin_lock_slowpath() and
+ * native_queue_spin_unlock().
+ */
+
+#define _Q_SLOW_VAL (3U << _Q_LOCKED_OFFSET)
+
+enum vcpu_state {
+ vcpu_running = 0,
+ vcpu_halted,
+};
+
+struct pv_node {
+ struct mcs_spinlock mcs;
+ struct mcs_spinlock __res[3];
+
+ int cpu;
+ u...
2015 Mar 19
0
[Xen-devel] [PATCH 0/9] qspinlock stuff -v15
...id spin_time_accum_blocked(u64 start)
}
#endif /* CONFIG_XEN_DEBUG_FS */
+static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
+static DEFINE_PER_CPU(char *, irq_name);
+static bool xen_pvspin = true;
+
+#ifdef CONFIG_QUEUE_SPINLOCK
+
+#include <asm/qspinlock.h>
+
+PV_CALLEE_SAVE_REGS_THUNK(__pv_queue_spin_unlock);
+
+static void xen_qlock_wait(u8 *ptr, u8 val)
+{
+ int irq = __this_cpu_read(lock_kicker_irq);
+
+ xen_clear_irq_pending(irq);
+
+ barrier();
+
+ if (READ_ONCE(*ptr) == val)
+ xen_poll_irq(irq);
+}
+
+static void xen_qlock_kick(int cpu)
+{
+ xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
+}
+
+...
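Note the order in xen_qlock_wait(): clear any stale pending event, barrier, re-check *ptr, and only then poll, so a kick can never slip between the final check and the block. The same lost-wakeup-free pattern modeled with a POSIX semaphore standing in for the event channel (a sketch, not the Xen code):

#include <semaphore.h>
#include <stdatomic.h>

static sem_t kick;	/* sem_init(&kick, 0, 0) at startup; models
			 * the per-cpu lock_kicker_irq event channel */

static void qlock_wait_model(_Atomic unsigned char *ptr, unsigned char val)
{
	/* xen_clear_irq_pending(): drain a stale kick left over
	 * from an earlier round */
	while (sem_trywait(&kick) == 0)
		;

	/* re-check AFTER draining: a kick arriving from here on just
	 * leaves the semaphore posted, so the wait below returns
	 * immediately rather than the wakeup being lost */
	if (atomic_load(ptr) == val)
		sem_wait(&kick);	/* xen_poll_irq() */
}

static void qlock_kick_model(void)
{
	sem_post(&kick);	/* xen_send_IPI_one(...) */
}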
2015 Mar 16
0
[PATCH 9/9] qspinlock, x86, kvm: Implement KVM support for paravirt qspinlock
Implement the paravirt qspinlock for x86-kvm.
We use the regular paravirt call patching to switch between:
native_queue_spin_lock_slowpath() __pv_queue_spin_lock_slowpath()
native_queue_spin_unlock() __pv_queue_spin_unlock()
We use a callee saved call for the unlock function which reduces the
i-cache footprint and allows 'inlining' of SPIN_UNLOCK functions
again.
We further optimize the unlock path by patching the direct call with a
"movb $0,%arg1" if we are indeed using the native unlock code. Th...
2015 Apr 02
0
[PATCH 8/9] qspinlock: Generic paravirt support
On Thu, Apr 02, 2015 at 07:20:57PM +0200, Peter Zijlstra wrote:
> pv_wait_head():
>
> pv_hash()
> /* MB as per cmpxchg */
> cmpxchg(&l->locked, _Q_LOCKED_VAL, _Q_SLOW_VAL);
>
> VS
>
> __pv_queue_spin_unlock():
>
> if (xchg(&l->locked, 0) != _Q_SLOW_VAL)
> return;
>
> /* MB as per xchg */
> pv_hash_find(lock);
>
>
Something like so.. compile tested only.
I took out the LFSR because that was likely over engineering from my
side :-)
--- a/kernel/locking/qspinlo...