2014 Jun 16
4
[PATCH 10/11] qspinlock: Paravirt support
On 06/15/2014 08:47 AM, Peter Zijlstra wrote:
>
>
>
> +#ifdef CONFIG_PARAVIRT_SPINLOCKS
> +
> +/*
> + * Write a comment about how all this works...
> + */
> +
> +#define _Q_LOCKED_SLOW (2U << _Q_LOCKED_OFFSET)
> +
> +struct pv_node {
> + struct mcs_spinlock mcs;
> + struct mcs_spinlock __offset[3];
> + int cpu, head;
> +};
I am wondering why you need the separate cpu and head variables. I
thought one would be enough here. The wait code put the cpu number...
2014 Jun 15
0
[PATCH 10/11] qspinlock: Paravirt support
...S]);
/*
* We must be able to distinguish between no-tail and the tail at 0:0,
@@ -218,6 +238,156 @@ static __always_inline void set_locked(s
ACCESS_ONCE(l->locked) = _Q_LOCKED_VAL;
}
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+
+/*
+ * Write a comment about how all this works...
+ */
+
+#define _Q_LOCKED_SLOW (2U << _Q_LOCKED_OFFSET)
+
+struct pv_node {
+ struct mcs_spinlock mcs;
+ struct mcs_spinlock __offset[3];
+ int cpu, head;
+};
+
+#define INVALID_HEAD -1
+#define NO_HEAD nr_cpu_ids
+
+void __pv_init_node(struct mcs_spinlock *node)
+{
+ struct pv_node *pn = (struct pv_node *)node;
+
+ BUILD...
2014 Jun 22
1
[PATCH 11/11] qspinlock, kvm: Add paravirt support
...return;
> +
> + /*
> + * Make sure an interrupt handler can't upset things in a
> + * partially setup state.
> + */
I am seeing a hang even with a 2-CPU guest (with the patches on top of 3.15-rc6).
Looking further with gdb, I see one CPU is stuck in native_halt with the
slowpath flag (_Q_LOCKED_SLOW) set when it was called.
(gdb) bt
#0 native_halt () at /test/master/arch/x86/include/asm/irqflags.h:55
#1 0xffffffff81033118 in halt (ptr=0xffffffff81eb0e58, val=524291) at
/test/master/arch/x86/include/asm/paravirt.h:116
#2 kvm_wait (ptr=0xffffffff81eb0e58, val=524291) at
arch/x86/kernel/kvm...
2014 Jun 15
28
[PATCH 00/11] qspinlock with paravirt support
Since Waiman seems incapable of doing simple things, here's my take on the
paravirt crap.
The first few patches are taken from Waiman's latest series, but the virt
support is completely new. Its primary aim is to not mess up the native code.
I've not stress tested it, but the virt and paravirt (kvm) cases boot on simple
smp guests. I've not done Xen, but the patch should be
2014 Jun 18
0
[PATCH 10/11] qspinlock: Paravirt support
Il 17/06/2014 00:08, Waiman Long ha scritto:
>> +void __pv_queue_unlock(struct qspinlock *lock)
>> +{
>> + int val = atomic_read(&lock->val);
>> +
>> + native_queue_unlock(lock);
>> +
>> + if (val & _Q_LOCKED_SLOW)
>> + ___pv_kick_head(lock);
>> +}
>> +
>
> Again a race can happen here between the reading and writing of the lock
> value. I can't think of a good way to do that without using cmpxchg.
Could you just use xchg on the locked byte?
Paolo