Displaying 20 results from an estimated 77 matches for "kick_cpu".
2014 Feb 27
3
[PATCH RFC v5 7/8] pvqspinlock, x86: Add qspinlock para-virtualization support
...possibilities here,
considering that in undercommit cases we should not exceed
HEAD_SPIN_THRESHOLD,
1. The looping vcpu in pv_head_spin_check() should do halt(),
considering that we have done enough spinning (more than the typical
lock-hold time) and hence are probably in an overcommit situation.
2. Multiplex kick_cpu to do a directed yield in the qspinlock case,
though this may result in some ping-ponging?
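A minimal sketch of option 1, under the assumption of a stand-alone pv_head_spin_check() that bounds its spinning by HEAD_SPIN_THRESHOLD before halting; the threshold value, the lock-byte argument and the halt/kick pairing are illustrative, not taken from the patch:

#define HEAD_SPIN_THRESHOLD	(1 << 14)	/* illustrative spin budget */

/*
 * Illustrative only: spin up to HEAD_SPIN_THRESHOLD, then halt() so the
 * host can run another vcpu; the lock releaser is expected to kick this
 * vcpu (option 2 would instead turn that kick into a directed yield).
 */
static void pv_head_spin_check(u8 *lock_byte)
{
	unsigned int loops = 0;

	while (READ_ONCE(*lock_byte)) {
		cpu_relax();
		if (++loops >= HEAD_SPIN_THRESHOLD) {
			halt();		/* woken by the releaser's kick_cpu */
			loops = 0;
		}
	}
}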
2013 Aug 26
7
[PATCH V13 0/4] Paravirtualized ticket spinlocks for KVM host
This series forms the KVM host part of the paravirtual spinlock work,
based against the kvm tree.
Please refer to https://lkml.org/lkml/2013/8/9/265 for the kvm guest, Xen,
and x86 parts, which were merged to -tip spinlocks.
Please note that:
"kvm uapi: Add KICK_CPU and PV_UNHALT definition to uapi" is a common patch
for both guest and host.
Changes since V12:
fold patch 3 into patch 2 to keep the series bisectable (Eric Northup)
Raghavendra K T (3):
kvm uapi: Add KICK_CPU and PV_UNHALT definition to uapi
kvm hypervisor: Simplify kvm_for_each_vcpu with
kvm_i...
2013 Aug 26
0
[PATCH V13 0/4] Paravirtualized ticket spinlocks for KVM host
...ghavendra K T wrote:
>
> This series forms the kvm host part of paravirtual spinlock
> based against kvm tree.
>
> Please refer to https://lkml.org/lkml/2013/8/9/265 for
> kvm guest and Xen, x86 part merged to -tip spinlocks.
>
> Please note that:
> kvm uapi: Add KICK_CPU and PV_UNHALT definition to uapi is a common patch
> for both guest and host.
>
Thanks, applied. The patchset is not against kvm.git queue though, so I
had to fix one minor conflict manually.
> Changes since V12:
> fold the patch 3 into patch 2 for bisection. (Eric Northup)
>...
2014 Mar 13
2
[PATCH RFC v6 10/11] pvqspinlock, x86: Enable qspinlock PV support for KVM
On 12/03/2014 19:54, Waiman Long wrote:
> @@ -807,8 +889,13 @@ void __init kvm_spinlock_init(void)
> if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
> return;
>
> +#ifdef CONFIG_QUEUE_SPINLOCK
> + pv_lock_ops.kick_cpu = kvm_kick_cpu_type;
> + pv_lock_ops.hibernate = kvm_hibernate;
> +#else
> pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(kvm_lock_spinning);
> pv_lock_ops.unlock_kick = kvm_unlock_kick;
> +#endif
This should also disable the unfair path.
Paolo
2014 Feb 27
1
[PATCH RFC v5 5/8] pvqspinlock, x86: Enable unfair queue spinlock in a KVM guest
...atic_key_slow_inc(&paravirt_unfairlocks_enabled);
> + printk(KERN_INFO "KVM setup unfair spinlock\n");
> +
> + return 0;
> +}
> +early_initcall(kvm_unfair_locks_init_jump);
> +#endif
>
I think this should apply to all paravirt implementations, unless
pv_lock_ops.kick_cpu != NULL.
Paolo
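Paolo's point amounts to gating the unfair-lock static key on whether a PV kick hook was installed, independent of the hypervisor. A rough sketch of that guard, reusing the key from the quoted patch; the generic initcall name is hypothetical and initcall-ordering details are ignored:

/*
 * Sketch: only fall back to unfair spinlocks when no paravirt kick hook is
 * registered; a real version would also check that we run as a guest at all.
 */
static __init int paravirt_unfair_locks_init_jump(void)
{
	if (pv_lock_ops.kick_cpu)	/* PV qspinlock support is present */
		return 0;

	static_key_slow_inc(&paravirt_unfairlocks_enabled);
	printk(KERN_INFO "paravirt: enabling unfair spinlocks\n");
	return 0;
}
early_initcall(paravirt_unfair_locks_init_jump);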
2014 Feb 26
0
[PATCH RFC v5 7/8] pvqspinlock, x86: Add qspinlock para-virtualization support
...include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -711,7 +711,12 @@ static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
}
#if defined(CONFIG_SMP) && defined(CONFIG_PARAVIRT_SPINLOCKS)
-
+#ifdef CONFIG_QUEUE_SPINLOCK
+static __always_inline void __queue_kick_cpu(int cpu, enum pv_kick_type type)
+{
+ PVOP_VCALL2(pv_lock_ops.kick_cpu, cpu, type);
+}
+#else
static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock,
__ticket_t ticket)
{
@@ -723,7 +728,7 @@ static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock...
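The PVOP_VCALL2() wrapper above implies a matching function pointer in pv_lock_ops. A sketch of how the ops structure would look with the qspinlock hook next to the existing ticketlock hooks; the exact layout in the patch is an assumption, only the member names and signatures follow the code shown:

/* paravirt_types.h sketch: which hook set exists depends on CONFIG_QUEUE_SPINLOCK */
struct pv_lock_ops {
#ifdef CONFIG_QUEUE_SPINLOCK
	void (*kick_cpu)(int cpu, enum pv_kick_type type);
#else
	struct paravirt_callee_save lock_spinning;
	void (*unlock_kick)(struct arch_spinlock *lock, __ticket_t ticket);
#endif
};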
2013 Aug 06
6
[PATCH V12 0/5] Paravirtualized ticket spinlocks for KVM host
This series forms the KVM host part of the paravirtual spinlock work,
based against the kvm tree.
Please refer to https://lkml.org/lkml/2013/8/6/178 for the kvm guest part
of the series.
Please note that:
"kvm uapi: Add KICK_CPU and PV_UNHALT definition to uapi" is a common patch
for both guest and host.
Srivatsa Vaddagiri (1):
kvm hypervisor : Add a hypercall to KVM hypervisor to support pv-ticketlocks
Raghavendra K T (4):
kvm uapi: Add KICK_CPU and PV_UNHALT definition to uapi
kvm : Fold pv_unhalt flag into GET...
2014 Mar 12
0
[PATCH RFC v6 09/11] pvqspinlock, x86: Add qspinlock para-virtualization support
...6/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -711,7 +711,17 @@ static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
}
#if defined(CONFIG_SMP) && defined(CONFIG_PARAVIRT_SPINLOCKS)
+#ifdef CONFIG_QUEUE_SPINLOCK
+static __always_inline void __queue_kick_cpu(int cpu, enum pv_kick_type type)
+{
+ PVOP_VCALL2(pv_lock_ops.kick_cpu, cpu, type);
+}
+static __always_inline void __queue_hibernate(void)
+{
+ PVOP_VCALL0(pv_lock_ops.hibernate);
+}
+#else
static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock,
__ticket_t ticket)...
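The two wrappers form a sleep/wake pair for the PV slowpath: a waiter hibernates after a bounded spin and the CPU handing over the lock calls __queue_kick_cpu() on it. A hypothetical waiter built on them; the spin budget, the readiness flag and the function name are illustrative, not from the patch:

#define QSPIN_THRESHOLD	(1 << 14)	/* illustrative spin budget */

/* Illustrative: spin on a per-node flag, then hibernate until kicked. */
static void pv_queue_wait(int *ready)
{
	unsigned int loops;

	for (;;) {
		for (loops = 0; loops < QSPIN_THRESHOLD; loops++) {
			if (READ_ONCE(*ready))
				return;
			cpu_relax();
		}
		__queue_hibernate();	/* halt; re-check after being kicked */
	}
}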
2014 Oct 29
1
[PATCH v13 09/11] pvqspinlock, x86: Add para-virtualization support
...h/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -712,6 +712,24 @@ static inline void __set_fixmap(unsigned /* enum fixed_addresses */ idx,
#if defined(CONFIG_SMP) && defined(CONFIG_PARAVIRT_SPINLOCKS)
+#ifdef CONFIG_QUEUE_SPINLOCK
+
+static __always_inline void pv_kick_cpu(int cpu)
+{
+ PVOP_VCALLEE1(pv_lock_ops.kick_cpu, cpu);
+}
+
+static __always_inline void pv_lockwait(u8 *lockbyte)
+{
+ PVOP_VCALLEE1(pv_lock_ops.lockwait, lockbyte);
+}
+
+static __always_inline void pv_lockstat(enum pv_lock_stats type)
+{
+ PVOP_VCALLEE1(pv_lock_ops.lockstat, type);
+}
+
+#else...
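The PVOP_VCALLEE1() wrappers imply callee-saved hook entries, which KVM would install with PV_CALLEE_SAVE(). A sketch of the pv_lock_ops fields they dispatch to; the layout is an assumption derived from the wrapper names above:

struct pv_lock_ops {
#ifdef CONFIG_QUEUE_SPINLOCK
	struct paravirt_callee_save kick_cpu;	/* void (*)(int cpu) */
	struct paravirt_callee_save lockwait;	/* void (*)(u8 *lockbyte) */
	struct paravirt_callee_save lockstat;	/* void (*)(enum pv_lock_stats type) */
#else
	struct paravirt_callee_save lock_spinning;
	void (*unlock_kick)(struct arch_spinlock *lock, __ticket_t ticket);
#endif
};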
2014 Jun 16
4
[PATCH 10/11] qspinlock: Paravirt support
...OCKED_OFFSET)
> +
> +struct pv_node {
> + struct mcs_spinlock mcs;
> + struct mcs_spinlock __offset[3];
> + int cpu, head;
> +};
I am wondering why you need the separate cpu and head variables; I
thought one would be enough here. The wait code puts the cpu number in
head, and the kick_cpu code kicks the one in cpu, which is just the cpu #
of the tail.
> +
> +#define INVALID_HEAD -1
> +#define NO_HEAD nr_cpu_ids
> +
I think it is better to use a constant like -2 for NO_HEAD instead of an
external variable.
> +void __pv_init_node(struct mcs_spinlock *node)
> +{
&...
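The reviewer's question is whether one field could play both roles: hold a sentinel until a head exists, and otherwise hold the cpu number that the unlock path should kick. A hypothetical sketch of that idea; the constants, the merged field and the pv_kick_cpu() helper are illustrative, rely on the kernel's struct mcs_spinlock, and are not taken from the patch:

#define INVALID_HEAD	-1	/* queue head not yet published */
#define NO_HEAD		-2	/* reviewer's suggestion: a plain constant, not nr_cpu_ids */

struct pv_node {
	struct mcs_spinlock	mcs;
	struct mcs_spinlock	__offset[3];
	int			head;	/* INVALID_HEAD, NO_HEAD, or the waiting head's cpu # */
};

/* unlock side: kick whichever cpu the tail node advertised, if any */
static void pv_kick_head(struct pv_node *tail)
{
	int cpu = READ_ONCE(tail->head);

	if (cpu >= 0)
		pv_kick_cpu(cpu);	/* hypervisor call that wakes the halted vcpu */
}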
2014 Feb 27
0
[PATCH RFC v5 7/8] pvqspinlock, x86: Add qspinlock para-virtualization support
...hat in undercommit cases we should not exceed
> HEAD_SPIN_THRESHOLD,
>
> 1. the looping vcpu in pv_head_spin_check() should do halt()
> considering that we have done enough spinning (more than typical
> lock-hold time), and hence we are in potential overcommit.
>
> 2. multiplex kick_cpu to do directed yield in qspinlock case.
> But this may result in some ping ponging?
Actually, I think the qspinlock can work roughly the same as the
pvticketlock, using the same lock_spinning and unlock_kick hooks.
The x86-specific codepath can use bit 1 in the ->wait byte as "I have...
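The bit-1 idea, as far as the truncated excerpt goes, is the usual PV trick: a waiter flags itself in the lock byte before halting so the unlocker only pays for a kick when someone actually slept. A rough, hypothetical illustration; the constants and helpers are mine, and a real implementation must also handle the kick-arrives-before-halt race (KVM's PV_UNHALT makes the kick sticky for exactly that reason):

#define _Q_LOCKED	0x01	/* bit 0: lock is held */
#define _Q_SLOW		0x02	/* bit 1: a waiter went to sleep, kick on unlock */

/* waiter side: advertise that we are about to halt, then halt */
static void pv_wait_byte(u8 *wait_byte)
{
	if (cmpxchg(wait_byte, _Q_LOCKED, _Q_LOCKED | _Q_SLOW) == _Q_LOCKED)
		halt();		/* releaser sees _Q_SLOW and kicks us */
}

/* unlock side: only issue the hypercall when someone actually slept */
static void pv_unlock_byte(u8 *wait_byte, int head_cpu)
{
	if (xchg(wait_byte, 0) & _Q_SLOW)
		kvm_kick_cpu(head_cpu);
}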
2014 Mar 13
0
[PATCH RFC v6 10/11] pvqspinlock, x86: Enable qspinlock PV support for KVM
...aolo Bonzini wrote:
> On 12/03/2014 19:54, Waiman Long wrote:
>> @@ -807,8 +889,13 @@ void __init kvm_spinlock_init(void)
>> if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
>> return;
>>
>> +#ifdef CONFIG_QUEUE_SPINLOCK
>> + pv_lock_ops.kick_cpu = kvm_kick_cpu_type;
>> + pv_lock_ops.hibernate = kvm_hibernate;
>> +#else
>> pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(kvm_lock_spinning);
>> pv_lock_ops.unlock_kick = kvm_unlock_kick;
>> +#endif
>
> This should also disable the unfair path.
>
&...
2014 Feb 26
0
[PATCH RFC v5 8/8] pvqspinlock, x86: Enable KVM to use qspinlock's PV support
...+++++++++++++++++++++++++++++++++
kernel/Kconfig.locks | 2 +-
2 files changed, 55 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index f318e78..3ddc436 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -568,6 +568,7 @@ static void kvm_kick_cpu(int cpu)
kvm_hypercall2(KVM_HC_KICK_CPU, flags, apicid);
}
+#ifndef CONFIG_QUEUE_SPINLOCK
enum kvm_contention_stat {
TAKEN_SLOW,
TAKEN_SLOW_PICKUP,
@@ -795,6 +796,55 @@ static void kvm_unlock_kick(struct arch_spinlock *lock, __ticket_t ticket)
}
}
}
+#else /* !CONFIG_QUEUE_SPINLOCK...
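The excerpt cuts off at the qspinlock branch. Based on the hook names shown in the KVM enabling patch quoted above (kvm_kick_cpu_type, kvm_hibernate), the #else side plausibly wraps the same KVM_HC_KICK_CPU hypercall plus a halt; this is a hedged reconstruction, not the actual patch text:

#else	/* CONFIG_QUEUE_SPINLOCK */

/* kick the target vcpu via the same KVM_HC_KICK_CPU hypercall as above */
static void kvm_kick_cpu_type(int cpu, enum pv_kick_type type)
{
	kvm_kick_cpu(cpu);
	/* 'type' would matter only for statistics or directed-yield variants */
}

/* put this vcpu to sleep until the lock holder kicks it */
static void kvm_hibernate(void)
{
	if (in_nmi())
		return;
	safe_halt();	/* halt with interrupts enabled so the kick wakes us */
}
#endif	/* CONFIG_QUEUE_SPINLOCK */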