search for: kvm_wait

Displaying 20 results from an estimated 48 matches for "kvm_wait".

2020 Aug 11
3
[PATCH] x86/paravirt: Add missing noinstr to arch_local*() helpers
On 11.08.20 09:41, Peter Zijlstra wrote: > On Fri, Aug 07, 2020 at 05:19:03PM +0200, Marco Elver wrote: > >> My hypothesis here is simply that kvm_wait() may be called in a place >> where we get the same case I mentioned to Peter, >> >> raw_local_irq_save(); /* or other IRQs off without tracing */ >> ... >> kvm_wait() /* IRQ state tracing gets confused */ >> ... >> raw_local_irq_restore(); >> ...
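For context, the nesting described above can be sketched as follows; this is an illustration only (the function name and the u8 signature are assumptions, not code from the thread):

/*
 * Illustration of the suspected problem: the inner local_irq_restore()
 * runs the IRQ-state tracing hook (trace_hardirqs_on()) even though the
 * outer raw_ section still has interrupts physically disabled, so the
 * tracing/lockdep view no longer matches the real CPU state.
 */
static void kvm_wait_tracing_confusion(u8 *ptr, u8 val)
{
	unsigned long outer, inner;

	raw_local_irq_save(outer);	/* IRQs off, tracing NOT updated */

	local_irq_save(inner);		/* trace_hardirqs_off() */
	(void)READ_ONCE(*ptr);
	local_irq_restore(inner);	/* trace_hardirqs_on(): tracing now says
					 * "IRQs on" while they are still off */

	raw_local_irq_restore(outer);	/* tracing again NOT updated */
}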
2020 Aug 11
0
[PATCH] x86/paravirt: Add missing noinstr to arch_local*() helpers
...> >>>> Thanks for testing! >>>> >>>> I take it you are doing the tests in a KVM guest? >>> >>> Yes, correct. >>> >>>> If so I have a gut feeling that the use of local_irq_save() and >>>> local_irq_restore() in kvm_wait() might be fishy. I might be completely >>>> wrong here, though. >>> >>> Happy to help debug more, although I might need patches or pointers >>> what to play with. >>> >>>> BTW, I think Xen's variant of pv spinlocks is fine (no playing...
2020 Aug 11
0
[PATCH] x86/paravirt: Add missing noinstr to arch_local*() helpers
On Fri, Aug 07, 2020 at 05:19:03PM +0200, Marco Elver wrote: > My hypothesis here is simply that kvm_wait() may be called in a place > where we get the same case I mentioned to Peter, > > raw_local_irq_save(); /* or other IRQs off without tracing */ > ... > kvm_wait() /* IRQ state tracing gets confused */ > ... > raw_local_irq_restore(); > > and therefore, using raw va...
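A minimal sketch of what "using raw variants" inside kvm_wait() could look like; hypothetical illustration only, not the patch that was eventually merged (the halt()/safe_halt() body follows the pattern visible in the qspinlock entries further down):

static void kvm_wait(u8 *ptr, u8 val)
{
	unsigned long flags;

	if (in_nmi())
		return;

	/*
	 * raw_ variants skip the IRQ-state tracing hooks, so calling this
	 * with interrupts already disabled (traced or not) leaves the
	 * tracing state untouched.
	 */
	raw_local_irq_save(flags);

	if (READ_ONCE(*ptr) == val) {
		if (arch_irqs_disabled_flags(flags))
			halt();		/* wait until kicked */
		else
			safe_halt();	/* sti;hlt: a pending IRQ can still wake us */
	}

	raw_local_irq_restore(flags);
}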
2020 Aug 05
9
[PATCH] x86/paravirt: Add missing noinstr to arch_local*() helpers
On Wed, Aug 05, 2020 at 03:59:40PM +0200, Marco Elver wrote: > On Wed, Aug 05, 2020 at 03:42PM +0200, peterz at infradead.org wrote: > > Shouldn't we __always_inline those? They're going to be really small. > > I can send a v2, and you can choose. For reference, though: > > ffffffff86271ee0 <arch_local_save_flags>: > ffffffff86271ee0: 0f 1f 44 00 00
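For reference, the helpers under discussion are the paravirt IRQ-flag accessors in arch/x86/include/asm/paravirt.h; a sketch of the __always_inline variant being suggested (the PVOP_* spellings match the v5.8-era tree and are given here as an assumption, not quoted from the thread):

/*
 * Sketch: force-inline the tiny paravirt flag helpers so they never exist
 * as separate, instrumentable out-of-line functions. The leading
 * "0f 1f 44 00 00" in the dump above is a 5-byte NOP.
 */
static __always_inline unsigned long arch_local_save_flags(void)
{
	return PVOP_CALLEE0(unsigned long, irq.save_fl);
}

static __always_inline void arch_local_irq_restore(unsigned long flags)
{
	PVOP_VCALLEE1(irq.restore_fl, flags);
}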
2014 Jun 22
1
[PATCH 11/11] qspinlock, kvm: Add paravirt support
On 06/15/2014 06:17 PM, Peter Zijlstra wrote: > Signed-off-by: Peter Zijlstra<peterz at infradead.org> > --- [...] > + > +void kvm_wait(int *ptr, int val) > +{ > + unsigned long flags; > + > + if (in_nmi()) > + return; > + > + /* > + * Make sure an interrupt handler can't upset things in a > + * partially setup state. > + */ I am seeing hang with even 2 cpu guest (with patches on top of 3.15-r...
2020 Aug 11
0
[PATCH] x86/paravirt: Add missing noinstr to arch_local*() helpers
On Tue, Aug 11, 2020 at 09:57:55AM +0200, Jürgen Groß wrote: > On 11.08.20 09:41, Peter Zijlstra wrote: > > On Fri, Aug 07, 2020 at 05:19:03PM +0200, Marco Elver wrote: > > > > > My hypothesis here is simply that kvm_wait() may be called in a place > > > where we get the same case I mentioned to Peter, > > > > > > raw_local_irq_save(); /* or other IRQs off without tracing */ > > > ... > > > kvm_wait() /* IRQ state tracing gets confused */ > > > ... > >...
2020 Aug 11
2
[PATCH] x86/paravirt: Add missing noinstr to arch_local*() helpers
On 11.08.20 10:12, Peter Zijlstra wrote: > On Tue, Aug 11, 2020 at 09:57:55AM +0200, Jürgen Groß wrote: >> On 11.08.20 09:41, Peter Zijlstra wrote: >>> On Fri, Aug 07, 2020 at 05:19:03PM +0200, Marco Elver wrote: >>> >>>> My hypothesis here is simply that kvm_wait() may be called in a place >>>> where we get the same case I mentioned to Peter, >>>> >>>> raw_local_irq_save(); /* or other IRQs off without tracing */ >>>> ... >>>> kvm_wait() /* IRQ state tracing gets confused */ >>>> .....
2014 Jun 15
0
[PATCH 11/11] qspinlock, kvm: Add paravirt support
...SPINLOCK */ + +#include <asm-generic/qspinlock.h> + +PV_CALLEE_SAVE_REGS_THUNK(__pv_init_node); +PV_CALLEE_SAVE_REGS_THUNK(__pv_link_and_wait_node); +PV_CALLEE_SAVE_REGS_THUNK(__pv_kick_node); + +PV_CALLEE_SAVE_REGS_THUNK(__pv_wait_head); +PV_CALLEE_SAVE_REGS_THUNK(__pv_queue_unlock); + +void kvm_wait(int *ptr, int val) +{ + unsigned long flags; + + if (in_nmi()) + return; + + /* + * Make sure an interrupt handler can't upset things in a + * partially setup state. + */ + local_irq_save(flags); + + /* + * check again make sure it didn't become free while + * we weren't looking....
2016 Nov 15
2
[PATCH v7 06/11] x86, paravirt: Add interface to support kvm/xen vcpu preempted check
...ara_has_feature(KVM_FEATURE_STEAL_TIME)) { has_steal_clock = 1; pv_time_ops.steal_clock = kvm_steal_clock; -#ifdef CONFIG_PARAVIRT_SPINLOCKS - pv_lock_ops.vcpu_is_preempted = kvm_vcpu_is_preempted; -#endif } if (kvm_para_has_feature(KVM_FEATURE_PV_EOI)) @@ -604,6 +592,14 @@ static void kvm_wait(u8 *ptr, u8 val) local_irq_restore(flags); } +static bool __kvm_vcpu_is_preempted(int cpu) +{ + struct kvm_steal_time *src = &per_cpu(steal_time, cpu); + + return !!src->preempted; +} +PV_CALLEE_SAVE_REGS_THUNK(__kvm_vcpu_is_preempted); + /* * Setup pv_lock_ops to exploit KVM_FEATURE...
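Putting the hunks of this patch together, the registration side ends up roughly as below; reconstructed from the diffs shown in this thread and in the later v2 discussion, so treat it as a sketch rather than the verbatim patch:

static bool __kvm_vcpu_is_preempted(int cpu)
{
	struct kvm_steal_time *src = &per_cpu(steal_time, cpu);

	return !!src->preempted;
}
/*
 * Generates __raw_callee_save___kvm_vcpu_is_preempted, which preserves
 * all caller-clobbered registers around the call.
 */
PV_CALLEE_SAVE_REGS_THUNK(__kvm_vcpu_is_preempted);

void __init kvm_spinlock_init(void)
{
	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
		return;

	/* queued-spinlock slowpath/unlock setup elided */

	pv_lock_ops.wait = kvm_wait;
	pv_lock_ops.kick = kvm_kick_cpu;

	if (kvm_para_has_feature(KVM_FEATURE_STEAL_TIME)) {
		pv_lock_ops.vcpu_is_preempted =
			PV_CALLEE_SAVE(__kvm_vcpu_is_preempted);
	}
}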
2017 Feb 10
3
[PATCH v2] x86/paravirt: Don't make vcpu_is_preempted() a callee-save function
...,7 +595,6 @@ __visible bool __kvm_vcpu_is_preempted(int cpu) return !!src->preempted; } -PV_CALLEE_SAVE_REGS_THUNK(__kvm_vcpu_is_preempted); /* * Setup pv_lock_ops to exploit KVM_FEATURE_PV_UNHALT if present. @@ -614,10 +613,8 @@ void __init kvm_spinlock_init(void) pv_lock_ops.wait = kvm_wait; pv_lock_ops.kick = kvm_kick_cpu; - if (kvm_para_has_feature(KVM_FEATURE_STEAL_TIME)) { - pv_lock_ops.vcpu_is_preempted = - PV_CALLEE_SAVE(__kvm_vcpu_is_preempted); - } + if (kvm_para_has_feature(KVM_FEATURE_STEAL_TIME)) + pv_lock_ops.vcpu_is_preempted = __kvm_vcpu_is_preempted; } #endi...
2016 Nov 16
0
[PATCH v7 06/11] x86, paravirt: Add interface to support kvm/xen vcpu preempted check
...> has_steal_clock = 1; > pv_time_ops.steal_clock = kvm_steal_clock; > -#ifdef CONFIG_PARAVIRT_SPINLOCKS > - pv_lock_ops.vcpu_is_preempted = kvm_vcpu_is_preempted; > -#endif > } > > if (kvm_para_has_feature(KVM_FEATURE_PV_EOI)) > @@ -604,6 +592,14 @@ static void kvm_wait(u8 *ptr, u8 val) > local_irq_restore(flags); > } > > +static bool __kvm_vcpu_is_preempted(int cpu) > +{ > + struct kvm_steal_time *src = &per_cpu(steal_time, cpu); > + > + return !!src->preempted; > +} > +PV_CALLEE_SAVE_REGS_THUNK(__kvm_vcpu_is_preempted); ...
2017 Feb 10
2
[PATCH v2] x86/paravirt: Don't make vcpu_is_preempted() a callee-save function
...; Thinking about this again, wouldn't something like the below also work? > > > diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c > index 099fcba4981d..6aa33702c15c 100644 > --- a/arch/x86/kernel/kvm.c > +++ b/arch/x86/kernel/kvm.c > @@ -589,6 +589,7 @@ static void kvm_wait(u8 *ptr, u8 val) > local_irq_restore(flags); > } > > +#ifdef CONFIG_X86_32 > __visible bool __kvm_vcpu_is_preempted(int cpu) > { > struct kvm_steal_time *src = &per_cpu(steal_time, cpu); > @@ -597,6 +598,31 @@ __visible bool __kvm_vcpu_is_preempted(int cpu) >...
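The truncated body of this mail goes on to suggest keeping the C version for 32-bit only and hand-writing the 64-bit thunk; a sketch of that idea (the asm-offset name KVM_STEAL_TIME_preempted and the exact instruction sequence are reconstructed from memory and should be treated as assumptions):

#else	/* !CONFIG_X86_32: hand-written thunk, clobbers only %rax */

#include <asm/asm-offsets.h>

extern bool __raw_callee_save___kvm_vcpu_is_preempted(long);

/*
 * Load the target CPU's per-cpu offset (cpu argument in %rdi), test
 * steal_time.preempted and return the result in %al, without the
 * register save/restore of the generic PV_CALLEE_SAVE_REGS_THUNK wrapper.
 */
asm(
".pushsection .text;"
".global __raw_callee_save___kvm_vcpu_is_preempted;"
".type __raw_callee_save___kvm_vcpu_is_preempted, @function;"
"__raw_callee_save___kvm_vcpu_is_preempted:"
"movq	__per_cpu_offset(,%rdi,8), %rax;"
"cmpb	$0, " __stringify(KVM_STEAL_TIME_preempted) "+steal_time(%rax);"
"setne	%al;"
"ret;"
".popsection");

#endif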
2015 Mar 16
0
[PATCH 9/9] qspinlock, x86, kvm: Implement KVM support for paravirt qspinlock
...a/arch/x86/kernel/kvm.c +++ b/arch/x86/kernel/kvm.c @@ -584,6 +584,41 @@ static void kvm_kick_cpu(int cpu) kvm_hypercall2(KVM_HC_KICK_CPU, flags, apicid); } + +#ifdef CONFIG_QUEUE_SPINLOCK + +#include <asm/qspinlock.h> + +PV_CALLEE_SAVE_REGS_THUNK(__pv_queue_spin_unlock); + +static void kvm_wait(u8 *ptr, u8 val) +{ + unsigned long flags; + + if (in_nmi()) + return; + + local_irq_save(flags); + + if (READ_ONCE(*ptr) != val) + goto out; + + /* + * halt until it's our turn and kicked. Note that we do safe halt + * for irq enabled case to avoid hang when lock info is overwritten + * i...
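The snippet cuts off at the halt comment; the rest of the function most likely continues along these lines (reconstruction for readability, not the verbatim patch):

static void kvm_wait(u8 *ptr, u8 val)
{
	unsigned long flags;

	if (in_nmi())
		return;

	local_irq_save(flags);

	/* Re-check with IRQs off: the lock byte may already have changed. */
	if (READ_ONCE(*ptr) != val)
		goto out;

	/*
	 * Halt until it's our turn and we get kicked. Use safe_halt()
	 * (sti;hlt) in the IRQs-enabled case so a pending interrupt can
	 * still wake us if the lock info gets overwritten and no kick arrives.
	 */
	if (arch_irqs_disabled_flags(flags))
		halt();
	else
		safe_halt();

out:
	local_irq_restore(flags);
}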