On 09/05/2017 09:24 AM, Juergen Gross wrote:
> There are cases where a guest tries to switch spinlocks to bare metal
> behavior (e.g. by setting "xen_nopvspin" boot parameter). Today this
> has the downside of falling back to unfair test and set scheme for
> qspinlocks due to virt_spin_lock() detecting the virtualized
> environment.
>
> Make virt_spin_lock() a paravirt operation in order to enable users
> to select an explicit behavior like bare metal.
>
> Signed-off-by: Juergen Gross <jgross at suse.com>
> ---
>  arch/x86/include/asm/paravirt.h       |  5 ++++
>  arch/x86/include/asm/paravirt_types.h |  1 +
>  arch/x86/include/asm/qspinlock.h      | 48 ++++++++++++++++++++++++-----------
>  arch/x86/kernel/paravirt-spinlocks.c  | 14 ++++++++++
>  arch/x86/kernel/smpboot.c             |  2 ++
>  5 files changed, 55 insertions(+), 15 deletions(-)
>
> diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
> index c25dd22f7c70..d9e954fb37df 100644
> --- a/arch/x86/include/asm/paravirt.h
> +++ b/arch/x86/include/asm/paravirt.h
> @@ -725,6 +725,11 @@ static __always_inline bool pv_vcpu_is_preempted(long cpu)
>  	return PVOP_CALLEE1(bool, pv_lock_ops.vcpu_is_preempted, cpu);
>  }
>
> +static __always_inline bool pv_virt_spin_lock(struct qspinlock *lock)
> +{
> +	return PVOP_CALLEE1(bool, pv_lock_ops.virt_spin_lock, lock);
> +}
> +
>  #endif /* SMP && PARAVIRT_SPINLOCKS */
>
>  #ifdef CONFIG_X86_32
> diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
> index 19efefc0e27e..928f5e7953a7 100644
> --- a/arch/x86/include/asm/paravirt_types.h
> +++ b/arch/x86/include/asm/paravirt_types.h
> @@ -319,6 +319,7 @@ struct pv_lock_ops {
>  	void (*kick)(int cpu);
>
>  	struct paravirt_callee_save vcpu_is_preempted;
> +	struct paravirt_callee_save virt_spin_lock;
>  } __no_randomize_layout;
>
>  /* This contains all the paravirt structures: we get a convenient
> diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h
> index 48a706f641f2..fbd98896385c 100644
> --- a/arch/x86/include/asm/qspinlock.h
> +++ b/arch/x86/include/asm/qspinlock.h
> @@ -17,6 +17,25 @@ static inline void native_queued_spin_unlock(struct qspinlock *lock)
>  	smp_store_release((u8 *)lock, 0);
>  }
>
> +static inline bool native_virt_spin_lock(struct qspinlock *lock)
> +{
> +	if (!static_cpu_has(X86_FEATURE_HYPERVISOR))
> +		return false;
> +

I think you can take the above if statement out, as you have done the
test in native_pv_lock_init(). So the test will also be false here.

As this patch series is x86 specific, you should probably add "x86/" in
front of "paravirt" in the patch titles.

Cheers,
Longman
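The quoted hunk breaks off right after the hypervisor check, but the
test-and-set fallback it guards is what the discussion below turns on.
For context, here is a sketch of that fallback, modeled on the pre-patch
(4.13-era) virt_spin_lock() in arch/x86/include/asm/qspinlock.h; the
remainder of the patched function is not visible in this thread, so treat
this as background rather than the patch itself:

static inline bool native_virt_spin_lock(struct qspinlock *lock)
{
	if (!static_cpu_has(X86_FEATURE_HYPERVISOR))
		return false;

	/*
	 * Without PARAVIRT_SPINLOCKS support we fall back to a simple
	 * test-and-set lock: spin until the lock word reads zero, then
	 * try to claim it with a single cmpxchg. Unfair, but it avoids
	 * the lock-holder preemption problem that fair queued locks
	 * suffer when the vCPU holding the lock is scheduled out.
	 */
	do {
		while (atomic_read(&lock->val) != 0)
			cpu_relax();
	} while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0);

	return true;
}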
Peter Zijlstra
2017-Sep-05 14:08 UTC
[PATCH 3/4] paravirt: add virt_spin_lock pvops function
On Tue, Sep 05, 2017 at 10:02:57AM -0400, Waiman Long wrote:
> On 09/05/2017 09:24 AM, Juergen Gross wrote:
> > +static inline bool native_virt_spin_lock(struct qspinlock *lock)
> > +{
> > +	if (!static_cpu_has(X86_FEATURE_HYPERVISOR))
> > +		return false;
> > +
>
> I think you can take the above if statement out, as you have done the
> test in native_pv_lock_init(). So the test will also be false here.

That does mean we'll run a test-and-set spinlock until paravirt patching
happens, though. I'd prefer not to do that.

One important point: we must not be holding any locks when we switch
over between the two lock implementations. Back then I spent some time
making sure that didn't happen with the X86 feature flag muck.
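The 14-line paravirt-spinlocks.c hunk from the diffstat is not quoted in
this thread; it presumably registers the native op through the usual
callee-save thunk, following the existing __native_vcpu_is_preempted
pattern in that file. A sketch under that assumption (the names mirror
that pattern and may not match the patch exactly):

__visible bool __native_virt_spin_lock(struct qspinlock *lock)
{
	/* Default op: native behavior, including the hypervisor test. */
	return native_virt_spin_lock(lock);
}
PV_CALLEE_SAVE_REGS_THUNK(__native_virt_spin_lock);

Until paravirt patching rewrites the PVOP_CALLEE1 call site,
pv_virt_spin_lock() dispatches through this default op — which is why
dropping the static_cpu_has() test from it would leave even bare metal
spinning on the test-and-set path during early boot, as Peter notes.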
On 09/05/2017 10:08 AM, Peter Zijlstra wrote:
> On Tue, Sep 05, 2017 at 10:02:57AM -0400, Waiman Long wrote:
>> On 09/05/2017 09:24 AM, Juergen Gross wrote:
>>> +static inline bool native_virt_spin_lock(struct qspinlock *lock)
>>> +{
>>> +	if (!static_cpu_has(X86_FEATURE_HYPERVISOR))
>>> +		return false;
>>> +
>> I think you can take the above if statement out, as you have done the
>> test in native_pv_lock_init(). So the test will also be false here.
> That does mean we'll run a test-and-set spinlock until paravirt patching
> happens, though. I'd prefer not to do that.
>
> One important point: we must not be holding any locks when we switch
> over between the two lock implementations. Back then I spent some time
> making sure that didn't happen with the X86 feature flag muck.

AFAICT, native_pv_lock_init() is called before SMP init, so it shouldn't
matter.

Cheers,
Longman
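The two-line smpboot.c hunk from the diffstat is likewise not quoted. In
the code that was later merged, the init call sits at the end of
native_smp_prepare_boot_cpu(), i.e. before any secondary CPU is brought
up and can contend on a lock; a sketch assuming this series hooks the
same spot:

void __init native_smp_prepare_boot_cpu(void)
{
	int me = smp_processor_id();

	switch_to_new_gdt(me);
	/* The boot CPU was already set in cpu_online_mask by boot_cpu_init(). */
	cpumask_set_cpu(me, cpu_callout_mask);
	cpu_set_state_online(me);
	native_pv_lock_init();	/* switch lock ops before SMP bringup */
}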