Search for: static_cpu_has

Displaying results from an estimated 83 matches for "static_cpu_has".

2019 Mar 30
1
[PATCH 2/5] x86: Convert some slow-path static_cpu_has() callers to boot_cpu_has()
From: Borislav Petkov <bp at suse.de> Using static_cpu_has() is pointless on those paths, convert them to the boot_cpu_has() variant. No functional changes. Reported-by: Nadav Amit <nadav.amit at gmail.com> Signed-off-by: Borislav Petkov <bp at suse.de> Cc: Aubrey Li <aubrey.li at intel.com> Cc: Dave Hansen <dave.hansen at intel.com...
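A minimal sketch of the conversion pattern this patch describes; the caller below is hypothetical and assumes kernel context. static_cpu_has() emits alternatives-patched inline code that only pays off on hot paths; on a slow path a plain boot_cpu_has() bit test is sufficient.

#include <linux/printk.h>
#include <asm/cpufeature.h>	/* boot_cpu_has(), static_cpu_has() */

/* Hypothetical slow-path (init-time) caller. */
static int __init example_init(void)
{
	/* Before: if (static_cpu_has(X86_FEATURE_HYPERVISOR)) */
	if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
		pr_info("running as a guest\n");
	return 0;
}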
2017 Sep 06
2
[PATCH v2 1/2] paravirt/locks: use new static key for controlling call of virt_spin_lock()
...C_KEY_TRUE(virt_spin_lock_key); > + > +void native_pv_lock_init(void) __init; > + > #define virt_spin_lock virt_spin_lock > static inline bool virt_spin_lock(struct qspinlock *lock) > { > + if (!static_branch_likely(&virt_spin_lock_key)) > + return false; > if (!static_cpu_has(X86_FEATURE_HYPERVISOR)) > return false; > > @@ -65,6 +72,10 @@ static inline bool virt_spin_lock(struct qspinlock *lock) > > return true; > } > +#else > +static inline void native_pv_lock_init(void) > +{ > +} > #endif /* CONFIG_PARAVIRT */ > >...
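Reassembled from the quoted hunk, the patched function reads roughly as follows (a sketch; the test-and-set fallback between the guards is elided here, as in the excerpt):

#define virt_spin_lock virt_spin_lock
static inline bool virt_spin_lock(struct qspinlock *lock)
{
	/* New first gate: compiles to a NOP on bare metal once
	 * native_pv_lock_init() disables the key. */
	if (!static_branch_likely(&virt_spin_lock_key))
		return false;

	if (!static_cpu_has(X86_FEATURE_HYPERVISOR))
		return false;

	/* ... unfair test-and-set fallback ... */

	return true;
}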
2017 Sep 06
1
[PATCH v2 1/2] paravirt/locks: use new static key for controlling call of virt_spin_lock()
...17 at 11:49:49AM -0400, Waiman Long wrote: >>> #define virt_spin_lock virt_spin_lock >>> static inline bool virt_spin_lock(struct qspinlock *lock) >>> { >>> + if (!static_branch_likely(&virt_spin_lock_key)) >>> + return false; >>> if (!static_cpu_has(X86_FEATURE_HYPERVISOR)) >>> return false; >>> > Now native has two NOPs instead of one. Can't we merge these two static > branches? I guess we can remove the static_cpu_has() call. Just that any spin_lock calls before native_pv_lock_init() will use the virt_spin...
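The merged form being discussed would drop the now-redundant feature test and let the static key alone gate the path, so native code carries a single NOP instead of two (a sketch of that direction, not a quoted patch):

static inline bool virt_spin_lock(struct qspinlock *lock)
{
	if (!static_branch_likely(&virt_spin_lock_key))
		return false;

	/* ... unfair test-and-set fallback, unchanged ... */

	return true;
}

The trade-off named in the reply still holds: spin_lock() calls issued before native_pv_lock_init() runs will take the virt path, since the key defaults to true.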
2016 Jan 29
3
[Xen-devel] [PATCH v5 09/10] vring: Use the DMA API on Xen
...On > + * such a configuration, virtio has never worked and will > + * not work without an even larger kludge. Instead, enable > + * the DMA API if we're a Xen guest, which at least allows > + * all of the sensible Xen configurations to work correctly. > + */ > + return static_cpu_has(X86_FEATURE_XENPV); You want: if (xen_domain()) return true; Without the #if so we use the DMA API for all types of Xen guest on all architectures. David
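Sketched out, the suggested shape of the helper (assuming the usual vring_use_dma_api() signature from drivers/virtio/virtio_ring.c; the remaining policy is elided):

#include <xen/xen.h>	/* xen_domain() */

static bool vring_use_dma_api(struct virtio_device *vdev)
{
	/* Use the DMA API for every flavor of Xen guest, on every
	 * architecture -- no #if and no x86-only XENPV test. */
	if (xen_domain())
		return true;

	/* ... other cases elided in this sketch ... */
	return false;
}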
2017 Sep 05
3
[PATCH 3/4] paravirt: add virt_spin_lock pvops function
...k.h >>> @@ -17,6 +17,25 @@ static inline void native_queued_spin_unlock(struct qspinlock *lock) >>> smp_store_release((u8 *)lock, 0); >>> } >>> >>> +static inline bool native_virt_spin_lock(struct qspinlock *lock) >>> +{ >>> + if (!static_cpu_has(X86_FEATURE_HYPERVISOR)) >>> + return false; >>> + >>> + /* >>> + * On hypervisors without PARAVIRT_SPINLOCKS support we fall >>> + * back to a Test-and-Set spinlock, because fair locks have >>> + * horrible lock 'holder' preemption...
2017 Nov 01
2
[PATCH] x86/paravirt: Add kernel parameter to choose paravirt lock type
..._spinlock_type = locktype_unfair; + else + return -EINVAL; + + pr_info("PV lock type = %s (%d)\n", s, pv_spinlock_type); + return 0; +} + +early_param("pvlock_type", pvlock_setup); + DEFINE_STATIC_KEY_TRUE(virt_spin_lock_key); void __init native_pv_lock_init(void) { - if (!static_cpu_has(X86_FEATURE_HYPERVISOR)) + if (pv_spinlock_type == locktype_unfair) + return; + + if (!static_cpu_has(X86_FEATURE_HYPERVISOR) || + (pv_spinlock_type != locktype_auto)) static_branch_disable(&virt_spin_lock_key); } @@ -473,3 +510,4 @@ struct pv_mmu_ops pv_mmu_ops __ro_after_init = {...
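Reformatted from the quoted hunk, the modified init function would read roughly as follows (enum names as they appear in the diff):

void __init native_pv_lock_init(void)
{
	/* pvlock_type=unfair: leave the key enabled, keeping the
	 * test-and-set path even on bare metal. */
	if (pv_spinlock_type == locktype_unfair)
		return;

	/* Bare metal, or any explicit non-auto selection: patch the
	 * virt_spin_lock() branch away. */
	if (!static_cpu_has(X86_FEATURE_HYPERVISOR) ||
	    (pv_spinlock_type != locktype_auto))
		static_branch_disable(&virt_spin_lock_key);
}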
2017 Sep 05
2
[PATCH 3/4] paravirt: add virt_spin_lock pvops function
...asm/qspinlock.h > +++ b/arch/x86/include/asm/qspinlock.h > @@ -17,6 +17,25 @@ static inline void native_queued_spin_unlock(struct qspinlock *lock) > smp_store_release((u8 *)lock, 0); > } > > +static inline bool native_virt_spin_lock(struct qspinlock *lock) > +{ > + if (!static_cpu_has(X86_FEATURE_HYPERVISOR)) > + return false; > + > + /* > + * On hypervisors without PARAVIRT_SPINLOCKS support we fall > + * back to a Test-and-Set spinlock, because fair locks have > + * horrible lock 'holder' preemption issues. > + */ > + > + do { > + w...
2017 Sep 05
2
[PATCH 3/4] paravirt: add virt_spin_lock pvops function
...asm/qspinlock.h > +++ b/arch/x86/include/asm/qspinlock.h > @@ -17,6 +17,25 @@ static inline void native_queued_spin_unlock(struct qspinlock *lock) > smp_store_release((u8 *)lock, 0); > } > > +static inline bool native_virt_spin_lock(struct qspinlock *lock) > +{ > + if (!static_cpu_has(X86_FEATURE_HYPERVISOR)) > + return false; > + I think you can take the above if statement out, as you have already done the test in native_pv_lock_init(). So the test will also be false here. As this patch series is x86 specific, you should probably add "x86/" in front of paravirt in the pat...
2017 Sep 05
0
[PATCH 3/4] paravirt: add virt_spin_lock pvops function
...98896385c 100644 --- a/arch/x86/include/asm/qspinlock.h +++ b/arch/x86/include/asm/qspinlock.h @@ -17,6 +17,25 @@ static inline void native_queued_spin_unlock(struct qspinlock *lock) smp_store_release((u8 *)lock, 0); } +static inline bool native_virt_spin_lock(struct qspinlock *lock) +{ + if (!static_cpu_has(X86_FEATURE_HYPERVISOR)) + return false; + + /* + * On hypervisors without PARAVIRT_SPINLOCKS support we fall + * back to a Test-and-Set spinlock, because fair locks have + * horrible lock 'holder' preemption issues. + */ + + do { + while (atomic_read(&lock->val) != 0) + cpu...
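The excerpt above cuts off inside the fallback loop; completing it from the contemporaneous qspinlock code (the cmpxchg tail is reconstructed, not quoted above):

static inline bool native_virt_spin_lock(struct qspinlock *lock)
{
	if (!static_cpu_has(X86_FEATURE_HYPERVISOR))
		return false;

	/* Unfair test-and-set fallback: spin until the lock word is
	 * clear, then try to take it atomically. */
	do {
		while (atomic_read(&lock->val) != 0)
			cpu_relax();
	} while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0);

	return true;
}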
2017 Sep 05
7
[PATCH 0/4] make virt_spin_lock() a pvops function
With virt_spin_lock() being a pvops function, the bare metal case can be optimized by patching the call away completely. A kernel running as a guest can decide whether to use paravirtualized spinlocks, the current fallback to the unfair test-and-set scheme, or to mimic the bare metal behavior. Juergen Gross (4): paravirt: add generic _paravirt_false() function paravirt: switch
2017 Sep 06
4
[PATCH v2 0/2] guard virt_spin_lock() with a static key
With virt_spin_lock() being guarded by a static key, the bare metal case can be optimized by patching the call away completely. A kernel running as a guest can decide whether to use paravirtualized spinlocks, the current fallback to the unfair test-and-set scheme, or to mimic the bare metal behavior. V2: - use static key instead of making virt_spin_lock() a pvops function Juergen Gross
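The mechanism this cover letter describes, gathered in one place (definitions taken from the series as quoted in the threads above):

DEFINE_STATIC_KEY_TRUE(virt_spin_lock_key);	/* default: virt path enabled */

void __init native_pv_lock_init(void)
{
	/* Bare metal: disable the key so every static_branch_likely()
	 * guard on virt_spin_lock() is patched to fall straight through. */
	if (!static_cpu_has(X86_FEATURE_HYPERVISOR))
		static_branch_disable(&virt_spin_lock_key);
}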