Displaying 20 results from an estimated 190 matches for "paravirt_spinlocks".
2014 Feb 27
1
[PATCH RFC v5 8/8] pvqspinlock, x86: Enable KVM to use qspinlock's PV support
On 26/02/2014 16:14, Waiman Long wrote:
> This patch enables KVM to use the queue spinlock's PV support code
> when the PARAVIRT_SPINLOCKS kernel config option is set. However,
> PV support for Xen is not ready yet and so the queue spinlock will
> still have to be disabled when PARAVIRT_SPINLOCKS config option is
> on with Xen.
>
> Signed-off-by: Waiman Long <Waiman.Long at hp.com>
> ---
> arch/x86/kernel/...
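For context, a minimal sketch of what such gating looks like, modeled on how
mainline later wired the hooks up in arch/x86/kernel/kvm.c (names taken from
the mainline 4.2-era code, which may differ from this RFC):

#ifdef CONFIG_PARAVIRT_SPINLOCKS
/* Install the queue spinlock's PV wait/kick hooks, but only when the
 * hypervisor advertises PV unhalt support. */
void __init kvm_spinlock_init(void)
{
	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
		return;

	pv_lock_ops.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
	pv_lock_ops.queued_spin_unlock = PV_CALLEE_SAVE(__pv_queued_spin_unlock);
	pv_lock_ops.wait = kvm_wait;
	pv_lock_ops.kick = kvm_kick_cpu;
}
#endif /* CONFIG_PARAVIRT_SPINLOCKS */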
2017 Sep 05
2
[PATCH 3/4] paravirt: add virt_spin_lock pvops function
...empted(long cpu)
> return PVOP_CALLEE1(bool, pv_lock_ops.vcpu_is_preempted, cpu);
> }
>
> +static __always_inline bool pv_virt_spin_lock(struct qspinlock *lock)
> +{
> + return PVOP_CALLEE1(bool, pv_lock_ops.virt_spin_lock, lock);
> +}
> +
> #endif /* SMP && PARAVIRT_SPINLOCKS */
>
> #ifdef CONFIG_X86_32
> diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
> index 19efefc0e27e..928f5e7953a7 100644
> --- a/arch/x86/include/asm/paravirt_types.h
> +++ b/arch/x86/include/asm/paravirt_types.h
> @@ -319,6 +319,7 @...
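The diff is cut off just as it reaches paravirt_types.h; the hook being added
there is a callee-save member of struct pv_lock_ops. A sketch of the resulting
structure (reconstructed from the quoted context; the surrounding fields are
from the 4.13-era header and may not match exactly):

struct pv_lock_ops {
	void (*queued_spin_lock_slowpath)(struct qspinlock *lock, u32 val);
	struct paravirt_callee_save queued_spin_unlock;

	void (*wait)(u8 *ptr, u8 val);
	void (*kick)(int cpu);

	struct paravirt_callee_save vcpu_is_preempted;
	struct paravirt_callee_save virt_spin_lock;	/* added by this patch */
} __no_randomize_layout;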
2014 Feb 26
0
[PATCH RFC v5 8/8] pvqspinlock, x86: Enable KVM to use qspinlock's PV support
This patch enables KVM to use the queue spinlock's PV support code
when the PARAVIRT_SPINLOCKS kernel config option is set. However,
PV support for Xen is not ready yet and so the queue spinlock will
still have to be disabled when PARAVIRT_SPINLOCKS config option is
on with Xen.
Signed-off-by: Waiman Long <Waiman.Long at hp.com>
---
arch/x86/kernel/kvm.c | 54 ++++++++++++++++++++++...
2017 Sep 05
3
[PATCH 3/4] paravirt: add virt_spin_lock pvops function
...pu_is_preempted, cpu);
>>> }
>>>
>>> +static __always_inline bool pv_virt_spin_lock(struct qspinlock *lock)
>>> +{
>>> + return PVOP_CALLEE1(bool, pv_lock_ops.virt_spin_lock, lock);
>>> +}
>>> +
>>> #endif /* SMP && PARAVIRT_SPINLOCKS */
>>>
>>> #ifdef CONFIG_X86_32
>>> diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
>>> index 19efefc0e27e..928f5e7953a7 100644
>>> --- a/arch/x86/include/asm/paravirt_types.h
>>> +++ b/arch/x86/incl...
2017 Sep 05
7
[PATCH 0/4] make virt_spin_lock() a pvops function
With virt_spin_lock() being a pvops function the bare metal case can be
optimized by patching the call away completely. A kernel running as a
guest can decide whether to use paravirtualized spinlocks, the current
fallback to the unfair test-and-set scheme, or to mimic the bare metal
behavior.
Juergen Gross (4):
paravirt: add generic _paravirt_false() function
paravirt: switch
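For reference, the unfair test-and-set fallback the cover letter mentions is
the pre-series virt_spin_lock() in arch/x86/include/asm/qspinlock.h; roughly
(a sketch from memory, not part of the series itself):

/* Unfair fallback for guests: spin until the lock word reads zero, then
 * try to take it with a cmpxchg. No queueing, so no FIFO ordering. */
static inline bool virt_spin_lock(struct qspinlock *lock)
{
	if (!static_cpu_has(X86_FEATURE_HYPERVISOR))
		return false;

	do {
		while (atomic_read(&lock->val) != 0)
			cpu_relax();
	} while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0);

	return true;
}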
2017 Sep 05
0
[PATCH 3/4] paravirt: add virt_spin_lock pvops function
...VOP_CALLEE1(bool, pv_lock_ops.vcpu_is_preempted, cpu);
>> }
>>
>> +static __always_inline bool pv_virt_spin_lock(struct qspinlock *lock)
>> +{
>> + return PVOP_CALLEE1(bool, pv_lock_ops.virt_spin_lock, lock);
>> +}
>> +
>> #endif /* SMP && PARAVIRT_SPINLOCKS */
>>
>> #ifdef CONFIG_X86_32
>> diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
>> index 19efefc0e27e..928f5e7953a7 100644
>> --- a/arch/x86/include/asm/paravirt_types.h
>> +++ b/arch/x86/include/asm/paravirt_types.h...
2017 Sep 05
0
[PATCH 3/4] paravirt: add virt_spin_lock pvops function
...@@ static __always_inline bool pv_vcpu_is_preempted(long cpu)
return PVOP_CALLEE1(bool, pv_lock_ops.vcpu_is_preempted, cpu);
}
+static __always_inline bool pv_virt_spin_lock(struct qspinlock *lock)
+{
+ return PVOP_CALLEE1(bool, pv_lock_ops.virt_spin_lock, lock);
+}
+
#endif /* SMP && PARAVIRT_SPINLOCKS */
#ifdef CONFIG_X86_32
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index 19efefc0e27e..928f5e7953a7 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -319,6 +319,7 @@ struct pv_lock_ops {
void (*kic...
2014 Mar 13
2
[PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
On Wed, Mar 12, 2014 at 02:54:52PM -0400, Waiman Long wrote:
> +static inline void arch_spin_lock(struct qspinlock *lock)
> +{
> + if (static_key_false(&paravirt_unfairlocks_enabled))
> + queue_spin_lock_unfair(lock);
> + else
> + queue_spin_lock(lock);
> +}
So I would have expected something like:
if (static_key_false(&paravirt_spinlock)) {
while
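For readers unfamiliar with the jump-label API used above: static_key_false()
compiles to a no-op that is live-patched into a jump when the key is enabled,
so the unfair path costs nothing on hosts that never flip it. A generic
illustration (the two helper functions are hypothetical):

#include <linux/jump_label.h>

static struct static_key example_key = STATIC_KEY_INIT_FALSE;

void lock_entry(void)
{
	if (static_key_false(&example_key))	/* becomes a jmp only once enabled */
		unfair_path();			/* hypothetical */
	else
		queued_path();			/* hypothetical */
}

/* Flipped once at boot, e.g. when running as a PV guest: */
void enable_unfair(void)
{
	static_key_slow_inc(&example_key);
}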
2014 Apr 02
1
[PATCH v8 10/10] pvqspinlock, x86: Enable qspinlock PV support for XEN
....locks b/kernel/Kconfig.locks
> index a70fdeb..451e392 100644
> --- a/kernel/Kconfig.locks
> +++ b/kernel/Kconfig.locks
> @@ -229,4 +229,4 @@ config ARCH_USE_QUEUE_SPINLOCK
>
> config QUEUE_SPINLOCK
> def_bool y if ARCH_USE_QUEUE_SPINLOCK
> - depends on SMP && (!PARAVIRT_SPINLOCKS || !XEN)
> + depends on SMP
If I read this correctly that means you cannot select any more the old
ticketlocks? As in, if you select CONFIG_PARAVIRT on X86 it will automatically
select ARCH_USE_QUEUE_SPINLOCK which will then enable this by default?
Should the 'def_bool' be selectable?...
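The question turns on Kconfig semantics: def_bool attaches a default to a
promptless bool, so the user cannot override it; only an option with a prompt
string is selectable. For illustration (the promptable variant is hypothetical):

# As posted: value is forced by the default expression, no user prompt.
config QUEUE_SPINLOCK
	def_bool y if ARCH_USE_QUEUE_SPINLOCK
	depends on SMP

# Selectable alternative: the prompt makes the option user-visible.
config QUEUE_SPINLOCK
	bool "Use queued spinlocks"
	default y if ARCH_USE_QUEUE_SPINLOCK
	depends on SMP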
2014 Jun 15
0
[PATCH 11/11] qspinlock, kvm: Add paravirt support
...onfig.locks
===================================================================
--- linux-2.6.orig/kernel/Kconfig.locks
+++ linux-2.6/kernel/Kconfig.locks
@@ -229,7 +229,7 @@ config ARCH_USE_QUEUE_SPINLOCK
config QUEUE_SPINLOCK
def_bool y if ARCH_USE_QUEUE_SPINLOCK
- depends on SMP && !PARAVIRT_SPINLOCKS
+ depends on SMP && !(PARAVIRT_SPINLOCKS && XEN)
config ARCH_USE_QUEUE_RWLOCK
bool
2015 Apr 30
0
[PATCH 5/6] x86: switch config from UNINLINE_SPIN_UNLOCK to INLINE_SPIN_UNLOCK
There is no need any more for a special treatment of _raw_spin_unlock()
regarding inlining compared to the other spinlock functions. Just treat
it like all the other spinlock functions.
Remove selecting UNINLINE_SPIN_UNLOCK in case of PARAVIRT_SPINLOCKS.
Signed-off-by: Juergen Gross <jgross at suse.com>
---
arch/x86/Kconfig | 1 -
include/linux/spinlock_api_smp.h | 2 +-
kernel/Kconfig.locks | 7 ++++---
kernel/Kconfig.preempt | 3 +--
kernel/locking/spinlock.c | 2 +-
lib/Kconfig.debug...
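The inline-versus-call switch being unified here is the standard pattern in
include/linux/spinlock_api_smp.h, where a config symbol turns the out-of-line
function into a macro. A sketch of the pattern (mirroring the existing
INLINE_SPIN_LOCK handling, not the exact patch):

/* When inlining is enabled the out-of-line _raw_spin_unlock() in
 * kernel/locking/spinlock.c is not emitted; callers expand the __raw
 * variant directly at the call site instead of making a function call. */
#ifdef CONFIG_INLINE_SPIN_UNLOCK
#define _raw_spin_unlock(lock) __raw_spin_unlock(lock)
#endif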
2015 Mar 16
0
[PATCH 9/9] qspinlock, x86, kvm: Implement KVM support for paravirt qspinlock
...of SPIN_UNLOCK functions
again.
We further optimize the unlock path by patching the direct call with a
"movb $0,%arg1" if we are indeed using the native unlock code. This
makes the unlock code almost as fast as the !PARAVIRT case.
This significantly lowers the overhead of having
CONFIG_PARAVIRT_SPINLOCKS enabled, even for native code.
Signed-off-by: Peter Zijlstra (Intel) <peterz at infradead.org>
---
arch/x86/Kconfig | 2 -
arch/x86/include/asm/paravirt.h | 28 ++++++++++++++++++++-
arch/x86/include/asm/paravirt_types.h | 10 +++++++
arch/x86/include/asm/...
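The unlock path being discussed is tiny, which is what makes the patching pay
off: the native unlock is a single release store of zero to the locked byte,
and the patch rewrites the callee-save call into that store. A sketch using
the mainline helper (the byte-field name is from later kernels):

/* Native queued unlock: clear the locked byte with release semantics.
 * With CONFIG_PARAVIRT_SPINLOCKS, callers go through a patchable call;
 * on native hardware the patcher replaces that call with a "movb $0"
 * store, i.e. exactly this operation without the call overhead. */
static __always_inline void native_queued_spin_unlock(struct qspinlock *lock)
{
	smp_store_release(&lock->locked, 0);
}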
2017 Sep 05
0
[PATCH 3/4] paravirt: add virt_spin_lock pvops function
...>>>> }
>>>>
>>>> +static __always_inline bool pv_virt_spin_lock(struct qspinlock *lock)
>>>> +{
>>>> + return PVOP_CALLEE1(bool, pv_lock_ops.virt_spin_lock, lock);
>>>> +}
>>>> +
>>>> #endif /* SMP && PARAVIRT_SPINLOCKS */
>>>>
>>>> #ifdef CONFIG_X86_32
>>>> diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
>>>> index 19efefc0e27e..928f5e7953a7 100644
>>>> --- a/arch/x86/include/asm/paravirt_types.h
>>>...