Waiman Long
2017-Nov-01 20:58 UTC
[PATCH-tip v2 0/2] x86/paravirt: Enable users to choose PV lock type
v1->v2:
 - Make pv_spinlock_type a bit mask for easier checking.
 - Add patch 2 to deprecate xen_nopvspin.

v1 - https://lkml.org/lkml/2017/11/1/381

Patch 1 adds a new pvlock_type parameter for administrators to specify
the type of lock to be used in a para-virtualized kernel.

Patch 2 deprecates Xen's xen_nopvspin parameter as it is no longer needed.

Waiman Long (2):
  x86/paravirt: Add kernel parameter to choose paravirt lock type
  x86/xen: Deprecate xen_nopvspin

 Documentation/admin-guide/kernel-parameters.txt | 11 ++++---
 arch/x86/include/asm/paravirt.h                 |  9 ++++++
 arch/x86/kernel/kvm.c                           |  3 ++
 arch/x86/kernel/paravirt.c                      | 40 ++++++++++++++++++++++++-
 arch/x86/xen/spinlock.c                         | 17 +++++------
 5 files changed, 65 insertions(+), 15 deletions(-)

-- 
1.8.3.1
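As a quick illustration of the new knob (values taken from the documentation
hunk in patch 1; the hypervisor can still override the choice, as noted
there), the selection is made on the kernel command line at boot:

    pvlock_type=queued    # native queued spinlock
    pvlock_type=pv        # paravirt queued spinlock
    pvlock_type=unfair    # simple TATAS unfair lock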
Waiman Long
2017-Nov-01 20:58 UTC
[PATCH-tip v2 1/2] x86/paravirt: Add kernel parameter to choose paravirt lock type
Currently, there are 3 different lock types that can be chosen for the
x86 architecture:

 - qspinlock
 - pvqspinlock
 - unfair lock

One of the above lock types will be chosen at boot time depending on a
number of different factors. Ideally, the hypervisors should be able to
pick the best performing lock type for the current VM configuration. That
is not currently the case, as the performance of each lock type is affected
by many different factors, like the number of vCPUs in the VM, the amount
of vCPU overcommitment, the CPU type and so on.

Generally speaking, an unfair lock performs well for VMs with a small
number of vCPUs. Native qspinlock may perform better than pvqspinlock if
there is vCPU pinning and no vCPU over-commitment.

This patch adds a new kernel parameter to allow administrators to choose
the paravirt spinlock type to be used. VM administrators can experiment
with the different lock types and choose the one that best suits their
needs, if they want to. Hypervisor developers can also use it to experiment
with different lock types so that they can come up with a better algorithm
to pick the best lock type.

The hypervisor paravirt spinlock code will override this new parameter in
determining if pvqspinlock should be used. The parameter, however, will
override Xen's xen_nopvspin in terms of disabling the unfair lock.

Signed-off-by: Waiman Long <longman at redhat.com>
---
 Documentation/admin-guide/kernel-parameters.txt |  7 +++++
 arch/x86/include/asm/paravirt.h                 |  9 ++++++
 arch/x86/kernel/kvm.c                           |  3 ++
 arch/x86/kernel/paravirt.c                      | 40 ++++++++++++++++++++++++-
 arch/x86/xen/spinlock.c                         | 12 ++++----
 5 files changed, 65 insertions(+), 6 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index f7df49d..c98d9c7 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -3275,6 +3275,13 @@
 			[KNL] Number of legacy pty's. Overwrites compiled-in
 			default number.
 
+	pvlock_type=	[X86,PV_OPS]
+			Specify the paravirt spinlock type to be used.
+			Options are:
+			queued - native queued spinlock
+			pv     - paravirt queued spinlock
+			unfair - simple TATAS unfair lock
+
 	quiet		[KNL] Disable most log messages
 
 	r128=		[HW,DRM]
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 12deec7..c8f9ad9 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -690,6 +690,15 @@ static __always_inline bool pv_vcpu_is_preempted(long cpu)
 #endif /* SMP && PARAVIRT_SPINLOCKS */
 
+enum pv_spinlock_type {
+	locktype_auto		= 0,
+	locktype_queued		= 1,
+	locktype_paravirt	= 2,
+	locktype_unfair		= 4,
+};
+
+extern enum pv_spinlock_type pv_spinlock_type;
+
 #ifdef CONFIG_X86_32
 #define PV_SAVE_REGS "pushl %ecx; pushl %edx;"
 #define PV_RESTORE_REGS "popl %edx; popl %ecx;"
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 8bb9594..3faee63 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -646,6 +646,9 @@ void __init kvm_spinlock_init(void)
 	if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
 		return;
 
+	if (pv_spinlock_type & (locktype_queued|locktype_unfair))
+		return;
+
 	__pv_init_lock_hash();
 	pv_lock_ops.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
 	pv_lock_ops.queued_spin_unlock = PV_CALLEE_SAVE(__pv_queued_spin_unlock);
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index 041096b..aee6756 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -115,11 +115,48 @@ unsigned paravirt_patch_jmp(void *insnbuf, const void *target,
 	return 5;
 }
 
+/*
+ * The kernel argument "pvlock_type=" can be used to explicitly specify
+ * which type of spinlocks to be used. Currently, there are 3 options:
+ * 1) queued - the native queued spinlock
+ * 2) pv     - the paravirt queued spinlock (if CONFIG_PARAVIRT_SPINLOCKS)
+ * 3) unfair - the simple TATAS unfair lock
+ *
+ * If this argument is not specified, the kernel will automatically choose
+ * an appropriate one depending on X86_FEATURE_HYPERVISOR and hypervisor
+ * specific settings.
+ */
+enum pv_spinlock_type __read_mostly pv_spinlock_type = locktype_auto;
+
+static int __init pvlock_setup(char *s)
+{
+	if (!s)
+		return -EINVAL;
+
+	if (!strcmp(s, "queued"))
+		pv_spinlock_type = locktype_queued;
+	else if (!strcmp(s, "pv"))
+		pv_spinlock_type = locktype_paravirt;
+	else if (!strcmp(s, "unfair"))
+		pv_spinlock_type = locktype_unfair;
+	else
+		return -EINVAL;
+
+	pr_info("PV lock type = %s (%d)\n", s, pv_spinlock_type);
+	return 0;
+}
+
+early_param("pvlock_type", pvlock_setup);
+
 DEFINE_STATIC_KEY_TRUE(virt_spin_lock_key);
 
 void __init native_pv_lock_init(void)
 {
-	if (!static_cpu_has(X86_FEATURE_HYPERVISOR))
+	if (pv_spinlock_type == locktype_unfair)
+		return;
+
+	if (!static_cpu_has(X86_FEATURE_HYPERVISOR) ||
+	    (pv_spinlock_type == locktype_queued))
 		static_branch_disable(&virt_spin_lock_key);
 }
 
@@ -473,3 +510,4 @@ struct pv_mmu_ops pv_mmu_ops __ro_after_init = {
 EXPORT_SYMBOL    (pv_mmu_ops);
 EXPORT_SYMBOL_GPL(pv_info);
 EXPORT_SYMBOL    (pv_irq_ops);
+EXPORT_SYMBOL    (pv_spinlock_type);
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 1e1462d..d5f79ac 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -81,8 +81,9 @@ void xen_init_lock_cpu(int cpu)
 	int irq;
 	char *name;
 
-	if (!xen_pvspin) {
-		if (cpu == 0)
+	if (!xen_pvspin ||
+	   (pv_spinlock_type & (locktype_queued|locktype_unfair))) {
+		if ((cpu == 0) && !pv_spinlock_type)
 			static_branch_disable(&virt_spin_lock_key);
 		return;
 	}
@@ -109,7 +110,8 @@ void xen_init_lock_cpu(int cpu)
 
 void xen_uninit_lock_cpu(int cpu)
 {
-	if (!xen_pvspin)
+	if (!xen_pvspin ||
+	   (pv_spinlock_type & (locktype_queued|locktype_unfair)))
 		return;
 
 	unbind_from_irqhandler(per_cpu(lock_kicker_irq, cpu), NULL);
@@ -130,8 +132,8 @@ void xen_uninit_lock_cpu(int cpu)
  */
 void __init xen_init_spinlocks(void)
 {
-
-	if (!xen_pvspin) {
+	if (!xen_pvspin ||
+	   (pv_spinlock_type & (locktype_queued|locktype_unfair))) {
 		printk(KERN_DEBUG "xen: PV spinlocks disabled\n");
 		return;
 	}
-- 
1.8.3.1
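Patch 1 repeatedly refers to a "simple TATAS unfair lock" without showing
one. The following is a minimal userspace sketch of the idea only -- it is
not the kernel's implementation (in the kernel the unfair path is the
test-and-set fallback guarded by virt_spin_lock_key). A TATAS
(test-and-test-and-set) lock spins on a plain read until the lock looks
free, then attempts a single atomic swap; with no queue, any waiter may
win, so it is unfair but cheap for small vCPU counts:

#include <stdatomic.h>

struct tatas_lock {
	atomic_int locked;	/* 0 = free, 1 = held */
};

static void tatas_acquire(struct tatas_lock *l)
{
	for (;;) {
		/* Test: spin on a read to avoid hammering the bus with RMWs. */
		while (atomic_load_explicit(&l->locked, memory_order_relaxed))
			;	/* a real lock would insert a cpu_relax()-style pause */

		/* Test-and-set: one atomic exchange to try to take the lock. */
		if (!atomic_exchange_explicit(&l->locked, 1, memory_order_acquire))
			return;	/* previous value was 0, lock acquired */
	}
}

static void tatas_release(struct tatas_lock *l)
{
	atomic_store_explicit(&l->locked, 0, memory_order_release);
}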
Waiman Long
2017-Nov-01 20:58 UTC
[PATCH-tip v2 2/2] x86/xen: Deprecate xen_nopvspin
With the new pvlock_type kernel parameter, xen_nopvspin is no longer
needed. This patch deprecates the xen_nopvspin parameter by removing its
documentation and treating it as an alias of "pvlock_type=queued".

Signed-off-by: Waiman Long <longman at redhat.com>
---
 Documentation/admin-guide/kernel-parameters.txt |  4 ----
 arch/x86/xen/spinlock.c                         | 19 +++++++------------
 2 files changed, 7 insertions(+), 16 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index c98d9c7..683a817 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -4596,10 +4596,6 @@
 			the unplug protocol
 			never -- do not unplug even if version check succeeds
 
-	xen_nopvspin	[X86,XEN]
-			Disables the ticketlock slowpath using Xen PV
-			optimizations.
-
 	xen_nopv	[X86]
 			Disables the PV optimizations forcing the HVM guest to
 			run as generic HVM guest with no PV drivers.
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index d5f79ac..19e2e75 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -20,7 +20,6 @@
 
 static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
 static DEFINE_PER_CPU(char *, irq_name);
-static bool xen_pvspin = true;
 
 #include <asm/qspinlock.h>
 
@@ -81,12 +80,8 @@ void xen_init_lock_cpu(int cpu)
 	int irq;
 	char *name;
 
-	if (!xen_pvspin ||
-	   (pv_spinlock_type & (locktype_queued|locktype_unfair))) {
-		if ((cpu == 0) && !pv_spinlock_type)
-			static_branch_disable(&virt_spin_lock_key);
+	if (pv_spinlock_type & (locktype_queued|locktype_unfair))
 		return;
-	}
 
 	WARN(per_cpu(lock_kicker_irq, cpu) >= 0, "spinlock on CPU%d exists on IRQ%d!\n",
 	     cpu, per_cpu(lock_kicker_irq, cpu));
@@ -110,8 +105,7 @@ void xen_init_lock_cpu(int cpu)
 
 void xen_uninit_lock_cpu(int cpu)
 {
-	if (!xen_pvspin ||
-	   (pv_spinlock_type & (locktype_queued|locktype_unfair)))
+	if (pv_spinlock_type & (locktype_queued|locktype_unfair))
 		return;
 
 	unbind_from_irqhandler(per_cpu(lock_kicker_irq, cpu), NULL);
@@ -132,8 +126,7 @@ void xen_uninit_lock_cpu(int cpu)
  */
 void __init xen_init_spinlocks(void)
 {
-	if (!xen_pvspin ||
-	   (pv_spinlock_type & (locktype_queued|locktype_unfair))) {
+	if (pv_spinlock_type & (locktype_queued|locktype_unfair)) {
 		printk(KERN_DEBUG "xen: PV spinlocks disabled\n");
 		return;
 	}
@@ -147,10 +140,12 @@ void __init xen_init_spinlocks(void)
 	pv_lock_ops.vcpu_is_preempted = PV_CALLEE_SAVE(xen_vcpu_stolen);
 }
 
+/* TODO: To be removed in a future kernel version */
 static __init int xen_parse_nopvspin(char *arg)
 {
-	xen_pvspin = false;
+	pr_warn("xen_nopvspin is deprecated, replace it with \"pvlock_type=queued\"!\n");
+	if (!pv_spinlock_type)
+		pv_spinlock_type = locktype_queued;
 	return 0;
 }
 early_param("xen_nopvspin", xen_parse_nopvspin);
-
-- 
1.8.3.1
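For completeness, the intended command-line equivalence after this patch
(as described in the commit message and the pr_warn() text above) is:

    xen_nopvspin          # deprecated: warns, then acts like the line below
    pvlock_type=queued    # preferred replacement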
On 11/01/2017 04:58 PM, Waiman Long wrote:
> +/* TODO: To be removed in a future kernel version */
>  static __init int xen_parse_nopvspin(char *arg)
>  {
> -	xen_pvspin = false;
> +	pr_warn("xen_nopvspin is deprecated, replace it with \"pvlock_type=queued\"!\n");
> +	if (!pv_spinlock_type)
> +		pv_spinlock_type = locktype_queued;

Since we currently end up using unfair locks, and because you are
deprecating xen_nopvspin, I wonder whether it would be better to set this
to locktype_unfair so that the current behavior doesn't change. (Sorry, I
hadn't responded to your earlier message before you posted this.) Juergen?

I am also not sure I agree with making pv_spinlock_type an enum *and* a
bitmask at the same time. I understand that it makes checks easier, but I
think not assuming a value or a pattern would be better, especially since
none of the uses is on a critical path. (For example, !pv_spinlock_type is
the same as locktype_auto, which is defined but never used.)

-boris

> 	return 0;
>  }
>  early_param("xen_nopvspin", xen_parse_nopvspin);
> -
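To make the enum-versus-bitmask point concrete, here is a small fragment
(pv_spinlock_type and the locktype_* values come from patch 1; the helper
functions are hypothetical, for illustration only):

/* With power-of-two values, one masked test covers several lock types. */
static bool pvlock_is_native_or_unfair(void)
{
	return pv_spinlock_type & (locktype_queued | locktype_unfair);
}

/* Without the bitmask assumption, the same check spells out each case. */
static bool pvlock_is_native_or_unfair_alt(void)
{
	return pv_spinlock_type == locktype_queued ||
	       pv_spinlock_type == locktype_unfair;
}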