search for: pvlock_vcpu_state

Displaying 7 results from an estimated 7 matches for "pvlock_vcpu_state".

2019 Dec 26
0
[PATCH 5/5] KVM: arm64: Support the vcpu preemption check
...+#include <asm/pvlock-abi.h>
>>
>>  struct static_key paravirt_steal_enabled;
>>  struct static_key paravirt_steal_rq_enabled;
>> @@ -158,3 +159,93 @@ int __init pv_time_init(void)
>>
>>  	return 0;
>>  }
>> +
>> +DEFINE_PER_CPU(struct pvlock_vcpu_state, pvlock_vcpu_region) __aligned(64);
>> +EXPORT_PER_CPU_SYMBOL(pvlock_vcpu_region);
>> +
>> +static int pvlock_vcpu_state_dying_cpu(unsigned int cpu)
>> +{
>> +	struct pvlock_vcpu_state *reg;
>> +
>> +	reg = this_cpu_ptr(&pvlock_vcpu_region);
>> +...
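The excerpt above is cut off mid-function. For orientation only, below is a minimal sketch of how such a per-CPU region might be handed to the hypervisor when a CPU comes up; the hypercall constant ARM_SMCCC_HV_PV_LOCK_PREEMPTED and the callback name are assumptions for illustration, not the patch's actual code.

/* Sketch only -- names and wiring are assumptions, not the patch itself. */
#include <linux/arm-smccc.h>
#include <linux/errno.h>
#include <linux/percpu.h>
#include <asm/io.h>
#include <asm/pvlock-abi.h>

DEFINE_PER_CPU(struct pvlock_vcpu_state, pvlock_vcpu_region) __aligned(64);

static int pvlock_vcpu_state_init_cpu(unsigned int cpu)
{
	struct pvlock_vcpu_state *reg = this_cpu_ptr(&pvlock_vcpu_region);
	struct arm_smccc_res res;

	/* Hand the hypervisor the physical address of this CPU's region so
	 * it can record the preempted state there on every sched switch. */
	arm_smccc_1_1_invoke(ARM_SMCCC_HV_PV_LOCK_PREEMPTED,
			     virt_to_phys(reg), &res);

	return res.a0 == SMCCC_RET_SUCCESS ? 0 : -EINVAL;
}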
2019 Dec 17
10
[PATCH 0/5] KVM: arm64: vcpu preempted check support
From: Zengruan Ye <yezengruan at huawei.com> This patch set aims to support the vcpu_is_preempted() functionality under KVM/arm64, which allows the guest to check whether a vcpu is currently running or not. This will enhance lock performance on overcommitted hosts (more runnable vcpus than physical cpus in the system) as doing busy waits for preempted vcpus will hurt system performance far
2019 Dec 26
7
[PATCH v2 0/6] KVM: arm64: VCPU preempted check support
This patch set aims to support the vcpu_is_preempted() functionality under KVM/arm64, which allows the guest to check whether a VCPU is currently running or not. This will enhance lock performance on overcommitted hosts (more runnable VCPUs than physical CPUs in the system) as doing busy waits for preempted VCPUs will hurt system performance far worse than early yielding. We have observed some
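As a rough illustration of what the guest-side check amounts to once the series is applied: vcpu_is_preempted() reduces to reading the preempted field of the target CPU's shared region, which the host updates when it schedules the VCPU out and back in. The accessor name below is illustrative and the wiring into vcpu_is_preempted() in the actual series may differ.

/* Sketch only: a guest-side preemption check, assuming the per-CPU
 * region from asm/pvlock-abi.h has been registered with the host. */
#include <linux/percpu.h>
#include <asm/pvlock-abi.h>

DECLARE_PER_CPU(struct pvlock_vcpu_state, pvlock_vcpu_region);

static bool pv_vcpu_is_preempted(int cpu)
{
	struct pvlock_vcpu_state *reg = per_cpu_ptr(&pvlock_vcpu_region, cpu);

	/* The host sets the low bit while the VCPU is scheduled out. */
	return !!(le64_to_cpu(reg->preempted) & 1);
}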
2019 Dec 17
0
[PATCH 2/5] KVM: arm64: Implement PV_LOCK_FEATURES call
...a
--- /dev/null
+++ b/arch/arm64/include/asm/pvlock-abi.h
@@ -0,0 +1,16 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright(c) 2019 Huawei Technologies Co., Ltd
+ * Author: Zengruan Ye <yezengruan at huawei.com>
+ */
+
+#ifndef __ASM_PVLOCK_ABI_H
+#define __ASM_PVLOCK_ABI_H
+
+struct pvlock_vcpu_state {
+	__le64 preempted;
+	/* Structure must be 64 byte aligned, pad to that size */
+	u8 padding[56];
+} __packed;
+
+#endif
diff --git a/include/linux/arm-smccc.h b/include/linux/arm-smccc.h
index 59494df0f55b..59e65a951959 100644
--- a/include/linux/arm-smccc.h
+++ b/include/linux/arm-smccc.h
@@ -3...
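The comment in the header requires the structure to stay exactly 64 bytes, since the hypervisor writes it directly into guest memory. A small, hedged aside (not part of the patch): that invariant can be pinned down at build time.

/* Sketch only -- a compile-time guard, not code from the series. */
#include <linux/build_bug.h>
#include <asm/pvlock-abi.h>

/* The region is shared with the hypervisor, so its size is ABI;
 * catch accidental layout changes when the kernel is built. */
static_assert(sizeof(struct pvlock_vcpu_state) == 64);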
2019 Dec 19
0
[PATCH 2/5] KVM: arm64: Implement PV_LOCK_FEATURES call
...DX-License-Identifier: GPL-2.0 */
>> +/*
>> + * Copyright(c) 2019 Huawei Technologies Co., Ltd
>> + * Author: Zengruan Ye <yezengruan at huawei.com>
>> + */
>> +
>> +#ifndef __ASM_PVLOCK_ABI_H
>> +#define __ASM_PVLOCK_ABI_H
>> +
>> +struct pvlock_vcpu_state {
>> +	__le64 preempted;
>
> Somewhere we need to document when 'preempted' is. It looks like it's a
> 1-bit field from the later patches.

Good point, I'll document this in the pvlock doc.

>
>> +	/* Structure must be 64 byte aligned, pad to that size */
...
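The reviewer's question is about when the host flips the flag. As a hedged sketch of the bookkeeping being asked for (the helper and the vcpu->arch.pvlock.base field are assumptions, not the series' exact code): the hypervisor writes the guest-registered region with 1 when the VCPU is scheduled out and 0 when it is scheduled back in.

/* Sketch only -- illustrative host-side update of the shared flag. */
#include <linux/kvm_host.h>

static void kvm_update_pvlock_preempted(struct kvm_vcpu *vcpu, u64 preempted)
{
	__le64 val = cpu_to_le64(preempted);

	/* base: guest physical address registered via the PV-lock SMC
	 * (the field name vcpu->arch.pvlock.base is an assumption). */
	kvm_write_guest(vcpu->kvm, vcpu->arch.pvlock.base, &val, sizeof(val));
}

/* Typical call sites would be the VCPU load/put paths, e.g.
 *   kvm_arch_vcpu_put():  kvm_update_pvlock_preempted(vcpu, 1);
 *   kvm_arch_vcpu_load(): kvm_update_pvlock_preempted(vcpu, 0);
 */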
2019 Dec 19
0
[PATCH 1/5] KVM: arm64: Document PV-lock interface
...this vcpu's pv data structure is configured by
>> +	the hypervisor.
>> +	============= ======== ==========
>
> From the code it looks like there's another argument for this SMC - the
> physical address (or IPA) of a struct pvlock_vcpu_state. This structure
> also needs to be described as it is part of the ABI.

Will update.

> Steve
>
> .
>

Thanks,
Zengruan
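For context on the review comment: the extra argument is the IPA (guest physical address) of the per-VCPU struct pvlock_vcpu_state, passed as the first SMC argument. A hedged sketch of what handling it on the KVM side might look like (the function name and the vcpu->arch.pvlock.base field are assumptions, not the patch's exact code):

/* Sketch only -- illustrative handler for the "set pvlock base" call. */
#include <linux/arm-smccc.h>
#include <linux/kvm_host.h>
#include <kvm/arm_hypercalls.h>

static long kvm_hypercall_pvlock_set_base(struct kvm_vcpu *vcpu)
{
	gpa_t ipa = smccc_get_arg1(vcpu);  /* x1: IPA of pvlock_vcpu_state */

	/* Remember where to write this VCPU's preempted flag from now on. */
	vcpu->arch.pvlock.base = ipa;

	return SMCCC_RET_SUCCESS;
}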