Results similar to: "[PATCH 0/5] x86/vmware: Steal time accounting support"
2016 Oct 26
5
[PATCH 0/3] x86/vmware guest improvements
This patchset includes several VMware guest improvements:
Alexey Makhalov (3):
x86/vmware: Use tsc_khz value for calibrate_cpu()
x86/vmware: Add basic paravirt ops support
x86/vmware: Add paravirt sched clock
Documentation/kernel-parameters.txt | 4 +++
arch/x86/kernel/cpu/vmware.c | 51 +++++++++++++++++++++++++++++++++++++
2 files changed, 55 insertions(+)
--
2.10.1
2016 Oct 26
1
[PATCH 3/3] x86/vmware: Add paravirt sched clock
Set pv_time_ops.sched_clock to vmware_sched_clock(). It is a simplified
version of native_sched_clock() without the ring buffer of mult/shift/offset
triplets and without preempt toggling.
Since the VMware hypervisor provides a constant TSC, we can use a constant
mult/shift/offset triplet calculated at boot time.
The no-vmw-sched-clock kernel parameter is added to switch back to the
native_sched_clock() implementation.
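A rough sketch of what such a clock read boils down to: one TSC read plus a
fixed multiply/shift. Field names follow the kernel's struct cyc2ns_data;
treat this as an illustration of the approach, not the exact patch:

#include <linux/math64.h>	/* mul_u64_u32_shr() */
#include <asm/msr.h>		/* rdtsc() */
#include <asm/timer.h>		/* struct cyc2ns_data */

/* Constant conversion data, computed once at boot. */
static struct cyc2ns_data vmware_cyc2ns __ro_after_init;

/* No ring buffer, no preempt toggling: the triplet never changes. */
static u64 vmware_sched_clock(void)
{
	u64 ns = mul_u64_u32_shr(rdtsc(), vmware_cyc2ns.cyc2ns_mul,
				 vmware_cyc2ns.cyc2ns_shift);

	return ns - vmware_cyc2ns.cyc2ns_offset;
}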
2016 Oct 27
5
[RESEND PATCH 1/3] x86/vmware: Use tsc_khz value for calibrate_cpu()
After aa297292d708, there are separate native calibrations for cpu_khz and
tsc_khz. The code sets x86_platform.calibrate_cpu to native_calibrate_cpu(),
which looks in CPUID leaf 0x16 or MSRs for the CPU frequency. Since we keep
tsc_khz constant (even across vMotion), cpu_khz and tsc_khz may
start to diverge.
tsc_init() now does
cpu_khz = x86_platform.calibrate_cpu();
tsc_khz =
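The fix this leads toward can be sketched as routing both calibration hooks
to the same hypervisor-provided frequency, so the two values cannot diverge.
Function names below follow the discussion and should be read as assumptions:

/* Frequency read from the hypervisor at detection time (assumed). */
static unsigned long vmware_tsc_khz __ro_after_init;

static unsigned long vmware_get_tsc_khz(void)
{
	return vmware_tsc_khz;
}

static void __init vmware_platform_setup(void)
{
	/* Same source for both: cpu_khz == tsc_khz by construction. */
	x86_platform.calibrate_tsc = vmware_get_tsc_khz;
	x86_platform.calibrate_cpu = vmware_get_tsc_khz;
}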
2016 Oct 28
3
[PATCH v3 0/3] x86/vmware guest improvements
Thanks to Thomas for the valuable comments.
Changelog for the updated patchset:
v1->v2 - Update pvinfo.name.
v2->v3 - Address comments from Thomas G:
* Created separate function: vmware_sched_clock_setup() (patch 3/3)
* Updated commit descriptions for 1/3 and 3/3
Alexey Makhalov (3):
x86/vmware: Use tsc_khz value for calibrate_cpu()
x86/vmware: Add basic paravirt ops support
2020 Feb 12
0
[PATCH 1/5] x86/vmware: Make vmware_select_hypercall() __init
vmware_select_hypercall() is used only by __init
functions and should be annotated with __init as well.
Signed-off-by: Alexey Makhalov <amakhalov at vmware.com>
Reviewed-by: Thomas Hellstrom <thellstrom at vmware.com>
---
arch/x86/kernel/cpu/vmware.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kernel/cpu/vmware.c b/arch/x86/kernel/cpu/vmware.c
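A minimal illustration of the change; the function body is a placeholder,
and only the __init annotation is the point:

#include <linux/init.h>

/* Annotated __init: the function lands in .init.text and its memory
 * is freed once boot completes, since nothing calls it afterwards. */
static void __init vmware_select_hypercall(void)
{
	/* Placeholder body: the real code selects the hypercall mode. */
}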
2016 Nov 15
2
[PATCH v7 06/11] x86, paravirt: Add interface to support kvm/xen vcpu preempted check
On Wed, Nov 02, 2016 at 05:08:33AM -0400, Pan Xinhui wrote:
> diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
> index 0f400c0..38c3bb7 100644
> --- a/arch/x86/include/asm/paravirt_types.h
> +++ b/arch/x86/include/asm/paravirt_types.h
> @@ -310,6 +310,8 @@ struct pv_lock_ops {
>
> void (*wait)(u8 *ptr, u8 val);
> void
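The quoted hunk is cut off, but the shape of the addition is a new
per-hypervisor callback in pv_lock_ops. A sketch of the concept, with the
member layout assumed rather than copied from the final patch:

struct pv_lock_ops {
	void (*wait)(u8 *ptr, u8 val);
	void (*kick)(int cpu);
	/* New in this series: report whether the given vCPU has been
	 * preempted by the hypervisor, so lock waiters can stop
	 * spinning on an owner that cannot make progress. */
	bool (*vcpu_is_preempted)(int cpu);
};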
2016 Oct 26
0
[PATCH 3/3] x86/vmware: Add paravirt sched clock
On Tue, 25 Oct 2016, Alexey Makhalov wrote:
> no-vmw-sched-clock kernel parameter is added to switch back to the
> native_sched_clock() implementation.
You are not switching back. The parameter is used to disable the paravirt
sched clock.
> #ifdef CONFIG_PARAVIRT
> +static struct cyc2ns_data vmware_cyc2ns __ro_after_init;
> +
> +static int vmw_sched_clock __initdata = 1;
>
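For reference, the usual wiring for such a boot parameter is an
early_param() handler that clears the flag before the clock is installed.
A sketch consistent with the quoted variables (handler name assumed):

/* "no-vmw-sched-clock" on the command line clears the flag, and the
 * setup code then leaves native_sched_clock() in place. */
static __init int setup_vmw_sched_clock(char *s)
{
	vmw_sched_clock = 0;
	return 0;
}
early_param("no-vmw-sched-clock", setup_vmw_sched_clock);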
2016 Oct 27
0
[RESEND PATCH 3/3] x86/vmware: Add paravirt sched clock
On Thu, 27 Oct 2016, Alexey Makhalov wrote:
> Set pv_time_ops.sched_clock to vmware_sched_clock().
Please do not describe WHAT the patch does, describe why. Describe the
problem you are solving. I can see from the patch
> + pv_time_ops.sched_clock = vmware_sched_clock;
that you set pv_time_ops.sched_clock to vmware_sched_clock().
> It is simplified
> version of
2016 Oct 27
0
[RESEND PATCH 3/3] x86/vmware: Add paravirt sched clock
Set pv_time_ops.sched_clock to vmware_sched_clock(). It is a simplified
version of native_sched_clock() without the ring buffer of mult/shift/offset
triplets and without preempt toggling.
Since the VMware hypervisor provides a constant TSC, we can use a constant
mult/shift/offset triplet calculated at boot time.
The no-vmw-sched-clock kernel parameter is added to disable the paravirt
sched clock.
Signed-off-by: Alexey
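The boot-time computation of the constant triplet typically goes through
clocks_calc_mult_shift(). A sketch of that setup step, reusing the names
from the sketches above, with the offset handling simplified:

static void __init vmware_sched_clock_setup(void)
{
	struct cyc2ns_data *d = &vmware_cyc2ns;
	unsigned long long tsc_now = rdtsc();

	/* Pick mult/shift so that ns = (cycles * mult) >> shift for
	 * the constant, hypervisor-provided tsc_khz. */
	clocks_calc_mult_shift(&d->cyc2ns_mul, &d->cyc2ns_shift,
			       vmware_tsc_khz, NSEC_PER_MSEC, 0);
	/* Anchor the clock near zero at setup time. */
	d->cyc2ns_offset = mul_u64_u32_shr(tsc_now, d->cyc2ns_mul,
					   d->cyc2ns_shift);

	pv_time_ops.sched_clock = vmware_sched_clock;
}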
2020 Feb 12
0
[PATCH 2/5] x86/vmware: Remove vmware_sched_clock_setup()
Move the cyc2ns setup logic to a separate function.
This separation will allow the cyc2ns mult/shift pair to be
reused not only for the sched_clock but also for other clocks
such as the steal_clock.
Signed-off-by: Alexey Makhalov <amakhalov at vmware.com>
Reviewed-by: Thomas Hellstrom <thellstrom at vmware.com>
---
arch/x86/kernel/cpu/vmware.c | 15 ++++++++++-----
1 file changed, 10 insertions(+), 5
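The payoff of the split is that one conversion helper can back several
clocks. A sketch of the intended shape; stolen_cycles() is hypothetical,
standing in for however the hypervisor exposes stolen time:

/* One boot-time mult/shift pair, shared by every TSC-derived clock. */
static u64 vmware_cyc2ns_read(u64 cycles)
{
	return mul_u64_u32_shr(cycles, vmware_cyc2ns.cyc2ns_mul,
			       vmware_cyc2ns.cyc2ns_shift);
}

static u64 vmware_sched_clock(void)
{
	return vmware_cyc2ns_read(rdtsc()) - vmware_cyc2ns.cyc2ns_offset;
}

static u64 vmware_steal_clock(int cpu)
{
	/* stolen_cycles(): hypothetical helper returning cycles during
	 * which the hypervisor ran something else on this vCPU's core. */
	return vmware_cyc2ns_read(stolen_cycles(cpu));
}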
2016 Oct 20
15
[PATCH v5 0/9] implement vcpu preempted check
change from v4:
split x86 kvm vcpu preempted check into two patches.
add documentation patch.
add x86 vcpu preempted check patch under xen
add s390 vcpu preempted check patch
change from v3:
add x86 vcpu preempted check patch
change from v2:
no code change, fix typos, update some comments
change from v1:
a simpler definition of the default vcpu_is_preempted
skip machine type check on ppc,
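The "default vcpu_is_preempted" mentioned in the v1 changes is the fallback
for architectures without a paravirt hook. The common pattern looks like
this sketch, with a typical caller shown in the trailing comment:

/* Fallback when no architecture/paravirt implementation exists:
 * assume vCPUs are never preempted (bare metal behaves this way). */
#ifndef vcpu_is_preempted
static inline bool vcpu_is_preempted(int cpu)
{
	return false;
}
#endif

/* Typical caller: stop spinning on a lock whose owner's vCPU has
 * lost its physical CPU, e.g.
 *	if (vcpu_is_preempted(owner_cpu))
 *		yield();
 */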
2016 Nov 02
13
[PATCH v7 00/11] implement vcpu preempted check
change from v6:
fix typos and remove unnecessary comments.
change from v5:
split x86/kvm patch into guest/host part.
introduce kvm_write_guest_offset_cached.
fix some typos.
rebase patch onto 4.9.2
change from v4:
split x86 kvm vcpu preempted check into two patches.
add documentation patch.
add x86 vcpu preempted check patch under xen
add s390 vcpu preempted check patch
change from v3:
2017 Nov 13
7
[PATCH RFC v3 0/6] x86/idle: add halt poll support
From: Yang Zhang <yang.zhang.wz at gmail.com>
Some latency-intensive workloads have seen an obvious performance
drop when running inside a VM. The main reason is that the overhead
is amplified when running inside a VM. The biggest cost I have seen is
in the idle path.
This patch introduces a new mechanism to poll for a while before
entering the idle state. If a reschedule is needed during the poll, then we
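A minimal sketch of the polling idle loop being described; the fixed poll
window and the helper functions here are assumptions, not the series' code:

#include <stdbool.h>
#include <stdint.h>

#define POLL_WINDOW_NS 200000ULL	/* assumed, fixed for the sketch */

extern uint64_t now_ns(void);	/* monotonic clock, assumed */
extern bool need_resched(void);	/* did work become runnable? */
extern void halt(void);		/* e.g. the HLT instruction */

/* Poll briefly before halting: a wakeup that arrives inside the
 * window is handled without paying the amplified VM exit/entry cost
 * of a real halt. */
static void idle_halt_poll(void)
{
	uint64_t deadline = now_ns() + POLL_WINDOW_NS;

	while (now_ns() < deadline) {
		if (need_resched())
			return;		/* work arrived while polling */
	}
	halt();				/* nothing arrived; halt for real */
}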