Displaying 20 results from an estimated 36 matches for "msr_kvm_steal_tim".
2016 Oct 21
4
[PATCH v5 9/9] Documentation: virtual: kvm: Support vcpu preempted check
...+-
> 1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/Documentation/virtual/kvm/msr.txt b/Documentation/virtual/kvm/msr.txt
> index 2a71c8f..3376f13 100644
> --- a/Documentation/virtual/kvm/msr.txt
> +++ b/Documentation/virtual/kvm/msr.txt
> @@ -208,7 +208,8 @@ MSR_KVM_STEAL_TIME: 0x4b564d03
> __u64 steal;
> __u32 version;
> __u32 flags;
> - __u32 pad[12];
> + __u8 preempted;
> + __u32 pad[11];
> }
I think I'd be explicit about the 3 pad bytes you've left.
David
2016 Oct 21
1
[PATCH v5 9/9] Documentation: virtual: kvm: Support vcpu preempted check
...+-
> 1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/Documentation/virtual/kvm/msr.txt b/Documentation/virtual/kvm/msr.txt
> index 2a71c8f..3376f13 100644
> --- a/Documentation/virtual/kvm/msr.txt
> +++ b/Documentation/virtual/kvm/msr.txt
> @@ -208,7 +208,8 @@ MSR_KVM_STEAL_TIME: 0x4b564d03
> __u64 steal;
> __u32 version;
> __u32 flags;
> - __u32 pad[12];
> + __u8 preempted;
> + __u32 pad[11];
> }
>
> whose data will be filled in by the hypervisor periodically. Only one
> @@ -232,6 +233,11 @@ MSR_KVM_STEAL_TIME: 0x4b564d03...
2016 Oct 20
0
[PATCH v5 9/9] Documentation: virtual: kvm: Support vcpu preempted check
...tion/virtual/kvm/msr.txt | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/Documentation/virtual/kvm/msr.txt b/Documentation/virtual/kvm/msr.txt
index 2a71c8f..3376f13 100644
--- a/Documentation/virtual/kvm/msr.txt
+++ b/Documentation/virtual/kvm/msr.txt
@@ -208,7 +208,8 @@ MSR_KVM_STEAL_TIME: 0x4b564d03
__u64 steal;
__u32 version;
__u32 flags;
- __u32 pad[12];
+ __u8 preempted;
+ __u32 pad[11];
}
whose data will be filled in by the hypervisor periodically. Only one
@@ -232,6 +233,11 @@ MSR_KVM_STEAL_TIME: 0x4b564d03
nanoseconds. Time during which the vcpu is idle...
2016 Oct 24
1
[PATCH v5 9/9] Documentation: virtual: kvm: Support vcpu preempted check
...etion(-)
>>>
>>> diff --git a/Documentation/virtual/kvm/msr.txt b/Documentation/virtual/kvm/msr.txt
>>> index 2a71c8f..3376f13 100644
>>> --- a/Documentation/virtual/kvm/msr.txt
>>> +++ b/Documentation/virtual/kvm/msr.txt
>>> @@ -208,7 +208,8 @@ MSR_KVM_STEAL_TIME: 0x4b564d03
>>> __u64 steal;
>>> __u32 version;
>>> __u32 flags;
>>> - __u32 pad[12];
>>> + __u8 preempted;
>>> + __u32 pad[11];
>>> }
>>
>> I think I'd be explicit about the 3 pad bytes you've left....
2016 Oct 20
15
[PATCH v5 0/9] implement vcpu preempted check
change from v4:
split x86 kvm vcpu preempted check into two patches.
add documentation patch.
add x86 vcpu preempted check patch under xen
add s390 vcpu preempted check patch
change from v3:
add x86 vcpu preempted check patch
change from v2:
no code change, fix typos, update some comments
change from v1:
a simpler definition of default vcpu_is_preempted
skip machine type check on ppc,
2016 Jul 06
3
[PATCH v2 0/4] implement vcpu preempted check
...>>> Paolo, could you help out with an (x86) KVM interface for this?
>>
>> If it's just for spin loops, you can check if the version field in the
>> steal time structure has changed.
>
> Steal time will not be updated until ahead of next vmentry except
> wrmsr MSR_KVM_STEAL_TIME. So it can't represent it is preempted
> currently, right?
Hmm, you're right. We can use bit 0 of struct kvm_steal_time's flags to
indicate that pad[0] is a "VCPU preempted" field; if pad[0] is 1, the
VCPU has been scheduled out since the last time the guest reset the bi...
2016 Jul 06
3
[PATCH v2 0/4] implement vcpu preempted check
On 06/07/2016 08:52, Peter Zijlstra wrote:
> On Tue, Jun 28, 2016 at 10:43:07AM -0400, Pan Xinhui wrote:
>> change from v1:
>> a simpler definition of default vcpu_is_preempted
>> skip machine type check on ppc, and add config. remove dedicated macro.
>> add one patch to drop overload of rwsem_spin_on_owner and mutex_spin_on_owner.
>> add more comments
2012 Mar 23
12
[PATCH RFC V5 0/6] kvm : Paravirt-spinlock support for KVM guests
The 6-patch series to follow this email extends KVM-hypervisor and Linux guest
running on KVM-hypervisor to support pv-ticket spinlocks, based on Xen's
implementation.
One hypercall is introduced in the KVM hypervisor that allows a vcpu to kick
another vcpu out of halt state.
The blocking of vcpu is done using halt() in (lock_spinning) slowpath.
one MSR is added to aid live migration.
Changes
2016 Oct 19
3
[PATCH v4 5/5] x86, kvm: support vcpu preempted check
...kvm_para.h
> @@ -45,7 +45,8 @@ struct kvm_steal_time {
> __u64 steal;
> __u32 version;
> __u32 flags;
> - __u32 pad[12];
> + __u32 preempted;
Why __u32 instead of __u8?
> + __u32 pad[11];
> };
Please document the change in Documentation/virtual/kvm/msr.txt, section
MSR_KVM_STEAL_TIME.
> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
> @@ -415,6 +415,15 @@ void kvm_disable_steal_time(void)
> +static bool kvm_vcpu_is_preempted(int cpu)
> +{
> + struct kvm_steal_time *src;
> +
> + src = &per_cpu(steal_time, cpu);
> +
> + return !!src->...
2016 Jul 06
0
[PATCH v2 0/4] implement vcpu preempted check
...tch set
>>>
>>
>> Paolo, could you help out with an (x86) KVM interface for this?
>
> If it's just for spin loops, you can check if the version field in the
> steal time structure has changed.
Steal time will not be updated until ahead of next vmentry except
wrmsr MSR_KVM_STEAL_TIME. So it can't represent it is preempted
currently, right?
Regards,
Wanpeng Li
2016 Oct 21
0
[PATCH v5 9/9] Documentation: virtual: kvm: Support vcpu preempted check
...7 insertions(+), 1 deletion(-)
>>
>> diff --git a/Documentation/virtual/kvm/msr.txt b/Documentation/virtual/kvm/msr.txt
>> index 2a71c8f..3376f13 100644
>> --- a/Documentation/virtual/kvm/msr.txt
>> +++ b/Documentation/virtual/kvm/msr.txt
>> @@ -208,7 +208,8 @@ MSR_KVM_STEAL_TIME: 0x4b564d03
>> __u64 steal;
>> __u32 version;
>> __u32 flags;
>> - __u32 pad[12];
>> + __u8 preempted;
>> + __u32 pad[11];
>> }
>
> I think I'd be explicit about the 3 pad bytes you've left.
Seconded.
With that change are al...
2016 Jul 07
0
[PATCH v2 0/4] implement vcpu preempted check
...d you help out with an (x86) KVM interface for this?
>>>
>>> If it's just for spin loops, you can check if the version field in the
>>> steal time structure has changed.
>>
>> Steal time will not be updated until ahead of next vmentry except
>> wrmsr MSR_KVM_STEAL_TIME. So it can't represent it is preempted
>> currently, right?
>
> Hmm, you're right. We can use bit 0 of struct kvm_steal_time's flags to
> indicate that pad[0] is a "VCPU preempted" field; if pad[0] is 1, the
> VCPU has been scheduled out since the last time...