search for: msr_val

Displaying 20 results from an estimated 26 matches for "msr_val".

2016 Oct 24
2
[PATCH v4 5/5] x86, kvm: support vcpu preempted check
2016-10-24 16:39+0200, Paolo Bonzini: > On 19/10/2016 19:24, Radim Krčmář wrote: >>> > + if (vcpu->arch.st.msr_val & KVM_MSR_ENABLED) >>> > + if (kvm_read_guest_cached(vcpu->kvm, &vcpu->arch.st.stime, >>> > + &vcpu->arch.st.steal, >>> > + sizeof(struct kvm_steal_time)) == 0) { >>> > + vcpu->arch.st.steal.preempted = 1; >>...
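
The review above is poking at the v4 approach, which reads the whole kvm_steal_time record back from the guest just to flip one byte. Later revisions (see the v7 excerpts further down this page) narrow that to a single-field write. A minimal sketch of the later shape, reconstructed from the v7 snippets quoted below:

    static void kvm_steal_time_set_preempted(struct kvm_vcpu *vcpu)
    {
            /* Guest never enabled steal-time reporting via its MSR. */
            if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED))
                    return;

            vcpu->arch.st.steal.preempted = 1;

            /* Write just the preempted byte, not the whole record. */
            kvm_write_guest_offset_cached(vcpu->kvm, &vcpu->arch.st.stime,
                            &vcpu->arch.st.steal.preempted,
                            offsetof(struct kvm_steal_time, preempted),
                            sizeof(vcpu->arch.st.steal.preempted));
    }
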
2016 Oct 19
3
[PATCH v4 5/5] x86, kvm: support vcpu preempted check
...;arch.st.steal.version & 1) > vcpu->arch.st.steal.version += 1; /* first time write, random junk */ > > @@ -2812,6 +2814,16 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu) > > void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu) > { > + if (vcpu->arch.st.msr_val & KVM_MSR_ENABLED) > + if (kvm_read_guest_cached(vcpu->kvm, &vcpu->arch.st.stime, > + &vcpu->arch.st.steal, > + sizeof(struct kvm_steal_time)) == 0) { > + vcpu->arch.st.steal.preempted = 1; > + kvm_write_guest_cached(vcpu->kvm, &vcpu->ar...
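
Reflowed for readability, the hunk quoted in this v4 snippet hooks vCPU unload: when the host scheduler takes a vCPU off its physical CPU, KVM marks it preempted in the guest-visible steal-time record (line breaks restored; surrounding context is elided by the search snippet):

    void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
    {
            if (vcpu->arch.st.msr_val & KVM_MSR_ENABLED)
                    if (kvm_read_guest_cached(vcpu->kvm, &vcpu->arch.st.stime,
                                              &vcpu->arch.st.steal,
                                              sizeof(struct kvm_steal_time)) == 0) {
                            vcpu->arch.st.steal.preempted = 1;
                            kvm_write_guest_cached(vcpu->kvm, &vcpu->arch.st.stime,
                                                   &vcpu->arch.st.steal,
                                                   sizeof(struct kvm_steal_time));
                    }
            /* ... rest of kvm_arch_vcpu_put() elided in the snippet ... */
    }
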
2016 Jul 07
5
[PATCH v2 0/4] implement vcpu preempted check
...b6 100644 --- a/arch/x86/kvm/x86.c +++ b/arch/x86/kvm/x86.c @@ -1997,8 +1997,29 @@ static void kvmclock_reset(struct kvm_vcpu *vcpu) vcpu->arch.pv_time_enabled = false; } +static void update_steal_time_preempt(struct kvm_vcpu *vcpu) +{ + struct kvm_steal_time *st; + + if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED)) + return; + + if (unlikely(kvm_read_guest_cached(vcpu->kvm, &vcpu->arch.st.stime, + &vcpu->arch.st.steal, sizeof(struct kvm_steal_time)))) + return; + + st = &vcpu->arch.st.steal; + + st->pad[KVM_ST_PAD_PREEMPT] = 1; /* we've stopped running...
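
The v2 approach stashes the flag in a spare pad slot of kvm_steal_time rather than a named field. Whatever the field is called, the guest-side consumer ends up the same: read the per-cpu steal-time record of the target CPU. A sketch of the guest half, along the lines of what eventually landed in arch/x86/kernel/kvm.c (reconstructed from memory, not quoted in these snippets):

    /* Guest side: the host sets steal_time.preempted when it deschedules
     * the vCPU, so a nonzero value means "don't bother spinning on me". */
    static bool kvm_vcpu_is_preempted(int cpu)
    {
            struct kvm_steal_time *src = &per_cpu(steal_time, cpu);

            return !!src->preempted;
    }
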
2016 Dec 19
2
[PATCH v7 08/11] x86, kvm/x86.c: support vcpu preempted check
...> --- > arch/x86/include/uapi/asm/kvm_para.h | 4 +++- > arch/x86/kvm/x86.c | 16 ++++++++++++++++ > 2 files changed, 19 insertions(+), 1 deletion(-) > [..] > +static void kvm_steal_time_set_preempted(struct kvm_vcpu *vcpu) > +{ > + if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED)) > + return; > + > + vcpu->arch.st.steal.preempted = 1; > + > + kvm_write_guest_offset_cached(vcpu->kvm, &vcpu->arch.st.stime, > + &vcpu->arch.st.steal.preempted, > + offsetof(struct kvm_steal_time, preempted), > + sizeof(vcp...
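
The 4-line change to kvm_para.h in this diffstat is the ABI half: carving a named preempted field out of the existing padding so old and new guests agree on the layout. The snippet elides it; the merged layout was approximately the following (a reconstruction from memory, so treat the field names as assumptions):

    struct kvm_steal_time {
            __u64 steal;
            __u32 version;
            __u32 flags;
            __u8  preempted;   /* new: set by the host on vCPU preemption */
            __u8  u8_pad[3];   /* carved out of the old pad[] */
            __u32 pad[11];
    };
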
2016 Jul 06
3
[PATCH v2 0/4] implement vcpu preempted check
On 06/07/2016 14:08, Wanpeng Li wrote: > 2016-07-06 18:44 GMT+08:00 Paolo Bonzini <pbonzini at redhat.com>: >> >> >> On 06/07/2016 08:52, Peter Zijlstra wrote: >>> On Tue, Jun 28, 2016 at 10:43:07AM -0400, Pan Xinhui wrote: >>>> change from v1: >>>> a simpler definition of default vcpu_is_preempted >>>> skip machine
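
The "simpler definition of default vcpu_is_preempted" this changelog refers to is just a constant-false fallback that individual architectures override. In the merged series it is a one-line macro along these lines (a sketch, assuming the include/linux/sched.h placement):

    /* Default: assume a vCPU is never preempted unless the arch says so. */
    #ifndef vcpu_is_preempted
    # define vcpu_is_preempted(cpu) false
    #endif
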
2016 Oct 24
0
[PATCH v4 5/5] x86, kvm: support vcpu preempted check
On 19/10/2016 19:24, Radim Krčmář wrote: >> > + if (vcpu->arch.st.msr_val & KVM_MSR_ENABLED) >> > + if (kvm_read_guest_cached(vcpu->kvm, &vcpu->arch.st.stime, >> > + &vcpu->arch.st.steal, >> > + sizeof(struct kvm_steal_time)) == 0) { >> > + vcpu->arch.st.steal.preempted = 1; >> > + kvm_writ...
2016 Oct 24
0
[PATCH v4 5/5] x86, kvm: support vcpu preempted check
On 24/10/2016 17:14, Radim Krčmář wrote: > 2016-10-24 16:39+0200, Paolo Bonzini: >> On 19/10/2016 19:24, Radim Krčmář wrote: >>>>> + if (vcpu->arch.st.msr_val & KVM_MSR_ENABLED) >>>>> + if (kvm_read_guest_cached(vcpu->kvm, &vcpu->arch.st.stime, >>>>> + &vcpu->arch.st.steal, >>>>> + sizeof(struct kvm_steal_time)) == 0) { >>>>> + vcpu->arch.st.steal.preempted = 1...
2016 Nov 02
0
[PATCH v7 08/11] x86, kvm/x86.c: support vcpu preempted check
...cpu->arch.st.steal.version += 1; /* first time write, random junk */ @@ -2810,8 +2812,22 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu) kvm_make_request(KVM_REQ_STEAL_UPDATE, vcpu); } +static void kvm_steal_time_set_preempted(struct kvm_vcpu *vcpu) +{ + if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED)) + return; + + vcpu->arch.st.steal.preempted = 1; + + kvm_write_guest_offset_cached(vcpu->kvm, &vcpu->arch.st.stime, + &vcpu->arch.st.steal.preempted, + offsetof(struct kvm_steal_time, preempted), + sizeof(vcpu->arch.st.steal.preempted)); +} + v...
2016 Oct 20
0
[PATCH v5 6/9] x86, kvm: support vcpu preempted check
...cpu->arch.st.steal.version += 1; /* first time write, random junk */ @@ -2810,8 +2812,24 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu) kvm_make_request(KVM_REQ_STEAL_UPDATE, vcpu); } +static void kvm_steal_time_set_preempted(struct kvm_vcpu *vcpu) +{ + if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED)) + return; + + if (unlikely(kvm_read_guest_cached(vcpu->kvm, &vcpu->arch.st.stime, + &vcpu->arch.st.steal, sizeof(struct kvm_steal_time)))) + return; + + vcpu->arch.st.steal.preempted = 1; + + kvm_write_guest_cached(vcpu->kvm, &vcpu->arch.st....
2016 Dec 19
0
[PATCH v7 08/11] x86, kvm/x86.c: support vcpu preempted check
...de/uapi/asm/kvm_para.h | 4 +++- >> arch/x86/kvm/x86.c | 16 ++++++++++++++++ >> 2 files changed, 19 insertions(+), 1 deletion(-) >> > [..] >> +static void kvm_steal_time_set_preempted(struct kvm_vcpu *vcpu) >> +{ >> + if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED)) >> + return; >> + >> + vcpu->arch.st.steal.preempted = 1; >> + >> + kvm_write_guest_offset_cached(vcpu->kvm, &vcpu->arch.st.stime, >> + &vcpu->arch.st.steal.preempted, >> + offsetof(struct kvm_steal_time, pre...
2016 Oct 19
0
[PATCH v4 5/5] x86, kvm: support vcpu preempted check
...eal.preempted = 0; + if (vcpu->arch.st.steal.version & 1) vcpu->arch.st.steal.version += 1; /* first time write, random junk */ @@ -2812,6 +2814,16 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu) void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu) { + if (vcpu->arch.st.msr_val & KVM_MSR_ENABLED) + if (kvm_read_guest_cached(vcpu->kvm, &vcpu->arch.st.stime, + &vcpu->arch.st.steal, + sizeof(struct kvm_steal_time)) == 0) { + vcpu->arch.st.steal.preempted = 1; + kvm_write_guest_cached(vcpu->kvm, &vcpu->arch.st.stime, + &v...
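
This v4 hunk also shows the other half of the protocol: the preempted flag is cleared again when the vCPU is scheduled back in, just before the steal-time version counter dance. Reflowed from the snippet (the enclosing function is record_steal_time() in upstream x86.c; that name is inferred, the snippet does not show it):

    static void record_steal_time(struct kvm_vcpu *vcpu)
    {
            /* ... */
            /* Running again: clear the flag the vcpu_put path set. */
            vcpu->arch.st.steal.preempted = 0;

            if (vcpu->arch.st.steal.version & 1)
                    vcpu->arch.st.steal.version += 1; /* first time write, random junk */
            /* ... */
    }
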
2016 Oct 20
15
[PATCH v5 0/9] implement vcpu preempted check
change from v4: split x86 kvm vcpu preempted check into two patches. add documentation patch. add x86 vcpu preempted check patch under xen add s390 vcpu preempted check patch change from v3: add x86 vcpu preempted check patch change from v2: no code change, fix typos, update some comments change from v1: a simpler definition of default vcpu_is_preempted skip machine type check on ppc,
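
For the Xen check mentioned in this changelog, "preempted" maps onto the hypervisor's runstate accounting: a vCPU that is runnable but not running is having its time stolen. A sketch of that check, along the lines of drivers/xen/time.c (a reconstruction, not quoted anywhere on this page):

    /* A vCPU whose runstate is "runnable" wants the CPU but has been
     * descheduled by Xen, i.e. it is effectively preempted. */
    bool xen_vcpu_stolen(int vcpu)
    {
            return per_cpu(xen_runstate, vcpu).state == RUNSTATE_runnable;
    }
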
2016 Oct 19
10
[PATCH v4 0/5] implement vcpu preempted check
change from v3: add x86 vcpu preempted check patch change from v2: no code change, fix typos, update some comments change from v1: a simpler definition of default vcpu_is_preempted skip machine type check on ppc, and add config. remove dedicated macro. add one patch to drop overload of rwsem_spin_on_owner and mutex_spin_on_owner. add more comments, thanks to Boqun's and Peter's suggestions.
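
The rwsem_spin_on_owner()/mutex_spin_on_owner() patch mentioned here is where the check pays off: an optimistic spinner should give up when the lock owner's vCPU is not actually running, instead of burning cycles waiting for a descheduled owner. The merged spin loops gained a vcpu_is_preempted() bail-out roughly like this (a sketch; helper names and exact guard order per recollection of upstream kernel/locking/mutex.c):

    /* Spin only while the owner still holds the lock and is really running. */
    while (__mutex_owner(lock) == owner) {
            if (!owner->on_cpu || need_resched() ||
                vcpu_is_preempted(task_cpu(owner))) {
                    ret = false;    /* stop spinning, sleep instead */
                    break;
            }
            cpu_relax();
    }
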