search for: pv_vcpu_is_preempted

Displaying 20 results from an estimated 51 matches for "pv_vcpu_is_preempted".

2019 Mar 25
2
[PATCH] x86/paravirt: Guard against invalid cpu # in pv_vcpu_is_preempted()
It was found that passing an invalid cpu number to pv_vcpu_is_preempted() might panic the kernel in a VM guest. For example, [ 2.531077] Oops: 0000 [#1] SMP PTI : [ 2.532545] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 [ 2.533321] RIP: 0010:__raw_callee_save___kvm_vcpu_is_preempted+0x0/0x20 To guard against this kind of kernel panic, check is added t...
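For context, a minimal sketch of the kind of guard the changelog describes. The function body follows mainline's __kvm_vcpu_is_preempted(); the exact placement and form of the bounds check are assumptions, not the literal patch:

/*
 * __kvm_vcpu_is_preempted() reads a per-cpu steal_time structure, so an
 * out-of-range cpu number indexes past __per_cpu_offset[] and faults,
 * producing an oops like the one quoted above.  A defensive bounds check
 * before the per-cpu access avoids that.
 */
__visible bool __kvm_vcpu_is_preempted(long cpu)
{
	struct kvm_steal_time *src;

	if ((unsigned long)cpu >= nr_cpu_ids)	/* assumed guard, per the changelog */
		return false;

	src = &per_cpu(steal_time, cpu);
	return !!(src->preempted & KVM_VCPU_PREEMPTED);
}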
2019 Mar 25
2
[PATCH] x86/paravirt: Guard against invalid cpu # in pv_vcpu_is_preempted()
On 03/25/2019 12:40 PM, Juergen Gross wrote: > On 25/03/2019 16:57, Waiman Long wrote: >> It was found that passing an invalid cpu number to pv_vcpu_is_preempted() >> might panic the kernel in a VM guest. For example, >> >> [ 2.531077] Oops: 0000 [#1] SMP PTI >> : >> [ 2.532545] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 >> [ 2.533321] RIP: 0010:__raw_callee_save___kvm_vcpu_is_preempted+0x0/0x20 >>...
2019 Mar 25
0
[PATCH] x86/paravirt: Guard against invalid cpu # in pv_vcpu_is_preempted()
On 25/03/2019 16:57, Waiman Long wrote: > It was found that passing an invalid cpu number to pv_vcpu_is_preempted() > might panic the kernel in a VM guest. For example, > > [ 2.531077] Oops: 0000 [#1] SMP PTI > : > [ 2.532545] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 > [ 2.533321] RIP: 0010:__raw_callee_save___kvm_vcpu_is_preempted+0x0/0x20 > > To guard against thi...
2019 Apr 01
0
[PATCH] x86/paravirt: Guard against invalid cpu # in pv_vcpu_is_preempted()
On 25/03/2019 19:03, Waiman Long wrote: > On 03/25/2019 12:40 PM, Juergen Gross wrote: >> On 25/03/2019 16:57, Waiman Long wrote: >>> It was found that passing an invalid cpu number to pv_vcpu_is_preempted() >>> might panic the kernel in a VM guest. For example, >>> >>> [ 2.531077] Oops: 0000 [#1] SMP PTI >>> : >>> [ 2.532545] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011 >>> [ 2.533321] RIP: 0010:__raw_callee_save___kvm_vcpu_is_pre...
2019 Dec 26
1
[PATCH v2 5/6] KVM: arm64: Add interface to support VCPU preempted check
...pi/ghes.h:5, from include/linux/arm_sdei.h:8, from arch/arm64/kernel/asm-offsets.c:10: arch/arm64/include/asm/spinlock.h: In function 'vcpu_is_preempted': >> arch/arm64/include/asm/spinlock.h:18:9: error: implicit declaration of function 'pv_vcpu_is_preempted'; did you mean 'vcpu_is_preempted'? [-Werror=implicit-function-declaration] return pv_vcpu_is_preempted(cpu); ^~~~~~~~~~~~~~~~~~~~ vcpu_is_preempted cc1: some warnings being treated as errors make[2]: *** [arch/arm64/kernel/asm-offsets.s] Error 1 ma...
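The implicit-declaration error suggests the configuration being built had no fallback definition visible from arm64's spinlock.h. A plausible shape for that hook is sketched below; this is illustrative, not the submitted fix:

#ifdef CONFIG_PARAVIRT
/* Route through the paravirt op added by this series. */
#define vcpu_is_preempted vcpu_is_preempted
static inline bool vcpu_is_preempted(int cpu)
{
	return pv_vcpu_is_preempted(cpu);
}
#else
/*
 * Without CONFIG_PARAVIRT there is no pv_vcpu_is_preempted(); report
 * "not preempted" so callers keep the bare-metal behaviour.
 */
static inline bool vcpu_is_preempted(int cpu)
{
	return false;
}
#endif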
2017 Feb 15
0
[PATCH v4 1/2] x86/paravirt: Change vcp_is_preempted() arg type to long
...include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h index 1eea6ca..f75fbfe 100644 --- a/arch/x86/include/asm/paravirt.h +++ b/arch/x86/include/asm/paravirt.h @@ -673,7 +673,7 @@ static __always_inline void pv_kick(int cpu) PVOP_VCALL1(pv_lock_ops.kick, cpu); } -static __always_inline bool pv_vcpu_is_preempted(int cpu) +static __always_inline bool pv_vcpu_is_preempted(long cpu) { return PVOP_CALLEE1(bool, pv_lock_ops.vcpu_is_preempted, cpu); } diff --git a/arch/x86/include/asm/qspinlock.h b/arch/x86/include/asm/qspinlock.h index c343ab5..48a706f 100644 --- a/arch/x86/include/asm/qspinlock.h +++ b/arc...
2017 Sep 05
3
[PATCH 3/4] paravirt: add virt_spin_lock pvops function
...> diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h >>> index c25dd22f7c70..d9e954fb37df 100644 >>> --- a/arch/x86/include/asm/paravirt.h >>> +++ b/arch/x86/include/asm/paravirt.h >>> @@ -725,6 +725,11 @@ static __always_inline bool pv_vcpu_is_preempted(long cpu) >>> return PVOP_CALLEE1(bool, pv_lock_ops.vcpu_is_preempted, cpu); >>> } >>> >>> +static __always_inline bool pv_virt_spin_lock(struct qspinlock *lock) >>> +{ >>> + return PVOP_CALLEE1(bool, pv_lock_ops.virt_spin_lock, lock); >...
2017 Sep 05
2
[PATCH 3/4] paravirt: add virt_spin_lock pvops function
...tions(+), 15 deletions(-) > > diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h > index c25dd22f7c70..d9e954fb37df 100644 > --- a/arch/x86/include/asm/paravirt.h > +++ b/arch/x86/include/asm/paravirt.h > @@ -725,6 +725,11 @@ static __always_inline bool pv_vcpu_is_preempted(long cpu) > return PVOP_CALLEE1(bool, pv_lock_ops.vcpu_is_preempted, cpu); > } > > +static __always_inline bool pv_virt_spin_lock(struct qspinlock *lock) > +{ > + return PVOP_CALLEE1(bool, pv_lock_ops.virt_spin_lock, lock); > +} > + > #endif /* SMP && PARAVI...
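To show how the new op would be consumed, a sketch of the call path. The wrapper name follows the quoted diff; the test-and-set fallback body is an assumption mirroring the loop previously open-coded in asm/qspinlock.h:

/* qspinlock.h: the former open-coded loop becomes a pvops call. */
static __always_inline bool virt_spin_lock(struct qspinlock *lock)
{
	return pv_virt_spin_lock(lock);
}

/*
 * Guest fallback: the unfair test-and-set scheme, used when no better
 * paravirtualized locking is available.
 */
__visible bool native_virt_spin_lock(struct qspinlock *lock)
{
	if (!static_cpu_has(X86_FEATURE_HYPERVISOR))
		return false;

	do {
		while (atomic_read(&lock->val) != 0)
			cpu_relax();
	} while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0);

	return true;
}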
2017 Sep 05
0
[PATCH 3/4] paravirt: add virt_spin_lock pvops function
.../arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h >>>> index c25dd22f7c70..d9e954fb37df 100644 >>>> --- a/arch/x86/include/asm/paravirt.h >>>> +++ b/arch/x86/include/asm/paravirt.h >>>> @@ -725,6 +725,11 @@ static __always_inline bool pv_vcpu_is_preempted(long cpu) >>>> return PVOP_CALLEE1(bool, pv_lock_ops.vcpu_is_preempted, cpu); >>>> } >>>> >>>> +static __always_inline bool pv_virt_spin_lock(struct qspinlock *lock) >>>> +{ >>>> + return PVOP_CALLEE1(bool, pv_lock_ops.vir...
2019 Dec 26
0
[PATCH v2 5/6] KVM: arm64: Add interface to support VCPU preempted check
...tch_template { struct pv_time_ops time; + struct pv_lock_ops lock; }; extern struct paravirt_patch_template pv_ops; @@ -24,6 +29,13 @@ static inline u64 paravirt_steal_clock(int cpu) int __init pv_time_init(void); +__visible bool __native_vcpu_is_preempted(int cpu); + +static inline bool pv_vcpu_is_preempted(int cpu) +{ + return pv_ops.lock.vcpu_is_preempted(cpu); +} + #else #define pv_time_init() do {} while (0) diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h index b093b287babf..45ff1b2949a6 100644 --- a/arch/arm64/include/asm/spinlock.h +++ b/arch/arm64/include/...
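A sketch of the native side declared in the hunk (__native_vcpu_is_preempted) and of an assumed default wiring in pv_ops; the excerpt does not show the init path, so this is illustrative only:

/*
 * On bare metal, or before a hypervisor registers its hook, a VCPU is
 * never reported as preempted.
 */
__visible bool __native_vcpu_is_preempted(int cpu)
{
	return false;
}

struct paravirt_patch_template pv_ops = {
	.lock.vcpu_is_preempted = __native_vcpu_is_preempted,	/* assumed default */
};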
2017 Feb 15
4
[PATCH v4 0/2] x86/kvm: Reduce vcpu_is_preempted() overhead
v3->v4: - Fix x86-32 build error. v2->v3: - Provide an optimized __raw_callee_save___kvm_vcpu_is_preempted() in assembly as suggested by PeterZ. - Add a new patch to change vcpu_is_preempted() argument type to long to ease the writing of the assembly code. v1->v2: - Rerun the fio test on a different system on both bare-metal and a KVM guest. Both sockets were
2017 Sep 05
0
[PATCH 3/4] paravirt: add virt_spin_lock pvops function
...) >> >> diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h >> index c25dd22f7c70..d9e954fb37df 100644 >> --- a/arch/x86/include/asm/paravirt.h >> +++ b/arch/x86/include/asm/paravirt.h >> @@ -725,6 +725,11 @@ static __always_inline bool pv_vcpu_is_preempted(long cpu) >> return PVOP_CALLEE1(bool, pv_lock_ops.vcpu_is_preempted, cpu); >> } >> >> +static __always_inline bool pv_virt_spin_lock(struct qspinlock *lock) >> +{ >> + return PVOP_CALLEE1(bool, pv_lock_ops.virt_spin_lock, lock); >> +} >> + >&...
2019 Dec 26
7
[PATCH v2 0/6] KVM: arm64: VCPU preempted check support
This patch set aims to support the vcpu_is_preempted() functionality under KVM/arm64, which allows the guest to check whether a VCPU is currently running or not. This will enhance lock performance on overcommitted hosts (more runnable VCPUs than physical CPUs in the system), as busy waiting for preempted VCPUs hurts system performance far more than yielding early. We have observed some
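To illustrate what the guest gains, a simplified fragment modeled on the generic mutex/osq optimistic-spin code (not part of this series; the helper name is hypothetical): a waiter stops spinning as soon as the lock owner's VCPU is reported preempted, since no progress is possible until the host reschedules it.

static inline bool owner_worth_spinning_on(struct task_struct *owner)
{
	/* vcpu_is_preempted() resolves to a constant false on bare metal. */
	return owner->on_cpu && !vcpu_is_preempted(task_cpu(owner));
}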
2017 Sep 05
7
[PATCH 0/4] make virt_spin_lock() a pvops function
With virt_spin_lock() being a pvops function, the bare metal case can be optimized by patching the call away completely. In case the kernel is running as a guest, it can decide whether to use paravirtualized spinlocks, the current fallback to the unfair test-and-set scheme, or to mimic the bare metal behavior. Juergen Gross (4): paravirt: add generic _paravirt_false() function paravirt: switch
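A sketch of the generic helper the first patch title refers to; the body is trivial by design so that, on bare metal, the paravirt patching machinery can replace the indirect call with a constant false and the qspinlock slow path behaves exactly as on a non-paravirt kernel (callee-save wrapping and the actual patching are omitted, and assumed here):

__visible bool _paravirt_false(void)
{
	return false;
}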