search for: vcpu_is_preempt

Displaying 20 results from an estimated 194 matches for "vcpu_is_preempt".

2016 Nov 15
2
[PATCH v7 06/11] x86, paravirt: Add interface to support kvm/xen vcpu preempted check
...ude/asm/paravirt_types.h
> index 0f400c0..38c3bb7 100644
> --- a/arch/x86/include/asm/paravirt_types.h
> +++ b/arch/x86/include/asm/paravirt_types.h
> @@ -310,6 +310,8 @@ struct pv_lock_ops {
>
>  	void (*wait)(u8 *ptr, u8 val);
>  	void (*kick)(int cpu);
> +
> +	bool (*vcpu_is_preempted)(int cpu);
>  };

So that ends up with a full function call in the native case. I did something like the below on top, completely untested, not been near a compiler etc.. It doesn't get rid of the branch, but at least it avoids the function call, and hardware should have no trouble predic...
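(For context, a minimal compilable sketch of the interface being discussed: the struct follows the quoted diff, while the native fallback shown here is an illustrative assumption, not the actual kernel patch.)

    #include <stdbool.h>

    typedef unsigned char u8;

    struct pv_lock_ops {
        void (*wait)(u8 *ptr, u8 val);
        void (*kick)(int cpu);
        bool (*vcpu_is_preempted)(int cpu);    /* the new hook */
    };

    /*
     * On bare metal a CPU is never preempted by a hypervisor, so the
     * native implementation can only ever return false. The complaint
     * above is that reaching this constant through a function pointer
     * still costs a full indirect call.
     */
    static bool native_vcpu_is_preempted(int cpu)
    {
        (void)cpu;
        return false;
    }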
2017 Feb 08
4
[PATCH 1/2] x86/paravirt: Don't make vcpu_is_preempted() a callee-save function
...2-socket x86-64 system, the %CPU times as reported by perf were as follows:

  71.27%  0.28%  fio  [k] down_write
  70.99%  0.01%  fio  [k] call_rwsem_down_write_failed
  69.43%  1.18%  fio  [k] rwsem_down_write_failed
  65.51% 54.57%  fio  [k] osq_lock
   9.72%  7.99%  fio  [k] __raw_callee_save___kvm_vcpu_is_preempted
   4.16%  4.16%  fio  [k] __kvm_vcpu_is_preempted

So making vcpu_is_preempted() a callee-save function has a pretty high cost associated with it. As vcpu_is_preempted() is called within the spinlock, mutex and rwsem slowpaths, there isn't much to gain by making it callee-save. So it is now ch...
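(For reference, the symbol near the top of that profile boils down to a single flag test. A simplified sketch, with per-CPU data faked as a plain array and the field layout assumed for illustration:)

    #include <stdbool.h>

    struct kvm_steal_time {
        unsigned long long steal;
        unsigned char preempted;    /* set by the host when it preempts this vCPU */
    };

    extern struct kvm_steal_time steal_time[];    /* stand-in for per-CPU data */

    bool kvm_vcpu_is_preempted(long cpu)
    {
        /* The guest merely reads back the flag the host maintains. */
        return steal_time[cpu].preempted != 0;
    }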
2016 Nov 16
0
[PATCH v7 06/11] x86, paravirt: Add interface to support kvm/xen vcpu preempted check
...index 0f400c0..38c3bb7 100644
>> --- a/arch/x86/include/asm/paravirt_types.h
>> +++ b/arch/x86/include/asm/paravirt_types.h
>> @@ -310,6 +310,8 @@ struct pv_lock_ops {
>>
>>  	void (*wait)(u8 *ptr, u8 val);
>>  	void (*kick)(int cpu);
>> +
>> +	bool (*vcpu_is_preempted)(int cpu);
>>  };
>
> So that ends up with a full function call in the native case. I did
> something like the below on top, completely untested, not been near a
> compiler etc..
>

Hi, Peter. I think we can avoid a function call in a simpler way. How about below

static inli...
2017 Feb 10
3
[PATCH v2] x86/paravirt: Don't make vcpu_is_preempted() a callee-save function
...2-socket x86-64 system, the %CPU times as reported by perf were as follows:

  69.75%  0.59%  fio  [k] down_write
  69.15%  0.01%  fio  [k] call_rwsem_down_write_failed
  67.12%  1.12%  fio  [k] rwsem_down_write_failed
  63.48% 52.77%  fio  [k] osq_lock
   9.46%  7.88%  fio  [k] __raw_callee_save___kvm_vcpu_is_preempt
   3.93%  3.93%  fio  [k] __kvm_vcpu_is_preempted

Making vcpu_is_preempted() a callee-save function has a relatively high cost on x86-64, primarily due to at least one more cacheline of data access from the saving and restoring of registers (8 of them) to and from the stack, as well as one more level of...
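(To make the "8 registers" point concrete: on x86-64 a callee-save thunk must spill every caller-clobbered GPR except %rax, which carries the return value, around the real call. An illustrative hand-written shape, not the macro-generated kernel code:)

    asm(
    ".pushsection .text;"
    "thunk___kvm_vcpu_is_preempted:"              /* hypothetical label */
    "push %rcx; push %rdx; push %rsi; push %rdi;"
    "push %r8;  push %r9;  push %r10; push %r11;"
    "call __kvm_vcpu_is_preempted;"
    "pop  %r11; pop  %r10; pop  %r9;  pop  %r8;"
    "pop  %rdi; pop  %rsi; pop  %rdx; pop  %rcx;"
    "ret;"
    ".popsection");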
2016 Jun 28
11
[PATCH v2 0/4] implement vcpu preempted check
change from v1:
  a simpler definition of default vcpu_is_preempted
  skip machine type check on ppc, and add config
  remove dedicated macro
  add one patch to drop overload of rwsem_spin_on_owner and mutex_spin_on_owner
  add more comments

Thanks to Boqun and Peter for their suggestions.

This patch set aims to fix lock holder preemption issues.

test-case: perf record...
2017 Feb 08
0
[PATCH 2/2] locking/mutex, rwsem: Reduce vcpu_is_preempted() calling frequency
As the vcpu_is_preempted() call is pretty costly compared with the other checks within mutex_spin_on_owner() and rwsem_spin_on_owner(), it is now done at a reduced frequency of once every 256 iterations.

Signed-off-by: Waiman Long <longman at redhat.com>
---
 kernel/locking/mutex.c | 5 ++++-
 kernel/locking/rwsem-x...
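(The idea in compilable form; the primitives are stand-ins and the function is a sketch of the patch's approach, not the actual diff:)

    #include <stdbool.h>

    extern bool owner_running(void);
    extern bool need_resched(void);
    extern bool vcpu_is_preempted(int cpu);
    extern void cpu_relax(void);

    /* Spin while the owner runs, but consult the costly
     * vcpu_is_preempted() only once every 256 iterations. */
    static bool spin_on_owner(int owner_cpu)
    {
        unsigned int loop = 0;

        while (owner_running()) {
            if (need_resched())
                return false;
            /* (++loop & 0xff) == 0 holds once per 256 passes */
            if (!(++loop & 0xff) && vcpu_is_preempted(owner_cpu))
                return false;
            cpu_relax();
        }
        return true;
    }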
2016 Nov 02
13
[PATCH v7 00/11] implement vcpu preempted check
...heck into two patches.
add documentation patch.
add x86 vcpu preempted check patch under xen
add s390 vcpu preempted check patch

change from v3:
  add x86 vcpu preempted check patch

change from v2:
  no code change, fix typos, update some comments

change from v1:
  a simpler definition of default vcpu_is_preempted
  skip machine type check on ppc, and add config
  remove dedicated macro
  add one patch to drop overload of rwsem_spin_on_owner and mutex_spin_on_owner
  add more comments

Thanks to Boqun and Peter for their suggestions.

This patch set aims to fix lock holder preemption issues.

test-case: perf record...
2016 Jul 21
5
[PATCH v3 0/4] implement vcpu preempted check
change from v2:
  no code change, fix typos, update some comments

change from v1:
  a simpler definition of default vcpu_is_preempted
  skip machine type check on ppc, and add config
  remove dedicated macro
  add one patch to drop overload of rwsem_spin_on_owner and mutex_spin_on_owner
  add more comments

Thanks to Boqun and Peter for their suggestions.

This patch set aims to fix lock holder preemption issues.

test-case: perf record...
2017 Feb 15
4
[PATCH v4 0/2] x86/kvm: Reduce vcpu_is_preempted() overhead
v3->v4:
 - Fix x86-32 build error.

v2->v3:
 - Provide an optimized __raw_callee_save___kvm_vcpu_is_preempted() in assembly as suggested by PeterZ.
 - Add a new patch to change the vcpu_is_preempted() argument type to long to ease the writing of the assembly code.

v1->v2:
 - Rerun the fio test on a different system on both bare-metal and a KVM guest. Both sockets were utilized in this test...
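(Sketched from the changelog above, with the symbol names assumed rather than taken from the merged code: once the argument is a long arriving in %rdi, the per-CPU lookup needs no sign-extension step, and the entire check satisfies the callee-save contract directly, clobbering nothing but %rax, with no push/pop traffic at all:)

    asm(
    ".pushsection .text;"
    ".global __raw_callee_save___kvm_vcpu_is_preempted;"
    "__raw_callee_save___kvm_vcpu_is_preempted:"
    "movq __per_cpu_offset(,%rdi,8), %rax;"   /* long cpu: %rdi usable as an index directly */
    "cmpb $0, steal_time_preempted(%rax);"    /* assumed symbol for steal_time.preempted */
    "setne %al;"                              /* boolean result in %al */
    "ret;"
    ".popsection");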
2019 Dec 26
1
[PATCH v2 5/6] KVM: arm64: Add interface to support VCPU preempted check
...from include/linux/acpi.h:13,
   from include/acpi/apei.h:9,
   from include/acpi/ghes.h:5,
   from include/linux/arm_sdei.h:8,
   from arch/arm64/kernel/asm-offsets.c:10:
arch/arm64/include/asm/spinlock.h: In function 'vcpu_is_preempted':
>> arch/arm64/include/asm/spinlock.h:18:9: error: implicit declaration of function 'pv_vcpu_is_preempted'; did you mean 'vcpu_is_preempted'? [-Werror=implicit-function-declaration]
     return pv_vcpu_is_preempted(cpu);
            ^~~~~~~~~~~~~~~~~~~~
vcpu...
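(The error says pv_vcpu_is_preempted() is referenced without a declaration in this configuration. A guess at the kind of guard that would fix it, illustrative rather than the actual follow-up patch:)

    #include <stdbool.h>

    #ifdef CONFIG_PARAVIRT
    extern bool pv_vcpu_is_preempted(int cpu);
    #endif

    /* Only reach for the paravirt helper when it is built in;
     * otherwise a CPU can simply be assumed to be running. */
    static inline bool vcpu_is_preempted(int cpu)
    {
    #ifdef CONFIG_PARAVIRT
        return pv_vcpu_is_preempted(cpu);
    #else
        (void)cpu;
        return false;    /* no pv support: never preempted */
    #endif
    }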
2016 Nov 02
0
[PATCH v7 06/11] x86, paravirt: Add interface to support kvm/xen vcpu preempted check
This is to fix some lock holder preemption issues. Some other lock implementations do a spin loop before acquiring the lock itself. The kernel currently has an interface, bool vcpu_is_preempted(int cpu); it takes a cpu as its parameter and returns true if that cpu is preempted. The kernel can then break out of such spin loops based on the return value of vcpu_is_preempted(). As the kernel already uses this interface, let's support it. To deal with kernel and kvm/xen, add vcpu_is_preempted into struct pv_lock_ops. The...
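(A minimal sketch of the usage pattern described above; try_lock(), lock_holder_cpu() and the other primitives are hypothetical stand-ins:)

    #include <stdbool.h>

    extern bool try_lock(void *lock);
    extern int  lock_holder_cpu(void *lock);    /* hypothetical helper */
    extern bool vcpu_is_preempted(int cpu);
    extern void cpu_relax(void);

    /* Returns true if the lock was taken by spinning; false if the
     * spinner bailed out because the holder's vCPU was preempted and
     * further spinning could make no progress. */
    static bool spin_acquire(void *lock)
    {
        while (!try_lock(lock)) {
            if (vcpu_is_preempted(lock_holder_cpu(lock)))
                return false;    /* fall back to sleeping/queueing */
            cpu_relax();
        }
        return true;
    }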
2016 Nov 16
3
[PATCH v7 06/11] x86, paravirt: Add interface to support kvm/xen vcpu preempted check
On Wed, Nov 16, 2016 at 12:19:09PM +0800, Pan Xinhui wrote:
> Hi, Peter.
> I think we can avoid a function call in a simpler way. How about below
>
> static inline bool vcpu_is_preempted(int cpu)
> {
> 	/* only set in pv case */
> 	if (pv_lock_ops.vcpu_is_preempted)
> 		return pv_lock_ops.vcpu_is_preempted(cpu);
> 	return false;
> }

That is still more expensive. It needs to do an actual load and makes it hard to predict the branch, you'd have to actually wai...
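(One cheap shape that answers this objection, shown with a plain flag for clarity; the real kernel instead patches the code at boot via static keys/paravirt patching, so the native path ends up with no load and no branch at all:)

    #include <stdbool.h>

    extern bool pv_vcpu_is_preempted(int cpu);    /* pv implementation, assumed */

    /* Stand-in for a boot-time patched branch: set once during init and
     * never written again, so it predicts reliably -- unlike testing a
     * function pointer that must be loaded on every call. */
    static bool vcpu_preempt_check_enabled;

    static inline bool vcpu_is_preempted(int cpu)
    {
        if (!vcpu_preempt_check_enabled)
            return false;
        return pv_vcpu_is_preempted(cpu);
    }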