search for: __kvm_vcpu_is_preempted

Displaying 20 results from an estimated 31 matches for "__kvm_vcpu_is_preempted".

2017 Feb 10
3
[PATCH v2] x86/paravirt: Don't make vcpu_is_preempted() a callee-save function
...s reported by perf were as follows: 69.75% 0.59% fio [k] down_write 69.15% 0.01% fio [k] call_rwsem_down_write_failed 67.12% 1.12% fio [k] rwsem_down_write_failed 63.48% 52.77% fio [k] osq_lock 9.46% 7.88% fio [k] __raw_callee_save___kvm_vcpu_is_preempt 3.93% 3.93% fio [k] __kvm_vcpu_is_preempted Making vcpu_is_preempted() a callee-save function has a relatively high cost on x86-64 primarily due to at least one more cacheline of data access from the saving and restoring of registers (8 of them) to and from stack as well as one more level of function call. As vcpu_is_preempted() is called w...
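The "8 of them" above are the x86-64 caller-clobbered general-purpose registers that a callee-save thunk has to preserve so the call site may assume nothing but %rax changes. A simplified sketch of that cost (illustrative only; the real wrapper is generated by the PV_CALLEE_SAVE_REGS_THUNK() macro and the label name here is made up):

/*
 * Illustrative thunk: preserve all eight caller-clobbered GPRs around the
 * call so the paravirt call site sees only %rax clobbered.
 */
asm(
".pushsection .text;"
"example_vcpu_is_preempted_thunk:"
"push %rdi; push %rsi; push %rdx; push %rcx;"
"push %r8;  push %r9;  push %r10; push %r11;"
"call __kvm_vcpu_is_preempted;"   /* the plain C helper does the real work */
"pop  %r11; pop  %r10; pop  %r9;  pop  %r8;"
"pop  %rcx; pop  %rdx; pop  %rsi; pop  %rdi;"
"ret;"
".popsection");

The extra push/pop traffic and the added call/ret level are the overhead the patch removes by letting vcpu_is_preempted() be an ordinary function call.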
2017 Feb 10
2
[PATCH v2] x86/paravirt: Don't make vcpu_is_preempted() a callee-save function
...0.59% fio [k] down_write >> 69.15% 0.01% fio [k] call_rwsem_down_write_failed >> 67.12% 1.12% fio [k] rwsem_down_write_failed >> 63.48% 52.77% fio [k] osq_lock >> 9.46% 7.88% fio [k] __raw_callee_save___kvm_vcpu_is_preempt >> 3.93% 3.93% fio [k] __kvm_vcpu_is_preempted >> > Thinking about this again, wouldn't something like the below also work? > > > diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c > index 099fcba4981d..6aa33702c15c 100644 > --- a/arch/x86/kernel/kvm.c > +++ b/arch/x86/kernel/kvm.c > @@ -589,6 +589,7 @@...
2017 Feb 10
0
[PATCH v2] x86/paravirt: Don't make vcpu_is_preempted() a callee-save function
...n_write >>> 69.15% 0.01% fio [k] call_rwsem_down_write_failed >>> 67.12% 1.12% fio [k] rwsem_down_write_failed >>> 63.48% 52.77% fio [k] osq_lock >>> 9.46% 7.88% fio [k] __raw_callee_save___kvm_vcpu_is_preempt >>> 3.93% 3.93% fio [k] __kvm_vcpu_is_preempted >>> >> Thinking about this again, wouldn't something like the below also work? >> >> >> diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c >> index 099fcba4981d..6aa33702c15c 100644 >> --- a/arch/x86/kernel/kvm.c >> +++ b/arch/x86/kerne...
2017 Feb 15
0
[PATCH v3 2/2] x86/kvm: Provide optimized version of vcpu_is_preempted() for x86-64
...s reported by perf were as follows: 69.75% 0.59% fio [k] down_write 69.15% 0.01% fio [k] call_rwsem_down_write_failed 67.12% 1.12% fio [k] rwsem_down_write_failed 63.48% 52.77% fio [k] osq_lock 9.46% 7.88% fio [k] __raw_callee_save___kvm_vcpu_is_preempt 3.93% 3.93% fio [k] __kvm_vcpu_is_preempted Making vcpu_is_preempted() a callee-save function has a relatively high cost on x86-64 primarily due to at least one more cacheline of data access from the saving and restoring of registers (8 of them) to and from stack as well as one more level of function call. To reduce this performance overhe...
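For orientation, the optimized x86-64 helper this patch introduces boils down to one per-cpu pointer load plus a byte test, with no register saving at all. The sketch below is reconstructed from the shape of the code that eventually landed rather than quoted from the patch; STEAL_TIME_PREEMPTED_OFF stands in for the asm-offsets-generated offset of the preempted field in struct kvm_steal_time:

asm(
".pushsection .text;"
".global __raw_callee_save___kvm_vcpu_is_preempted;"
".type __raw_callee_save___kvm_vcpu_is_preempted, @function;"
"__raw_callee_save___kvm_vcpu_is_preempted:"
"movq __per_cpu_offset(,%rdi,8), %rax;"                 /* per-cpu base for cpu in %rdi */
"cmpb $0, steal_time+STEAL_TIME_PREEMPTED_OFF(%rax);"   /* test steal_time.preempted */
"setne %al;"                                            /* return value in %al */
"ret;"
".popsection");

Only %rax is clobbered, so the callee-save contract is kept without touching the stack.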
2017 Feb 15
0
[PATCH v4 2/2] x86/kvm: Provide optimized version of vcpu_is_preempted() for x86-64
...s reported by perf were as follows: 69.75% 0.59% fio [k] down_write 69.15% 0.01% fio [k] call_rwsem_down_write_failed 67.12% 1.12% fio [k] rwsem_down_write_failed 63.48% 52.77% fio [k] osq_lock 9.46% 7.88% fio [k] __raw_callee_save___kvm_vcpu_is_preempt 3.93% 3.93% fio [k] __kvm_vcpu_is_preempted Making vcpu_is_preempted() a callee-save function has a relatively high cost on x86-64 primarily due to at least one more cacheline of data access from the saving and restoring of registers (8 of them) to and from stack as well as one more level of function call. To reduce this performance overhe...
2017 Feb 15
4
[PATCH v4 0/2] x86/kvm: Reduce vcpu_is_preempted() overhead
v3->v4: - Fix x86-32 build error. v2->v3: - Provide an optimized __raw_callee_save___kvm_vcpu_is_preempted() in assembly as suggested by PeterZ. - Add a new patch to change vcpu_is_preempted() argument type to long to ease the writing of the assembly code. v1->v2: - Rerun the fio test on a different system on both bare-metal and a KVM guest. Both sockets were utilized in this test....
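On the argument-type change: __kvm_vcpu_is_preempted() previously took an int (see the v7 series further down in these results), so the cpu number arrived in %edi and the hand-written assembly would have to widen it (the "movslq %ed..." visible in the v2-era review quoted later) before scaling it into the 8-byte __per_cpu_offset[] array. With a long it arrives already usable in the full %rdi. A reconstructed sketch of the C fallback with the new signature, not the patch text itself:

static bool __kvm_vcpu_is_preempted(long cpu)
{
        struct kvm_steal_time *src = &per_cpu(steal_time, cpu);

        return !!src->preempted;
}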
2017 Feb 15
3
[PATCH v3 0/2] x86/kvm: Reduce vcpu_is_preempted() overhead
v2->v3: - Provide an optimized __raw_callee_save___kvm_vcpu_is_preempted() in assembly as suggested by PeterZ. - Add a new patch to change vcpu_is_preempted() argument type to long to ease the writing of the assembly code. v1->v2: - Rerun the fio test on a different system on both bare-metal and a KVM guest. Both sockets were utilized in this test....
2017 Feb 08
4
[PATCH 1/2] x86/paravirt: Don't make vcpu_is_preempted() a callee-save function
...k on a 2-socket x86-64 system, the %CPU times as reported by perf were as follows: 71.27% 0.28% fio [k] down_write 70.99% 0.01% fio [k] call_rwsem_down_write_failed 69.43% 1.18% fio [k] rwsem_down_write_failed 65.51% 54.57% fio [k] osq_lock 9.72% 7.99% fio [k] __raw_callee_save___kvm_vcpu_is_preempted 4.16% 4.16% fio [k] __kvm_vcpu_is_preempted So making vcpu_is_preempted() a callee-save function has a pretty high cost associated with it. As vcpu_is_preempted() is called within the spinlock, mutex and rwsem slowpaths, there isn't much to gain by making it callee-save. So it is now chan...
2017 Feb 10
0
[PATCH v2] x86/paravirt: Don't make vcpu_is_preempted() a callee-save function
...> > 69.75% 0.59% fio [k] down_write > 69.15% 0.01% fio [k] call_rwsem_down_write_failed > 67.12% 1.12% fio [k] rwsem_down_write_failed > 63.48% 52.77% fio [k] osq_lock > 9.46% 7.88% fio [k] __raw_callee_save___kvm_vcpu_is_preempt > 3.93% 3.93% fio [k] __kvm_vcpu_is_preempted > Thinking about this again, wouldn't something like the below also work? diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c index 099fcba4981d..6aa33702c15c 100644 --- a/arch/x86/kernel/kvm.c +++ b/arch/x86/kernel/kvm.c @@ -589,6 +589,7 @@ static void kvm_wait(u8 *ptr, u8 val)...
2016 Nov 15
2
[PATCH v7 06/11] x86, paravirt: Add interface to support kvm/xen vcpu preempted check
..._time_ops.steal_clock = kvm_steal_clock; -#ifdef CONFIG_PARAVIRT_SPINLOCKS - pv_lock_ops.vcpu_is_preempted = kvm_vcpu_is_preempted; -#endif } if (kvm_para_has_feature(KVM_FEATURE_PV_EOI)) @@ -604,6 +592,14 @@ static void kvm_wait(u8 *ptr, u8 val) local_irq_restore(flags); } +static bool __kvm_vcpu_is_preempted(int cpu) +{ + struct kvm_steal_time *src = &per_cpu(steal_time, cpu); + + return !!src->preempted; +} +PV_CALLEE_SAVE_REGS_THUNK(__kvm_vcpu_is_preempted); + /* * Setup pv_lock_ops to exploit KVM_FEATURE_PV_UNHALT if present. */ @@ -620,6 +616,12 @@ void __init kvm_spinlock_init(void)...
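The PV_CALLEE_SAVE_REGS_THUNK(__kvm_vcpu_is_preempted) line above is what emits the __raw_callee_save___kvm_vcpu_is_preempted wrapper that shows up in the perf profiles earlier in these results. For orientation, a hedged sketch of how the hook is then installed; this is reconstructed from memory of the code that went upstream, the stub name is hypothetical, and it is not the hunk elided by the excerpt:

/* Hypothetical stand-alone stub; in the kernel this assignment lives in
 * kvm_spinlock_init(), and generic vcpu_is_preempted() dispatches to it
 * through pv_lock_ops. */
static void __init kvm_spinlock_init_sketch(void)
{
        pv_lock_ops.vcpu_is_preempted = PV_CALLEE_SAVE(__kvm_vcpu_is_preempted);
}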
2017 Feb 14
3
[PATCH v2] x86/paravirt: Don't make vcpu_is_preempted() a callee-save function
On Mon, Feb 13, 2017 at 05:34:01PM -0500, Waiman Long wrote: > It is the address of &steal_time that will exceed the 32-bit limit. That seems extremely unlikely. That would mean we have more than 4G worth of per-cpu variables declared in the kernel.
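For context on why that is unlikely: the link-time address of a DEFINE_PER_CPU variable is only its offset inside the .data..percpu section; the real per-CPU base is added at run time. A rough model of the lookup for this discussion only (hypothetical helper; the actual per_cpu() goes through SHIFT_PERCPU_PTR()):

static struct kvm_steal_time *steal_time_ptr_sketch(int cpu)
{
        /*
         * &steal_time is a small section offset, so it only approaches the
         * 32-bit displacement limit if the kernel declares on the order of
         * 4G of per-cpu data -- the point being made in the reply above.
         */
        return (void *)((unsigned long)&steal_time + __per_cpu_offset[cpu]);
}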
2017 Feb 13
4
[PATCH v2] x86/paravirt: Don't make vcpu_is_preempted() a callee-save function
On Fri, Feb 10, 2017 at 12:00:43PM -0500, Waiman Long wrote: > >> +asm( > >> +".pushsection .text;" > >> +".global __raw_callee_save___kvm_vcpu_is_preempted;" > >> +".type __raw_callee_save___kvm_vcpu_is_preempted, @function;" > >> +"__raw_callee_save___kvm_vcpu_is_preempted:" > >> +FRAME_BEGIN > >> +"push %rdi;" > >> +"push %rdx;" > >> +"movslq %ed...