search for: preempted_offset

Displaying 5 results from an estimated 5 matches for "preempted_offset".

2017 Feb 16
1
[PATCH v4 2/2] x86/kvm: Provide optimized version of vcpu_is_preempted() for x86-64
> ...version for x86-64 to avoid 8 64-bit register saving and
> + * restoring to/from the stack. It is assumed that the preempted value
> + * is at an offset of 16 from the beginning of the kvm_steal_time structure
> + * which is verified by the BUILD_BUG_ON() macro below.
> + */
> +#define PREEMPTED_OFFSET 16

As per Andrew's suggestion, the 'right' way is something like so.

---
 asm-offsets_64.c |   11 +++++++++++
 kvm.c            |   14 ++++----------
 2 files changed, 15 insertions(+), 10 deletions(-)

--- a/arch/x86/kernel/asm-offsets_64.c
+++ b/arch/x86/kernel/asm-offsets_64.c
@@ -...
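The diff body above is cut off at the hunk marker. For orientation, a minimal standalone sketch of the asm-offsets idea being suggested: let the compiler compute offsetof() on the real structure and emit it as a build-time constant, so the assembly never hard-codes a magic 16. The struct layout mirrors the 2017-era x86 kvm_steal_time, and the OFFSET() macro is a user-space stand-in for the kernel's kbuild.h helper; treat both as illustrative, not as the actual patch contents.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Layout mirrors the 2017-era x86 kvm_steal_time; illustrative only. */
struct kvm_steal_time {
	uint64_t steal;       /* offset 0  */
	uint32_t version;     /* offset 8  */
	uint32_t flags;       /* offset 12 */
	uint8_t  preempted;   /* offset 16 -- the byte the asm tests */
	uint8_t  u8_pad[3];
	uint32_t pad[11];
};

/*
 * User-space stand-in for the kernel's OFFSET() helper: the kernel
 * build runs code like this and captures the emitted constants in
 * asm-offsets.h, so hand-written assembly can reference the field
 * symbolically instead of via PREEMPTED_OFFSET.
 */
#define OFFSET(sym, str, mem) \
	printf("#define %s %zu\n", #sym, offsetof(struct str, mem))

int main(void)
{
	/* Prints: #define KVM_STEAL_TIME_preempted 16 */
	OFFSET(KVM_STEAL_TIME_preempted, kvm_steal_time, preempted);
	return 0;
}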
2017 Feb 15
3
[PATCH v3 0/2] x86/kvm: Reduce vcpu_is_preempted() overhead
v2->v3:
 - Provide an optimized __raw_callee_save___kvm_vcpu_is_preempted() in assembly as suggested by PeterZ.
 - Add a new patch to change vcpu_is_preempted() argument type to long to ease the writing of the assembly code.
v1->v2:
 - Rerun the fio test on a different system on both bare-metal and a KVM guest. Both sockets were utilized in this test.
 - The commit log was
2017 Feb 15
4
[PATCH v4 0/2] x86/kvm: Reduce vcpu_is_preempted() overhead
v3->v4:
 - Fix x86-32 build error.
v2->v3:
 - Provide an optimized __raw_callee_save___kvm_vcpu_is_preempted() in assembly as suggested by PeterZ.
 - Add a new patch to change vcpu_is_preempted() argument type to long to ease the writing of the assembly code.
v1->v2:
 - Rerun the fio test on a different system on both bare-metal and a KVM guest. Both sockets were
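The "argument type to long" item matters for the assembly quoted in the entries below. A sketch of the reasoning, with an illustrative prototype (nothing here is quoted from the patches themselves): in the x86-64 calling convention a 32-bit int argument leaves the upper half of %rdi undefined, so the hand-written thunk would have to extend it before using it as an index; with a long, the full register is valid on entry.

#include <stdbool.h>

/*
 * With 'int cpu' the thunk would need an explicit extension first:
 *
 *	movslq	%edi, %rdi			# needed for 'int cpu'
 *	movq	__per_cpu_offset(,%rdi,8), %rax
 *
 * With 'long cpu' all 64 bits of %rdi are valid on entry, and the
 * scaled addressing mode can consume the register directly.
 */
bool vcpu_is_preempted(long cpu);	/* illustrative prototype only */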
2017 Feb 15
0
[PATCH v3 2/2] x86/kvm: Provide optimized version of vcpu_is_preempted() for x86-64
...
+/*
+ * Hand-optimize version for x86-64 to avoid 8 64-bit register saving and
+ * restoring to/from the stack. It is assumed that the preempted value
+ * is at an offset of 16 from the beginning of the kvm_steal_time structure
+ * which is verified by the BUILD_BUG_ON() macro below.
+ */
+#define PREEMPTED_OFFSET 16
+asm(
+".pushsection .text;"
+".global __raw_callee_save___kvm_vcpu_is_preempted;"
+".type __raw_callee_save___kvm_vcpu_is_preempted, @function;"
+"__raw_callee_save___kvm_vcpu_is_preempted:"
+"movq __per_cpu_offset(,%rdi,8), %rax;"
+"cmpb $...
2017 Feb 15
0
[PATCH v4 2/2] x86/kvm: Provide optimized version of vcpu_is_preempted() for x86-64
...
+/*
+ * Hand-optimize version for x86-64 to avoid 8 64-bit register saving and
+ * restoring to/from the stack. It is assumed that the preempted value
+ * is at an offset of 16 from the beginning of the kvm_steal_time structure
+ * which is verified by the BUILD_BUG_ON() macro below.
+ */
+#define PREEMPTED_OFFSET 16
+asm(
+".pushsection .text;"
+".global __raw_callee_save___kvm_vcpu_is_preempted;"
+".type __raw_callee_save___kvm_vcpu_is_preempted, @function;"
+"__raw_callee_save___kvm_vcpu_is_preempted:"
+"movq __per_cpu_offset(,%rdi,8), %rax;"
+"cmpb $...
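Both asm excerpts above break off inside the cmpb operand. Based on the quoted PREEMPTED_OFFSET comment, a plausible reconstruction of the rest of the thunk follows; the steal_time symbol and the exact operand spelling are assumptions, not text recovered from the truncated messages.

/*
 * Reconstruction sketch (kernel context, not standalone): %rdi holds
 * the cpu number, movq loads that CPU's per-cpu base from
 * __per_cpu_offset, cmpb tests the 'preempted' byte inside the
 * per-cpu steal_time copy, and setne returns 0/1 in %al.
 */
asm(
".pushsection .text;"
".global __raw_callee_save___kvm_vcpu_is_preempted;"
".type __raw_callee_save___kvm_vcpu_is_preempted, @function;"
"__raw_callee_save___kvm_vcpu_is_preempted:"
"movq __per_cpu_offset(,%rdi,8), %rax;"
"cmpb $0, steal_time+16(%rax);"		/* 16 == PREEMPTED_OFFSET */
"setne %al;"
"ret;"
".popsection");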