2017 Oct 04 · 0 · [PATCH 09/13] x86/asm: Convert ALTERNATIVE*() assembler macros to preprocessor macros
...dex ed4bc9731cbb..a0c5f9e8226c 100644
--- a/arch/x86/entry/vdso/vdso32/system_call.S
+++ b/arch/x86/entry/vdso/vdso32/system_call.S
@@ -48,15 +48,15 @@ __kernel_vsyscall:
CFI_ADJUST_CFA_OFFSET 4
CFI_REL_OFFSET ebp, 0
- #define SYSENTER_SEQUENCE "movl %esp, %ebp; sysenter"
- #define SYSCALL_SEQUENCE "movl %ecx, %ebp; syscall"
+ #define SYSENTER_SEQUENCE movl %esp, %ebp; sysenter
+ #define SYSCALL_SEQUENCE movl %ecx, %ebp; syscall
#ifdef CONFIG_X86_64
/* If SYSENTER (Intel) or SYSCALL32 (AMD) is available, use it. */
- ALTERNATIVE_2 "", SYSENTER_SEQUENCE, X86_FEATURE_SY...
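The quotes around the two sequences are dropped because the same series turns ALTERNATIVE_2 from a GAS .macro, which strips the quotes from its string arguments when substituting them, into a C preprocessor macro, which pastes its arguments through verbatim. Below is a minimal sketch of that quoting difference; the file name and the TOY_ALTERNATIVE stand-in are invented for illustration and are not the kernel's real ALTERNATIVE_2.

/* quoting_demo.S: toy illustration, preprocessed and assembled with
 *   gcc -m32 -c quoting_demo.S
 */

/* Unquoted: the define expands to bare instructions. */
#define SYSENTER_SEQUENCE	movl %esp, %ebp; sysenter

/* Toy stand-in for a preprocessor-based alternative: it simply emits
 * the replacement sequence in place. */
#define TOY_ALTERNATIVE(newinstr)	newinstr

	.text
	.globl	demo
demo:
	TOY_ALTERNATIVE(SYSENTER_SEQUENCE)	/* movl %esp, %ebp; sysenter */
	ret

/* With the old quoted define, the expansion would contain the string
 * literal "movl %esp, %ebp; sysenter"; a GAS .macro strips those quotes
 * on substitution, but the C preprocessor passes them through and the
 * assembler then rejects the line. */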
2017 Oct 04 · 31 · [PATCH 00/13] x86/paravirt: Make pv ops code generation more closely match reality
This changes the pv ops code generation to more closely match reality.
For example, instead of:
callq *0xffffffff81e3a400 (pv_irq_ops.save_fl)
vmlinux will now show:
pushfq
pop %rax
nop
nop
nop
nop
nop
which is what the runtime version of the code will show in most cases.
This idea was suggested by Andy Lutomirski.
The benefits are:
- For the most common runtime cases
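For context on the example above: pushfq followed by pop %rax is the native way to read RFLAGS into %rax, which is what pv_irq_ops.save_fl amounts to on bare metal, and the five one-byte nops plausibly pad that 2-byte pair out to the 7 bytes of the indirect call it replaces (ff 14 25 plus a 4-byte absolute address). The fragment below is a rough, assemblable sketch of the two shapes of such a call site; the file and symbol names are invented and the byte counts are back-of-the-envelope arithmetic, not figures quoted from the series.

/* pv_site_demo.S: toy illustration, assembled with
 *   gcc -c pv_site_demo.S
 */
	.text
	.globl	site_before, site_after

/* Old codegen: a 7-byte indirect call through a pointer in the pv ops
 * structure (ff 14 25 + 32-bit absolute address). */
site_before:
	callq	*toy_save_fl_ptr

/* New codegen: the native RFLAGS read emitted directly, padded with
 * one-byte nops to fill out the rest of the original call site. */
site_after:
	pushfq
	pop	%rax
	nop
	nop
	nop
	nop
	nop

	.data
toy_save_fl_ptr:
	.quad	0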