Displaying 20 results from an estimated 20 matches for "asm_call_constraint".
2017 Oct 04
0
[PATCH 07/13] x86/paravirt: Simplify ____PVOP_CALL()
...PVOP_TEST_NULL(op); \
- /* This is 32-bit specific, but is okay in 64-bit */ \
- /* since this condition will never hold */ \
- if (sizeof(rettype) > sizeof(unsigned long)) { \
- asm volatile(pre \
- paravirt_alt(PARAVIRT_CALL) \
- post \
- : call_clbr, ASM_CALL_CONSTRAINT \
- : paravirt_type(op), \
- paravirt_clobber(clbr), \
- ##__VA_ARGS__ \
- : "memory", "cc" extra_clbr); \
- __ret = (rettype)((((u64)__edx) << 32) | __eax); \
- } else { \
- asm volatile(pre \
- paravirt_alt(PAR...
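For context: the ASM_CALL_CONSTRAINT in the output list above is a stack-pointer
output operand ("+r" (current_stack_pointer) in the kernel headers) which tells
GCC that the asm statement contains a call, so the asm is not scheduled before
the frame pointer is set up. A minimal sketch of the pattern; the callee and
wrapper names below are invented for illustration:

register unsigned long current_stack_pointer asm("rsp");
#define ASM_CALL_CONSTRAINT "+r" (current_stack_pointer)

void do_something(void);                     /* hypothetical callee */

static inline void call_from_asm(void)
{
        /* Listing the stack pointer as an in/out operand is what keeps the
         * compiler from placing this asm before the frame is ready. */
        asm volatile("call do_something"
                     : ASM_CALL_CONSTRAINT
                     :
                     : "memory", "cc",
                       "rax", "rcx", "rdx", "rsi", "rdi",
                       "r8", "r9", "r10", "r11");    /* caller-saved regs */
}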
2018 Dec 17
0
[PATCH v3 04/12] Revert "x86/paravirt: Work around GCC inlining bugs when compiling paravirt ops"
....
@@ -509,7 +531,7 @@ int paravirt_disable_iospace(void);
/* since this condition will never hold */ \
if (sizeof(rettype) > sizeof(unsigned long)) { \
asm volatile(pre \
- paravirt_call \
+ paravirt_alt(PARAVIRT_CALL) \
post \
: call_clbr, ASM_CALL_CONSTRAINT \
: paravirt_type(op), \
@@ -519,7 +541,7 @@ int paravirt_disable_iospace(void);
__ret = (rettype)((((u64)__edx) << 32) | __eax); \
} else { \
asm volatile(pre \
- paravirt_call \
+ paravirt_alt(PARAVIRT_CALL) \
post \
: c...
2017 Oct 04
0
[PATCH 08/13] x86/paravirt: Clean up paravirt_types.h
...; \
- PVOP_CALL_ARGS; \
- PVOP_TEST_NULL(op); \
+({ \
+ rettype __ret; \
+ PVOP_CALL_ARGS; \
+ PVOP_TEST_NULL(op); \
asm volatile(pre \
- paravirt_alt(PARAVIRT_CALL) \
+ PV_SITE(PV_CALL_STR) \
post \
: call_clbr, ASM_CALL_CONSTRAINT \
- : paravirt_type(op), \
- paravirt_clobber(clbr), \
+ : PV_INPUT_CONSTRAINTS(op, clbr), \
##__VA_ARGS__ \
: "memory", "cc" extra_clbr); \
- if (IS_ENABLED(CONFIG_X86_32) && \
- sizeof(rettype) > sizeof(un...
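The check the snippet cuts off at, sizeof(rettype) > sizeof(unsigned long) on
32-bit, guards the edx:eax recombination visible in the other hunks: on 32-bit
x86 a 64-bit return value comes back split across the edx:eax register pair.
Shown in isolation (variable names mirror the macro, the helper is made up):

#include <stdint.h>

/* edx holds the high 32 bits and eax the low 32 bits of a 64-bit return value
 * on 32-bit x86; the macro stitches the halves back together. */
static uint64_t combine_edx_eax(uint32_t __edx, uint32_t __eax)
{
        return ((uint64_t)__edx << 32) | __eax;
}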
2017 Oct 04
31
[PATCH 00/13] x86/paravirt: Make pv ops code generation more closely match reality
This changes the pv ops code generation to more closely match reality.
For example, instead of:
callq *0xffffffff81e3a400 (pv_irq_ops.save_fl)
vmlinux will now show:
pushfq
pop %rax
nop
nop
nop
nop
nop
which is what the runtime version of the code will show in most cases.
This idea was suggested by Andy Lutomirski.
The benefits are:
- For the most common runtime cases
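The "callq *0xffffffff81e3a400" in the example above is an indirect call through
the pv_irq_ops function-pointer table. A rough model of that "before" state (the
struct is trimmed to the relevant members, and the real kernel wraps the pointers
and hides the call inside the PVOP macros rather than calling them directly):

struct pv_irq_ops {
        unsigned long (*save_fl)(void);
        void (*restore_fl)(unsigned long flags);
        /* further members elided */
};

extern struct pv_irq_ops pv_irq_ops;

static inline unsigned long arch_local_save_flags(void)
{
        /* In this naive model the compiler emits "callq *pv_irq_ops.save_fl",
         * which is what vmlinux showed before this series. */
        return pv_irq_ops.save_fl();
}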
2018 Dec 17
3
[PATCH v3 00/12] x86, kbuild: revert macrofying inline assembly code
This series reverts the in-kernel workarounds for inlining issues.
The commit description of 77b0bf55bc67 mentioned
"We also hope that GCC will eventually get fixed,..."
Now, GCC provides a solution.
https://gcc.gnu.org/onlinedocs/gcc/Extended-Asm.html
explains the new "asm inline" syntax.
The performance issue will eventually be solved.
[About Code cleanups]
I know Nadav
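The "asm inline" syntax referenced above (available from GCC 9) makes the compiler
count the asm statement as minimum size in its inlining heuristics, so a large asm
body no longer discourages inlining of the containing function, which is what the
reverted macro workarounds were approximating. A hedged sketch; the function itself
is made up:

static inline unsigned long arch_read_flags(void)
{
        unsigned long flags;

        /* "asm inline" tells GCC to treat this statement as minimum size when
         * deciding whether to inline the containing function, no matter how
         * long the asm body actually is. */
        asm inline volatile("pushf\n\t"
                            "pop %0"
                            : "=rm" (flags)
                            :
                            : "memory");
        return flags;
}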
2018 Dec 16
1
[PATCH v2] x86, kbuild: revert macrofying inline assembly code
....
@@ -509,7 +531,7 @@ int paravirt_disable_iospace(void);
/* since this condition will never hold */ \
if (sizeof(rettype) > sizeof(unsigned long)) { \
asm volatile(pre \
- paravirt_call \
+ paravirt_alt(PARAVIRT_CALL) \
post \
: call_clbr, ASM_CALL_CONSTRAINT \
: paravirt_type(op), \
@@ -519,7 +541,7 @@ int paravirt_disable_iospace(void);
__ret = (rettype)((((u64)__edx) << 32) | __eax); \
} else { \
asm volatile(pre \
- paravirt_call \
+ paravirt_alt(PARAVIRT_CALL) \
post \
: c...
2017 Oct 04
1
[PATCH 11/13] x86/paravirt: Add paravirt alternatives infrastructure
...pre, post, ##__VA_ARGS__)
+#define ____PVOP_ALT_CALL(rettype, native, op, clbr, call_clbr, \
+ extra_clbr, ...) \
+({ \
+ rettype __ret; \
+ PVOP_CALL_ARGS; \
+ PVOP_TEST_NULL(op); \
+ asm volatile(PV_ALT_SITE(native, PV_CALL_STR) \
+ : call_clbr, ASM_CALL_CONSTRAINT \
+ : PV_INPUT_CONSTRAINTS(op, clbr), \
+ ##__VA_ARGS__ \
+ : "memory", "cc" extra_clbr); \
+ if (IS_ENABLED(CONFIG_X86_32) && \
+ sizeof(rettype) > sizeof(unsigned long)) \
+ __ret = (rettype)((((u64)__edx) << 32) | __eax...
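The PV_ALT_SITE() used above emits the native instruction sequence at the call site
and records the site for later patching. A simplified illustration of that shape
(the section name, labels, and padding size are invented here, and the real macro
records more metadata than just an address):

static inline unsigned long save_fl(void)
{
        unsigned long flags;

        asm volatile("661:\n\t"
                     "pushf\n\t"
                     "pop %0\n\t"                         /* native code */
                     "662:\n\t"
                     ".skip 5 - (662b - 661b), 0x90\n\t"  /* pad to call size */
                     ".pushsection .pv_alt_sites, \"a\"\n\t"
                     ".quad 661b\n\t"                      /* record the site */
                     ".popsection"
                     : "=r" (flags)
                     :
                     : "memory");
        return flags;
}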
2018 Dec 13
2
[PATCH] kbuild, x86: revert macros in extended asm workarounds
....
@@ -509,7 +531,7 @@ int paravirt_disable_iospace(void);
/* since this condition will never hold */ \
if (sizeof(rettype) > sizeof(unsigned long)) { \
asm volatile(pre \
- paravirt_call \
+ paravirt_alt(PARAVIRT_CALL) \
post \
: call_clbr, ASM_CALL_CONSTRAINT \
: paravirt_type(op), \
@@ -519,7 +541,7 @@ int paravirt_disable_iospace(void);
__ret = (rettype)((((u64)__edx) << 32) | __eax); \
} else { \
asm volatile(pre \
- paravirt_call \
+ paravirt_alt(PARAVIRT_CALL) \
post \
: c...
2023 Jun 08
3
[RFC PATCH 0/3] x86/paravirt: Get rid of paravirt patching
This is a small series getting rid of paravirt patching by switching
completely to alternative patching for the same functionality.
The basic idea is to add the capability to switch from indirect to
direct calls via a special alternative patching option.
This removes _some_ of the paravirt macro maze, but most of it needs
to stay due to the need to hide the call instructions from the
compiler
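The "special alternative patching option" described above boils down to rewriting
an indirect call site into a direct call once the target is known. Conceptually
(this sketch is invented for illustration; the kernel performs the rewrite through
its alternatives machinery and text_poke(), not a plain memcpy):

#include <stdint.h>
#include <string.h>

struct patch_site {
        uint8_t *insn;   /* start of the indirect-call instruction */
        uint8_t  len;    /* its length in bytes, at least 5 */
};

static void patch_to_direct_call(struct patch_site *site, void *target)
{
        uint8_t buf[16];
        /* rel32 of a direct call is relative to the end of the 5-byte insn */
        int32_t rel = (int32_t)((uint8_t *)target - (site->insn + 5));

        buf[0] = 0xe8;                          /* call rel32 opcode */
        memcpy(&buf[1], &rel, sizeof(rel));
        memset(&buf[5], 0x90, site->len - 5);   /* pad the remainder with NOPs */

        memcpy(site->insn, buf, site->len);     /* the kernel uses text_poke() */
}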
2018 Mar 13
32
[PATCH v2 00/27] x86: PIE support and option to extend KASLR randomization
Changes:
- patch v2:
- Adapt patch to work post KPTI and compiler changes
- Redo all performance testing with latest configs and compilers
- Simplify mov macro on PIE (MOVABS now)
- Reduce GOT footprint
- patch v1:
- Simplify ftrace implementation.
- Use gcc mstack-protector-guard-reg=%gs with PIE when possible.
- rfc v3:
- Use --emit-relocs instead of -pie to reduce
2017 Oct 04
28
x86: PIE support and option to extend KASLR randomization
These patches make the changes necessary to build the kernel as Position
Independent Executable (PIE) on x86_64. A PIE kernel can be relocated below
the top 2G of the virtual address space. This allows the KASLR randomization
range to be optionally extended from 1G to 3G.
Thanks a lot to Ard Biesheuvel & Kees Cook for their feedback on compiler
changes, PIE support and KASLR in general. Thanks to
2018 May 23
33
[PATCH v3 00/27] x86: PIE support and option to extend KASLR randomization
Changes:
- patch v3:
- Update on message to describe longer term PIE goal.
- Minor change on ftrace if condition.
- Changed code using xchgq.
- patch v2:
- Adapt patch to work post KPTI and compiler changes
- Redo all performance testing with latest configs and compilers
- Simplify mov macro on PIE (MOVABS now)
- Reduce GOT footprint
- patch v1:
- Simplify ftrace
2017 Oct 11
32
[PATCH v1 00/27] x86: PIE support and option to extend KASLR randomization
Changes:
- patch v1:
- Simplify ftrace implementation.
- Use gcc mstack-protector-guard-reg=%gs with PIE when possible.
- rfc v3:
- Use --emit-relocs instead of -pie to reduce dynamic relocation space on
mapped memory. It also simplifies the relocation process.
- Move the start of the module section next to the kernel. Remove the need for
-mcmodel=large on modules. Extends