search for: static_key_init_nop

Displaying 14 results from an estimated 14 matches for "static_key_init_nop".

2019 Sep 26
2
An error of asm goto occurred while compiling Linux kernel 5.3
...release. This error said "invalid operand for inline asm constraint 'i'" in arch/x86/include/asm/jump_label.h. The source code is static __always_inline bool arch_static_branch(struct static_key *key, bool branch) { asm_volatile_goto("1:" ".byte " __stringify(STATIC_KEY_INIT_NOP) "\n\t" ".pushsection __jump_table, \"aw\" \n\t" _ASM_ALIGN "\n\t" ".long 1b - ., %l[l_yes] - . \n\t" _ASM_PTR "%c0 + %c1 - .\n\t" ".popsection \n\t" : : "i" (key), "i" (branch) : : l_yes); return false; l_...
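For readability, the excerpted function reassembles to the following. This is a reconstruction from the snippet above; the closing "l_yes: return true;" is filled in from the standard jump-label pattern because the excerpt is cut off:

static __always_inline bool arch_static_branch(struct static_key *key, bool branch)
{
	/* Emit a 5-byte NOP at the branch site, and record the site address,
	 * the target label and the key in the __jump_table section so the
	 * NOP can later be live-patched into a jump. */
	asm_volatile_goto("1:"
		".byte " __stringify(STATIC_KEY_INIT_NOP) "\n\t"
		".pushsection __jump_table, \"aw\" \n\t"
		_ASM_ALIGN "\n\t"
		".long 1b - ., %l[l_yes] - . \n\t"
		_ASM_PTR "%c0 + %c1 - .\n\t"
		".popsection \n\t"
		: : "i" (key), "i" (branch) : : l_yes);

	return false;
l_yes:
	return true;
}

The "invalid operand for inline asm constraint 'i'" diagnostic means the compiler could not reduce the "i" (immediate-constant) operands, i.e. the key address and branch flag passed as %c0 and %c1, to constants at some call site.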
2018 Dec 19
0
[PATCH v3 00/12] x86, kbuild: revert macrofying inline assembly code
...mp-label kernel +#endif + #define JUMP_LABEL_NOP_SIZE 5 #ifdef CONFIG_X86_64 @@ -53,26 +66,37 @@ static __always_inline bool arch_static_branch_jump(struct static_key *key, bool #else /* __ASSEMBLY__ */ -.macro STATIC_BRANCH_NOP l_yes:req key:req branch:req -.Lstatic_branch_nop_\@: - .byte STATIC_KEY_INIT_NOP -.Lstatic_branch_no_after_\@: +.macro STATIC_JUMP_IF_TRUE target, key, def +.Lstatic_jump_\@: + .if \def + /* Equivalent to "jmp.d32 \target" */ + .byte 0xe9 + .long \target - .Lstatic_jump_after_\@ +.Lstatic_jump_after_\@: + .else + .byte STATIC_KEY_INIT_NOP + .endif .pushsection __...
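For context, the 5-byte NOP or jmp.d32 emitted by this macro (or by arch_static_branch() above) is the instruction that a C-level static-branch test compiles down to. A minimal sketch of the consumer side follows; the key and functions are made up purely for illustration:

#include <linux/jump_label.h>
#include <linux/printk.h>

/* Hypothetical example key; real users define one per feature. */
static DEFINE_STATIC_KEY_FALSE(my_feature_key);

static void my_hot_path(void)
{
	/* Compiles to the 5-byte NOP while the key is disabled; the NOP is
	 * live-patched into a jump to the unlikely block when enabled. */
	if (static_branch_unlikely(&my_feature_key))
		pr_info("feature enabled\n");
}

static void my_feature_enable(void)
{
	/* Flips the key and patches every recorded branch site. */
	static_branch_enable(&my_feature_key);
}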
2018 Dec 17
3
[PATCH v3 00/12] x86, kbuild: revert macrofying inline assembly code
This series reverts the in-kernel workarounds for inlining issues. The commit description of 77b0bf55bc67 mentioned "We also hope that GCC will eventually get fixed,..." Now, GCC provides a solution: https://gcc.gnu.org/onlinedocs/gcc/Extended-Asm.html explains the new "asm inline" syntax. The performance issue will eventually be solved. [About Code cleanups] I know Nadav
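The linked GCC document describes the "asm inline" qualifier (GCC 9 and later): the inliner treats such an asm statement as having minimum size, so a function wrapping a long asm template is no longer judged too big to inline. A minimal illustration, not taken from the kernel:

/* With plain "asm", GCC estimates the statement's cost from the length of
 * the template string, which can stop small wrappers like this from being
 * inlined.  "asm inline" (GCC >= 9) makes the inliner treat the statement
 * as minimum size; the generated code is unchanged. */
static inline unsigned long long my_rdtsc(void)
{
	unsigned int lo, hi;

	asm inline ("rdtsc" : "=a" (lo), "=d" (hi));
	return ((unsigned long long)hi << 32) | lo;
}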
2018 Dec 13
2
[PATCH] kbuild, x86: revert macros in extended asm workarounds
...ey, bool branch) { - asm_volatile_goto("STATIC_BRANCH_NOP l_yes=\"%l[l_yes]\" key=\"%c0\" " - "branch=\"%c1\"" - : : "i" (key), "i" (branch) : : l_yes); + asm_volatile_goto("1:" + ".byte " __stringify(STATIC_KEY_INIT_NOP) "\n\t" + ".pushsection __jump_table, \"aw\" \n\t" + _ASM_ALIGN "\n\t" + ".long 1b - ., %l[l_yes] - . \n\t" + _ASM_PTR "%c0 + %c1 - .\n\t" + ".popsection \n\t" + : : "i" (key), "i" (branch) : : l_yes);...
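For reference, the asm_volatile_goto() used in these hunks is not touched by the patch; in kernels of this era it is a thin wrapper around asm goto, defined in the compiler headers roughly as below (the trailing empty asm("") works around a GCC asm-goto code-generation bug, gcc.gnu.org PR58670):

/* Not part of the diff above; shown only for context. */
#define asm_volatile_goto(x...)	do { asm goto(x); asm (""); } while (0)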
2018 Dec 16
1
[PATCH v2] x86, kbuild: revert macrofying inline assembly code
...ey, bool branch) { - asm_volatile_goto("STATIC_BRANCH_NOP l_yes=\"%l[l_yes]\" key=\"%c0\" " - "branch=\"%c1\"" - : : "i" (key), "i" (branch) : : l_yes); + asm_volatile_goto("1:" + ".byte " __stringify(STATIC_KEY_INIT_NOP) "\n\t" + ".pushsection __jump_table, \"aw\" \n\t" + _ASM_ALIGN "\n\t" + ".long 1b - ., %l[l_yes] - . \n\t" + _ASM_PTR "%c0 + %c1 - .\n\t" + ".popsection \n\t" + : : "i" (key), "i" (branch) : : l_yes);...
2018 Mar 13
32
[PATCH v2 00/27] x86: PIE support and option to extend KASLR randomization
Changes: - patch v2: - Adapt patch to work post KPTI and compiler changes - Redo all performance testing with latest configs and compilers - Simplify mov macro on PIE (MOVABS now) - Reduce GOT footprint - patch v1: - Simplify ftrace implementation. - Use gcc mstack-protector-guard-reg=%gs with PIE when possible. - rfc v3: - Use --emit-relocs instead of -pie to reduce
2017 Oct 04
28
x86: PIE support and option to extend KASLR randomization
These patches make the changes necessary to build the kernel as a Position Independent Executable (PIE) on x86_64. A PIE kernel can be relocated below the top 2G of the virtual address space, which makes it possible to optionally extend the KASLR randomization range from 1G to 3G. Thanks a lot to Ard Biesheuvel & Kees Cook for their feedback on compiler changes, PIE support and KASLR in general. Thanks to
2018 May 23
33
[PATCH v3 00/27] x86: PIE support and option to extend KASLR randomization
Changes: - patch v3: - Update on message to describe longer term PIE goal. - Minor change on ftrace if condition. - Changed code using xchgq. - patch v2: - Adapt patch to work post KPTI and compiler changes - Redo all performance testing with latest configs and compilers - Simplify mov macro on PIE (MOVABS now) - Reduce GOT footprint - patch v1: - Simplify ftrace
2017 Oct 11
32
[PATCH v1 00/27] x86: PIE support and option to extend KASLR randomization
Changes: - patch v1: - Simplify ftrace implementation. - Use gcc mstack-protector-guard-reg=%gs with PIE when possible. - rfc v3: - Use --emit-relocs instead of -pie to reduce dynamic relocation space on mapped memory. It also simplifies the relocation process. - Move the start of the module section next to the kernel. Remove the need for -mcmodel=large on modules. Extends