search for: static_branch_jmp

Displaying 6 results from an estimated 6 matches for "static_branch_jmp".

2018 Dec 19
0
[PATCH v3 00/12] x86, kbuild: revert macrofying inline assembly code
...6/entry/calling.h index 25e5a6bda8c3..20d0885b00fb 100644 --- a/arch/x86/entry/calling.h +++ b/arch/x86/entry/calling.h @@ -352,7 +352,7 @@ For 32-bit we have the following conventions - kernel is built with .macro CALL_enter_from_user_mode #ifdef CONFIG_CONTEXT_TRACKING #ifdef HAVE_JUMP_LABEL - STATIC_BRANCH_JMP l_yes=.Lafter_call_\@, key=context_tracking_enabled, branch=1 + STATIC_JUMP_IF_FALSE .Lafter_call_\@, context_tracking_enabled, def=0 #endif call enter_from_user_mode .Lafter_call_\@: diff --git a/arch/x86/include/asm/jump_label.h b/arch/x86/include/asm/jump_label.h index cf88ebf9a4ca..21efc9d0...
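The hunk above replaces one assembly-side jump-label macro with another around the call to enter_from_user_mode. As a point of reference, here is a minimal sketch of the C-side static-key API that this assembly plumbing mirrors; the key and helper names are illustrative only, not the kernel's actual context-tracking symbols.

#include <linux/jump_label.h>
#include <linux/printk.h>

/* Hypothetical key for illustration; defaults to "disabled". */
DEFINE_STATIC_KEY_FALSE(example_tracking_key);

static void example_hook(void)
{
	/* Compiles to a NOP that is patched into a JMP once the key is enabled. */
	if (static_branch_unlikely(&example_tracking_key))
		pr_info("tracking path taken\n");
}

static void example_enable(void)
{
	/* Flipping the key rewrites every branch site that tests it. */
	static_branch_enable(&example_tracking_key);
}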
2018 Dec 17
3
[PATCH v3 00/12] x86, kbuild: revert macrofying inline assembly code
This series reverts the in-kernel workarounds for inlining issues. The commit description of 77b0bf55bc67 mentioned "We also hope that GCC will eventually get fixed,..." Now, GCC provides a solution. https://gcc.gnu.org/onlinedocs/gcc/Extended-Asm.html explains the new "asm inline" syntax. The performance issue will eventually be solved. [About Code cleanups] I know Nadam
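The "asm inline" syntax referenced here (available in GCC 9 and later) tells the compiler to treat an asm statement as having minimal size for its inlining heuristics, so a large asm body no longer makes the surrounding function look too expensive to inline. A minimal, purely illustrative sketch:

static inline void example_tiny_asm(void)
{
	/* "asm inline" marks the statement as negligible for inline-cost purposes. */
	asm inline ("nop" : : : "memory");
}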
2018 Dec 13
2
[PATCH] kbuild, x86: revert macros in extended asm workarounds
...b/arch/x86/entry/calling.h index 25e5a6b..20d0885 100644 --- a/arch/x86/entry/calling.h +++ b/arch/x86/entry/calling.h @@ -352,7 +352,7 @@ For 32-bit we have the following conventions - kernel is built with .macro CALL_enter_from_user_mode #ifdef CONFIG_CONTEXT_TRACKING #ifdef HAVE_JUMP_LABEL - STATIC_BRANCH_JMP l_yes=.Lafter_call_\@, key=context_tracking_enabled, branch=1 + STATIC_JUMP_IF_FALSE .Lafter_call_\@, context_tracking_enabled, def=0 #endif call enter_from_user_mode .Lafter_call_\@: diff --git a/arch/x86/include/asm/alternative-asm.h b/arch/x86/include/asm/alternative-asm.h index 8e4ea39..31b...
2018 Dec 16
1
[PATCH v2] x86, kbuild: revert macrofying inline assembly code
...), "i" (branch) : : l_yes); + return false; l_yes: return true; @@ -30,8 +36,14 @@ static __always_inline bool arch_static_branch(struct static_key *key, bool bran static __always_inline bool arch_static_branch_jump(struct static_key *key, bool branch) { - asm_volatile_goto("STATIC_BRANCH_JMP l_yes=\"%l[l_yes]\" key=\"%c0\" " - "branch=\"%c1\"" + asm_volatile_goto("1:" + ".byte 0xe9\n\t .long %l[l_yes] - 2f\n\t" + "2:\n\t" + ".pushsection __jump_table, \"aw\" \n\t" + _ASM_ALIGN "...
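The snippet truncates mid-function; reconstructed from the visible hunk (and offered as a sketch rather than the exact patch text), the restored x86 helper looks roughly like this. The 0xe9 byte is the opcode of the 5-byte near JMP, and the __jump_table entry records the site so the jump-label code can patch it between NOP and JMP at runtime.

static __always_inline bool arch_static_branch_jump(struct static_key *key, bool branch)
{
	/* Reconstruction based on the truncated hunk above; treat as a sketch. */
	asm_volatile_goto("1:"
		".byte 0xe9\n\t .long %l[l_yes] - 2f\n\t"
		"2:\n\t"
		".pushsection __jump_table, \"aw\" \n\t"
		_ASM_ALIGN "\n\t"
		_ASM_PTR "1b, %l[l_yes], %c0 + %c1 \n\t"
		".popsection \n\t"
		: : "i" (key), "i" (branch) : : l_yes);

	return false;
l_yes:
	return true;
}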