search for: recordmcount

Displaying 20 results from an estimated 23 matches for "recordmcount".

2018 May 24
2
[PATCH v3 21/27] x86/ftrace: Adapt function tracing for PIE support
...
> --- a/arch/x86/include/asm/ftrace.h
> +++ b/arch/x86/include/asm/ftrace.h
> @@ -25,9 +25,11 @@ extern void __fentry__(void);
> static inline unsigned long ftrace_call_adjust(unsigned long addr)
> {
> /*
> - * addr is the address of the mcount call instruction.
> - * recordmcount does the necessary offset calculation.
> + * addr is the address of the mcount call instruction. PIE has always a
> + * byte added to the start of the function.
> */
> + if (IS_ENABLED(CONFIG_X86_PIE))
> + addr -= 1;

This seems to modify the address even for modules that are _...
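
Pieced together from the hunks quoted across these results, the adjusted helper looks roughly like the sketch below. This is a reconstruction for readers skimming the thread, under the series' CONFIG_X86_PIE assumption, not necessarily the final upstream code:

/* arch/x86/include/asm/ftrace.h, as proposed in the series (reconstructed). */
static inline unsigned long ftrace_call_adjust(unsigned long addr)
{
	/*
	 * addr is the address of the mcount call instruction. PIE has always a
	 * byte added to the start of the function.
	 */
	if (IS_ENABLED(CONFIG_X86_PIE))
		addr -= 1;
	return addr;
}
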
2018 May 24
1
[PATCH v3 21/27] x86/ftrace: Adapt function tracing for PIE support
...ude/asm/ftrace.h
> > > @@ -25,9 +25,11 @@ extern void __fentry__(void);
> > > static inline unsigned long ftrace_call_adjust(unsigned long addr)
> > > {
> > > /*
> > > - * addr is the address of the mcount call instruction.
> > > - * recordmcount does the necessary offset calculation.
> > > + * addr is the address of the mcount call instruction. PIE has always a
> > > + * byte added to the start of the function.
> > > */
> > > + if (IS_ENABLED(CONFIG_X86_PIE))
> > > + addr...
2012 Feb 14
3
ftrace_enabled set to 1 on bootup, slow downs with CONFIG_FUNCTION_TRACER in virt environments?
Hey, I was running some benchmarks (netserver/netperf) where the init script just launched the netserver and nothing else and was concerned to see the performance not up to par. This was an HVM guest running with PV drivers. If I compile the kernel without CONFIG_FUNCTION_TRACER it is much better - but it was my understanding that the tracing code does not impact the machine unless it is
2018 May 24
0
[PATCH v3 21/27] x86/ftrace: Adapt function tracing for PIE support
...race.h
> > +++ b/arch/x86/include/asm/ftrace.h
> > @@ -25,9 +25,11 @@ extern void __fentry__(void);
> > static inline unsigned long ftrace_call_adjust(unsigned long addr)
> > {
> > /*
> > - * addr is the address of the mcount call instruction.
> > - * recordmcount does the necessary offset calculation.
> > + * addr is the address of the mcount call instruction. PIE has always a
> > + * byte added to the start of the function.
> > */
> > + if (IS_ENABLED(CONFIG_X86_PIE))
> > + addr -= 1;
>
> This seems to modify th...
2017 Oct 05
2
[RFC v3 20/27] x86/ftrace: Adapt function tracing for PIE support
On Thu, Oct 5, 2017 at 6:06 AM, Steven Rostedt <rostedt at goodmis.org> wrote:
> On Wed, 4 Oct 2017 14:19:56 -0700
> Thomas Garnier <thgarnie at google.com> wrote:
>
>> When using -fPIE/PIC with function tracing, the compiler generates a
>> call through the GOT (call *__fentry__ at GOTPCREL). This instruction
>> takes 6 bytes instead of 5 on the usual
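
A minimal, stand-alone sketch of the size difference described above. The encodings are the standard x86 ones for a direct call and a RIP-relative indirect call; the zero displacement bytes are placeholders, and nothing here is taken from the patch itself:

#include <stdio.h>

/*
 * What the compiler emits at the start of each traced function:
 *
 *   non-PIE:  call __fentry__                  -> e8 <rel32>      (5 bytes)
 *   PIE/PIC:  call *__fentry__@GOTPCREL(%rip)  -> ff 15 <disp32>  (6 bytes)
 */
static const unsigned char direct_call[] = { 0xe8, 0x00, 0x00, 0x00, 0x00 };
static const unsigned char got_call[]    = { 0xff, 0x15, 0x00, 0x00, 0x00, 0x00 };

int main(void)
{
	printf("direct call: %zu bytes, call via GOT: %zu bytes\n",
	       sizeof(direct_call), sizeof(got_call));
	return 0;
}
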
2013 Jun 01
2
[LLVMdev] Compile Linux Kernel module into LLVM bitcode
...'t create the .bc file and sometimes errors out like this:

====================
objdump: scripts/mod/.tmp_empty.o: File format not recognized
if [ "-pg" = "-pg" ]; then if [ scripts/mod/empty.o != "scripts/mod/empty.o" ]; then /home/kevin/split_io_Linux/scripts/recordmcount "scripts/mod/empty.o"; fi; fi;
gcc -Wp,-MD,scripts/mod/.mk_elfconfig.d -Wall -Wmissing-prototypes -Wstrict-prototypes -O2 -fomit-frame-pointer -o scripts/mod/mk_elfconfig scripts/mod/mk_elfconfig.c
scripts/mod/mk_elfconfig < scripts/mod/empty.o > scripts/mod/elfconfig.h
E...
2013 Jun 01
0
[LLVMdev] Compile Linux Kernel module into LLVM bitcode
...put: and sometimes errors out like this:
>
> ====================
> objdump: scripts/mod/.tmp_empty.o: File format not recognized
> if [ "-pg" = "-pg" ]; then if [ scripts/mod/empty.o != "scripts/mod/empty.o" ]; then /home/kevin/split_io_Linux/scripts/recordmcount "scripts/mod/empty.o"; fi; fi;
> gcc -Wp,-MD,scripts/mod/.mk_elfconfig.d -Wall -Wmissing-prototypes -Wstrict-prototypes -O2 -fomit-frame-pointer -o scripts/mod/mk_elfconfig scripts/mod/mk_elfconfig.c
> scripts/mod/mk_elfconfig < scripts/mod/empty.o > scripts/mod/elf...
2013 Jun 08
1
[LLVMdev] Compile Linux Kernel module into LLVM bitcode
...s errors out like this:
> >
> > ====================
> > objdump: scripts/mod/.tmp_empty.o: File format not recognized
> > if [ "-pg" = "-pg" ]; then if [ scripts/mod/empty.o != "scripts/mod/empty.o" ]; then /home/kevin/split_io_Linux/scripts/recordmcount "scripts/mod/empty.o"; fi; fi;
> > gcc -Wp,-MD,scripts/mod/.mk_elfconfig.d -Wall -Wmissing-prototypes -Wstrict-prototypes -O2 -fomit-frame-pointer -o scripts/mod/mk_elfconfig scripts/mod/mk_elfconfig.c
> > scripts/mod/mk_elfconfig < scripts/mod/empty.o > scrip...
2017 Oct 05
0
[RFC v3 20/27] x86/ftrace: Adapt function tracing for PIE support
...When
> > function tracing is enabled, the calls are back to the normal call to
> > the ftrace trampoline?
>
> That is correct.
>

Then I think a better idea is to simply nop them out at compile time, and have the code that updates them to nops to know about it. See scripts/recordmcount.c

Could we simply add a 5 byte nop followed by a 1 byte nop, and treat it the same as if it didn't exist? This code can be a little complex, and can cause really nasty side effects if things go wrong. I would like to keep from adding more variables to the changes here.

-- Steve
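
A rough sketch of the idea floated here: overwrite the 6-byte GOT call at build time with a 5-byte nop plus a 1-byte nop, so existing 5-byte-slot assumptions still hold. The helper below is purely illustrative; the real logic would live in scripts/recordmcount.c and the ftrace patching code, not in a userspace program like this:

#include <stdio.h>
#include <string.h>

/* Standard x86 nop encodings (illustrative, not taken from the thread). */
static const unsigned char nop5[] = { 0x0f, 0x1f, 0x44, 0x00, 0x00 }; /* 5-byte nopl */
static const unsigned char nop1[] = { 0x90 };                         /* 1-byte nop  */

/*
 * Replace a 6-byte "call *__fentry__@GOTPCREL(%rip)" site with a 5-byte nop
 * followed by a 1-byte nop, so tooling that expects a 5-byte mcount slot can
 * treat the trailing byte as if it did not exist.
 */
static void nop_out_call_site(unsigned char site[6])
{
	memcpy(site, nop5, sizeof(nop5));
	memcpy(site + sizeof(nop5), nop1, sizeof(nop1));
}

int main(void)
{
	/* A fake 6-byte GOT call: ff 15 <disp32>. */
	unsigned char site[6] = { 0xff, 0x15, 0x00, 0x00, 0x00, 0x00 };

	nop_out_call_site(site);
	for (size_t i = 0; i < sizeof(site); i++)
		printf("%02x ", site[i]);
	printf("\n");
	return 0;
}
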
2018 May 23
0
[PATCH v3 21/27] x86/ftrace: Adapt function tracing for PIE support
...index c18ed65287d5..8f2decce38d8 100644
--- a/arch/x86/include/asm/ftrace.h
+++ b/arch/x86/include/asm/ftrace.h
@@ -25,9 +25,11 @@ extern void __fentry__(void);
static inline unsigned long ftrace_call_adjust(unsigned long addr)
{
/*
- * addr is the address of the mcount call instruction.
- * recordmcount does the necessary offset calculation.
+ * addr is the address of the mcount call instruction. PIE has always a
+ * byte added to the start of the function.
*/
+ if (IS_ENABLED(CONFIG_X86_PIE))
+ addr -= 1;
return addr;
}
diff --git a/arch/x86/include/asm/sections.h b/arch/x86/include/as...
2018 Mar 13
0
[PATCH v2 21/27] x86/ftrace: Adapt function tracing for PIE support
...index 09ad88572746..61fa02d81b95 100644
--- a/arch/x86/include/asm/ftrace.h
+++ b/arch/x86/include/asm/ftrace.h
@@ -25,9 +25,11 @@ extern void __fentry__(void);
static inline unsigned long ftrace_call_adjust(unsigned long addr)
{
/*
- * addr is the address of the mcount call instruction.
- * recordmcount does the necessary offset calculation.
+ * addr is the address of the mcount call instruction. PIE has always a
+ * byte added to the start of the function.
*/
+ if (IS_ENABLED(CONFIG_X86_PIE))
+ addr -= 1;
return addr;
}
diff --git a/arch/x86/include/asm/sections.h b/arch/x86/include/as...
2018 May 29
1
[PATCH v4 00/27] x86: PIE support and option to extend KASLR randomization
 ...                      |  7 +
 init/Kconfig             | 16 ++
 kernel/kallsyms.c        | 16 +-
 kernel/trace/trace.h     |  4
 lib/dynamic_debug.c      |  4
 scripts/link-vmlinux.sh  | 14 ++
 scripts/recordmcount.c   | 79 +++++++----
 75 files changed, 1109 insertions(+), 343 deletions(-)
2018 Jun 25
1
[PATCH v5 00/27] x86: PIE support and option to extend KASLR randomization
 ...                      |  7 +
 init/Kconfig             | 16 ++
 kernel/kallsyms.c        | 16 +-
 kernel/trace/trace.h     |  4
 lib/dynamic_debug.c      |  4
 scripts/link-vmlinux.sh  | 14 ++
 scripts/recordmcount.c   | 79 +++++++----
 80 files changed, 1134 insertions(+), 358 deletions(-)
2017 Oct 04
1
[RFC v3 20/27] x86/ftrace: Adapt function tracing for PIE support
...ctions.h>
+
extern void mcount(void);
extern atomic_t modifying_ftrace_code;
extern void __fentry__(void);
@@ -24,9 +39,11 @@ extern void __fentry__(void);
static inline unsigned long ftrace_call_adjust(unsigned long addr)
{
/*
- * addr is the address of the mcount call instruction.
- * recordmcount does the necessary offset calculation.
+ * addr is the address of the mcount call instruction. PIE has always a
+ * byte added to the start of the function.
*/
+ if (IS_ENABLED(CONFIG_X86_PIE))
+ addr -= 1;
return addr;
}
diff --git a/arch/x86/include/asm/sections.h b/arch/x86/include/as...
2018 May 23
33
[PATCH v3 00/27] x86: PIE support and option to extend KASLR randomization
Changes:
 - patch v3:
   - Update on message to describe longer term PIE goal.
   - Minor change on ftrace if condition.
   - Changed code using xchgq.
 - patch v2:
   - Adapt patch to work post KPTI and compiler changes
   - Redo all performance testing with latest configs and compilers
   - Simplify mov macro on PIE (MOVABS now)
   - Reduce GOT footprint
 - patch v1:
   - Simplify ftrace
2018 Mar 13
32
[PATCH v2 00/27] x86: PIE support and option to extend KASLR randomization
Changes:
 - patch v2:
   - Adapt patch to work post KPTI and compiler changes
   - Redo all performance testing with latest configs and compilers
   - Simplify mov macro on PIE (MOVABS now)
   - Reduce GOT footprint
 - patch v1:
   - Simplify ftrace implementation.
   - Use gcc mstack-protector-guard-reg=%gs with PIE when possible.
 - rfc v3:
   - Use --emit-relocs instead of -pie to reduce
2017 Oct 04
28
x86: PIE support and option to extend KASLR randomization
These patches make the changes necessary to build the kernel as a Position Independent Executable (PIE) on x86_64. A PIE kernel can be relocated below the top 2G of the virtual address space. This makes it possible to optionally extend the KASLR randomization range from 1G to 3G. Thanks a lot to Ard Biesheuvel & Kees Cook for their feedback on compiler changes, PIE support and KASLR in general. Thanks to