Masami Hiramatsu
2013-Nov-20 04:21 UTC
[PATCH -tip v3 00/23] kprobes: introduce NOKPROBE_SYMBOL() and general cleaning of kprobe blacklist
Hi,

Here is version 3 of the NOKPROBE_SYMBOL series.

Currently the kprobe blacklist is maintained by hand in kprobes.c, separated from the function definitions, which makes it hard to keep up with kernel changes. To solve this, I've introduced the NOKPROBE_SYMBOL() macro, which builds the kprobe blacklist at build time. Since NOKPROBE_SYMBOL() can be placed right after the function definition (just like EXPORT_SYMBOL), it is easy to maintain.

This series replaces __kprobes with the NOKPROBE_SYMBOL() macro, or applies the __always_inline annotation in some cases, because NOKPROBE_SYMBOL() inhibits inlining by referencing the symbol address. :( At this point I have replaced all __kprobes under kernel/ and arch/x86. As future work, I'd like to replace the __kprobes annotations on all other archs too.

I also reviewed the current __kprobes users and reclassified those that don't actually need it. Most of the kprobes preparation, registration and optimization functions are not involved in breakpoint or other exception handling, so probing them can never cause problems such as infinite recursion. Dropping them reduces the blacklist a lot.

To make the blacklist easy to inspect, the address ranges and symbols that may not be probed are now visible via /sys/kernel/debug/kprobes/blacklist. Since the new blacklist can be populated and shrunk dynamically, it now also supports modules. :) Kprobe users can blacklist their own functions which are called from kprobe handlers; the example code is also updated so you can see how it works (see the usage sketch after the changelog below).

This series also prohibits probing any address in .entry.text, because that code implements very low-level, sensitive interrupt/syscall entries. Probing such code may cause unexpected results (most of that area is already in the kprobe blacklist anyway), so I've decided to prohibit probing all of it.

Finally, I got an empty .kprobes.text on x86 :)

$ grep kprobes_text System.map
ffffffff81604980 T __kprobes_text_end
ffffffff81604980 T __kprobes_text_start

Thank you,

Changes from v2 to v3:
- Introduce arch_within_kprobe_blacklist(), which checks whether an address is within .kprobes.text (generic, x86) or .entry.text (x86), to fix the build on !x86.
- Rename in_nokprobes_functions to within_kprobe_blacklist and make it return a bool instead of an error code.
- Fix the type of kprobe_blacklist_seq_stop().
- Use blacklist entries to check the blacklisted address ranges (.entry.text/.kprobes.text). This also eliminates arch_within_kprobe_blacklist(). :)

Changes from v1 to v2:
- Replace __kprobes with NOKPROBE_SYMBOL() and remove unneeded __kprobes in the files compiled on x86.
- Add blacklist support for modules.
- Add a debugfs interface for the blacklist.
- Fix the indentation of NOKPROBE_SYMBOL() by using tabs.
- Fix NOKPROBE_SYMBOL() to expand nested macros.
- Update Documentation/kprobes.txt about the blacklist.
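For reference, the intended usage pattern is a one-line annotation placed right after the function body, just like EXPORT_SYMBOL. A minimal sketch (the function name and body are made up for illustration, not taken from the patches):

    #include <linux/kprobes.h>

    /* Example: a helper that runs on an int3/debug path and therefore
     * must never be probed itself. */
    static void my_sensitive_helper(struct pt_regs *regs)
    {
        /* ... work that must not recurse into kprobes ... */
    }
    /* One line right after the definition, like EXPORT_SYMBOL() */
    NOKPROBE_SYMBOL(my_sensitive_helper);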
---

Masami Hiramatsu (23):
      kprobes: Prohibit probing on .entry.text code
      kprobes: Introduce NOKPROBE_SYMBOL() macro for blacklist
      kprobes: Show blacklist entries via debugfs
      kprobes: Support blacklist functions in module
      kprobes: Use NOKPROBE_SYMBOL() in sample modules
      kprobes/x86: Allow probe on some kprobe preparation functions
      kprobes/x86: Use NOKPROBE_SYMBOL instead of __kprobes
      kprobes: Allow probe on some kprobe functions
      kprobes: Use NOKPROBE_SYMBOL macro instead of __kprobes
      ftrace/kprobes: Allow probing on some preparation functions
      ftrace/kprobes: Use NOKPROBE_SYMBOL macro in ftrace
      x86/hw_breakpoint: Use NOKPROBE_SYMBOL macro in hw_breakpoint
      x86/trap: Use NOKPROBE_SYMBOL macro in trap.c
      x86/fault: Use NOKPROBE_SYMBOL macro in fault.c
      x86/alternative: Use NOKPROBE_SYMBOL macro in alternative.c
      x86/nmi: Use NOKPROBE_SYMBOL macro for nmi handlers
      x86/kvm: Use NOKPROBE_SYMBOL macro in kvm.c
      x86/dumpstack: Use NOKPROBE_SYMBOL macro in dumpstack.c
      [BUGFIX] kprobes/x86: Prohibit probing on debug_stack_*
      [BUGFIX] kprobes: Prohibit probing on func_ptr_is_kernel_text
      notifier: Use NOKPROBE_SYMBOL macro in notifier
      sched: Use NOKPROBE_SYMBOL macro in sched
      kprobes/x86: Use kprobe_blacklist for .kprobes.text and .entry.text

 Documentation/kprobes.txt                |   24 ++
 arch/x86/include/asm/traps.h             |    2
 arch/x86/kernel/alternative.c            |    3
 arch/x86/kernel/apic/hw_nmi.c            |    3
 arch/x86/kernel/cpu/common.c             |    4
 arch/x86/kernel/cpu/perf_event.c         |    3
 arch/x86/kernel/cpu/perf_event_amd_ibs.c |    3
 arch/x86/kernel/dumpstack.c              |    9 -
 arch/x86/kernel/entry_32.S               |   33 --
 arch/x86/kernel/entry_64.S               |   20 -
 arch/x86/kernel/hw_breakpoint.c          |    6
 arch/x86/kernel/kprobes/core.c           |  105 +++++--
 arch/x86/kernel/kprobes/ftrace.c         |   17 +
 arch/x86/kernel/kprobes/opt.c            |   32 +-
 arch/x86/kernel/kvm.c                    |    4
 arch/x86/kernel/nmi.c                    |   18 +
 arch/x86/kernel/paravirt.c               |    4
 arch/x86/kernel/traps.c                  |   20 +
 arch/x86/mm/fault.c                      |   28 +-
 include/asm-generic/vmlinux.lds.h        |    9 +
 include/linux/kprobes.h                  |   22 ++
 include/linux/module.h                   |    5
 kernel/extable.c                         |    2
 kernel/kprobes.c                         |  437 +++++++++++++++++++-----------
 kernel/module.c                          |    6
 kernel/notifier.c                        |   22 +-
 kernel/sched/core.c                      |    7
 kernel/trace/trace_event_perf.c          |    5
 kernel/trace/trace_kprobe.c              |   53 ++--
 kernel/trace/trace_probe.c               |   78 +++--
 kernel/trace/trace_probe.h               |    4
 samples/kprobes/jprobe_example.c         |    1
 samples/kprobes/kprobe_example.c         |    3
 samples/kprobes/kretprobe_example.c      |    2
 34 files changed, 612 insertions(+), 382 deletions(-)

--
Masami HIRAMATSU
IT Management Research Dept. Linux Technology Center
Hitachi, Ltd., Yokohama Research Laboratory
E-mail: masami.hiramatsu.pt at hitachi.com
Masami Hiramatsu
2013-Nov-20 04:21 UTC
[PATCH -tip v3 01/23] kprobes: Prohibit probing on .entry.text code
.entry.text is a code area which is used for interrupt/syscall entries, and there are many sensitive codes. Thus, it is better to prohibit probing on all of such codes instead of a part of that. Since some symbols are already registered on kprobe blacklist, this also removes them from the blacklist. Changes from previous: - Introduce arch_within_kprobe_blacklist() which checks the address is within the .kprobes.text (generic,x86) or .entry.text (x86), for fixing build issue on !x86. Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt at hitachi.com> Cc: Thomas Gleixner <tglx at linutronix.de> Cc: Ingo Molnar <mingo at redhat.com> Cc: "H. Peter Anvin" <hpa at zytor.com> Cc: Ananth N Mavinakayanahalli <ananth at in.ibm.com> Cc: Al Viro <viro at zeniv.linux.org.uk> Cc: Seiji Aguchi <seiji.aguchi at hds.com> Cc: Peter Zijlstra <peterz at infradead.org> Cc: Frederic Weisbecker <fweisbec at gmail.com> Cc: Geert Uytterhoeven <geert at linux-m68k.org> --- arch/x86/kernel/entry_32.S | 33 --------------------------------- arch/x86/kernel/entry_64.S | 20 -------------------- arch/x86/kernel/kprobes/core.c | 8 ++++++++ include/linux/kprobes.h | 1 + kernel/kprobes.c | 13 ++++++++----- 5 files changed, 17 insertions(+), 58 deletions(-) diff --git a/arch/x86/kernel/entry_32.S b/arch/x86/kernel/entry_32.S index 51e2988..02c2fef 100644 --- a/arch/x86/kernel/entry_32.S +++ b/arch/x86/kernel/entry_32.S @@ -315,10 +315,6 @@ ENTRY(ret_from_kernel_thread) ENDPROC(ret_from_kernel_thread) /* - * Interrupt exit functions should be protected against kprobes - */ - .pushsection .kprobes.text, "ax" -/* * Return to user mode is not as complex as all this looks, * but we want the default path for a system call return to * go as quickly as possible which is why some of this is @@ -372,10 +368,6 @@ need_resched: END(resume_kernel) #endif CFI_ENDPROC -/* - * End of kprobes section - */ - .popsection /* SYSENTER_RETURN points to after the "sysenter" instruction in the vsyscall page. See vsyscall-sysentry.S, which defines the symbol. 
*/ @@ -495,10 +487,6 @@ sysexit_audit: PTGS_TO_GS_EX ENDPROC(ia32_sysenter_target) -/* - * syscall stub including irq exit should be protected against kprobes - */ - .pushsection .kprobes.text, "ax" # system call handler stub ENTRY(system_call) RING0_INT_FRAME # can't unwind into user space anyway @@ -691,10 +679,6 @@ syscall_badsys: jmp resume_userspace END(syscall_badsys) CFI_ENDPROC -/* - * End of kprobes section - */ - .popsection .macro FIXUP_ESPFIX_STACK /* @@ -781,10 +765,6 @@ common_interrupt: ENDPROC(common_interrupt) CFI_ENDPROC -/* - * Irq entries should be protected against kprobes - */ - .pushsection .kprobes.text, "ax" #define BUILD_INTERRUPT3(name, nr, fn) \ ENTRY(name) \ RING0_INT_FRAME; \ @@ -961,10 +941,6 @@ ENTRY(spurious_interrupt_bug) jmp error_code CFI_ENDPROC END(spurious_interrupt_bug) -/* - * End of kprobes section - */ - .popsection #ifdef CONFIG_XEN /* Xen doesn't set %esp to be precisely what the normal sysenter @@ -1239,11 +1215,6 @@ return_to_handler: jmp *%ecx #endif -/* - * Some functions should be protected against kprobes - */ - .pushsection .kprobes.text, "ax" - #ifdef CONFIG_TRACING ENTRY(trace_page_fault) RING0_EC_FRAME @@ -1453,7 +1424,3 @@ ENTRY(async_page_fault) END(async_page_fault) #endif -/* - * End of kprobes section - */ - .popsection diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S index e21b078..c48f8f9 100644 --- a/arch/x86/kernel/entry_64.S +++ b/arch/x86/kernel/entry_64.S @@ -487,8 +487,6 @@ ENDPROC(native_usergs_sysret64) TRACE_IRQS_OFF .endm -/* save complete stack frame */ - .pushsection .kprobes.text, "ax" ENTRY(save_paranoid) XCPT_FRAME 1 RDI+8 cld @@ -517,7 +515,6 @@ ENTRY(save_paranoid) 1: ret CFI_ENDPROC END(save_paranoid) - .popsection /* * A newly forked process directly context switches into this address. @@ -975,10 +972,6 @@ END(interrupt) call \func .endm -/* - * Interrupt entry/exit should be protected against kprobes - */ - .pushsection .kprobes.text, "ax" /* * The interrupt stubs push (~vector+0x80) onto the stack and * then jump to common_interrupt. @@ -1113,10 +1106,6 @@ ENTRY(retint_kernel) CFI_ENDPROC END(common_interrupt) -/* - * End of kprobes section - */ - .popsection /* * APIC interrupts. 
@@ -1477,11 +1466,6 @@ apicinterrupt3 HYPERVISOR_CALLBACK_VECTOR \ hyperv_callback_vector hyperv_vector_handler #endif /* CONFIG_HYPERV */ -/* - * Some functions should be protected against kprobes - */ - .pushsection .kprobes.text, "ax" - paranoidzeroentry_ist debug do_debug DEBUG_STACK paranoidzeroentry_ist int3 do_int3 DEBUG_STACK paranoiderrorentry stack_segment do_stack_segment @@ -1898,7 +1882,3 @@ ENTRY(ignore_sysret) CFI_ENDPROC END(ignore_sysret) -/* - * End of kprobes section - */ - .popsection diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c index 79a3f96..349112e 100644 --- a/arch/x86/kernel/kprobes/core.c +++ b/arch/x86/kernel/kprobes/core.c @@ -1066,6 +1066,14 @@ int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs) return 0; } +bool arch_within_kprobe_blacklist(unsigned long addr) +{ + return ((addr >= (unsigned long)__kprobes_text_start && + addr < (unsigned long)__kprobes_text_end) || + (addr >= (unsigned long)__entry_text_start && + addr < (unsigned long)__entry_text_end)); +} + int __init arch_init_kprobes(void) { return 0; diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h index 925eaf2..cdf9251 100644 --- a/include/linux/kprobes.h +++ b/include/linux/kprobes.h @@ -265,6 +265,7 @@ extern void arch_disarm_kprobe(struct kprobe *p); extern int arch_init_kprobes(void); extern void show_registers(struct pt_regs *regs); extern void kprobes_inc_nmissed_count(struct kprobe *p); +extern bool arch_within_kprobe_blacklist(unsigned long addr); struct kprobe_insn_cache { struct mutex mutex; diff --git a/kernel/kprobes.c b/kernel/kprobes.c index a0d367a..1756ecc 100644 --- a/kernel/kprobes.c +++ b/kernel/kprobes.c @@ -96,9 +96,6 @@ static raw_spinlock_t *kretprobe_table_lock_ptr(unsigned long hash) static struct kprobe_blackpoint kprobe_blacklist[] = { {"preempt_schedule",}, {"native_get_debugreg",}, - {"irq_entries_start",}, - {"common_interrupt",}, - {"mcount",}, /* mcount can be called from everywhere */ {NULL} /* Terminator */ }; @@ -1324,12 +1321,18 @@ out: return ret; } +bool __weak arch_within_kprobe_blacklist(unsigned long addr) +{ + /* The __kprobes marked functions and entry code must not be probed */ + return (addr >= (unsigned long)__kprobes_text_start && + addr < (unsigned long)__kprobes_text_end); +} + static int __kprobes in_kprobes_functions(unsigned long addr) { struct kprobe_blackpoint *kb; - if (addr >= (unsigned long)__kprobes_text_start && - addr < (unsigned long)__kprobes_text_end) + if (arch_within_kprobe_blacklist(addr)) return -EINVAL; /* * If there exists a kprobe_blacklist, verify and
Masami Hiramatsu
2013-Nov-20 04:21 UTC
[PATCH -tip v3 02/23] kprobes: Introduce NOKPROBE_SYMBOL() macro for blacklist
Introduce NOKPROBE_SYMBOL() macro which builds a kprobe blacklist in build time. The usage of this macro is similar to the EXPORT_SYMBOL, put the NOKPROBE_SYMBOL(function); just after the function definition. If CONFIG_KPROBES=y, the macro is expanded to the definition of a static data structure of kprobe_blackpoint which is initialized for the function and put the address of the data structure in the "_kprobe_blacklist" section. Since the data structures are not fully initialized by the macro (because there is no "size" information), those are re-initialized at boot time by using kallsyms. Changes from previous version: - Rename in_nokprobes_functions to within_kprobe_blacklist and it returns a bool value istead of an error. Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt at hitachi.com> Cc: Ananth N Mavinakayanahalli <ananth at in.ibm.com> Cc: "David S. Miller" <davem at davemloft.net> Cc: Rob Landley <rob at landley.net> Cc: Jeremy Fitzhardinge <jeremy at goop.org> Cc: Chris Wright <chrisw at sous-sol.org> Cc: Alok Kataria <akataria at vmware.com> Cc: Rusty Russell <rusty at rustcorp.com.au> Cc: Thomas Gleixner <tglx at linutronix.de> Cc: Ingo Molnar <mingo at redhat.com> Cc: "H. Peter Anvin" <hpa at zytor.com> Cc: Arnd Bergmann <arnd at arndb.de> Cc: Peter Zijlstra <peterz at infradead.org> --- Documentation/kprobes.txt | 16 ++++++ arch/x86/kernel/paravirt.c | 4 ++ include/asm-generic/vmlinux.lds.h | 9 ++++ include/linux/kprobes.h | 20 ++++++++ kernel/kprobes.c | 93 ++++++++++++++++++------------------- kernel/sched/core.c | 1 6 files changed, 94 insertions(+), 49 deletions(-) diff --git a/Documentation/kprobes.txt b/Documentation/kprobes.txt index 0cfb00f..7062631 100644 --- a/Documentation/kprobes.txt +++ b/Documentation/kprobes.txt @@ -22,8 +22,9 @@ Appendix B: The kprobes sysctl interface Kprobes enables you to dynamically break into any kernel routine and collect debugging and performance information non-disruptively. You -can trap at almost any kernel code address, specifying a handler +can trap at almost any kernel code address(*), specifying a handler routine to be invoked when the breakpoint is hit. +(*: at some part of kernel code can not be trapped, see 1.5 Blacklist) There are currently three types of probes: kprobes, jprobes, and kretprobes (also called return probes). A kprobe can be inserted @@ -273,6 +274,19 @@ using one of the following techniques: or - Execute 'sysctl -w debug.kprobes_optimization=n' +1.5 Blacklist + +Kprobes can probe almost of the kernel except itself. This means +that there are some functions where kprobes cannot probe. Probing +(trapping) such functions can cause recursive trap (e.g. double +fault) or at least the nested probe handler never be called. +Kprobes manages such functions as a blacklist. +If you want to add a function into the blacklist, you just need +to (1) include linux/kprobes.h and (2) use NOKPROBE_SYMBOL() macro +to specify a blacklisted function. +Kprobes checks given probe address with the blacklist and reject +registering if the given address is in the blacklist. + 2. 
Architectures Supported Kprobes, jprobes, and return probes are implemented on the following diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c index 1b10af8..4c785fd 100644 --- a/arch/x86/kernel/paravirt.c +++ b/arch/x86/kernel/paravirt.c @@ -23,6 +23,7 @@ #include <linux/efi.h> #include <linux/bcd.h> #include <linux/highmem.h> +#include <linux/kprobes.h> #include <asm/bug.h> #include <asm/paravirt.h> @@ -389,6 +390,9 @@ __visible struct pv_cpu_ops pv_cpu_ops = { .end_context_switch = paravirt_nop, }; +/* At this point, native_get_debugreg has real function entry */ +NOKPROBE_SYMBOL(native_get_debugreg); + struct pv_apic_ops pv_apic_ops = { #ifdef CONFIG_X86_LOCAL_APIC .startup_ipi_hook = paravirt_nop, diff --git a/include/asm-generic/vmlinux.lds.h b/include/asm-generic/vmlinux.lds.h index 83e2c31..294ea96 100644 --- a/include/asm-generic/vmlinux.lds.h +++ b/include/asm-generic/vmlinux.lds.h @@ -109,6 +109,14 @@ #define BRANCH_PROFILE() #endif +#ifdef CONFIG_KPROBES +#define KPROBE_BLACKLIST() VMLINUX_SYMBOL(__start_kprobe_blacklist) = .; \ + *(_kprobe_blacklist) \ + VMLINUX_SYMBOL(__stop_kprobe_blacklist) = .; +#else +#define KPROBE_BLACKLIST() +#endif + #ifdef CONFIG_EVENT_TRACING #define FTRACE_EVENTS() . = ALIGN(8); \ VMLINUX_SYMBOL(__start_ftrace_events) = .; \ @@ -487,6 +495,7 @@ *(.init.rodata) \ FTRACE_EVENTS() \ TRACE_SYSCALLS() \ + KPROBE_BLACKLIST() \ MEM_DISCARD(init.rodata) \ CLK_OF_TABLES() \ CLKSRC_OF_TABLES() \ diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h index cdf9251..641d009 100644 --- a/include/linux/kprobes.h +++ b/include/linux/kprobes.h @@ -206,6 +206,7 @@ struct kretprobe_blackpoint { }; struct kprobe_blackpoint { + struct list_head list; const char *name; unsigned long start_addr; unsigned long range; @@ -477,4 +478,23 @@ static inline int enable_jprobe(struct jprobe *jp) return enable_kprobe(&jp->kp); } +#ifdef CONFIG_KPROBES +/* + * Blacklist ganerating macro. Specify functions which is not probed + * by using this macro. + */ +#define __NOKPROBE_SYMBOL(fname) \ +static struct kprobe_blackpoint __used \ + _kprobe_bp_##fname = { \ + .name = #fname, \ + .start_addr = (unsigned long)fname, \ + }; \ +static struct kprobe_blackpoint __used \ + __attribute__((section("_kprobe_blacklist"))) \ + *_p_kprobe_bp_##fname = &_kprobe_bp_##fname; +#define NOKPROBE_SYMBOL(fname) __NOKPROBE_SYMBOL(fname) +#else +#define NOKPROBE_SYMBOL(fname) +#endif + #endif /* _LINUX_KPROBES_H */ diff --git a/kernel/kprobes.c b/kernel/kprobes.c index 1756ecc..e04d8de 100644 --- a/kernel/kprobes.c +++ b/kernel/kprobes.c @@ -86,18 +86,8 @@ static raw_spinlock_t *kretprobe_table_lock_ptr(unsigned long hash) return &(kretprobe_table_locks[hash].lock); } -/* - * Normally, functions that we'd want to prohibit kprobes in, are marked - * __kprobes. 
But, there are cases where such functions already belong to - * a different section (__sched for preempt_schedule) - * - * For such cases, we now have a blacklist - */ -static struct kprobe_blackpoint kprobe_blacklist[] = { - {"preempt_schedule",}, - {"native_get_debugreg",}, - {NULL} /* Terminator */ -}; +/* Blacklist -- list of struct kprobe_blackpoint */ +static LIST_HEAD(kprobe_blacklist); #ifdef __ARCH_WANT_KPROBES_INSN_SLOT /* @@ -1328,24 +1318,23 @@ bool __weak arch_within_kprobe_blacklist(unsigned long addr) addr < (unsigned long)__kprobes_text_end); } -static int __kprobes in_kprobes_functions(unsigned long addr) +static bool __kprobes within_kprobe_blacklist(unsigned long addr) { - struct kprobe_blackpoint *kb; + struct kprobe_blackpoint *bp; if (arch_within_kprobe_blacklist(addr)) - return -EINVAL; + return true; /* * If there exists a kprobe_blacklist, verify and * fail any probe registration in the prohibited area */ - for (kb = kprobe_blacklist; kb->name != NULL; kb++) { - if (kb->start_addr) { - if (addr >= kb->start_addr && - addr < (kb->start_addr + kb->range)) - return -EINVAL; - } + list_for_each_entry(bp, &kprobe_blacklist, list) { + if (addr >= bp->start_addr && + addr < (bp->start_addr + bp->range)) + return true; } - return 0; + + return false; } /* @@ -1436,7 +1425,7 @@ static __kprobes int check_kprobe_address_safe(struct kprobe *p, /* Ensure it is not in reserved area nor out of text */ if (!kernel_text_address((unsigned long) p->addr) || - in_kprobes_functions((unsigned long) p->addr) || + within_kprobe_blacklist((unsigned long) p->addr) || jump_label_text_reserved(p->addr, p->addr)) { ret = -EINVAL; goto out; @@ -2065,14 +2054,41 @@ static struct notifier_block kprobe_module_nb = { .priority = 0 }; -static int __init init_kprobes(void) +/* + * Lookup and populate the kprobe_blacklist. + * + * Unlike the kretprobe blacklist, we'll need to determine + * the range of addresses that belong to the said functions, + * since a kprobe need not necessarily be at the beginning + * of a function. + */ +static void __init populate_kprobe_blacklist(struct kprobe_blackpoint **start, + struct kprobe_blackpoint **end) { - int i, err = 0; + struct kprobe_blackpoint **iter, *bp; unsigned long offset = 0, size = 0; char *modname, namebuf[128]; const char *symbol_name; - void *addr; - struct kprobe_blackpoint *kb; + + for (iter = start; (unsigned long)iter < (unsigned long)end; iter++) { + bp = *iter; + symbol_name = kallsyms_lookup(bp->start_addr, + &size, &offset, &modname, namebuf); + if (!symbol_name) + continue; + + bp->range = size; + INIT_LIST_HEAD(&bp->list); + list_add_tail(&bp->list, &kprobe_blacklist); + } +} + +extern struct kprobe_blackpoint *__start_kprobe_blacklist[]; +extern struct kprobe_blackpoint *__stop_kprobe_blacklist[]; + +static int __init init_kprobes(void) +{ + int i, err = 0; /* FIXME allocate the probe table, currently defined statically */ /* initialize all list heads */ @@ -2082,27 +2098,8 @@ static int __init init_kprobes(void) raw_spin_lock_init(&(kretprobe_table_locks[i].lock)); } - /* - * Lookup and populate the kprobe_blacklist. - * - * Unlike the kretprobe blacklist, we'll need to determine - * the range of addresses that belong to the said functions, - * since a kprobe need not necessarily be at the beginning - * of a function. 
- */ - for (kb = kprobe_blacklist; kb->name != NULL; kb++) { - kprobe_lookup_name(kb->name, addr); - if (!addr) - continue; - - kb->start_addr = (unsigned long)addr; - symbol_name = kallsyms_lookup(kb->start_addr, - &size, &offset, &modname, namebuf); - if (!symbol_name) - kb->range = 0; - else - kb->range = size; - } + populate_kprobe_blacklist(__start_kprobe_blacklist, + __stop_kprobe_blacklist); if (kretprobe_blacklist_size) { /* lookup the function address from its name */ diff --git a/kernel/sched/core.c b/kernel/sched/core.c index c180860..504fdbd 100644 --- a/kernel/sched/core.c +++ b/kernel/sched/core.c @@ -2659,6 +2659,7 @@ asmlinkage void __sched notrace preempt_schedule(void) barrier(); } while (need_resched()); } +NOKPROBE_SYMBOL(preempt_schedule); EXPORT_SYMBOL(preempt_schedule); /*
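To make the build-time mechanism concrete, this is roughly what NOKPROBE_SYMBOL(foo) expands to when CONFIG_KPROBES=y, following the macro added in include/linux/kprobes.h above ("foo" is just a placeholder name):

    static struct kprobe_blackpoint __used _kprobe_bp_foo = {
        .name       = "foo",
        .start_addr = (unsigned long)foo,
    };
    static struct kprobe_blackpoint __used
        __attribute__((section("_kprobe_blacklist")))
        *_p_kprobe_bp_foo = &_kprobe_bp_foo;

The linker-script change collects all such pointers between __start_kprobe_blacklist and __stop_kprobe_blacklist, and populate_kprobe_blacklist() fills in each entry's .range via kallsyms at boot, since the macro itself cannot know the function's size.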
Masami Hiramatsu
2013-Nov-20 04:21 UTC
[PATCH -tip v3 03/23] kprobes: Show blacklist entries via debugfs
Show blacklist entries (function names with the address range) via
/sys/kernel/debug/kprobes/blacklist.

Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt at hitachi.com>
Cc: Ananth N Mavinakayanahalli <ananth at in.ibm.com>
Cc: "David S. Miller" <davem at davemloft.net>
---
 kernel/kprobes.c | 61 +++++++++++++++++++++++++++++++++++++++++++++++-------
 1 file changed, 53 insertions(+), 8 deletions(-)

diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index e04d8de..d34744e 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -2228,6 +2228,46 @@ static const struct file_operations debugfs_kprobes_operations = {
 	.release	= seq_release,
 };
 
+/* kprobes/blacklist -- shows which functions can not be probed */
+static void *kprobe_blacklist_seq_start(struct seq_file *m, loff_t *pos)
+{
+	return seq_list_start(&kprobe_blacklist, *pos);
+}
+
+static void *kprobe_blacklist_seq_next(struct seq_file *m, void *v, loff_t *pos)
+{
+	return seq_list_next(v, &kprobe_blacklist, pos);
+}
+
+static int kprobe_blacklist_seq_show(struct seq_file *m, void *v)
+{
+	struct kprobe_blackpoint *bp =
+		list_entry(v, struct kprobe_blackpoint, list);
+
+	seq_printf(m, "0x%p-0x%p\t%s\n", (void *)bp->start_addr,
+		   (void *)(bp->start_addr + bp->range), bp->name);
+	return 0;
+}
+
+static const struct seq_operations kprobe_blacklist_seq_ops = {
+	.start = kprobe_blacklist_seq_start,
+	.next  = kprobe_blacklist_seq_next,
+	.stop  = kprobe_seq_stop,	/* Reuse void function */
+	.show  = kprobe_blacklist_seq_show,
+};
+
+static int kprobe_blacklist_open(struct inode *inode, struct file *filp)
+{
+	return seq_open(filp, &kprobe_blacklist_seq_ops);
+}
+
+static const struct file_operations debugfs_kprobe_blacklist_ops = {
+	.open    = kprobe_blacklist_open,
+	.read    = seq_read,
+	.llseek  = seq_lseek,
+	.release = seq_release,
+};
+
 static void __kprobes arm_all_kprobes(void)
 {
 	struct hlist_head *head;
@@ -2351,19 +2391,24 @@ static int __kprobes debugfs_kprobe_init(void)
 
 	file = debugfs_create_file("list", 0444, dir, NULL,
 				&debugfs_kprobes_operations);
-	if (!file) {
-		debugfs_remove(dir);
-		return -ENOMEM;
-	}
+	if (!file)
+		goto error;
 
 	file = debugfs_create_file("enabled", 0600, dir,
 					&value, &fops_kp);
-	if (!file) {
-		debugfs_remove(dir);
-		return -ENOMEM;
-	}
+	if (!file)
+		goto error;
+
+	file = debugfs_create_file("blacklist", 0444, dir, NULL,
+				&debugfs_kprobe_blacklist_ops);
+	if (!file)
+		goto error;
 
 	return 0;
+
+error:
+	debugfs_remove(dir);
+	return -ENOMEM;
 }
 
 late_initcall(debugfs_kprobe_init);
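With this applied, the blacklist can be read like the existing kprobes/list file. The output format follows the seq_printf() above; for example (the addresses below are made up, and at this point in the series the entries come from the NOKPROBE_SYMBOL() users added in the previous patch):

    # cat /sys/kernel/debug/kprobes/blacklist
    0xffffffff810f34e0-0xffffffff810f3540	native_get_debugreg
    0xffffffff816112e0-0xffffffff81611460	preempt_schedule

Each line shows the [start-end) address range of a blacklisted function followed by its symbol name.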
Masami Hiramatsu
2013-Nov-20 04:21 UTC
[PATCH -tip v3 04/23] kprobes: Support blacklist functions in module
To blacklist the functions in a module (e.g. user-defined kprobe handler and the functions invoked from it), expand blacklist support for modules. With this change, users can use NOKPROBE_SYMBOL() macro in their own modules. Changes from previous: - Fix the type of kprobe_blacklist_seq_stop() Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt at hitachi.com> Cc: Ananth N Mavinakayanahalli <ananth at in.ibm.com> Cc: "David S. Miller" <davem at davemloft.net> Cc: Rob Landley <rob at landley.net> Cc: Rusty Russell <rusty at rustcorp.com.au> --- Documentation/kprobes.txt | 8 ++++++++ include/linux/module.h | 5 +++++ kernel/kprobes.c | 44 +++++++++++++++++++++++++++++++++++++++++--- kernel/module.c | 6 ++++++ 4 files changed, 60 insertions(+), 3 deletions(-) diff --git a/Documentation/kprobes.txt b/Documentation/kprobes.txt index 7062631..c6634b3 100644 --- a/Documentation/kprobes.txt +++ b/Documentation/kprobes.txt @@ -512,6 +512,14 @@ int enable_jprobe(struct jprobe *jp); Enables *probe which has been disabled by disable_*probe(). You must specify the probe which has been registered. +4.9 NOKPROBE_SYMBOL() + +#include <linux/kprobes.h> +NOKPROBE_SYMBOL(FUNCTION); + +Protects given FUNCTION from other kprobes. This is useful for handler +functions and functions called from the handlers. + 5. Kprobes Features and Limitations Kprobes allows multiple probes at the same address. Currently, diff --git a/include/linux/module.h b/include/linux/module.h index 05f2447..acb682b 100644 --- a/include/linux/module.h +++ b/include/linux/module.h @@ -16,6 +16,7 @@ #include <linux/kobject.h> #include <linux/moduleparam.h> #include <linux/tracepoint.h> +#include <linux/kprobes.h> #include <linux/export.h> #include <linux/percpu.h> @@ -360,6 +361,10 @@ struct module unsigned int num_ftrace_callsites; unsigned long *ftrace_callsites; #endif +#ifdef CONFIG_KPROBES + struct kprobe_blackpoint **kprobe_blacklist; + unsigned int num_kprobe_blacklist; +#endif #ifdef CONFIG_MODULE_UNLOAD /* What modules depend on me? 
*/ diff --git a/kernel/kprobes.c b/kernel/kprobes.c index d34744e..eb9b938 100644 --- a/kernel/kprobes.c +++ b/kernel/kprobes.c @@ -88,6 +88,7 @@ static raw_spinlock_t *kretprobe_table_lock_ptr(unsigned long hash) /* Blacklist -- list of struct kprobe_blackpoint */ static LIST_HEAD(kprobe_blacklist); +static DEFINE_MUTEX(kprobe_blacklist_mutex); #ifdef __ARCH_WANT_KPROBES_INSN_SLOT /* @@ -1420,6 +1421,7 @@ static __kprobes int check_kprobe_address_safe(struct kprobe *p, #endif } + mutex_lock(&kprobe_blacklist_mutex); jump_label_lock(); preempt_disable(); @@ -1457,6 +1459,7 @@ static __kprobes int check_kprobe_address_safe(struct kprobe *p, out: preempt_enable(); jump_label_unlock(); + mutex_unlock(&kprobe_blacklist_mutex); return ret; } @@ -2011,6 +2014,11 @@ void __kprobes dump_kprobe(struct kprobe *kp) kp->symbol_name, kp->addr, kp->offset); } +static void populate_kprobe_blacklist(struct kprobe_blackpoint **start, + struct kprobe_blackpoint **end); +static void shrink_kprobe_blacklist(struct kprobe_blackpoint **start, + struct kprobe_blackpoint **end); + /* Module notifier call back, checking kprobes on the module */ static int __kprobes kprobes_module_callback(struct notifier_block *nb, unsigned long val, void *data) @@ -2021,6 +2029,16 @@ static int __kprobes kprobes_module_callback(struct notifier_block *nb, unsigned int i; int checkcore = (val == MODULE_STATE_GOING); + /* Add/remove module blacklist */ + if (val == MODULE_STATE_COMING) + populate_kprobe_blacklist(mod->kprobe_blacklist, + mod->kprobe_blacklist + + mod->num_kprobe_blacklist); + else if (val == MODULE_STATE_GOING) + shrink_kprobe_blacklist(mod->kprobe_blacklist, + mod->kprobe_blacklist + + mod->num_kprobe_blacklist); + if (val != MODULE_STATE_GOING && val != MODULE_STATE_LIVE) return NOTIFY_DONE; @@ -2054,6 +2072,18 @@ static struct notifier_block kprobe_module_nb = { .priority = 0 }; +/* Shrink the blacklist */ +static void shrink_kprobe_blacklist(struct kprobe_blackpoint **start, + struct kprobe_blackpoint **end) +{ + struct kprobe_blackpoint **iter; + + mutex_lock(&kprobe_blacklist_mutex); + for (iter = start; (unsigned long)iter < (unsigned long)end; iter++) + list_del(&(*iter)->list); + mutex_unlock(&kprobe_blacklist_mutex); +} + /* * Lookup and populate the kprobe_blacklist. * @@ -2062,14 +2092,15 @@ static struct notifier_block kprobe_module_nb = { * since a kprobe need not necessarily be at the beginning * of a function. 
*/ -static void __init populate_kprobe_blacklist(struct kprobe_blackpoint **start, - struct kprobe_blackpoint **end) +static void populate_kprobe_blacklist(struct kprobe_blackpoint **start, + struct kprobe_blackpoint **end) { struct kprobe_blackpoint **iter, *bp; unsigned long offset = 0, size = 0; char *modname, namebuf[128]; const char *symbol_name; + mutex_lock(&kprobe_blacklist_mutex); for (iter = start; (unsigned long)iter < (unsigned long)end; iter++) { bp = *iter; symbol_name = kallsyms_lookup(bp->start_addr, @@ -2081,6 +2112,7 @@ static void __init populate_kprobe_blacklist(struct kprobe_blackpoint **start, INIT_LIST_HEAD(&bp->list); list_add_tail(&bp->list, &kprobe_blacklist); } + mutex_unlock(&kprobe_blacklist_mutex); } extern struct kprobe_blackpoint *__start_kprobe_blacklist[]; @@ -2231,6 +2263,7 @@ static const struct file_operations debugfs_kprobes_operations = { /* kprobes/blacklist -- shows which functions can not be probed */ static void *kprobe_blacklist_seq_start(struct seq_file *m, loff_t *pos) { + mutex_lock(&kprobe_blacklist_mutex); return seq_list_start(&kprobe_blacklist, *pos); } @@ -2239,6 +2272,11 @@ static void *kprobe_blacklist_seq_next(struct seq_file *m, void *v, loff_t *pos) return seq_list_next(v, &kprobe_blacklist, pos); } +static void kprobe_blacklist_seq_stop(struct seq_file *m, void *v) +{ + mutex_unlock(&kprobe_blacklist_mutex); +} + static int kprobe_blacklist_seq_show(struct seq_file *m, void *v) { struct kprobe_blackpoint *bp @@ -2252,7 +2290,7 @@ static int kprobe_blacklist_seq_show(struct seq_file *m, void *v) static const struct seq_operations kprobe_blacklist_seq_ops = { .start = kprobe_blacklist_seq_start, .next = kprobe_blacklist_seq_next, - .stop = kprobe_seq_stop, /* Reuse void function */ + .stop = kprobe_blacklist_seq_stop, .show = kprobe_blacklist_seq_show, }; diff --git a/kernel/module.c b/kernel/module.c index dc58274..4cc844c 100644 --- a/kernel/module.c +++ b/kernel/module.c @@ -58,6 +58,7 @@ #include <linux/percpu.h> #include <linux/kmemleak.h> #include <linux/jump_label.h> +#include <linux/kprobes.h> #include <linux/pfn.h> #include <linux/bsearch.h> #include <linux/fips.h> @@ -2796,6 +2797,11 @@ static void find_module_sections(struct module *mod, struct load_info *info) sizeof(*mod->ftrace_callsites), &mod->num_ftrace_callsites); #endif +#ifdef CONFIG_KPROBES + mod->kprobe_blacklist = section_objs(info, "_kprobe_blacklist", + sizeof(*mod->kprobe_blacklist), + &mod->num_kprobe_blacklist); +#endif mod->extable = section_objs(info, "__ex_table", sizeof(*mod->extable), &mod->num_exentries);
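To sketch what this enables (all names below are illustrative, not part of the patch), an out-of-tree module can now keep kprobes off its own handler; the entry is added to the global blacklist on MODULE_STATE_COMING and removed again on MODULE_STATE_GOING by the notifier changes above:

    #include <linux/module.h>
    #include <linux/kprobes.h>

    static int my_pre_handler(struct kprobe *p, struct pt_regs *regs)
    {
        pr_info("probe hit at %p\n", p->addr);
        return 0;
    }
    /* Recorded in this module's _kprobe_blacklist section, so no other
     * kprobe can be placed on the handler itself. */
    NOKPROBE_SYMBOL(my_pre_handler);

    static struct kprobe my_probe = {
        .symbol_name = "do_fork",
        .pre_handler = my_pre_handler,
    };

    static int __init my_init(void)
    {
        return register_kprobe(&my_probe);
    }
    static void __exit my_exit(void)
    {
        unregister_kprobe(&my_probe);
    }
    module_init(my_init);
    module_exit(my_exit);
    MODULE_LICENSE("GPL");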
Masami Hiramatsu
2013-Nov-20 04:22 UTC
[PATCH -tip v3 05/23] kprobes: Use NOKPROBE_SYMBOL() in sample modules
Use NOKPROBE_SYMBOL() to protect handlers from kprobes in sample modules. Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt at hitachi.com> Ananth N Mavinakayanahalli <ananth at in.ibm.com> --- samples/kprobes/jprobe_example.c | 1 + samples/kprobes/kprobe_example.c | 3 +++ samples/kprobes/kretprobe_example.c | 2 ++ 3 files changed, 6 insertions(+) diff --git a/samples/kprobes/jprobe_example.c b/samples/kprobes/jprobe_example.c index b754135..40114ac 100644 --- a/samples/kprobes/jprobe_example.c +++ b/samples/kprobes/jprobe_example.c @@ -35,6 +35,7 @@ static long jdo_fork(unsigned long clone_flags, unsigned long stack_start, jprobe_return(); return 0; } +NOKPROBE_SYMBOL(jdo_fork); static struct jprobe my_jprobe = { .entry = jdo_fork, diff --git a/samples/kprobes/kprobe_example.c b/samples/kprobes/kprobe_example.c index 366db1a..462d90f 100644 --- a/samples/kprobes/kprobe_example.c +++ b/samples/kprobes/kprobe_example.c @@ -46,6 +46,7 @@ static int handler_pre(struct kprobe *p, struct pt_regs *regs) /* A dump_stack() here will give a stack backtrace */ return 0; } +NOKPROBE_SYMBOL(handler_pre); /* kprobe post_handler: called after the probed instruction is executed */ static void handler_post(struct kprobe *p, struct pt_regs *regs, @@ -68,6 +69,7 @@ static void handler_post(struct kprobe *p, struct pt_regs *regs, p->addr, regs->ex1); #endif } +NOKPROBE_SYMBOL(handler_post); /* * fault_handler: this is called if an exception is generated for any @@ -81,6 +83,7 @@ static int handler_fault(struct kprobe *p, struct pt_regs *regs, int trapnr) /* Return 0 because we don't handle the fault. */ return 0; } +NOKPROBE_SYMBOL(handler_fault); static int __init kprobe_init(void) { diff --git a/samples/kprobes/kretprobe_example.c b/samples/kprobes/kretprobe_example.c index 1041b67..d932c52 100644 --- a/samples/kprobes/kretprobe_example.c +++ b/samples/kprobes/kretprobe_example.c @@ -47,6 +47,7 @@ static int entry_handler(struct kretprobe_instance *ri, struct pt_regs *regs) data->entry_stamp = ktime_get(); return 0; } +NOKPROBE_SYMBOL(entry_handler); /* * Return-probe handler: Log the return value and duration. Duration may turn @@ -66,6 +67,7 @@ static int ret_handler(struct kretprobe_instance *ri, struct pt_regs *regs) func_name, retval, (long long)delta); return 0; } +NOKPROBE_SYMBOL(ret_handler); static struct kretprobe my_kretprobe = { .handler = ret_handler,
Masami Hiramatsu
2013-Nov-20 04:22 UTC
[PATCH -tip v3 06/23] kprobes/x86: Allow probe on some kprobe preparation functions
There is no need to prohibit probing on the functions used in preparation phase. Those are safely probed because those are not invoked from breakpoint/fault/debug handlers, there is no chance to cause recursive exceptions. Following functions are now removed from the kprobes blacklist. can_boost can_probe can_optimize is_IF_modifier __copy_instruction copy_optimized_instructions arch_copy_kprobe arch_prepare_kprobe arch_arm_kprobe arch_disarm_kprobe arch_remove_kprobe arch_trampoline_kprobe arch_prepare_kprobe_ftrace arch_prepare_optimized_kprobe arch_check_optimized_kprobe arch_within_optimized_kprobe __arch_remove_optimized_kprobe arch_remove_optimized_kprobe arch_optimize_kprobes arch_unoptimize_kprobe I tested the safety via kprobe-tracer as below; # cd /sys/kernel/debug/tracing # cat above-coverted-symbols-list | while read s; do echo "p $s"; done > kprobe_events (Note: some symbols are not found, those are inlined) # echo 1 > events/kprobes/enable # echo p:foo vfs_symlink >> kprobe_events # echo p:bar vfs_symlink+5 >> kprobe_events # echo p vfs_symlink+5 >> kprobe_events # echo 1 > events/kprobes/foo/enable # ln -sf /tmp/foo /tmp/bar # echo 0 > events/kprobes/foo/enable # echo -:foo >> kprobe_events # head -n 20 trace # echo 0 > events/kprobes/enable # echo > kprobe_events # echo > trace Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt at hitachi.com> Cc: Thomas Gleixner <tglx at linutronix.de> Cc: Ingo Molnar <mingo at redhat.com> Cc: "H. Peter Anvin" <hpa at zytor.com> Cc: Steven Rostedt <rostedt at goodmis.org> Cc: Andrew Morton <akpm at linux-foundation.org> --- arch/x86/kernel/kprobes/core.c | 20 ++++++++++---------- arch/x86/kernel/kprobes/ftrace.c | 2 +- arch/x86/kernel/kprobes/opt.c | 24 ++++++++++++------------ 3 files changed, 23 insertions(+), 23 deletions(-) diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c index 349112e..c2f7b1f 100644 --- a/arch/x86/kernel/kprobes/core.c +++ b/arch/x86/kernel/kprobes/core.c @@ -159,7 +159,7 @@ static kprobe_opcode_t *__kprobes skip_prefixes(kprobe_opcode_t *insn) * Returns non-zero if opcode is boostable. * RIP relative instructions are adjusted at copying time in 64 bits mode */ -int __kprobes can_boost(kprobe_opcode_t *opcodes) +int can_boost(kprobe_opcode_t *opcodes) { kprobe_opcode_t opcode; kprobe_opcode_t *orig_opcodes = opcodes; @@ -260,7 +260,7 @@ unsigned long recover_probed_instruction(kprobe_opcode_t *buf, unsigned long add } /* Check if paddr is at an instruction boundary */ -static int __kprobes can_probe(unsigned long paddr) +static int can_probe(unsigned long paddr) { unsigned long addr, __addr, offset = 0; struct insn insn; @@ -299,7 +299,7 @@ static int __kprobes can_probe(unsigned long paddr) /* * Returns non-zero if opcode modifies the interrupt flag. */ -static int __kprobes is_IF_modifier(kprobe_opcode_t *insn) +static int is_IF_modifier(kprobe_opcode_t *insn) { /* Skip prefixes */ insn = skip_prefixes(insn); @@ -322,7 +322,7 @@ static int __kprobes is_IF_modifier(kprobe_opcode_t *insn) * If not, return null. * Only applicable to 64-bit x86. 
*/ -int __kprobes __copy_instruction(u8 *dest, u8 *src) +int __copy_instruction(u8 *dest, u8 *src) { struct insn insn; kprobe_opcode_t buf[MAX_INSN_SIZE]; @@ -365,7 +365,7 @@ int __kprobes __copy_instruction(u8 *dest, u8 *src) return insn.length; } -static int __kprobes arch_copy_kprobe(struct kprobe *p) +static int arch_copy_kprobe(struct kprobe *p) { int ret; @@ -392,7 +392,7 @@ static int __kprobes arch_copy_kprobe(struct kprobe *p) return 0; } -int __kprobes arch_prepare_kprobe(struct kprobe *p) +int arch_prepare_kprobe(struct kprobe *p) { if (alternatives_text_reserved(p->addr, p->addr)) return -EINVAL; @@ -407,17 +407,17 @@ int __kprobes arch_prepare_kprobe(struct kprobe *p) return arch_copy_kprobe(p); } -void __kprobes arch_arm_kprobe(struct kprobe *p) +void arch_arm_kprobe(struct kprobe *p) { text_poke(p->addr, ((unsigned char []){BREAKPOINT_INSTRUCTION}), 1); } -void __kprobes arch_disarm_kprobe(struct kprobe *p) +void arch_disarm_kprobe(struct kprobe *p) { text_poke(p->addr, &p->opcode, 1); } -void __kprobes arch_remove_kprobe(struct kprobe *p) +void arch_remove_kprobe(struct kprobe *p) { if (p->ainsn.insn) { free_insn_slot(p->ainsn.insn, (p->ainsn.boostable == 1)); @@ -1079,7 +1079,7 @@ int __init arch_init_kprobes(void) return 0; } -int __kprobes arch_trampoline_kprobe(struct kprobe *p) +int arch_trampoline_kprobe(struct kprobe *p) { return 0; } diff --git a/arch/x86/kernel/kprobes/ftrace.c b/arch/x86/kernel/kprobes/ftrace.c index 23ef5c5..dcaa131 100644 --- a/arch/x86/kernel/kprobes/ftrace.c +++ b/arch/x86/kernel/kprobes/ftrace.c @@ -85,7 +85,7 @@ end: local_irq_restore(flags); } -int __kprobes arch_prepare_kprobe_ftrace(struct kprobe *p) +int arch_prepare_kprobe_ftrace(struct kprobe *p) { p->ainsn.insn = NULL; p->ainsn.boostable = -1; diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c index 898160b..fba7fb0 100644 --- a/arch/x86/kernel/kprobes/opt.c +++ b/arch/x86/kernel/kprobes/opt.c @@ -77,7 +77,7 @@ found: } /* Insert a move instruction which sets a pointer to eax/rdi (1st arg). */ -static void __kprobes synthesize_set_arg1(kprobe_opcode_t *addr, unsigned long val) +static void synthesize_set_arg1(kprobe_opcode_t *addr, unsigned long val) { #ifdef CONFIG_X86_64 *addr++ = 0x48; @@ -169,7 +169,7 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op, struct pt_ local_irq_restore(flags); } -static int __kprobes copy_optimized_instructions(u8 *dest, u8 *src) +static int copy_optimized_instructions(u8 *dest, u8 *src) { int len = 0, ret; @@ -189,7 +189,7 @@ static int __kprobes copy_optimized_instructions(u8 *dest, u8 *src) } /* Check whether insn is indirect jump */ -static int __kprobes insn_is_indirect_jump(struct insn *insn) +static int insn_is_indirect_jump(struct insn *insn) { return ((insn->opcode.bytes[0] == 0xff && (X86_MODRM_REG(insn->modrm.value) & 6) == 4) || /* Jump */ @@ -224,7 +224,7 @@ static int insn_jump_into_range(struct insn *insn, unsigned long start, int len) } /* Decode whole function to ensure any instructions don't jump into target */ -static int __kprobes can_optimize(unsigned long paddr) +static int can_optimize(unsigned long paddr) { unsigned long addr, size = 0, offset = 0; struct insn insn; @@ -275,7 +275,7 @@ static int __kprobes can_optimize(unsigned long paddr) } /* Check optimized_kprobe can actually be optimized. 
*/ -int __kprobes arch_check_optimized_kprobe(struct optimized_kprobe *op) +int arch_check_optimized_kprobe(struct optimized_kprobe *op) { int i; struct kprobe *p; @@ -290,15 +290,15 @@ int __kprobes arch_check_optimized_kprobe(struct optimized_kprobe *op) } /* Check the addr is within the optimized instructions. */ -int __kprobes -arch_within_optimized_kprobe(struct optimized_kprobe *op, unsigned long addr) +int arch_within_optimized_kprobe(struct optimized_kprobe *op, + unsigned long addr) { return ((unsigned long)op->kp.addr <= addr && (unsigned long)op->kp.addr + op->optinsn.size > addr); } /* Free optimized instruction slot */ -static __kprobes +static void __arch_remove_optimized_kprobe(struct optimized_kprobe *op, int dirty) { if (op->optinsn.insn) { @@ -308,7 +308,7 @@ void __arch_remove_optimized_kprobe(struct optimized_kprobe *op, int dirty) } } -void __kprobes arch_remove_optimized_kprobe(struct optimized_kprobe *op) +void arch_remove_optimized_kprobe(struct optimized_kprobe *op) { __arch_remove_optimized_kprobe(op, 1); } @@ -318,7 +318,7 @@ void __kprobes arch_remove_optimized_kprobe(struct optimized_kprobe *op) * Target instructions MUST be relocatable (checked inside) * This is called when new aggr(opt)probe is allocated or reused. */ -int __kprobes arch_prepare_optimized_kprobe(struct optimized_kprobe *op) +int arch_prepare_optimized_kprobe(struct optimized_kprobe *op) { u8 *buf; int ret; @@ -372,7 +372,7 @@ int __kprobes arch_prepare_optimized_kprobe(struct optimized_kprobe *op) * Replace breakpoints (int3) with relative jumps. * Caller must call with locking kprobe_mutex and text_mutex. */ -void __kprobes arch_optimize_kprobes(struct list_head *oplist) +void arch_optimize_kprobes(struct list_head *oplist) { struct optimized_kprobe *op, *tmp; u8 insn_buf[RELATIVEJUMP_SIZE]; @@ -398,7 +398,7 @@ void __kprobes arch_optimize_kprobes(struct list_head *oplist) } /* Replace a relative jump with a breakpoint (int3). */ -void __kprobes arch_unoptimize_kprobe(struct optimized_kprobe *op) +void arch_unoptimize_kprobe(struct optimized_kprobe *op) { u8 insn_buf[RELATIVEJUMP_SIZE];
Masami Hiramatsu
2013-Nov-20 04:22 UTC
[PATCH -tip v3 07/23] kprobes/x86: Use NOKPROBE_SYMBOL instead of __kprobes
Use NOKPROBE_SYMBOL macro for protecting functions from kprobes instead of __kprobes annotation in x86 kprobes code. This applies __always_inline annotation for some cases, because NOKPROBE_SYMBOL() will inhibit inlining by referring the symbol address. Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt at hitachi.com> Cc: Thomas Gleixner <tglx at linutronix.de> Cc: Ingo Molnar <mingo at redhat.com> Cc: "H. Peter Anvin" <hpa at zytor.com> Cc: Steven Rostedt <rostedt at goodmis.org> Cc: Andrew Morton <akpm at linux-foundation.org> --- arch/x86/kernel/kprobes/core.c | 77 ++++++++++++++++++++++++-------------- arch/x86/kernel/kprobes/ftrace.c | 15 ++++--- arch/x86/kernel/kprobes/opt.c | 8 ++-- 3 files changed, 63 insertions(+), 37 deletions(-) diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c index c2f7b1f..54ada0b 100644 --- a/arch/x86/kernel/kprobes/core.c +++ b/arch/x86/kernel/kprobes/core.c @@ -112,7 +112,8 @@ struct kretprobe_blackpoint kretprobe_blacklist[] = { const int kretprobe_blacklist_size = ARRAY_SIZE(kretprobe_blacklist); -static void __kprobes __synthesize_relative_insn(void *from, void *to, u8 op) +static __always_inline +void __synthesize_relative_insn(void *from, void *to, u8 op) { struct __arch_relative_insn { u8 op; @@ -125,21 +126,23 @@ static void __kprobes __synthesize_relative_insn(void *from, void *to, u8 op) } /* Insert a jump instruction at address 'from', which jumps to address 'to'.*/ -void __kprobes synthesize_reljump(void *from, void *to) +void synthesize_reljump(void *from, void *to) { __synthesize_relative_insn(from, to, RELATIVEJUMP_OPCODE); } +NOKPROBE_SYMBOL(synthesize_reljump); /* Insert a call instruction at address 'from', which calls address 'to'.*/ -void __kprobes synthesize_relcall(void *from, void *to) +void synthesize_relcall(void *from, void *to) { __synthesize_relative_insn(from, to, RELATIVECALL_OPCODE); } +NOKPROBE_SYMBOL(synthesize_relcall); /* * Skip the prefixes of the instruction. */ -static kprobe_opcode_t *__kprobes skip_prefixes(kprobe_opcode_t *insn) +static kprobe_opcode_t *skip_prefixes(kprobe_opcode_t *insn) { insn_attr_t attr; @@ -154,6 +157,7 @@ static kprobe_opcode_t *__kprobes skip_prefixes(kprobe_opcode_t *insn) #endif return insn; } +NOKPROBE_SYMBOL(skip_prefixes); /* * Returns non-zero if opcode is boostable. 
@@ -425,7 +429,8 @@ void arch_remove_kprobe(struct kprobe *p) } } -static void __kprobes save_previous_kprobe(struct kprobe_ctlblk *kcb) +static __always_inline +void save_previous_kprobe(struct kprobe_ctlblk *kcb) { kcb->prev_kprobe.kp = kprobe_running(); kcb->prev_kprobe.status = kcb->kprobe_status; @@ -433,7 +438,8 @@ static void __kprobes save_previous_kprobe(struct kprobe_ctlblk *kcb) kcb->prev_kprobe.saved_flags = kcb->kprobe_saved_flags; } -static void __kprobes restore_previous_kprobe(struct kprobe_ctlblk *kcb) +static __always_inline +void restore_previous_kprobe(struct kprobe_ctlblk *kcb) { __this_cpu_write(current_kprobe, kcb->prev_kprobe.kp); kcb->kprobe_status = kcb->prev_kprobe.status; @@ -441,8 +447,9 @@ static void __kprobes restore_previous_kprobe(struct kprobe_ctlblk *kcb) kcb->kprobe_saved_flags = kcb->prev_kprobe.saved_flags; } -static void __kprobes set_current_kprobe(struct kprobe *p, struct pt_regs *regs, - struct kprobe_ctlblk *kcb) +static __always_inline +void set_current_kprobe(struct kprobe *p, struct pt_regs *regs, + struct kprobe_ctlblk *kcb) { __this_cpu_write(current_kprobe, p); kcb->kprobe_saved_flags = kcb->kprobe_old_flags @@ -451,7 +458,7 @@ static void __kprobes set_current_kprobe(struct kprobe *p, struct pt_regs *regs, kcb->kprobe_saved_flags &= ~X86_EFLAGS_IF; } -static void __kprobes clear_btf(void) +static __always_inline void clear_btf(void) { if (test_thread_flag(TIF_BLOCKSTEP)) { unsigned long debugctl = get_debugctlmsr(); @@ -461,7 +468,7 @@ static void __kprobes clear_btf(void) } } -static void __kprobes restore_btf(void) +static __always_inline void restore_btf(void) { if (test_thread_flag(TIF_BLOCKSTEP)) { unsigned long debugctl = get_debugctlmsr(); @@ -471,8 +478,7 @@ static void __kprobes restore_btf(void) } } -void __kprobes -arch_prepare_kretprobe(struct kretprobe_instance *ri, struct pt_regs *regs) +void arch_prepare_kretprobe(struct kretprobe_instance *ri, struct pt_regs *regs) { unsigned long *sara = stack_addr(regs); @@ -481,9 +487,10 @@ arch_prepare_kretprobe(struct kretprobe_instance *ri, struct pt_regs *regs) /* Replace the return addr with trampoline addr */ *sara = (unsigned long) &kretprobe_trampoline; } +NOKPROBE_SYMBOL(arch_prepare_kretprobe); -static void __kprobes -setup_singlestep(struct kprobe *p, struct pt_regs *regs, struct kprobe_ctlblk *kcb, int reenter) +static void setup_singlestep(struct kprobe *p, struct pt_regs *regs, + struct kprobe_ctlblk *kcb, int reenter) { if (setup_detour_execution(p, regs, reenter)) return; @@ -519,14 +526,15 @@ setup_singlestep(struct kprobe *p, struct pt_regs *regs, struct kprobe_ctlblk *k else regs->ip = (unsigned long)p->ainsn.insn; } +NOKPROBE_SYMBOL(setup_singlestep); /* * We have reentered the kprobe_handler(), since another probe was hit while * within the handler. We save the original kprobes variables and just single * step on the instruction of the new probe without calling any user handlers. */ -static int __kprobes -reenter_kprobe(struct kprobe *p, struct pt_regs *regs, struct kprobe_ctlblk *kcb) +static int reenter_kprobe(struct kprobe *p, struct pt_regs *regs, + struct kprobe_ctlblk *kcb) { switch (kcb->kprobe_status) { case KPROBE_HIT_SSDONE: @@ -553,12 +561,13 @@ reenter_kprobe(struct kprobe *p, struct pt_regs *regs, struct kprobe_ctlblk *kcb return 1; } +NOKPROBE_SYMBOL(reenter_kprobe); /* * Interrupts are disabled on entry as trap3 is an interrupt gate and they * remain disabled throughout this function. 
*/ -static int __kprobes kprobe_handler(struct pt_regs *regs) +static int kprobe_handler(struct pt_regs *regs) { kprobe_opcode_t *addr; struct kprobe *p; @@ -621,12 +630,13 @@ static int __kprobes kprobe_handler(struct pt_regs *regs) preempt_enable_no_resched(); return 0; } +NOKPROBE_SYMBOL(kprobe_handler); /* * When a retprobed function returns, this code saves registers and * calls trampoline_handler() runs, which calls the kretprobe's handler. */ -static void __used __kprobes kretprobe_trampoline_holder(void) +static void __used kretprobe_trampoline_holder(void) { asm volatile ( ".global kretprobe_trampoline\n" @@ -657,11 +667,13 @@ static void __used __kprobes kretprobe_trampoline_holder(void) #endif " ret\n"); } +NOKPROBE_SYMBOL(kretprobe_trampoline_holder); +NOKPROBE_SYMBOL(kretprobe_trampoline); /* * Called from kretprobe_trampoline */ -__visible __used __kprobes void *trampoline_handler(struct pt_regs *regs) +__visible __used void *trampoline_handler(struct pt_regs *regs) { struct kretprobe_instance *ri = NULL; struct hlist_head *head, empty_rp; @@ -747,6 +759,7 @@ __visible __used __kprobes void *trampoline_handler(struct pt_regs *regs) } return (void *)orig_ret_address; } +NOKPROBE_SYMBOL(trampoline_handler); /* * Called after single-stepping. p->addr is the address of the @@ -775,8 +788,8 @@ __visible __used __kprobes void *trampoline_handler(struct pt_regs *regs) * jump instruction after the copied instruction, that jumps to the next * instruction after the probepoint. */ -static void __kprobes -resume_execution(struct kprobe *p, struct pt_regs *regs, struct kprobe_ctlblk *kcb) +static void resume_execution(struct kprobe *p, struct pt_regs *regs, + struct kprobe_ctlblk *kcb) { unsigned long *tos = stack_addr(regs); unsigned long copy_ip = (unsigned long)p->ainsn.insn; @@ -851,12 +864,13 @@ resume_execution(struct kprobe *p, struct pt_regs *regs, struct kprobe_ctlblk *k no_change: restore_btf(); } +NOKPROBE_SYMBOL(resume_execution); /* * Interrupts are disabled on entry as trap1 is an interrupt gate and they * remain disabled throughout this function. */ -static int __kprobes post_kprobe_handler(struct pt_regs *regs) +static int post_kprobe_handler(struct pt_regs *regs) { struct kprobe *cur = kprobe_running(); struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); @@ -891,8 +905,9 @@ out: return 1; } +NOKPROBE_SYMBOL(post_kprobe_handler); -int __kprobes kprobe_fault_handler(struct pt_regs *regs, int trapnr) +int kprobe_fault_handler(struct pt_regs *regs, int trapnr) { struct kprobe *cur = kprobe_running(); struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); @@ -951,12 +966,13 @@ int __kprobes kprobe_fault_handler(struct pt_regs *regs, int trapnr) } return 0; } +NOKPROBE_SYMBOL(kprobe_fault_handler); /* * Wrapper routine for handling exceptions. 
*/ -int __kprobes -kprobe_exceptions_notify(struct notifier_block *self, unsigned long val, void *data) +int kprobe_exceptions_notify(struct notifier_block *self, unsigned long val, + void *data) { struct die_args *args = data; int ret = NOTIFY_DONE; @@ -994,8 +1010,9 @@ kprobe_exceptions_notify(struct notifier_block *self, unsigned long val, void *d } return ret; } +NOKPROBE_SYMBOL(kprobe_exceptions_notify); -int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs) +int setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs) { struct jprobe *jp = container_of(p, struct jprobe, kp); unsigned long addr; @@ -1019,8 +1036,9 @@ int __kprobes setjmp_pre_handler(struct kprobe *p, struct pt_regs *regs) regs->ip = (unsigned long)(jp->entry); return 1; } +NOKPROBE_SYMBOL(setjmp_pre_handler); -void __kprobes jprobe_return(void) +void jprobe_return(void) { struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); @@ -1036,8 +1054,10 @@ void __kprobes jprobe_return(void) " nop \n"::"b" (kcb->jprobe_saved_sp):"memory"); } +NOKPROBE_SYMBOL(jprobe_return); +NOKPROBE_SYMBOL(jprobe_return_end); -int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs) +int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs) { struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); u8 *addr = (u8 *) (regs->ip - 1); @@ -1065,6 +1085,7 @@ int __kprobes longjmp_break_handler(struct kprobe *p, struct pt_regs *regs) } return 0; } +NOKPROBE_SYMBOL(longjmp_break_handler); bool arch_within_kprobe_blacklist(unsigned long addr) { diff --git a/arch/x86/kernel/kprobes/ftrace.c b/arch/x86/kernel/kprobes/ftrace.c index dcaa131..068ea83a 100644 --- a/arch/x86/kernel/kprobes/ftrace.c +++ b/arch/x86/kernel/kprobes/ftrace.c @@ -25,8 +25,9 @@ #include "common.h" -static int __skip_singlestep(struct kprobe *p, struct pt_regs *regs, - struct kprobe_ctlblk *kcb) +static __always_inline +int __skip_singlestep(struct kprobe *p, struct pt_regs *regs, + struct kprobe_ctlblk *kcb) { /* * Emulate singlestep (and also recover regs->ip) @@ -41,18 +42,19 @@ static int __skip_singlestep(struct kprobe *p, struct pt_regs *regs, return 1; } -int __kprobes skip_singlestep(struct kprobe *p, struct pt_regs *regs, - struct kprobe_ctlblk *kcb) +int skip_singlestep(struct kprobe *p, struct pt_regs *regs, + struct kprobe_ctlblk *kcb) { if (kprobe_ftrace(p)) return __skip_singlestep(p, regs, kcb); else return 0; } +NOKPROBE_SYMBOL(skip_singlestep); /* Ftrace callback handler for kprobes */ -void __kprobes kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip, - struct ftrace_ops *ops, struct pt_regs *regs) +void kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip, + struct ftrace_ops *ops, struct pt_regs *regs) { struct kprobe *p; struct kprobe_ctlblk *kcb; @@ -84,6 +86,7 @@ void __kprobes kprobe_ftrace_handler(unsigned long ip, unsigned long parent_ip, end: local_irq_restore(flags); } +NOKPROBE_SYMBOL(kprobe_ftrace_handler); int arch_prepare_kprobe_ftrace(struct kprobe *p) { diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c index fba7fb0..f304773 100644 --- a/arch/x86/kernel/kprobes/opt.c +++ b/arch/x86/kernel/kprobes/opt.c @@ -138,7 +138,8 @@ asm ( #define INT3_SIZE sizeof(kprobe_opcode_t) /* Optimized kprobe call back function: called from optinsn */ -static void __kprobes optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs) +static void +optimized_callback(struct optimized_kprobe *op, struct pt_regs *regs) { struct kprobe_ctlblk *kcb = get_kprobe_ctlblk(); unsigned 
long flags; @@ -168,6 +169,7 @@ static void __kprobes optimized_callback(struct optimized_kprobe *op, struct pt_ } local_irq_restore(flags); } +NOKPROBE_SYMBOL(optimized_callback); static int copy_optimized_instructions(u8 *dest, u8 *src) { @@ -424,8 +426,7 @@ extern void arch_unoptimize_kprobes(struct list_head *oplist, } } -int __kprobes -setup_detour_execution(struct kprobe *p, struct pt_regs *regs, int reenter) +int setup_detour_execution(struct kprobe *p, struct pt_regs *regs, int reenter) { struct optimized_kprobe *op; @@ -441,3 +442,4 @@ setup_detour_execution(struct kprobe *p, struct pt_regs *regs, int reenter) } return 0; } +NOKPROBE_SYMBOL(setup_detour_execution);
Masami Hiramatsu
2013-Nov-20 04:22 UTC
[PATCH -tip v3 08/23] kprobes: Allow probe on some kprobe functions
There is no need to prohibit probing on the functions used for preparation, registration, optimization, control, etc. They can be probed safely because they are not invoked from the breakpoint/fault/debug handlers, so there is no chance of causing recursive exceptions. The following functions are now removed from the kprobes blacklist. add_new_kprobe aggr_kprobe_disabled alloc_aggr_kprobe alloc_aggr_kprobe arm_all_kprobes __arm_kprobe arm_kprobe arm_kprobe_ftrace check_kprobe_address_safe collect_garbage_slots collect_garbage_slots collect_one_slot debugfs_kprobe_init __disable_kprobe disable_kprobe disarm_all_kprobes __disarm_kprobe disarm_kprobe disarm_kprobe_ftrace do_free_cleaned_kprobes do_optimize_kprobes do_unoptimize_kprobes enable_kprobe force_unoptimize_kprobe free_aggr_kprobe free_aggr_kprobe __free_insn_slot __get_insn_slot get_optimized_kprobe __get_valid_kprobe init_aggr_kprobe init_aggr_kprobe in_nokprobe_functions kick_kprobe_optimizer kill_kprobe kill_optimized_kprobe kprobe_addr kprobe_optimizer kprobe_queued kprobe_seq_next kprobe_seq_start kprobe_seq_stop kprobes_module_callback kprobes_open optimize_all_kprobes optimize_kprobe prepare_kprobe prepare_optimized_kprobe register_aggr_kprobe register_jprobe register_jprobes register_kprobe register_kprobes register_kretprobe register_kretprobe register_kretprobes register_kretprobes report_probe show_kprobe_addr try_to_optimize_kprobe unoptimize_all_kprobes unoptimize_kprobe unregister_jprobe unregister_jprobes unregister_kprobe __unregister_kprobe_bottom unregister_kprobes __unregister_kprobe_top unregister_kretprobe unregister_kretprobe unregister_kretprobes unregister_kretprobes wait_for_kprobe_optimizer Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt at hitachi.com> Cc: Ananth N Mavinakayanahalli <ananth at in.ibm.com> Cc: "David S. Miller" <davem at davemloft.net> --- kernel/kprobes.c | 153 +++++++++++++++++++++++++++--------------------------- 1 file changed, 76 insertions(+), 77 deletions(-) diff --git a/kernel/kprobes.c b/kernel/kprobes.c index eb9b938..fa68d83 100644 --- a/kernel/kprobes.c +++ b/kernel/kprobes.c @@ -139,13 +139,13 @@ struct kprobe_insn_cache kprobe_insn_slots = { .insn_size = MAX_INSN_SIZE, .nr_garbage = 0, }; -static int __kprobes collect_garbage_slots(struct kprobe_insn_cache *c); +static int collect_garbage_slots(struct kprobe_insn_cache *c); /** * __get_insn_slot() - Find a slot on an executable page for an instruction. * We allocate an executable page if there's no room on existing ones. */ -kprobe_opcode_t __kprobes *__get_insn_slot(struct kprobe_insn_cache *c) +kprobe_opcode_t *__get_insn_slot(struct kprobe_insn_cache *c) { struct kprobe_insn_page *kip; kprobe_opcode_t *slot = NULL; @@ -202,7 +202,7 @@ out: } /* Return 1 if all garbages are collected, otherwise 0.
*/ -static int __kprobes collect_one_slot(struct kprobe_insn_page *kip, int idx) +static int collect_one_slot(struct kprobe_insn_page *kip, int idx) { kip->slot_used[idx] = SLOT_CLEAN; kip->nused--; @@ -223,7 +223,7 @@ static int __kprobes collect_one_slot(struct kprobe_insn_page *kip, int idx) return 0; } -static int __kprobes collect_garbage_slots(struct kprobe_insn_cache *c) +static int collect_garbage_slots(struct kprobe_insn_cache *c) { struct kprobe_insn_page *kip, *next; @@ -245,8 +245,8 @@ static int __kprobes collect_garbage_slots(struct kprobe_insn_cache *c) return 0; } -void __kprobes __free_insn_slot(struct kprobe_insn_cache *c, - kprobe_opcode_t *slot, int dirty) +void __free_insn_slot(struct kprobe_insn_cache *c, + kprobe_opcode_t *slot, int dirty) { struct kprobe_insn_page *kip; @@ -362,7 +362,7 @@ void __kprobes opt_pre_handler(struct kprobe *p, struct pt_regs *regs) } /* Free optimized instructions and optimized_kprobe */ -static __kprobes void free_aggr_kprobe(struct kprobe *p) +static void free_aggr_kprobe(struct kprobe *p) { struct optimized_kprobe *op; @@ -400,7 +400,7 @@ static inline int kprobe_disarmed(struct kprobe *p) } /* Return true(!0) if the probe is queued on (un)optimizing lists */ -static int __kprobes kprobe_queued(struct kprobe *p) +static int kprobe_queued(struct kprobe *p) { struct optimized_kprobe *op; @@ -416,7 +416,7 @@ static int __kprobes kprobe_queued(struct kprobe *p) * Return an optimized kprobe whose optimizing code replaces * instructions including addr (exclude breakpoint). */ -static struct kprobe *__kprobes get_optimized_kprobe(unsigned long addr) +static struct kprobe *get_optimized_kprobe(unsigned long addr) { int i; struct kprobe *p = NULL; @@ -448,7 +448,7 @@ static DECLARE_DELAYED_WORK(optimizing_work, kprobe_optimizer); * Optimize (replace a breakpoint with a jump) kprobes listed on * optimizing_list. */ -static __kprobes void do_optimize_kprobes(void) +static void do_optimize_kprobes(void) { /* Optimization never be done when disarmed */ if (kprobes_all_disarmed || !kprobes_allow_optimization || @@ -476,7 +476,7 @@ static __kprobes void do_optimize_kprobes(void) * Unoptimize (replace a jump with a breakpoint and remove the breakpoint * if need) kprobes listed on unoptimizing_list. 
*/ -static __kprobes void do_unoptimize_kprobes(void) +static void do_unoptimize_kprobes(void) { struct optimized_kprobe *op, *tmp; @@ -508,7 +508,7 @@ static __kprobes void do_unoptimize_kprobes(void) } /* Reclaim all kprobes on the free_list */ -static __kprobes void do_free_cleaned_kprobes(void) +static void do_free_cleaned_kprobes(void) { struct optimized_kprobe *op, *tmp; @@ -520,13 +520,13 @@ static __kprobes void do_free_cleaned_kprobes(void) } /* Start optimizer after OPTIMIZE_DELAY passed */ -static __kprobes void kick_kprobe_optimizer(void) +static void kick_kprobe_optimizer(void) { schedule_delayed_work(&optimizing_work, OPTIMIZE_DELAY); } /* Kprobe jump optimizer */ -static __kprobes void kprobe_optimizer(struct work_struct *work) +static void kprobe_optimizer(struct work_struct *work) { mutex_lock(&kprobe_mutex); /* Lock modules while optimizing kprobes */ @@ -562,7 +562,7 @@ static __kprobes void kprobe_optimizer(struct work_struct *work) } /* Wait for completing optimization and unoptimization */ -static __kprobes void wait_for_kprobe_optimizer(void) +static void wait_for_kprobe_optimizer(void) { mutex_lock(&kprobe_mutex); @@ -581,7 +581,7 @@ static __kprobes void wait_for_kprobe_optimizer(void) } /* Optimize kprobe if p is ready to be optimized */ -static __kprobes void optimize_kprobe(struct kprobe *p) +static void optimize_kprobe(struct kprobe *p) { struct optimized_kprobe *op; @@ -615,7 +615,7 @@ static __kprobes void optimize_kprobe(struct kprobe *p) } /* Short cut to direct unoptimizing */ -static __kprobes void force_unoptimize_kprobe(struct optimized_kprobe *op) +static void force_unoptimize_kprobe(struct optimized_kprobe *op) { get_online_cpus(); arch_unoptimize_kprobe(op); @@ -625,7 +625,7 @@ static __kprobes void force_unoptimize_kprobe(struct optimized_kprobe *op) } /* Unoptimize a kprobe if p is optimized */ -static __kprobes void unoptimize_kprobe(struct kprobe *p, bool force) +static void unoptimize_kprobe(struct kprobe *p, bool force) { struct optimized_kprobe *op; @@ -685,7 +685,7 @@ static void reuse_unused_kprobe(struct kprobe *ap) } /* Remove optimized instructions */ -static void __kprobes kill_optimized_kprobe(struct kprobe *p) +static void kill_optimized_kprobe(struct kprobe *p) { struct optimized_kprobe *op; @@ -711,7 +711,7 @@ static void __kprobes kill_optimized_kprobe(struct kprobe *p) } /* Try to prepare optimized instructions */ -static __kprobes void prepare_optimized_kprobe(struct kprobe *p) +static void prepare_optimized_kprobe(struct kprobe *p) { struct optimized_kprobe *op; @@ -720,7 +720,7 @@ static __kprobes void prepare_optimized_kprobe(struct kprobe *p) } /* Allocate new optimized_kprobe and try to prepare optimized instructions */ -static __kprobes struct kprobe *alloc_aggr_kprobe(struct kprobe *p) +static struct kprobe *alloc_aggr_kprobe(struct kprobe *p) { struct optimized_kprobe *op; @@ -735,13 +735,13 @@ static __kprobes struct kprobe *alloc_aggr_kprobe(struct kprobe *p) return &op->kp; } -static void __kprobes init_aggr_kprobe(struct kprobe *ap, struct kprobe *p); +static void init_aggr_kprobe(struct kprobe *ap, struct kprobe *p); /* * Prepare an optimized_kprobe and optimize it * NOTE: p must be a normal registered kprobe */ -static __kprobes void try_to_optimize_kprobe(struct kprobe *p) +static void try_to_optimize_kprobe(struct kprobe *p) { struct kprobe *ap; struct optimized_kprobe *op; @@ -775,7 +775,7 @@ out: } #ifdef CONFIG_SYSCTL -static void __kprobes optimize_all_kprobes(void) +static void optimize_all_kprobes(void) { 
struct hlist_head *head; struct kprobe *p; @@ -798,7 +798,7 @@ out: mutex_unlock(&kprobe_mutex); } -static void __kprobes unoptimize_all_kprobes(void) +static void unoptimize_all_kprobes(void) { struct hlist_head *head; struct kprobe *p; @@ -849,7 +849,7 @@ int proc_kprobes_optimization_handler(struct ctl_table *table, int write, #endif /* CONFIG_SYSCTL */ /* Put a breakpoint for a probe. Must be called with text_mutex locked */ -static void __kprobes __arm_kprobe(struct kprobe *p) +static void __arm_kprobe(struct kprobe *p) { struct kprobe *_p; @@ -864,7 +864,7 @@ static void __kprobes __arm_kprobe(struct kprobe *p) } /* Remove the breakpoint of a probe. Must be called with text_mutex locked */ -static void __kprobes __disarm_kprobe(struct kprobe *p, bool reopt) +static void __disarm_kprobe(struct kprobe *p, bool reopt) { struct kprobe *_p; @@ -899,13 +899,13 @@ static void reuse_unused_kprobe(struct kprobe *ap) BUG_ON(kprobe_unused(ap)); } -static __kprobes void free_aggr_kprobe(struct kprobe *p) +static void free_aggr_kprobe(struct kprobe *p) { arch_remove_kprobe(p); kfree(p); } -static __kprobes struct kprobe *alloc_aggr_kprobe(struct kprobe *p) +static struct kprobe *alloc_aggr_kprobe(struct kprobe *p) { return kzalloc(sizeof(struct kprobe), GFP_KERNEL); } @@ -919,7 +919,7 @@ static struct ftrace_ops kprobe_ftrace_ops __read_mostly = { static int kprobe_ftrace_enabled; /* Must ensure p->addr is really on ftrace */ -static int __kprobes prepare_kprobe(struct kprobe *p) +static int prepare_kprobe(struct kprobe *p) { if (!kprobe_ftrace(p)) return arch_prepare_kprobe(p); @@ -928,7 +928,7 @@ static int __kprobes prepare_kprobe(struct kprobe *p) } /* Caller must lock kprobe_mutex */ -static void __kprobes arm_kprobe_ftrace(struct kprobe *p) +static void arm_kprobe_ftrace(struct kprobe *p) { int ret; @@ -943,7 +943,7 @@ static void __kprobes arm_kprobe_ftrace(struct kprobe *p) } /* Caller must lock kprobe_mutex */ -static void __kprobes disarm_kprobe_ftrace(struct kprobe *p) +static void disarm_kprobe_ftrace(struct kprobe *p) { int ret; @@ -963,7 +963,7 @@ static void __kprobes disarm_kprobe_ftrace(struct kprobe *p) #endif /* Arm a kprobe with text_mutex */ -static void __kprobes arm_kprobe(struct kprobe *kp) +static void arm_kprobe(struct kprobe *kp) { if (unlikely(kprobe_ftrace(kp))) { arm_kprobe_ftrace(kp); @@ -980,7 +980,7 @@ static void __kprobes arm_kprobe(struct kprobe *kp) } /* Disarm a kprobe with text_mutex */ -static void __kprobes disarm_kprobe(struct kprobe *kp, bool reopt) +static void disarm_kprobe(struct kprobe *kp, bool reopt) { if (unlikely(kprobe_ftrace(kp))) { disarm_kprobe_ftrace(kp); @@ -1190,7 +1190,7 @@ static void __kprobes cleanup_rp_inst(struct kretprobe *rp) * Add the new probe to ap->list. Fail if this is the * second jprobe at the address - two jprobes can't coexist */ -static int __kprobes add_new_kprobe(struct kprobe *ap, struct kprobe *p) +static int add_new_kprobe(struct kprobe *ap, struct kprobe *p) { BUG_ON(kprobe_gone(ap) || kprobe_gone(p)); @@ -1214,7 +1214,7 @@ static int __kprobes add_new_kprobe(struct kprobe *ap, struct kprobe *p) * Fill in the required fields of the "manager kprobe". 
Replace the * earlier kprobe in the hlist with the manager kprobe */ -static void __kprobes init_aggr_kprobe(struct kprobe *ap, struct kprobe *p) +static void init_aggr_kprobe(struct kprobe *ap, struct kprobe *p) { /* Copy p's insn slot to ap */ copy_kprobe(p, ap); @@ -1240,8 +1240,7 @@ static void __kprobes init_aggr_kprobe(struct kprobe *ap, struct kprobe *p) * This is the second or subsequent kprobe at the address - handle * the intricacies */ -static int __kprobes register_aggr_kprobe(struct kprobe *orig_p, - struct kprobe *p) +static int register_aggr_kprobe(struct kprobe *orig_p, struct kprobe *p) { int ret = 0; struct kprobe *ap = orig_p; @@ -1319,7 +1318,7 @@ bool __weak arch_within_kprobe_blacklist(unsigned long addr) addr < (unsigned long)__kprobes_text_end); } -static bool __kprobes within_kprobe_blacklist(unsigned long addr) +static bool within_kprobe_blacklist(unsigned long addr) { struct kprobe_blackpoint *bp; @@ -1344,7 +1343,7 @@ static bool __kprobes within_kprobe_blacklist(unsigned long addr) * This returns encoded errors if it fails to look up symbol or invalid * combination of parameters. */ -static kprobe_opcode_t __kprobes *kprobe_addr(struct kprobe *p) +static kprobe_opcode_t *kprobe_addr(struct kprobe *p) { kprobe_opcode_t *addr = p->addr; @@ -1367,7 +1366,7 @@ invalid: } /* Check passed kprobe is valid and return kprobe in kprobe_table. */ -static struct kprobe * __kprobes __get_valid_kprobe(struct kprobe *p) +static struct kprobe *__get_valid_kprobe(struct kprobe *p) { struct kprobe *ap, *list_p; @@ -1399,8 +1398,8 @@ static inline int check_kprobe_rereg(struct kprobe *p) return ret; } -static __kprobes int check_kprobe_address_safe(struct kprobe *p, - struct module **probed_mod) +static int check_kprobe_address_safe(struct kprobe *p, + struct module **probed_mod) { int ret = 0; unsigned long ftrace_addr; @@ -1464,7 +1463,7 @@ out: return ret; } -int __kprobes register_kprobe(struct kprobe *p) +int register_kprobe(struct kprobe *p) { int ret; struct kprobe *old_p; @@ -1526,7 +1525,7 @@ out: EXPORT_SYMBOL_GPL(register_kprobe); /* Check if all probes on the aggrprobe are disabled */ -static int __kprobes aggr_kprobe_disabled(struct kprobe *ap) +static int aggr_kprobe_disabled(struct kprobe *ap) { struct kprobe *kp; @@ -1542,7 +1541,7 @@ static int __kprobes aggr_kprobe_disabled(struct kprobe *ap) } /* Disable one kprobe: Make sure called under kprobe_mutex is locked */ -static struct kprobe *__kprobes __disable_kprobe(struct kprobe *p) +static struct kprobe *__disable_kprobe(struct kprobe *p) { struct kprobe *orig_p; @@ -1569,7 +1568,7 @@ static struct kprobe *__kprobes __disable_kprobe(struct kprobe *p) /* * Unregister a kprobe without a scheduler synchronization. */ -static int __kprobes __unregister_kprobe_top(struct kprobe *p) +static int __unregister_kprobe_top(struct kprobe *p) { struct kprobe *ap, *list_p; @@ -1626,7 +1625,7 @@ disarmed: return 0; } -static void __kprobes __unregister_kprobe_bottom(struct kprobe *p) +static void __unregister_kprobe_bottom(struct kprobe *p) { struct kprobe *ap; @@ -1642,7 +1641,7 @@ static void __kprobes __unregister_kprobe_bottom(struct kprobe *p) /* Otherwise, do nothing. 
*/ } -int __kprobes register_kprobes(struct kprobe **kps, int num) +int register_kprobes(struct kprobe **kps, int num) { int i, ret = 0; @@ -1660,13 +1659,13 @@ int __kprobes register_kprobes(struct kprobe **kps, int num) } EXPORT_SYMBOL_GPL(register_kprobes); -void __kprobes unregister_kprobe(struct kprobe *p) +void unregister_kprobe(struct kprobe *p) { unregister_kprobes(&p, 1); } EXPORT_SYMBOL_GPL(unregister_kprobe); -void __kprobes unregister_kprobes(struct kprobe **kps, int num) +void unregister_kprobes(struct kprobe **kps, int num) { int i; @@ -1695,7 +1694,7 @@ unsigned long __weak arch_deref_entry_point(void *entry) return (unsigned long)entry; } -int __kprobes register_jprobes(struct jprobe **jps, int num) +int register_jprobes(struct jprobe **jps, int num) { struct jprobe *jp; int ret = 0, i; @@ -1726,19 +1725,19 @@ int __kprobes register_jprobes(struct jprobe **jps, int num) } EXPORT_SYMBOL_GPL(register_jprobes); -int __kprobes register_jprobe(struct jprobe *jp) +int register_jprobe(struct jprobe *jp) { return register_jprobes(&jp, 1); } EXPORT_SYMBOL_GPL(register_jprobe); -void __kprobes unregister_jprobe(struct jprobe *jp) +void unregister_jprobe(struct jprobe *jp) { unregister_jprobes(&jp, 1); } EXPORT_SYMBOL_GPL(unregister_jprobe); -void __kprobes unregister_jprobes(struct jprobe **jps, int num) +void unregister_jprobes(struct jprobe **jps, int num) { int i; @@ -1803,7 +1802,7 @@ static int __kprobes pre_handler_kretprobe(struct kprobe *p, return 0; } -int __kprobes register_kretprobe(struct kretprobe *rp) +int register_kretprobe(struct kretprobe *rp) { int ret = 0; struct kretprobe_instance *inst; @@ -1856,7 +1855,7 @@ int __kprobes register_kretprobe(struct kretprobe *rp) } EXPORT_SYMBOL_GPL(register_kretprobe); -int __kprobes register_kretprobes(struct kretprobe **rps, int num) +int register_kretprobes(struct kretprobe **rps, int num) { int ret = 0, i; @@ -1874,13 +1873,13 @@ int __kprobes register_kretprobes(struct kretprobe **rps, int num) } EXPORT_SYMBOL_GPL(register_kretprobes); -void __kprobes unregister_kretprobe(struct kretprobe *rp) +void unregister_kretprobe(struct kretprobe *rp) { unregister_kretprobes(&rp, 1); } EXPORT_SYMBOL_GPL(unregister_kretprobe); -void __kprobes unregister_kretprobes(struct kretprobe **rps, int num) +void unregister_kretprobes(struct kretprobe **rps, int num) { int i; @@ -1903,24 +1902,24 @@ void __kprobes unregister_kretprobes(struct kretprobe **rps, int num) EXPORT_SYMBOL_GPL(unregister_kretprobes); #else /* CONFIG_KRETPROBES */ -int __kprobes register_kretprobe(struct kretprobe *rp) +int register_kretprobe(struct kretprobe *rp) { return -ENOSYS; } EXPORT_SYMBOL_GPL(register_kretprobe); -int __kprobes register_kretprobes(struct kretprobe **rps, int num) +int register_kretprobes(struct kretprobe **rps, int num) { return -ENOSYS; } EXPORT_SYMBOL_GPL(register_kretprobes); -void __kprobes unregister_kretprobe(struct kretprobe *rp) +void unregister_kretprobe(struct kretprobe *rp) { } EXPORT_SYMBOL_GPL(unregister_kretprobe); -void __kprobes unregister_kretprobes(struct kretprobe **rps, int num) +void unregister_kretprobes(struct kretprobe **rps, int num) { } EXPORT_SYMBOL_GPL(unregister_kretprobes); @@ -1934,7 +1933,7 @@ static int __kprobes pre_handler_kretprobe(struct kprobe *p, #endif /* CONFIG_KRETPROBES */ /* Set the kprobe gone and remove its instruction buffer. 
*/ -static void __kprobes kill_kprobe(struct kprobe *p) +static void kill_kprobe(struct kprobe *p) { struct kprobe *kp; @@ -1958,7 +1957,7 @@ static void __kprobes kill_kprobe(struct kprobe *p) } /* Disable one kprobe */ -int __kprobes disable_kprobe(struct kprobe *kp) +int disable_kprobe(struct kprobe *kp) { int ret = 0; @@ -1974,7 +1973,7 @@ int __kprobes disable_kprobe(struct kprobe *kp) EXPORT_SYMBOL_GPL(disable_kprobe); /* Enable one kprobe */ -int __kprobes enable_kprobe(struct kprobe *kp) +int enable_kprobe(struct kprobe *kp) { int ret = 0; struct kprobe *p; @@ -2020,8 +2019,8 @@ static void shrink_kprobe_blacklist(struct kprobe_blackpoint **start, struct kprobe_blackpoint **end); /* Module notifier call back, checking kprobes on the module */ -static int __kprobes kprobes_module_callback(struct notifier_block *nb, - unsigned long val, void *data) +static int kprobes_module_callback(struct notifier_block *nb, + unsigned long val, void *data) { struct module *mod = data; struct hlist_head *head; @@ -2170,7 +2169,7 @@ static int __init init_kprobes(void) } #ifdef CONFIG_DEBUG_FS -static void __kprobes report_probe(struct seq_file *pi, struct kprobe *p, +static void report_probe(struct seq_file *pi, struct kprobe *p, const char *sym, int offset, char *modname, struct kprobe *pp) { char *kprobe_type; @@ -2199,12 +2198,12 @@ static void __kprobes report_probe(struct seq_file *pi, struct kprobe *p, (kprobe_ftrace(pp) ? "[FTRACE]" : "")); } -static void __kprobes *kprobe_seq_start(struct seq_file *f, loff_t *pos) +static void *kprobe_seq_start(struct seq_file *f, loff_t *pos) { return (*pos < KPROBE_TABLE_SIZE) ? pos : NULL; } -static void __kprobes *kprobe_seq_next(struct seq_file *f, void *v, loff_t *pos) +static void *kprobe_seq_next(struct seq_file *f, void *v, loff_t *pos) { (*pos)++; if (*pos >= KPROBE_TABLE_SIZE) @@ -2212,12 +2211,12 @@ static void __kprobes *kprobe_seq_next(struct seq_file *f, void *v, loff_t *pos) return pos; } -static void __kprobes kprobe_seq_stop(struct seq_file *f, void *v) +static void kprobe_seq_stop(struct seq_file *f, void *v) { /* Nothing to do */ } -static int __kprobes show_kprobe_addr(struct seq_file *pi, void *v) +static int show_kprobe_addr(struct seq_file *pi, void *v) { struct hlist_head *head; struct kprobe *p, *kp; @@ -2248,7 +2247,7 @@ static const struct seq_operations kprobes_seq_ops = { .show = show_kprobe_addr }; -static int __kprobes kprobes_open(struct inode *inode, struct file *filp) +static int kprobes_open(struct inode *inode, struct file *filp) { return seq_open(filp, &kprobes_seq_ops); } @@ -2306,7 +2305,7 @@ static const struct file_operations debugfs_kprobe_blacklist_ops = { .release = seq_release, }; -static void __kprobes arm_all_kprobes(void) +static void arm_all_kprobes(void) { struct hlist_head *head; struct kprobe *p; @@ -2334,7 +2333,7 @@ already_enabled: return; } -static void __kprobes disarm_all_kprobes(void) +static void disarm_all_kprobes(void) { struct hlist_head *head; struct kprobe *p; @@ -2418,7 +2417,7 @@ static const struct file_operations fops_kp = { .llseek = default_llseek, }; -static int __kprobes debugfs_kprobe_init(void) +static int __init debugfs_kprobe_init(void) { struct dentry *dir, *file; unsigned int value = 1;
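One practical consequence of shrinking the blacklist this way is that the management path itself becomes a legitimate probe target; for example, a probe placed on register_kprobe() can now be used to watch new probes being installed. A hypothetical fragment (handler and variable names invented) in the style of the module sketched earlier:

/* Fires on every call to register_kprobe(); it runs from the
 * breakpoint handler, so it must itself stay non-probe-able.
 */
static int watch_pre(struct kprobe *p, struct pt_regs *regs)
{
	pr_info("a new kprobe is being registered\n");
	return 0;
}
NOKPROBE_SYMBOL(watch_pre);

static struct kprobe watch_kp = {
	.symbol_name	= "register_kprobe",	/* no longer blacklisted */
	.pre_handler	= watch_pre,
};
/* register_kprobe(&watch_kp) from module init; unregister on exit. */

Before this patch such a registration would have been rejected at register time because register_kprobe() sat on the blacklist.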
Masami Hiramatsu
2013-Nov-20 04:22 UTC
[PATCH -tip v3 09/23] kprobes: Use NOKPROBE_SYMBOL macro instead of __kprobes
Use NOKPROBE_SYMBOL macro to protect functions from kprobes instead of __kprobes annotation. Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt at hitachi.com> Cc: Ananth N Mavinakayanahalli <ananth at in.ibm.com> Cc: "David S. Miller" <davem at davemloft.net> --- kernel/kprobes.c | 67 +++++++++++++++++++++++++++++++++--------------------- 1 file changed, 41 insertions(+), 26 deletions(-) diff --git a/kernel/kprobes.c b/kernel/kprobes.c index fa68d83..0a206ec 100644 --- a/kernel/kprobes.c +++ b/kernel/kprobes.c @@ -302,7 +302,7 @@ static inline void reset_kprobe_instance(void) * OR * - with preemption disabled - from arch/xxx/kernel/kprobes.c */ -struct kprobe __kprobes *get_kprobe(void *addr) +struct kprobe *get_kprobe(void *addr) { struct hlist_head *head; struct kprobe *p; @@ -315,8 +315,9 @@ struct kprobe __kprobes *get_kprobe(void *addr) return NULL; } +NOKPROBE_SYMBOL(get_kprobe); -static int __kprobes aggr_pre_handler(struct kprobe *p, struct pt_regs *regs); +static int aggr_pre_handler(struct kprobe *p, struct pt_regs *regs); /* Return true if the kprobe is an aggregator */ static inline int kprobe_aggrprobe(struct kprobe *p) @@ -348,7 +349,7 @@ static bool kprobes_allow_optimization; * Call all pre_handler on the list, but ignores its return value. * This must be called from arch-dep optimized caller. */ -void __kprobes opt_pre_handler(struct kprobe *p, struct pt_regs *regs) +void opt_pre_handler(struct kprobe *p, struct pt_regs *regs) { struct kprobe *kp; @@ -360,6 +361,7 @@ void __kprobes opt_pre_handler(struct kprobe *p, struct pt_regs *regs) reset_kprobe_instance(); } } +NOKPROBE_SYMBOL(opt_pre_handler); /* Free optimized instructions and optimized_kprobe */ static void free_aggr_kprobe(struct kprobe *p) @@ -996,7 +998,7 @@ static void disarm_kprobe(struct kprobe *kp, bool reopt) * Aggregate handlers for multiple kprobes support - these handlers * take care of invoking the individual kprobe handlers on p->list */ -static int __kprobes aggr_pre_handler(struct kprobe *p, struct pt_regs *regs) +static int aggr_pre_handler(struct kprobe *p, struct pt_regs *regs) { struct kprobe *kp; @@ -1010,9 +1012,10 @@ static int __kprobes aggr_pre_handler(struct kprobe *p, struct pt_regs *regs) } return 0; } +NOKPROBE_SYMBOL(aggr_pre_handler); -static void __kprobes aggr_post_handler(struct kprobe *p, struct pt_regs *regs, - unsigned long flags) +static void aggr_post_handler(struct kprobe *p, struct pt_regs *regs, + unsigned long flags) { struct kprobe *kp; @@ -1024,9 +1027,10 @@ static void __kprobes aggr_post_handler(struct kprobe *p, struct pt_regs *regs, } } } +NOKPROBE_SYMBOL(aggr_post_handler); -static int __kprobes aggr_fault_handler(struct kprobe *p, struct pt_regs *regs, - int trapnr) +static int aggr_fault_handler(struct kprobe *p, struct pt_regs *regs, + int trapnr) { struct kprobe *cur = __this_cpu_read(kprobe_instance); @@ -1040,8 +1044,9 @@ static int __kprobes aggr_fault_handler(struct kprobe *p, struct pt_regs *regs, } return 0; } +NOKPROBE_SYMBOL(aggr_fault_handler); -static int __kprobes aggr_break_handler(struct kprobe *p, struct pt_regs *regs) +static int aggr_break_handler(struct kprobe *p, struct pt_regs *regs) { struct kprobe *cur = __this_cpu_read(kprobe_instance); int ret = 0; @@ -1053,9 +1058,10 @@ static int __kprobes aggr_break_handler(struct kprobe *p, struct pt_regs *regs) reset_kprobe_instance(); return ret; } +NOKPROBE_SYMBOL(aggr_break_handler); /* Walks the list and increments nmissed count for multiprobe case */ -void __kprobes 
kprobes_inc_nmissed_count(struct kprobe *p) +void kprobes_inc_nmissed_count(struct kprobe *p) { struct kprobe *kp; if (!kprobe_aggrprobe(p)) { @@ -1066,9 +1072,10 @@ void __kprobes kprobes_inc_nmissed_count(struct kprobe *p) } return; } +NOKPROBE_SYMBOL(kprobes_inc_nmissed_count); -void __kprobes recycle_rp_inst(struct kretprobe_instance *ri, - struct hlist_head *head) +void recycle_rp_inst(struct kretprobe_instance *ri, + struct hlist_head *head) { struct kretprobe *rp = ri->rp; @@ -1083,8 +1090,9 @@ void __kprobes recycle_rp_inst(struct kretprobe_instance *ri, /* Unregistering */ hlist_add_head(&ri->hlist, head); } +NOKPROBE_SYMBOL(recycle_rp_inst); -void __kprobes kretprobe_hash_lock(struct task_struct *tsk, +void kretprobe_hash_lock(struct task_struct *tsk, struct hlist_head **head, unsigned long *flags) __acquires(hlist_lock) { @@ -1095,17 +1103,19 @@ __acquires(hlist_lock) hlist_lock = kretprobe_table_lock_ptr(hash); raw_spin_lock_irqsave(hlist_lock, *flags); } +NOKPROBE_SYMBOL(kretprobe_hash_lock); -static void __kprobes kretprobe_table_lock(unsigned long hash, - unsigned long *flags) +static void kretprobe_table_lock(unsigned long hash, + unsigned long *flags) __acquires(hlist_lock) { raw_spinlock_t *hlist_lock = kretprobe_table_lock_ptr(hash); raw_spin_lock_irqsave(hlist_lock, *flags); } +NOKPROBE_SYMBOL(kretprobe_table_lock); -void __kprobes kretprobe_hash_unlock(struct task_struct *tsk, - unsigned long *flags) +void kretprobe_hash_unlock(struct task_struct *tsk, + unsigned long *flags) __releases(hlist_lock) { unsigned long hash = hash_ptr(tsk, KPROBE_HASH_BITS); @@ -1114,14 +1124,16 @@ __releases(hlist_lock) hlist_lock = kretprobe_table_lock_ptr(hash); raw_spin_unlock_irqrestore(hlist_lock, *flags); } +NOKPROBE_SYMBOL(kretprobe_hash_unlock); -static void __kprobes kretprobe_table_unlock(unsigned long hash, - unsigned long *flags) +static void kretprobe_table_unlock(unsigned long hash, + unsigned long *flags) __releases(hlist_lock) { raw_spinlock_t *hlist_lock = kretprobe_table_lock_ptr(hash); raw_spin_unlock_irqrestore(hlist_lock, *flags); } +NOKPROBE_SYMBOL(kretprobe_table_unlock); /* * This function is called from finish_task_switch when task tk becomes dead, @@ -1129,7 +1141,7 @@ __releases(hlist_lock) * with this task. These left over instances represent probed functions * that have been called but will never return. */ -void __kprobes kprobe_flush_task(struct task_struct *tk) +void kprobe_flush_task(struct task_struct *tk) { struct kretprobe_instance *ri; struct hlist_head *head, empty_rp; @@ -1154,6 +1166,7 @@ void __kprobes kprobe_flush_task(struct task_struct *tk) kfree(ri); } } +NOKPROBE_SYMBOL(kprobe_flush_task); static inline void free_rp_inst(struct kretprobe *rp) { @@ -1166,7 +1179,7 @@ static inline void free_rp_inst(struct kretprobe *rp) } } -static void __kprobes cleanup_rp_inst(struct kretprobe *rp) +static void cleanup_rp_inst(struct kretprobe *rp) { unsigned long flags, hash; struct kretprobe_instance *ri; @@ -1185,6 +1198,7 @@ static void __kprobes cleanup_rp_inst(struct kretprobe *rp) } free_rp_inst(rp); } +NOKPROBE_SYMBOL(cleanup_rp_inst); /* * Add the new probe to ap->list. Fail if this is the @@ -1762,8 +1776,7 @@ EXPORT_SYMBOL_GPL(unregister_jprobes); * This kprobe pre_handler is registered with every kretprobe. When probe * hits it will set up the return probe. 
*/ -static int __kprobes pre_handler_kretprobe(struct kprobe *p, - struct pt_regs *regs) +static int pre_handler_kretprobe(struct kprobe *p, struct pt_regs *regs) { struct kretprobe *rp = container_of(p, struct kretprobe, kp); unsigned long hash, flags = 0; @@ -1801,6 +1814,7 @@ static int __kprobes pre_handler_kretprobe(struct kprobe *p, } return 0; } +NOKPROBE_SYMBOL(pre_handler_kretprobe); int register_kretprobe(struct kretprobe *rp) { @@ -1924,11 +1938,11 @@ void unregister_kretprobes(struct kretprobe **rps, int num) } EXPORT_SYMBOL_GPL(unregister_kretprobes); -static int __kprobes pre_handler_kretprobe(struct kprobe *p, - struct pt_regs *regs) +static int pre_handler_kretprobe(struct kprobe *p, struct pt_regs *regs) { return 0; } +NOKPROBE_SYMBOL(pre_handler_kretprobe); #endif /* CONFIG_KRETPROBES */ @@ -2006,12 +2020,13 @@ out: } EXPORT_SYMBOL_GPL(enable_kprobe); -void __kprobes dump_kprobe(struct kprobe *kp) +void dump_kprobe(struct kprobe *kp) { printk(KERN_WARNING "Dumping kprobe:\n"); printk(KERN_WARNING "Name: %s\nAddress: %p\nOffset: %x\n", kp->symbol_name, kp->addr, kp->offset); } +NOKPROBE_SYMBOL(dump_kprobe); static void populate_kprobe_blacklist(struct kprobe_blackpoint **start, struct kprobe_blackpoint **end);
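For readers wondering why the macro can defeat inlining at all: NOKPROBE_SYMBOL() has to reference the function's address and emit it into a section from which the blacklist is populated at boot (and at module load), so the compiler must keep an out-of-line copy of the function. The sketch below shows the general shape of such a macro; the macro name, section name, and stored type are simplified for illustration and are not the series' actual definition.

#include <linux/compiler.h>

/* Illustrative only -- not the series' actual implementation.
 * Recording the address forces an out-of-line copy of the function;
 * the kprobes core then walks the section to build the blacklist.
 */
#define MY_NOKPROBE_SYMBOL(fname)				\
	static unsigned long __used				\
	__attribute__((section("_my_kprobe_blacklist")))	\
	_blacklist_addr_##fname = (unsigned long)fname

static int my_handler(void)
{
	return 0;
}
MY_NOKPROBE_SYMBOL(my_handler);	/* emits my_handler's address into the section */

This is also why tiny static helpers are converted to __always_inline in several of the later patches instead of being listed: they have no out-of-line symbol to record.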
Masami Hiramatsu
2013-Nov-20 04:22 UTC
[PATCH -tip v3 10/23] ftrace/kprobes: Allow probing on some preparation functions
There is no need to prohibit probing on the functions used for preparation. They can be probed safely because they are not invoked from the breakpoint/fault/debug handlers, so there is no chance of causing recursive exceptions. The following functions are now removed from the kprobes blacklist. update_bitfield_fetch_param free_bitfield_fetch_param kprobe_register Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt at hitachi.com> Cc: Steven Rostedt <rostedt at goodmis.org> Cc: Frederic Weisbecker <fweisbec at gmail.com> Cc: Ingo Molnar <mingo at redhat.com> --- kernel/trace/trace_kprobe.c | 2 +- kernel/trace/trace_probe.c | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c index 243f683..e0132b4 100644 --- a/kernel/trace/trace_kprobe.c +++ b/kernel/trace/trace_kprobe.c @@ -1151,7 +1151,7 @@ kretprobe_perf_func(struct trace_probe *tp, struct kretprobe_instance *ri, * kprobe_trace_self_tests_init() does enable_trace_probe/disable_trace_probe * lockless, but we can't race with this __init function. */ -static __kprobes +static int kprobe_register(struct ftrace_event_call *event, enum trace_reg type, void *data) { diff --git a/kernel/trace/trace_probe.c b/kernel/trace/trace_probe.c index 412e959..43638a2 100644 --- a/kernel/trace/trace_probe.c +++ b/kernel/trace/trace_probe.c @@ -346,7 +346,7 @@ DEFINE_BASIC_FETCH_FUNCS(bitfield) #define fetch_bitfield_string NULL #define fetch_bitfield_string_size NULL -static __kprobes void +static void update_bitfield_fetch_param(struct bitfield_fetch_param *data) { /* @@ -359,7 +359,7 @@ update_bitfield_fetch_param(struct bitfield_fetch_param *data) update_symbol_cache(data->orig.data); } -static __kprobes void +static void free_bitfield_fetch_param(struct bitfield_fetch_param *data) { /*
Masami Hiramatsu
2013-Nov-20 04:22 UTC
[PATCH -tip v3 11/23] ftrace/kprobes: Use NOKPROBE_SYMBOL macro in ftrace
Use NOKPROBE_SYMBOL macro to protect functions from kprobes instead of __kprobes annotation in ftrace. This applies __always_inline annotation for some cases, because NOKPROBE_SYMBOL() will inhibit inlining by referring the symbol address. Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt at hitachi.com> Cc: Steven Rostedt <rostedt at goodmis.org> Cc: Frederic Weisbecker <fweisbec at gmail.com> Cc: Ingo Molnar <mingo at redhat.com> --- kernel/trace/trace_event_perf.c | 5 ++- kernel/trace/trace_kprobe.c | 51 +++++++++++++++------------ kernel/trace/trace_probe.c | 74 +++++++++++++++++++++++---------------- kernel/trace/trace_probe.h | 4 +- 4 files changed, 76 insertions(+), 58 deletions(-) diff --git a/kernel/trace/trace_event_perf.c b/kernel/trace/trace_event_perf.c index 78e27e3..25d8903 100644 --- a/kernel/trace/trace_event_perf.c +++ b/kernel/trace/trace_event_perf.c @@ -226,8 +226,8 @@ void perf_trace_del(struct perf_event *p_event, int flags) tp_event->class->reg(tp_event, TRACE_REG_PERF_DEL, p_event); } -__kprobes void *perf_trace_buf_prepare(int size, unsigned short type, - struct pt_regs *regs, int *rctxp) +void *perf_trace_buf_prepare(int size, unsigned short type, + struct pt_regs *regs, int *rctxp) { struct trace_entry *entry; unsigned long flags; @@ -259,6 +259,7 @@ __kprobes void *perf_trace_buf_prepare(int size, unsigned short type, return raw_data; } EXPORT_SYMBOL_GPL(perf_trace_buf_prepare); +NOKPROBE_SYMBOL(perf_trace_buf_prepare); #ifdef CONFIG_FUNCTION_TRACER static void diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c index e0132b4..2f19ea6 100644 --- a/kernel/trace/trace_kprobe.c +++ b/kernel/trace/trace_kprobe.c @@ -51,45 +51,45 @@ struct event_file_link { (sizeof(struct probe_arg) * (n))) -static __kprobes bool trace_probe_is_return(struct trace_probe *tp) +static __always_inline bool trace_probe_is_return(struct trace_probe *tp) { return tp->rp.handler != NULL; } -static __kprobes const char *trace_probe_symbol(struct trace_probe *tp) +static __always_inline const char *trace_probe_symbol(struct trace_probe *tp) { return tp->symbol ? 
tp->symbol : "unknown"; } -static __kprobes unsigned long trace_probe_offset(struct trace_probe *tp) +static __always_inline unsigned long trace_probe_offset(struct trace_probe *tp) { return tp->rp.kp.offset; } -static __kprobes bool trace_probe_is_enabled(struct trace_probe *tp) +static __always_inline bool trace_probe_is_enabled(struct trace_probe *tp) { return !!(tp->flags & (TP_FLAG_TRACE | TP_FLAG_PROFILE)); } -static __kprobes bool trace_probe_is_registered(struct trace_probe *tp) +static __always_inline bool trace_probe_is_registered(struct trace_probe *tp) { return !!(tp->flags & TP_FLAG_REGISTERED); } -static __kprobes bool trace_probe_has_gone(struct trace_probe *tp) +static __always_inline bool trace_probe_has_gone(struct trace_probe *tp) { return !!(kprobe_gone(&tp->rp.kp)); } -static __kprobes bool trace_probe_within_module(struct trace_probe *tp, - struct module *mod) +static __always_inline bool trace_probe_within_module(struct trace_probe *tp, + struct module *mod) { int len = strlen(mod->name); const char *name = trace_probe_symbol(tp); return strncmp(mod->name, name, len) == 0 && name[len] == ':'; } -static __kprobes bool trace_probe_is_on_module(struct trace_probe *tp) +static __always_inline bool trace_probe_is_on_module(struct trace_probe *tp) { return !!strchr(trace_probe_symbol(tp), ':'); } @@ -755,8 +755,8 @@ static const struct file_operations kprobe_profile_ops = { }; /* Sum up total data length for dynamic arraies (strings) */ -static __kprobes int __get_data_size(struct trace_probe *tp, - struct pt_regs *regs) +static __always_inline +int __get_data_size(struct trace_probe *tp, struct pt_regs *regs) { int i, ret = 0; u32 len; @@ -771,9 +771,9 @@ static __kprobes int __get_data_size(struct trace_probe *tp, } /* Store the value of each argument */ -static __kprobes void store_trace_args(int ent_size, struct trace_probe *tp, - struct pt_regs *regs, - u8 *data, int maxlen) +static __always_inline +void store_trace_args(int ent_size, struct trace_probe *tp, + struct pt_regs *regs, u8 *data, int maxlen) { int i; u32 end = tp->size; @@ -803,7 +803,7 @@ static __kprobes void store_trace_args(int ent_size, struct trace_probe *tp, } /* Kprobe handler */ -static __kprobes void +static __always_inline void __kprobe_trace_func(struct trace_probe *tp, struct pt_regs *regs, struct ftrace_event_file *ftrace_file) { @@ -840,7 +840,7 @@ __kprobe_trace_func(struct trace_probe *tp, struct pt_regs *regs, irq_flags, pc, regs); } -static __kprobes void +static void kprobe_trace_func(struct trace_probe *tp, struct pt_regs *regs) { struct event_file_link *link; @@ -848,9 +848,10 @@ kprobe_trace_func(struct trace_probe *tp, struct pt_regs *regs) list_for_each_entry_rcu(link, &tp->files, list) __kprobe_trace_func(tp, regs, link->file); } +NOKPROBE_SYMBOL(kprobe_trace_func); /* Kretprobe handler */ -static __kprobes void +static __always_inline void __kretprobe_trace_func(struct trace_probe *tp, struct kretprobe_instance *ri, struct pt_regs *regs, struct ftrace_event_file *ftrace_file) @@ -889,7 +890,7 @@ __kretprobe_trace_func(struct trace_probe *tp, struct kretprobe_instance *ri, irq_flags, pc, regs); } -static __kprobes void +static void kretprobe_trace_func(struct trace_probe *tp, struct kretprobe_instance *ri, struct pt_regs *regs) { @@ -898,6 +899,7 @@ kretprobe_trace_func(struct trace_probe *tp, struct kretprobe_instance *ri, list_for_each_entry_rcu(link, &tp->files, list) __kretprobe_trace_func(tp, ri, regs, link->file); } +NOKPROBE_SYMBOL(kretprobe_trace_func); /* Event entry 
printers */ static enum print_line_t @@ -1086,7 +1088,7 @@ static int set_print_fmt(struct trace_probe *tp) #ifdef CONFIG_PERF_EVENTS /* Kprobe profile handler */ -static __kprobes void +static void kprobe_perf_func(struct trace_probe *tp, struct pt_regs *regs) { struct ftrace_event_call *call = &tp->call; @@ -1113,9 +1115,10 @@ kprobe_perf_func(struct trace_probe *tp, struct pt_regs *regs) store_trace_args(sizeof(*entry), tp, regs, (u8 *)&entry[1], dsize); perf_trace_buf_submit(entry, size, rctx, 0, 1, regs, head, NULL); } +NOKPROBE_SYMBOL(kprobe_perf_func); /* Kretprobe profile handler */ -static __kprobes void +static void kretprobe_perf_func(struct trace_probe *tp, struct kretprobe_instance *ri, struct pt_regs *regs) { @@ -1143,6 +1146,7 @@ kretprobe_perf_func(struct trace_probe *tp, struct kretprobe_instance *ri, store_trace_args(sizeof(*entry), tp, regs, (u8 *)&entry[1], dsize); perf_trace_buf_submit(entry, size, rctx, 0, 1, regs, head, NULL); } +NOKPROBE_SYMBOL(kretprobe_perf_func); #endif /* CONFIG_PERF_EVENTS */ /* @@ -1179,8 +1183,7 @@ int kprobe_register(struct ftrace_event_call *event, return 0; } -static __kprobes -int kprobe_dispatcher(struct kprobe *kp, struct pt_regs *regs) +static int kprobe_dispatcher(struct kprobe *kp, struct pt_regs *regs) { struct trace_probe *tp = container_of(kp, struct trace_probe, rp.kp); @@ -1194,8 +1197,9 @@ int kprobe_dispatcher(struct kprobe *kp, struct pt_regs *regs) #endif return 0; /* We don't tweek kernel, so just return 0 */ } +NOKPROBE_SYMBOL(kprobe_dispatcher); -static __kprobes +static int kretprobe_dispatcher(struct kretprobe_instance *ri, struct pt_regs *regs) { struct trace_probe *tp = container_of(ri->rp, struct trace_probe, rp); @@ -1210,6 +1214,7 @@ int kretprobe_dispatcher(struct kretprobe_instance *ri, struct pt_regs *regs) #endif return 0; /* We don't tweek kernel, so just return 0 */ } +NOKPROBE_SYMBOL(kretprobe_dispatcher); static struct trace_event_functions kretprobe_funcs = { .trace = print_kretprobe_event diff --git a/kernel/trace/trace_probe.c b/kernel/trace/trace_probe.c index 43638a2..314bdc6 100644 --- a/kernel/trace/trace_probe.c +++ b/kernel/trace/trace_probe.c @@ -41,13 +41,14 @@ const char *reserved_field_names[] = { /* Printing in basic type function template */ #define DEFINE_BASIC_PRINT_TYPE_FUNC(type, fmt, cast) \ -static __kprobes int PRINT_TYPE_FUNC_NAME(type)(struct trace_seq *s, \ - const char *name, \ - void *data, void *ent)\ +static int PRINT_TYPE_FUNC_NAME(type)(struct trace_seq *s, \ + const char *name, \ + void *data, void *ent) \ { \ return trace_seq_printf(s, " %s=" fmt, name, (cast)*(type *)data);\ } \ -static const char PRINT_TYPE_FMT_NAME(type)[] = fmt; +static const char PRINT_TYPE_FMT_NAME(type)[] = fmt; \ +NOKPROBE_SYMBOL(PRINT_TYPE_FUNC_NAME(type)); DEFINE_BASIC_PRINT_TYPE_FUNC(u8, "%x", unsigned int) DEFINE_BASIC_PRINT_TYPE_FUNC(u16, "%x", unsigned int) @@ -74,9 +75,9 @@ typedef u32 string; typedef u32 string_size; /* Print type function for string type */ -static __kprobes int PRINT_TYPE_FUNC_NAME(string)(struct trace_seq *s, - const char *name, - void *data, void *ent) +static int PRINT_TYPE_FUNC_NAME(string)(struct trace_seq *s, + const char *name, + void *data, void *ent) { int len = *(u32 *)data >> 16; @@ -86,6 +87,7 @@ static __kprobes int PRINT_TYPE_FUNC_NAME(string)(struct trace_seq *s, return trace_seq_printf(s, " %s=\"%s\"", name, (const char *)get_loc_data(data, ent)); } +NOKPROBE_SYMBOL(PRINT_TYPE_FUNC_NAME(string)); static const char PRINT_TYPE_FMT_NAME(string)[] = 
"\\\"%s\\\""; @@ -111,42 +113,45 @@ DEFINE_FETCH_##method(u64) /* Data fetch function templates */ #define DEFINE_FETCH_reg(type) \ -static __kprobes void FETCH_FUNC_NAME(reg, type)(struct pt_regs *regs, \ +static void FETCH_FUNC_NAME(reg, type)(struct pt_regs *regs, \ void *offset, void *dest) \ { \ *(type *)dest = (type)regs_get_register(regs, \ (unsigned int)((unsigned long)offset)); \ -} +} \ +NOKPROBE_SYMBOL(FETCH_FUNC_NAME(reg, type)); DEFINE_BASIC_FETCH_FUNCS(reg) /* No string on the register */ #define fetch_reg_string NULL #define fetch_reg_string_size NULL #define DEFINE_FETCH_stack(type) \ -static __kprobes void FETCH_FUNC_NAME(stack, type)(struct pt_regs *regs,\ - void *offset, void *dest) \ +static void FETCH_FUNC_NAME(stack, type)(struct pt_regs *regs, \ + void *offset, void *dest) \ { \ *(type *)dest = (type)regs_get_kernel_stack_nth(regs, \ (unsigned int)((unsigned long)offset)); \ -} +} \ +NOKPROBE_SYMBOL(FETCH_FUNC_NAME(stack, type)); DEFINE_BASIC_FETCH_FUNCS(stack) /* No string on the stack entry */ #define fetch_stack_string NULL #define fetch_stack_string_size NULL #define DEFINE_FETCH_retval(type) \ -static __kprobes void FETCH_FUNC_NAME(retval, type)(struct pt_regs *regs,\ +static void FETCH_FUNC_NAME(retval, type)(struct pt_regs *regs, \ void *dummy, void *dest) \ { \ *(type *)dest = (type)regs_return_value(regs); \ -} +} \ +NOKPROBE_SYMBOL(FETCH_FUNC_NAME(retval, type)); DEFINE_BASIC_FETCH_FUNCS(retval) /* No string on the retval */ #define fetch_retval_string NULL #define fetch_retval_string_size NULL #define DEFINE_FETCH_memory(type) \ -static __kprobes void FETCH_FUNC_NAME(memory, type)(struct pt_regs *regs,\ +static void FETCH_FUNC_NAME(memory, type)(struct pt_regs *regs, \ void *addr, void *dest) \ { \ type retval; \ @@ -154,14 +159,15 @@ static __kprobes void FETCH_FUNC_NAME(memory, type)(struct pt_regs *regs,\ *(type *)dest = 0; \ else \ *(type *)dest = retval; \ -} +} \ +NOKPROBE_SYMBOL(FETCH_FUNC_NAME(memory, type)); DEFINE_BASIC_FETCH_FUNCS(memory) /* * Fetch a null-terminated string. Caller MUST set *(u32 *)dest with max * length and relative data location. 
*/ -static __kprobes void FETCH_FUNC_NAME(memory, string)(struct pt_regs *regs, - void *addr, void *dest) +static void FETCH_FUNC_NAME(memory, string)(struct pt_regs *regs, + void *addr, void *dest) { long ret; int maxlen = get_rloc_len(*(u32 *)dest); @@ -195,10 +201,11 @@ static __kprobes void FETCH_FUNC_NAME(memory, string)(struct pt_regs *regs, get_rloc_offs(*(u32 *)dest)); } } +NOKPROBE_SYMBOL(FETCH_FUNC_NAME(memory, string)); /* Return the length of string -- including null terminal byte */ -static __kprobes void FETCH_FUNC_NAME(memory, string_size)(struct pt_regs *regs, - void *addr, void *dest) +static void FETCH_FUNC_NAME(memory, string_size)(struct pt_regs *regs, + void *addr, void *dest) { mm_segment_t old_fs; int ret, len = 0; @@ -221,6 +228,7 @@ static __kprobes void FETCH_FUNC_NAME(memory, string_size)(struct pt_regs *regs, else *(u32 *)dest = len; } +NOKPROBE_SYMBOL(FETCH_FUNC_NAME(memory, string_size)); /* Memory fetching by symbol */ struct symbol_cache { @@ -268,7 +276,7 @@ static struct symbol_cache *alloc_symbol_cache(const char *sym, long offset) } #define DEFINE_FETCH_symbol(type) \ -static __kprobes void FETCH_FUNC_NAME(symbol, type)(struct pt_regs *regs,\ +static void FETCH_FUNC_NAME(symbol, type)(struct pt_regs *regs, \ void *data, void *dest) \ { \ struct symbol_cache *sc = data; \ @@ -276,7 +284,8 @@ static __kprobes void FETCH_FUNC_NAME(symbol, type)(struct pt_regs *regs,\ fetch_memory_##type(regs, (void *)sc->addr, dest); \ else \ *(type *)dest = 0; \ -} +} \ +NOKPROBE_SYMBOL(FETCH_FUNC_NAME(symbol, type)); DEFINE_BASIC_FETCH_FUNCS(symbol) DEFINE_FETCH_symbol(string) DEFINE_FETCH_symbol(string_size) @@ -288,7 +297,7 @@ struct deref_fetch_param { }; #define DEFINE_FETCH_deref(type) \ -static __kprobes void FETCH_FUNC_NAME(deref, type)(struct pt_regs *regs,\ +static void FETCH_FUNC_NAME(deref, type)(struct pt_regs *regs, \ void *data, void *dest) \ { \ struct deref_fetch_param *dprm = data; \ @@ -299,20 +308,22 @@ static __kprobes void FETCH_FUNC_NAME(deref, type)(struct pt_regs *regs,\ fetch_memory_##type(regs, (void *)addr, dest); \ } else \ *(type *)dest = 0; \ -} +} \ +NOKPROBE_SYMBOL(FETCH_FUNC_NAME(deref, type)); DEFINE_BASIC_FETCH_FUNCS(deref) DEFINE_FETCH_deref(string) DEFINE_FETCH_deref(string_size) -static __kprobes void update_deref_fetch_param(struct deref_fetch_param *data) +static void update_deref_fetch_param(struct deref_fetch_param *data) { if (CHECK_FETCH_FUNCS(deref, data->orig.fn)) update_deref_fetch_param(data->orig.data); else if (CHECK_FETCH_FUNCS(symbol, data->orig.fn)) update_symbol_cache(data->orig.data); } +NOKPROBE_SYMBOL(update_deref_fetch_param); -static __kprobes void free_deref_fetch_param(struct deref_fetch_param *data) +static void free_deref_fetch_param(struct deref_fetch_param *data) { if (CHECK_FETCH_FUNCS(deref, data->orig.fn)) free_deref_fetch_param(data->orig.data); @@ -320,6 +331,7 @@ static __kprobes void free_deref_fetch_param(struct deref_fetch_param *data) free_symbol_cache(data->orig.data); kfree(data); } +NOKPROBE_SYMBOL(free_deref_fetch_param); /* Bitfield fetch function */ struct bitfield_fetch_param { @@ -329,7 +341,7 @@ struct bitfield_fetch_param { }; #define DEFINE_FETCH_bitfield(type) \ -static __kprobes void FETCH_FUNC_NAME(bitfield, type)(struct pt_regs *regs,\ +static void FETCH_FUNC_NAME(bitfield, type)(struct pt_regs *regs, \ void *data, void *dest) \ { \ struct bitfield_fetch_param *bprm = data; \ @@ -340,8 +352,8 @@ static __kprobes void FETCH_FUNC_NAME(bitfield, type)(struct pt_regs *regs,\ buf >>= 
bprm->low_shift; \ } \ *(type *)dest = buf; \ -} - +} \ +NOKPROBE_SYMBOL(FETCH_FUNC_NAME(bitfield, type)); DEFINE_BASIC_FETCH_FUNCS(bitfield) #define fetch_bitfield_string NULL #define fetch_bitfield_string_size NULL @@ -467,11 +479,11 @@ fail: } /* Special function : only accept unsigned long */ -static __kprobes void fetch_stack_address(struct pt_regs *regs, - void *dummy, void *dest) +static void fetch_stack_address(struct pt_regs *regs, void *dummy, void *dest) { *(unsigned long *)dest = kernel_stack_pointer(regs); } +NOKPROBE_SYMBOL(fetch_stack_address); static fetch_func_t get_fetch_size_function(const struct fetch_type *type, fetch_func_t orig_fn) diff --git a/kernel/trace/trace_probe.h b/kernel/trace/trace_probe.h index 5c7e09d..829dd5e 100644 --- a/kernel/trace/trace_probe.h +++ b/kernel/trace/trace_probe.h @@ -124,8 +124,8 @@ struct probe_arg { const struct fetch_type *type; /* Type of this argument */ }; -static inline __kprobes void call_fetch(struct fetch_param *fprm, - struct pt_regs *regs, void *dest) +static inline void call_fetch(struct fetch_param *fprm, + struct pt_regs *regs, void *dest) { return fprm->fn(regs, fprm->data, dest); }
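Where the converted functions are themselves generated by a macro, as with the DEFINE_FETCH_* templates above, the NOKPROBE_SYMBOL() invocation is appended to the template so that every expansion gets marked. A stripped-down illustration of the pattern (the macro, helper names, and types here are invented for the example):

#include <linux/kprobes.h>
#include <linux/ptrace.h>
#include <linux/types.h>

/* Each expansion defines one fetch helper and immediately marks the
 * generated symbol as non-probe-able.
 */
#define DEFINE_SIMPLE_FETCH(type)					\
static void fetch_##type(struct pt_regs *regs, void *src, void *dest)	\
{									\
	*(type *)dest = *(type *)src;					\
}									\
NOKPROBE_SYMBOL(fetch_##type);

DEFINE_SIMPLE_FETCH(u8)
DEFINE_SIMPLE_FETCH(u32)
DEFINE_SIMPLE_FETCH(u64)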
Masami Hiramatsu
2013-Nov-20 04:22 UTC
[PATCH -tip v3 12/23] x86/hw_breakpoint: Use NOKPROBE_SYMBOL macro in hw_breakpoint
Use NOKPROBE_SYMBOL macro to protect functions from kprobes instead of __kprobes annotation in hw_breakpoint. Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt at hitachi.com> Cc: Thomas Gleixner <tglx at linutronix.de> Cc: Ingo Molnar <mingo at redhat.com> Cc: "H. Peter Anvin" <hpa at zytor.com> Cc: Andrew Morton <akpm at linux-foundation.org> Cc: Oleg Nesterov <oleg at redhat.com> --- arch/x86/kernel/hw_breakpoint.c | 6 ++++-- 1 file changed, 4 insertions(+), 2 deletions(-) diff --git a/arch/x86/kernel/hw_breakpoint.c b/arch/x86/kernel/hw_breakpoint.c index f66ff16..cb4df84 100644 --- a/arch/x86/kernel/hw_breakpoint.c +++ b/arch/x86/kernel/hw_breakpoint.c @@ -425,7 +425,7 @@ EXPORT_SYMBOL_GPL(hw_breakpoint_restore); * NOTIFY_STOP returned for all other cases * */ -static int __kprobes hw_breakpoint_handler(struct die_args *args) +static int hw_breakpoint_handler(struct die_args *args) { int i, cpu, rc = NOTIFY_STOP; struct perf_event *bp; @@ -508,11 +508,12 @@ static int __kprobes hw_breakpoint_handler(struct die_args *args) return rc; } +NOKPROBE_SYMBOL(hw_breakpoint_handler); /* * Handle debug exception notifications. */ -int __kprobes hw_breakpoint_exceptions_notify( +int hw_breakpoint_exceptions_notify( struct notifier_block *unused, unsigned long val, void *data) { if (val != DIE_DEBUG) @@ -520,6 +521,7 @@ int __kprobes hw_breakpoint_exceptions_notify( return hw_breakpoint_handler(data); } +NOKPROBE_SYMBOL(hw_breakpoint_exceptions_notify); void hw_breakpoint_pmu_read(struct perf_event *bp) {
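hw_breakpoint_exceptions_notify() is a die-chain callback invoked from the debug exception, which is why it and everything it calls must stay off limits to kprobes. A hedged sketch of what a similar client callback would look like under the new annotation (all names are hypothetical):

#include <linux/kdebug.h>
#include <linux/kprobes.h>
#include <linux/notifier.h>

/* Runs from the debug-exception die chain; probing it would recurse. */
static int my_debug_notify(struct notifier_block *self,
			   unsigned long val, void *data)
{
	if (val != DIE_DEBUG)
		return NOTIFY_DONE;
	/* inspect ((struct die_args *)data)->regs here */
	return NOTIFY_DONE;
}
NOKPROBE_SYMBOL(my_debug_notify);

static struct notifier_block my_debug_nb = {
	.notifier_call = my_debug_notify,
};
/* register_die_notifier(&my_debug_nb) from init code. */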
Masami Hiramatsu
2013-Nov-20 04:22 UTC
[PATCH -tip v3 13/23] x86/trap: Use NOKPROBE_SYMBOL macro in trap.c
Use NOKPROBE_SYMBOL macro to protect functions from kprobes instead of __kprobes annotation in trap.c. This also applies __always_inline annotation for some cases, because NOKPROBE_SYMBOL() will inhibit inlining by referring the symbol address. Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt at hitachi.com> Cc: Thomas Gleixner <tglx at linutronix.de> Cc: Ingo Molnar <mingo at redhat.com> Cc: "H. Peter Anvin" <hpa at zytor.com> Cc: Andi Kleen <ak at linux.intel.com> Cc: Seiji Aguchi <seiji.aguchi at hds.com> Cc: Frederic Weisbecker <fweisbec at gmail.com> --- arch/x86/include/asm/traps.h | 2 +- arch/x86/kernel/traps.c | 20 +++++++++++++------- 2 files changed, 14 insertions(+), 8 deletions(-) diff --git a/arch/x86/include/asm/traps.h b/arch/x86/include/asm/traps.h index 58d66fe..ca32508 100644 --- a/arch/x86/include/asm/traps.h +++ b/arch/x86/include/asm/traps.h @@ -68,7 +68,7 @@ dotraplinkage void do_segment_not_present(struct pt_regs *, long); dotraplinkage void do_stack_segment(struct pt_regs *, long); #ifdef CONFIG_X86_64 dotraplinkage void do_double_fault(struct pt_regs *, long); -asmlinkage __kprobes struct pt_regs *sync_regs(struct pt_regs *); +asmlinkage struct pt_regs *sync_regs(struct pt_regs *); #endif dotraplinkage void do_general_protection(struct pt_regs *, long); dotraplinkage void do_page_fault(struct pt_regs *, unsigned long); diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c index ce24c24..e751e3b 100644 --- a/arch/x86/kernel/traps.c +++ b/arch/x86/kernel/traps.c @@ -106,7 +106,7 @@ static inline void preempt_conditional_cli(struct pt_regs *regs) preempt_count_dec(); } -static int __kprobes +static __always_inline int do_trap_no_signal(struct task_struct *tsk, int trapnr, char *str, struct pt_regs *regs, long error_code) { @@ -136,7 +136,7 @@ do_trap_no_signal(struct task_struct *tsk, int trapnr, char *str, return -1; } -static void __kprobes +static void do_trap(int trapnr, int signr, char *str, struct pt_regs *regs, long error_code, siginfo_t *info) { @@ -173,6 +173,7 @@ do_trap(int trapnr, int signr, char *str, struct pt_regs *regs, else force_sig(signr, tsk); } +NOKPROBE_SYMBOL(do_trap); #define DO_ERROR(trapnr, signr, str, name) \ dotraplinkage void do_##name(struct pt_regs *regs, long error_code) \ @@ -267,7 +268,7 @@ dotraplinkage void do_double_fault(struct pt_regs *regs, long error_code) } #endif -dotraplinkage void __kprobes +dotraplinkage void do_general_protection(struct pt_regs *regs, long error_code) { struct task_struct *tsk; @@ -313,9 +314,10 @@ do_general_protection(struct pt_regs *regs, long error_code) exit: exception_exit(prev_state); } +NOKPROBE_SYMBOL(do_general_protection); /* May run on IST stack. */ -dotraplinkage void __kprobes notrace do_int3(struct pt_regs *regs, long error_code) +dotraplinkage void notrace do_int3(struct pt_regs *regs, long error_code) { enum ctx_state prev_state; @@ -354,6 +356,7 @@ dotraplinkage void __kprobes notrace do_int3(struct pt_regs *regs, long error_co exit: exception_exit(prev_state); } +NOKPROBE_SYMBOL(do_int3); #ifdef CONFIG_X86_64 /* @@ -361,7 +364,7 @@ exit: * for scheduling or signal handling. 
The actual stack switch is done in * entry.S */ -asmlinkage __kprobes struct pt_regs *sync_regs(struct pt_regs *eregs) +asmlinkage struct pt_regs *sync_regs(struct pt_regs *eregs) { struct pt_regs *regs = eregs; /* Did already sync */ @@ -380,6 +383,7 @@ asmlinkage __kprobes struct pt_regs *sync_regs(struct pt_regs *eregs) *regs = *eregs; return regs; } +NOKPROBE_SYMBOL(sync_regs); #endif /* @@ -406,7 +410,7 @@ asmlinkage __kprobes struct pt_regs *sync_regs(struct pt_regs *eregs) * * May run on IST stack. */ -dotraplinkage void __kprobes do_debug(struct pt_regs *regs, long error_code) +dotraplinkage void do_debug(struct pt_regs *regs, long error_code) { struct task_struct *tsk = current; enum ctx_state prev_state; @@ -486,6 +490,7 @@ dotraplinkage void __kprobes do_debug(struct pt_regs *regs, long error_code) exit: exception_exit(prev_state); } +NOKPROBE_SYMBOL(do_debug); /* * Note that we play around with the 'TS' bit in an attempt to get @@ -657,7 +662,7 @@ void math_state_restore(void) } EXPORT_SYMBOL_GPL(math_state_restore); -dotraplinkage void __kprobes +dotraplinkage void do_device_not_available(struct pt_regs *regs, long error_code) { enum ctx_state prev_state; @@ -683,6 +688,7 @@ do_device_not_available(struct pt_regs *regs, long error_code) #endif exception_exit(prev_state); } +NOKPROBE_SYMBOL(do_device_not_available); #ifdef CONFIG_X86_32 dotraplinkage void do_iret_error(struct pt_regs *regs, long error_code)
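The do_trap_no_signal() change above shows the other half of the recipe: when a small static helper is only ever called from functions that are already non-probe-able, it can be forced inline instead of being given its own NOKPROBE_SYMBOL() entry, since the macro needs an out-of-line symbol whose address it can record. Roughly, the two options look like this (helper names and bodies are illustrative):

#include <linux/kprobes.h>
#include <linux/ptrace.h>

/* Option 1: keep the helper out of line and blacklist it explicitly. */
static int helper_out_of_line(struct pt_regs *regs)
{
	return user_mode(regs);		/* illustrative body */
}
NOKPROBE_SYMBOL(helper_out_of_line);

/*
 * Option 2: force the helper into its already-blacklisted callers, so
 * no separate symbol exists to probe in the first place.
 */
static __always_inline int helper_inlined(struct pt_regs *regs)
{
	return user_mode(regs);		/* illustrative body */
}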
Masami Hiramatsu
2013-Nov-20 04:22 UTC
[PATCH -tip v3 14/23] x86/fault: Use NOKPROBE_SYMBOL macro in fault.c
Use NOKPROBE_SYMBOL macro to protect functions from kprobes instead of
__kprobes annotation in fault.c. This also applies the __always_inline
annotation in some cases, because NOKPROBE_SYMBOL() inhibits inlining
by referring to the symbol address.

Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt at hitachi.com>
Cc: Thomas Gleixner <tglx at linutronix.de>
Cc: Ingo Molnar <mingo at redhat.com>
Cc: "H. Peter Anvin" <hpa at zytor.com>
Cc: Andrew Morton <akpm at linux-foundation.org>
Cc: Michal Hocko <mhocko at suse.cz>
Cc: Seiji Aguchi <seiji.aguchi at hds.com>
---
 arch/x86/mm/fault.c | 28 +++++++++++++++++-----------
 1 file changed, 17 insertions(+), 11 deletions(-)

diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 9ff85bb..7c9305c 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -8,7 +8,7 @@
#include <linux/kdebug.h>	/* oops_begin/end, ... */
#include <linux/module.h>	/* search_exception_table */
#include <linux/bootmem.h>	/* max_low_pfn */
-#include <linux/kprobes.h>	/* __kprobes, ... */
+#include <linux/kprobes.h>	/* NOKPROBE_SYMBOL, ... */
#include <linux/mmiotrace.h>	/* kmmio_handler, ... */
#include <linux/perf_event.h>	/* perf_sw_event */
#include <linux/hugetlb.h>	/* hstate_index_to_shift */
@@ -45,7 +45,7 @@ enum x86_pf_error_code {
 * Returns 0 if mmiotrace is disabled, or if the fault is not
 * handled by mmiotrace:
 */
-static inline int __kprobes
+static __always_inline int
kmmio_fault(struct pt_regs *regs, unsigned long addr)
{
        if (unlikely(is_kmmio_active()))
@@ -54,7 +54,7 @@ kmmio_fault(struct pt_regs *regs, unsigned long addr)
        return 0;
}
-static inline int __kprobes kprobes_fault(struct pt_regs *regs)
+static __always_inline int kprobes_fault(struct pt_regs *regs)
{
        int ret = 0;
@@ -261,7 +261,7 @@ void vmalloc_sync_all(void)
 *
 *   Handle a fault on the vmalloc or module mapping area
 */
-static noinline __kprobes int vmalloc_fault(unsigned long address)
+static noinline int vmalloc_fault(unsigned long address)
{
        unsigned long pgd_paddr;
        pmd_t *pmd_k;
@@ -291,6 +291,7 @@ static noinline __kprobes int vmalloc_fault(unsigned long address)
        return 0;
}
+NOKPROBE_SYMBOL(vmalloc_fault);
/*
 * Did it hit the DOS screen memory VA from vm86 mode?
@@ -358,7 +359,7 @@ void vmalloc_sync_all(void)
 *
 * This assumes no large pages in there.
 */
-static noinline __kprobes int vmalloc_fault(unsigned long address)
+static noinline int vmalloc_fault(unsigned long address)
{
        pgd_t *pgd, *pgd_ref;
        pud_t *pud, *pud_ref;
@@ -425,6 +426,7 @@ static noinline __kprobes int vmalloc_fault(unsigned long address)
        return 0;
}
+NOKPROBE_SYMBOL(vmalloc_fault);
#ifdef CONFIG_CPU_SUP_AMD
static const char errata93_warning[]
@@ -904,7 +906,7 @@ static int spurious_fault_check(unsigned long error_code, pte_t *pte)
 * There are no security implications to leaving a stale TLB when
 * increasing the permissions on a page.
 */
-static noinline __kprobes int
+static noinline int
spurious_fault(unsigned long error_code, unsigned long address)
{
        pgd_t *pgd;
@@ -952,6 +954,7 @@ spurious_fault(unsigned long error_code, unsigned long address)
        return ret;
}
+NOKPROBE_SYMBOL(spurious_fault);
int show_unhandled_signals = 1;
@@ -997,7 +1000,7 @@ static inline bool smap_violation(int error_code, struct pt_regs *regs)
 * and the problem, and then passes it off to one of the appropriate
 * routines.
 */
-static void __kprobes
+static void
__do_page_fault(struct pt_regs *regs, unsigned long error_code)
{
        struct vm_area_struct *vma;
@@ -1225,8 +1228,9 @@ good_area:
        up_read(&mm->mmap_sem);
}
+NOKPROBE_SYMBOL(__do_page_fault);
-dotraplinkage void __kprobes
+dotraplinkage void
do_page_fault(struct pt_regs *regs, unsigned long error_code)
{
        enum ctx_state prev_state;
@@ -1235,9 +1239,10 @@ do_page_fault(struct pt_regs *regs, unsigned long error_code)
        __do_page_fault(regs, error_code);
        exception_exit(prev_state);
}
+NOKPROBE_SYMBOL(do_page_fault);
-static void trace_page_fault_entries(struct pt_regs *regs,
-                                     unsigned long error_code)
+static __always_inline void
+trace_page_fault_entries(struct pt_regs *regs, unsigned long error_code)
{
        if (user_mode(regs))
                trace_page_fault_user(read_cr2(), regs, error_code);
@@ -1245,7 +1250,7 @@ static void trace_page_fault_entries(struct pt_regs *regs,
        trace_page_fault_kernel(read_cr2(), regs, error_code);
}
-dotraplinkage void __kprobes
+dotraplinkage void
trace_do_page_fault(struct pt_regs *regs, unsigned long error_code)
{
        enum ctx_state prev_state;
@@ -1255,3 +1260,4 @@ trace_do_page_fault(struct pt_regs *regs, unsigned long error_code)
        __do_page_fault(regs, error_code);
        exception_exit(prev_state);
}
+NOKPROBE_SYMBOL(trace_do_page_fault);
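The conversion pattern above repeats throughout the series. As a rough
illustration (not taken from any of the patches; the function names below
are invented), it boils down to this:

/*
 * Illustrative sketch only.  Helpers that used to be
 * "static inline ... __kprobes" become __always_inline, so they are
 * folded into their blacklisted callers and need no entry of their own;
 * out-of-line functions lose the __kprobes section annotation and
 * instead get a NOKPROBE_SYMBOL() entry generated at build time.
 */
static __always_inline int fault_path_helper(unsigned long addr)
{
        return 0;               /* always inlined into its caller */
}

static noinline int fault_path_entry(unsigned long address)
{
        return fault_path_helper(address);
}
NOKPROBE_SYMBOL(fault_path_entry);      /* emits a blacklist entry for this symbol */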
Masami Hiramatsu
2013-Nov-20 04:22 UTC
[PATCH -tip v3 15/23] x86/alternative: Use NOKPROBE_SYMBOL macro in alternative.c
Use NOKPROBE_SYMBOL macro to protect functions from kprobes instead of
__kprobes annotation in alternative.c.

Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt at hitachi.com>
Cc: Thomas Gleixner <tglx at linutronix.de>
Cc: Ingo Molnar <mingo at redhat.com>
Cc: "H. Peter Anvin" <hpa at zytor.com>
Cc: Jiri Kosina <jkosina at suse.cz>
Cc: Borislav Petkov <bp at suse.de>
---
 arch/x86/kernel/alternative.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index df94598..7cfd6d7 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -551,7 +551,7 @@ void *__init_or_module text_poke_early(void *addr, const void *opcode,
 *
 * Note: Must be called under text_mutex.
 */
-void *__kprobes text_poke(void *addr, const void *opcode, size_t len)
+void *text_poke(void *addr, const void *opcode, size_t len)
{
        unsigned long flags;
        char *vaddr;
@@ -585,6 +585,7 @@ void *__kprobes text_poke(void *addr, const void *opcode, size_t len)
        local_irq_restore(flags);
        return addr;
}
+NOKPROBE_SYMBOL(text_poke);
static void do_sync_core(void *info)
{
Masami Hiramatsu
2013-Nov-20 04:22 UTC
[PATCH -tip v3 16/23] x86/nmi: Use NOKPROBE_SYMBOL macro for nmi handlers
Use NOKPROBE_SYMBOL macro to protect functions from kprobes instead of
__kprobes annotation for nmi handlers.

Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt at hitachi.com>
Cc: Thomas Gleixner <tglx at linutronix.de>
Cc: Ingo Molnar <mingo at redhat.com>
Cc: "H. Peter Anvin" <hpa at zytor.com>
Cc: Peter Zijlstra <a.p.zijlstra at chello.nl>
Cc: Paul Mackerras <paulus at samba.org>
Cc: Arnaldo Carvalho de Melo <acme at ghostprotocols.net>
Cc: Michel Lespinasse <walken at google.com>
Cc: Dave Hansen <dave.hansen at linux.intel.com>
Cc: Zhang Rui <rui.zhang at intel.com>
---
 arch/x86/kernel/apic/hw_nmi.c | 3 ++-
 arch/x86/kernel/cpu/perf_event.c | 3 ++-
 arch/x86/kernel/cpu/perf_event_amd_ibs.c | 3 ++-
 arch/x86/kernel/nmi.c | 18 ++++++++++++------
 4 files changed, 18 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kernel/apic/hw_nmi.c b/arch/x86/kernel/apic/hw_nmi.c
index a698d71..73eb5b3 100644
--- a/arch/x86/kernel/apic/hw_nmi.c
+++ b/arch/x86/kernel/apic/hw_nmi.c
@@ -60,7 +60,7 @@ void arch_trigger_all_cpu_backtrace(void)
        smp_mb__after_clear_bit();
}
-static int __kprobes
+static int
arch_trigger_all_cpu_backtrace_handler(unsigned int cmd, struct pt_regs *regs)
{
        int cpu;
@@ -80,6 +80,7 @@ arch_trigger_all_cpu_backtrace_handler(unsigned int cmd, struct pt_regs *regs)
        return NMI_DONE;
}
+NOKPROBE_SYMBOL(arch_trigger_all_cpu_backtrace_handler);
static int __init register_trigger_all_cpu_backtrace(void)
{
diff --git a/arch/x86/kernel/cpu/perf_event.c b/arch/x86/kernel/cpu/perf_event.c
index 98f845b..396c1a2 100644
--- a/arch/x86/kernel/cpu/perf_event.c
+++ b/arch/x86/kernel/cpu/perf_event.c
@@ -1273,7 +1273,7 @@ void perf_events_lapic_init(void)
        apic_write(APIC_LVTPC, APIC_DM_NMI);
}
-static int __kprobes
+static int
perf_event_nmi_handler(unsigned int cmd, struct pt_regs *regs)
{
        u64 start_clock;
@@ -1291,6 +1291,7 @@ perf_event_nmi_handler(unsigned int cmd, struct pt_regs *regs)
        return ret;
}
+NOKPROBE_SYMBOL(perf_event_nmi_handler);
struct event_constraint emptyconstraint;
struct event_constraint unconstrained;
diff --git a/arch/x86/kernel/cpu/perf_event_amd_ibs.c b/arch/x86/kernel/cpu/perf_event_amd_ibs.c
index e09f0bf..c668309 100644
--- a/arch/x86/kernel/cpu/perf_event_amd_ibs.c
+++ b/arch/x86/kernel/cpu/perf_event_amd_ibs.c
@@ -592,7 +592,7 @@ out:
        return 1;
}
-static int __kprobes
+static int
perf_ibs_nmi_handler(unsigned int cmd, struct pt_regs *regs)
{
        int handled = 0;
@@ -605,6 +605,7 @@ perf_ibs_nmi_handler(unsigned int cmd, struct pt_regs *regs)
        return handled;
}
+NOKPROBE_SYMBOL(perf_ibs_nmi_handler);
static __init int perf_ibs_pmu_init(struct perf_ibs *perf_ibs, char *name)
{
diff --git a/arch/x86/kernel/nmi.c b/arch/x86/kernel/nmi.c
index 6fcb49c..38ce829 100644
--- a/arch/x86/kernel/nmi.c
+++ b/arch/x86/kernel/nmi.c
@@ -95,7 +95,7 @@ static int __init nmi_warning_debugfs(void)
}
fs_initcall(nmi_warning_debugfs);
-static int __kprobes nmi_handle(unsigned int type, struct pt_regs *regs, bool b2b)
+static int nmi_handle(unsigned int type, struct pt_regs *regs, bool b2b)
{
        struct nmi_desc *desc = nmi_to_desc(type);
        struct nmiaction *a;
@@ -137,6 +137,7 @@ static int __kprobes nmi_handle(unsigned int type, struct pt_regs *regs, bool b2
        /* return total number of NMI events handled */
        return handled;
}
+NOKPROBE_SYMBOL(nmi_handle);
int __register_nmi_handler(unsigned int type, struct nmiaction *action)
{
@@ -197,7 +198,7 @@ void unregister_nmi_handler(unsigned int type, const char *name)
}
EXPORT_SYMBOL_GPL(unregister_nmi_handler);
-static __kprobes void
+static void
pci_serr_error(unsigned char reason, struct pt_regs *regs)
{
        /* check to see if anyone registered against these types of errors */
@@ -227,8 +228,9 @@ pci_serr_error(unsigned char reason, struct pt_regs *regs)
        reason = (reason & NMI_REASON_CLEAR_MASK) | NMI_REASON_CLEAR_SERR;
        outb(reason, NMI_REASON_PORT);
}
+NOKPROBE_SYMBOL(pci_serr_error);
-static __kprobes void
+static void
io_check_error(unsigned char reason, struct pt_regs *regs)
{
        unsigned long i;
@@ -258,8 +260,9 @@ io_check_error(unsigned char reason, struct pt_regs *regs)
        reason &= ~NMI_REASON_CLEAR_IOCHK;
        outb(reason, NMI_REASON_PORT);
}
+NOKPROBE_SYMBOL(io_check_error);
-static __kprobes void
+static void
unknown_nmi_error(unsigned char reason, struct pt_regs *regs)
{
        int handled;
@@ -287,11 +290,12 @@ unknown_nmi_error(unsigned char reason, struct pt_regs *regs)
        pr_emerg("Dazed and confused, but trying to continue\n");
}
+NOKPROBE_SYMBOL(unknown_nmi_error);
static DEFINE_PER_CPU(bool, swallow_nmi);
static DEFINE_PER_CPU(unsigned long, last_nmi_rip);
-static __kprobes void default_do_nmi(struct pt_regs *regs)
+static void default_do_nmi(struct pt_regs *regs)
{
        unsigned char reason = 0;
        int handled;
@@ -390,6 +394,7 @@ static __kprobes void default_do_nmi(struct pt_regs *regs)
        else
                unknown_nmi_error(reason, regs);
}
+NOKPROBE_SYMBOL(default_do_nmi);
/*
 * NMIs can hit breakpoints which will cause it to lose its
@@ -509,7 +514,7 @@ static inline void nmi_nesting_postprocess(void)
}
#endif
-dotraplinkage notrace __kprobes void
+dotraplinkage notrace void
do_nmi(struct pt_regs *regs, long error_code)
{
        nmi_nesting_preprocess(regs);
@@ -526,6 +531,7 @@ do_nmi(struct pt_regs *regs, long error_code)
        /* On i386, may loop back to preprocess */
        nmi_nesting_postprocess();
}
+NOKPROBE_SYMBOL(do_nmi);
void stop_nmi(void)
{
Masami Hiramatsu
2013-Nov-20 04:22 UTC
[PATCH -tip v3 17/23] x86/kvm: Use NOKPROBE_SYMBOL macro in kvm.c
Use NOKPROBE_SYMBOL macro for protecting functions from kprobes instead
of __kprobes annotation in kvm.c. This also adds
kvm_read_and_reset_pf_reason to the blacklist because it can be called
before do_page_fault.

Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt at hitachi.com>
Cc: Thomas Gleixner <tglx at linutronix.de>
Cc: Ingo Molnar <mingo at redhat.com>
Cc: "H. Peter Anvin" <hpa at zytor.com>
Cc: Gleb Natapov <gleb at redhat.com>
Cc: Raghavendra K T <raghavendra.kt at linux.vnet.ibm.com>
Cc: Marcelo Tosatti <mtosatti at redhat.com>
---
 arch/x86/kernel/kvm.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 6dd802c..fb95987 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -251,8 +251,9 @@ u32 kvm_read_and_reset_pf_reason(void)
        return reason;
}
EXPORT_SYMBOL_GPL(kvm_read_and_reset_pf_reason);
+NOKPROBE_SYMBOL(kvm_read_and_reset_pf_reason);
-dotraplinkage void __kprobes
+dotraplinkage void
do_async_page_fault(struct pt_regs *regs, unsigned long error_code)
{
        enum ctx_state prev_state;
@@ -276,6 +277,7 @@ do_async_page_fault(struct pt_regs *regs, unsigned long error_code)
                break;
        }
}
+NOKPROBE_SYMBOL(do_async_page_fault);
static void __init paravirt_ops_setup(void)
{
Masami Hiramatsu
2013-Nov-20 04:22 UTC
[PATCH -tip v3 18/23] x86/dumpstack: Use NOKPROBE_SYMBOL macro in dumpstack.c
Use NOKPROBE_SYMBOL macro for protecting functions from kprobes instead
of __kprobes annotation in dumpstack.c.

Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt at hitachi.com>
Cc: Thomas Gleixner <tglx at linutronix.de>
Cc: Ingo Molnar <mingo at redhat.com>
Cc: "H. Peter Anvin" <hpa at zytor.com>
Cc: Andrew Morton <akpm at linux-foundation.org>
Cc: Jiri Slaby <jslaby at suse.cz>
Cc: Tejun Heo <tj at kernel.org>
Cc: Vineet Gupta <vgupta at synopsys.com>
---
 arch/x86/kernel/dumpstack.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c
index d9c12d3..b74ebc7 100644
--- a/arch/x86/kernel/dumpstack.c
+++ b/arch/x86/kernel/dumpstack.c
@@ -200,7 +200,7 @@ static arch_spinlock_t die_lock = __ARCH_SPIN_LOCK_UNLOCKED;
static int die_owner = -1;
static unsigned int die_nest_count;
-unsigned __kprobes long oops_begin(void)
+unsigned long oops_begin(void)
{
        int cpu;
        unsigned long flags;
@@ -223,8 +223,9 @@ unsigned __kprobes long oops_begin(void)
        return flags;
}
EXPORT_SYMBOL_GPL(oops_begin);
+NOKPROBE_SYMBOL(oops_begin);
-void __kprobes oops_end(unsigned long flags, struct pt_regs *regs, int signr)
+void oops_end(unsigned long flags, struct pt_regs *regs, int signr)
{
        if (regs && kexec_should_crash(current))
                crash_kexec(regs);
@@ -247,8 +248,9 @@ void __kprobes oops_end(unsigned long flags, struct pt_regs *regs, int signr)
        panic("Fatal exception");
        do_exit(signr);
}
+NOKPROBE_SYMBOL(oops_end);
-int __kprobes __die(const char *str, struct pt_regs *regs, long err)
+int __die(const char *str, struct pt_regs *regs, long err)
{
#ifdef CONFIG_X86_32
        unsigned short ss;
@@ -291,6 +293,7 @@ int __kprobes __die(const char *str, struct pt_regs *regs, long err)
#endif
        return 0;
}
+NOKPROBE_SYMBOL(__die);
/*
 * This is gone through when something in the kernel has done something bad
Masami Hiramatsu
2013-Nov-20 04:22 UTC
[PATCH -tip v3 19/23] [BUGFIX] kprobes/x86: Prohibit probing on debug_stack_*
Prohibit probing on debug_stack_reset and debug_stack_set_zero.
Since both functions are called from the TRACE_IRQS_ON/OFF_DEBUG
macros, which run in the int3 IST entry, probing them may cause a
soft lockup. This happens when the kernel is built with
CONFIG_DYNAMIC_FTRACE=y and CONFIG_TRACE_IRQFLAGS=y.

Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt at hitachi.com>
Cc: Thomas Gleixner <tglx at linutronix.de>
Cc: Ingo Molnar <mingo at redhat.com>
Cc: "H. Peter Anvin" <hpa at zytor.com>
Cc: Borislav Petkov <bp at suse.de>
Cc: Fenghua Yu <fenghua.yu at intel.com>
Cc: Seiji Aguchi <seiji.aguchi at hds.com>
---
 arch/x86/kernel/cpu/common.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 1789b06..d0a802a 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -8,6 +8,7 @@
#include <linux/delay.h>
#include <linux/sched.h>
#include <linux/init.h>
+#include <linux/kprobes.h>
#include <linux/kgdb.h>
#include <linux/smp.h>
#include <linux/io.h>
@@ -1163,6 +1164,7 @@ int is_debug_stack(unsigned long addr)
                (addr <= __get_cpu_var(debug_stack_addr) &&
                 addr > (__get_cpu_var(debug_stack_addr) - DEBUG_STKSZ));
}
+NOKPROBE_SYMBOL(is_debug_stack);
DEFINE_PER_CPU(u32, debug_idt_ctr);
@@ -1171,6 +1173,7 @@ void debug_stack_set_zero(void)
        this_cpu_inc(debug_idt_ctr);
        load_current_idt();
}
+NOKPROBE_SYMBOL(debug_stack_set_zero);
void debug_stack_reset(void)
{
@@ -1179,6 +1182,7 @@ void debug_stack_reset(void)
        if (this_cpu_dec_return(debug_idt_ctr) == 0)
                load_current_idt();
}
+NOKPROBE_SYMBOL(debug_stack_reset);
#else /* CONFIG_X86_64 */
Masami Hiramatsu
2013-Nov-20 04:22 UTC
[PATCH -tip v3 20/23] [BUGFIX] kprobes: Prohibit probing on func_ptr_is_kernel_text
Prohibit probing on func_ptr_is_kernel_text() by adding it to the
kprobe blacklist. Since func_ptr_is_kernel_text() is called from
notifier_call_chain(), which is called from the int3 handler, probing
it may cause a double int3 fault and the kernel will reboot. This
happens when the kernel is built with CONFIG_DEBUG_NOTIFIERS=y.

Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt at hitachi.com>
Cc: Andrew Morton <akpm at linux-foundation.org>
Cc: "Uwe Kleine-König" <u.kleine-koenig at pengutronix.de>
Cc: Borislav Petkov <bp at suse.de>
Cc: Ingo Molnar <mingo at kernel.org>
---
 kernel/extable.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/extable.c b/kernel/extable.c
index 832cb28..885c877 100644
--- a/kernel/extable.c
+++ b/kernel/extable.c
@@ -20,6 +20,7 @@
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/init.h>
+#include <linux/kprobes.h>
#include <asm/sections.h>
#include <asm/uaccess.h>
@@ -137,3 +138,4 @@ int func_ptr_is_kernel_text(void *ptr)
                return 1;
        return is_module_text_address(addr);
}
+NOKPROBE_SYMBOL(func_ptr_is_kernel_text);
Masami Hiramatsu
2013-Nov-20 04:22 UTC
[PATCH -tip v3 21/23] notifier: Use NOKPROBE_SYMBOL macro in notifier
Use NOKPROBE_SYMBOL macro to protect functions from kprobes instead of
__kprobes annotation in notifier.

Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt at hitachi.com>
---
 kernel/notifier.c | 22 +++++++++++++---------
 1 file changed, 13 insertions(+), 9 deletions(-)

diff --git a/kernel/notifier.c b/kernel/notifier.c
index 2d5cc4c..61fc78a 100644
--- a/kernel/notifier.c
+++ b/kernel/notifier.c
@@ -71,9 +71,9 @@ static int notifier_chain_unregister(struct notifier_block **nl,
 *	@returns:	notifier_call_chain returns the value returned by the
 *			last notifier function called.
 */
-static int __kprobes notifier_call_chain(struct notifier_block **nl,
-					unsigned long val, void *v,
-					int nr_to_call, int *nr_calls)
+static int notifier_call_chain(struct notifier_block **nl,
+			       unsigned long val, void *v,
+			       int nr_to_call, int *nr_calls)
{
        int ret = NOTIFY_DONE;
        struct notifier_block *nb, *next_nb;
@@ -102,6 +102,7 @@ static int __kprobes notifier_call_chain(struct notifier_block **nl,
        }
        return ret;
}
+NOKPROBE_SYMBOL(notifier_call_chain);
/*
 * Atomic notifier chain routines. Registration and unregistration
@@ -172,9 +173,9 @@ EXPORT_SYMBOL_GPL(atomic_notifier_chain_unregister);
 *	Otherwise the return value is the return value
 *	of the last notifier function called.
 */
-int __kprobes __atomic_notifier_call_chain(struct atomic_notifier_head *nh,
-					unsigned long val, void *v,
-					int nr_to_call, int *nr_calls)
+int __atomic_notifier_call_chain(struct atomic_notifier_head *nh,
+				 unsigned long val, void *v,
+				 int nr_to_call, int *nr_calls)
{
        int ret;
@@ -184,13 +185,15 @@ int __kprobes __atomic_notifier_call_chain(struct atomic_notifier_head *nh,
        return ret;
}
EXPORT_SYMBOL_GPL(__atomic_notifier_call_chain);
+NOKPROBE_SYMBOL(__atomic_notifier_call_chain);
-int __kprobes atomic_notifier_call_chain(struct atomic_notifier_head *nh,
-					unsigned long val, void *v)
+int atomic_notifier_call_chain(struct atomic_notifier_head *nh,
+			       unsigned long val, void *v)
{
        return __atomic_notifier_call_chain(nh, val, v, -1, NULL);
}
EXPORT_SYMBOL_GPL(atomic_notifier_call_chain);
+NOKPROBE_SYMBOL(atomic_notifier_call_chain);
/*
 * Blocking notifier chain routines. All access to the chain is
@@ -527,7 +530,7 @@ EXPORT_SYMBOL_GPL(srcu_init_notifier_head);
static ATOMIC_NOTIFIER_HEAD(die_chain);
-int notrace __kprobes notify_die(enum die_val val, const char *str,
+int notrace notify_die(enum die_val val, const char *str,
		struct pt_regs *regs, long err, int trap, int sig)
{
        struct die_args args = {
@@ -540,6 +543,7 @@ int notrace __kprobes notify_die(enum die_val val, const char *str,
        };
        return atomic_notifier_call_chain(&die_chain, val, &args);
}
+NOKPROBE_SYMBOL(notify_die);
int register_die_notifier(struct notifier_block *nb)
{
Masami Hiramatsu
2013-Nov-20 04:22 UTC
[PATCH -tip v3 22/23] sched: Use NOKPROBE_SYMBOL macro in sched
Use NOKPROBE_SYMBOL macro to protect functions from kprobes instead of
__kprobes annotation in sched/core.c.

Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt at hitachi.com>
Cc: Ingo Molnar <mingo at redhat.com>
Cc: Peter Zijlstra <peterz at infradead.org>
---
 kernel/sched/core.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 504fdbd..fece2e3 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2342,7 +2342,7 @@ notrace unsigned long get_parent_ip(unsigned long addr)
#if defined(CONFIG_PREEMPT) && (defined(CONFIG_DEBUG_PREEMPT) || \
				defined(CONFIG_PREEMPT_TRACER))
-void __kprobes preempt_count_add(int val)
+void preempt_count_add(int val)
{
#ifdef CONFIG_DEBUG_PREEMPT
        /*
@@ -2363,8 +2363,9 @@ void __kprobes preempt_count_add(int val)
                trace_preempt_off(CALLER_ADDR0, get_parent_ip(CALLER_ADDR1));
}
EXPORT_SYMBOL(preempt_count_add);
+NOKPROBE_SYMBOL(preempt_count_add);
-void __kprobes preempt_count_sub(int val)
+void preempt_count_sub(int val)
{
#ifdef CONFIG_DEBUG_PREEMPT
        /*
@@ -2385,6 +2386,7 @@ void __kprobes preempt_count_sub(int val)
        __preempt_count_sub(val);
}
EXPORT_SYMBOL(preempt_count_sub);
+NOKPROBE_SYMBOL(preempt_count_sub);
#endif
Masami Hiramatsu
2013-Nov-20 04:22 UTC
[PATCH -tip v3 23/23] kprobes/x86: Use kprobe_blacklist for .kprobes.text and .entry.text
Use kprobe_blackpoint for blacklisting .entry.text and .kprobes.text
instead of arch_within_kprobe_blacklist. This also makes them visible
via (debugfs)/kprobes/blacklist.

Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt at hitachi.com>
Cc: Thomas Gleixner <tglx at linutronix.de>
Cc: Ingo Molnar <mingo at redhat.com>
Cc: "H. Peter Anvin" <hpa at zytor.com>
Cc: Ananth N Mavinakayanahalli <ananth at in.ibm.com>
Cc: "David S. Miller" <davem at davemloft.net>
Cc: Steven Rostedt <rostedt at goodmis.org>
Cc: Andrew Morton <akpm at linux-foundation.org>
---
 arch/x86/kernel/kprobes/core.c | 14 +++++++-------
 include/linux/kprobes.h | 1 +
 kernel/kprobes.c | 40 +++++++++++++++++++++++++++++-----------
 3 files changed, 37 insertions(+), 18 deletions(-)

diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index 54ada0b..adb0e26 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -1087,16 +1087,16 @@ int longjmp_break_handler(struct kprobe *p, struct pt_regs *regs)
}
NOKPROBE_SYMBOL(longjmp_break_handler);
-bool arch_within_kprobe_blacklist(unsigned long addr)
-{
-	return ((addr >= (unsigned long)__kprobes_text_start &&
-		 addr < (unsigned long)__kprobes_text_end) ||
-		(addr >= (unsigned long)__entry_text_start &&
-		 addr < (unsigned long)__entry_text_end));
-}
+static struct kprobe_blackpoint kbp_entry_text = {
+	.name = ".entry.text",
+};
int __init arch_init_kprobes(void)
{
+	kbp_entry_text.start_addr = (unsigned long)__entry_text_start;
+	kbp_entry_text.range = (unsigned long)__entry_text_end -
+			       (unsigned long)__entry_text_start;
+	add_kprobe_blacklist(&kbp_entry_text);
	return 0;
}
diff --git a/include/linux/kprobes.h b/include/linux/kprobes.h
index 641d009..19be202 100644
--- a/include/linux/kprobes.h
+++ b/include/linux/kprobes.h
@@ -267,6 +267,7 @@ extern int arch_init_kprobes(void);
extern void show_registers(struct pt_regs *regs);
extern void kprobes_inc_nmissed_count(struct kprobe *p);
extern bool arch_within_kprobe_blacklist(unsigned long addr);
+extern void add_kprobe_blacklist(struct kprobe_blackpoint *bp);
struct kprobe_insn_cache {
	struct mutex mutex;
diff --git a/kernel/kprobes.c b/kernel/kprobes.c
index 0a206ec..895cc8a 100644
--- a/kernel/kprobes.c
+++ b/kernel/kprobes.c
@@ -1325,19 +1325,10 @@ out:
	return ret;
}
-bool __weak arch_within_kprobe_blacklist(unsigned long addr)
-{
-	/* The __kprobes marked functions and entry code must not be probed */
-	return (addr >= (unsigned long)__kprobes_text_start &&
-		addr < (unsigned long)__kprobes_text_end);
-}
-
static bool within_kprobe_blacklist(unsigned long addr)
{
	struct kprobe_blackpoint *bp;
-	if (arch_within_kprobe_blacklist(addr))
-		return true;
	/*
	 * If there exists a kprobe_blacklist, verify and
	 * fail any probe registration in the prohibited area
@@ -2098,6 +2089,19 @@ static void shrink_kprobe_blacklist(struct kprobe_blackpoint **start,
	mutex_unlock(&kprobe_blacklist_mutex);
}
+static void __add_kprobe_blacklist(struct kprobe_blackpoint *bp)
+{
+	INIT_LIST_HEAD(&bp->list);
+	list_add_tail(&bp->list, &kprobe_blacklist);
+}
+
+void add_kprobe_blacklist(struct kprobe_blackpoint *bp)
+{
+	mutex_lock(&kprobe_blacklist_mutex);
+	__add_kprobe_blacklist(bp);
+	mutex_unlock(&kprobe_blacklist_mutex);
+}
+
/*
 * Lookup and populate the kprobe_blacklist.
 *
@@ -2123,8 +2127,7 @@ static void populate_kprobe_blacklist(struct kprobe_blackpoint **start,
			continue;
		bp->range = size;
-		INIT_LIST_HEAD(&bp->list);
-		list_add_tail(&bp->list, &kprobe_blacklist);
+		__add_kprobe_blacklist(bp);
	}
	mutex_unlock(&kprobe_blacklist_mutex);
}
@@ -2134,6 +2137,7 @@ extern struct kprobe_blackpoint *__stop_kprobe_blacklist[];
static int __init init_kprobes(void)
{
+	struct kprobe_blackpoint *bp;
	int i, err = 0;
	/* FIXME allocate the probe table, currently defined statically */
@@ -2147,6 +2151,20 @@ static int __init init_kprobes(void)
	populate_kprobe_blacklist(__start_kprobe_blacklist,
				  __stop_kprobe_blacklist);
+	if (__kprobes_text_start != __kprobes_text_end) {
+		/* The __kprobes marked functions must not be probed */
+		bp = kmalloc(sizeof(*bp), GFP_KERNEL);
+		if (!bp) {
+			pr_err("Kprobes: Failed to allocate memory\n");
+			return -ENOMEM;
+		}
+		bp->name = ".kprobes.text";
+		bp->start_addr = (unsigned long)__kprobes_text_start;
+		bp->range = (unsigned long)__kprobes_text_end -
+			    (unsigned long)__kprobes_text_start;
+		add_kprobe_blacklist(bp);
+	}
+
	if (kretprobe_blacklist_size) {
		/* lookup the function address from its name */
		for (i = 0; kretprobe_blacklist[i].name != NULL; i++) {
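With both text ranges registered as ordinary blacklist entries, the address
check reduces to a range test over the list. The following is an
illustrative sketch only -- the real lookup body is elided in the hunk
above -- using the kprobe_blackpoint fields shown in this series:

/*
 * Illustrative sketch, not the patch itself: checking a probe address
 * against the blacklist is a walk over registered kprobe_blackpoint
 * entries, each describing [start_addr, start_addr + range).
 */
static bool addr_is_blacklisted(struct list_head *blacklist, unsigned long addr)
{
        struct kprobe_blackpoint *bp;

        list_for_each_entry(bp, blacklist, list) {
                if (addr >= bp->start_addr &&
                    addr < bp->start_addr + bp->range)
                        return true;    /* inside a prohibited region */
        }
        return false;
}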
Frank Ch. Eigler
2013-Nov-20 14:26 UTC
[PATCH -tip v3 00/23] kprobes: introduce NOKPROBE_SYMBOL() and general cleaning of kprobe blacklist
masami.hiramatsu.pt wrote:

> [...] This series also includes a change which prohibits probing on
> the address in .entry.text because the code is used for very
> low-level sensitive interrupt/syscall entries. Probing such code may
> cause unexpected result (actually most of that area is already in
> the kprobe blacklist). So I've decide to prohibit probing all of
> them. [...]

Does this new blacklist cover enough that the kernel now survives a
broadly wildcarded perf-probe, e.g. over all of its kallsyms?

- FChE
Ingo Molnar
2013-Nov-20 15:38 UTC
[PATCH -tip v3 00/23] kprobes: introduce NOKPROBE_SYMBOL() and general cleaning of kprobe blacklist
* Frank Ch. Eigler <fche at redhat.com> wrote:

> masami.hiramatsu.pt wrote:
>
> > [...] This series also includes a change which prohibits probing
> > on the address in .entry.text because the code is used for very
> > low-level sensitive interrupt/syscall entries. Probing such code
> > may cause unexpected result (actually most of that area is already
> > in the kprobe blacklist). So I've decide to prohibit probing all
> > of them. [...]
>
> Does this new blacklist cover enough that the kernel now survives a
> broadly wildcarded perf-probe, e.g. over e.g. all of its kallsyms?

That's generally the purpose of the annotations - if it doesn't then
that's a bug.

Thanks,

	Ingo
Ingo Molnar
2013-Nov-21 11:30 UTC
[PATCH -tip v3 18/23] x86/dumpstack: Use NOKPROBE_SYMBOL macro in dumpstack.c
* Masami Hiramatsu <masami.hiramatsu.pt at hitachi.com> wrote:

> Use NOKPROBE_SYMBOL macro for protecting functions
> from kprobes instead of __kprobes annotation in
> dumpstack.c.
>
> Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt at hitachi.com>
> Cc: Thomas Gleixner <tglx at linutronix.de>
> Cc: Ingo Molnar <mingo at redhat.com>
> Cc: "H. Peter Anvin" <hpa at zytor.com>
> Cc: Andrew Morton <akpm at linux-foundation.org>
> Cc: Jiri Slaby <jslaby at suse.cz>
> Cc: Tejun Heo <tj at kernel.org>
> Cc: Vineet Gupta <vgupta at synopsys.com>
> ---
>  arch/x86/kernel/dumpstack.c | 9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)

Btw., all these mechanical changes of the __kprobes annotation can be
merged into a single patch. That will cut down on the size of the
series substantially.

Thanks,

	Ingo
Andi Kleen
2013-Nov-22 21:21 UTC
[PATCH -tip v3 13/23] x86/trap: Use NOKPROBE_SYMBOL macro in trap.c
On Wed, Nov 20, 2013 at 04:22:21AM +0000, Masami Hiramatsu wrote:

> Use NOKPROBE_SYMBOL macro to protect functions from kprobes
> instead of __kprobes annotation in trap.c.
> This also applies __always_inline annotation for some cases,
> because NOKPROBE_SYMBOL() will inhibit inlining by referring
> the symbol address.

NOKPROBE_SYMBOL seems to add a reference from some variable to the
function? With LTO we can optimize away unused functions, but not when
there are references to the symbol. So this would likely prevent
optimizations with LTO.

I prefer a simpler "__kprobe" annotation.

-Andi
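To make the LTO concern concrete, the following stands alone as an
illustrative sketch (not code from the series; the table name is invented):
once a function's address is stored in a reachable static object,
whole-program optimization has to keep an out-of-line, addressable copy of
the function even if every direct call to it is inlined.

/* Illustrative sketch only -- not code from the series. */
static int rarely_used_handler(int x)
{
        return x + 1;
}

/*
 * NOKPROBE_SYMBOL() effectively creates a static object like this one:
 * because the initializer takes the function's address, link-time
 * optimization must keep an out-of-line, addressable copy of
 * rarely_used_handler() even if all direct calls to it are inlined
 * or it otherwise appears unused.
 */
static void *keep_alive[] __attribute__((used)) = {
        (void *)rarely_used_handler,
};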
Ingo Molnar
2013-Nov-27 13:32 UTC
[PATCH -tip v3 02/23] kprobes: Introduce NOKPROBE_SYMBOL() macro for blacklist
* Masami Hiramatsu <masami.hiramatsu.pt at hitachi.com> wrote:

> +#ifdef CONFIG_KPROBES
> +/*
> + * Blacklist ganerating macro. Specify functions which is not probed
> + * by using this macro.
> + */
> +#define __NOKPROBE_SYMBOL(fname)				\
> +static struct kprobe_blackpoint __used			\
> +	_kprobe_bp_##fname = {					\
> +		.name = #fname,					\
> +		.start_addr = (unsigned long)fname,		\
> +	};							\
> +static struct kprobe_blackpoint __used			\
> +	__attribute__((section("_kprobe_blacklist")))		\
> +	*_p_kprobe_bp_##fname = &_kprobe_bp_##fname;

'kprobe_blackpoint' sounds a bit weird - how about
'kprobe_blacklist_entry'?

Also, _kprobe_blacklist probably wants to be _kprobes_blacklist,
right?

Thanks,

	Ingo
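For reference, hand-expanding the quoted macro for a single symbol gives
roughly the following (an illustrative expansion, assuming NOKPROBE_SYMBOL()
is a thin wrapper around __NOKPROBE_SYMBOL(); all names come from the macro
text above). The pointer emitted into the _kprobe_blacklist section is what
populate_kprobe_blacklist() walks between __start_kprobe_blacklist and
__stop_kprobe_blacklist at boot, per the init_kprobes() hunk in patch 23.

/* NOKPROBE_SYMBOL(do_debug); would expand to approximately: */
static struct kprobe_blackpoint __used
	_kprobe_bp_do_debug = {
		.name = "do_debug",
		.start_addr = (unsigned long)do_debug,
	};
static struct kprobe_blackpoint __used
	__attribute__((section("_kprobe_blacklist")))
	*_p_kprobe_bp_do_debug = &_kprobe_bp_do_debug;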