Will Deacon
2020-Jul-10 16:51 UTC
[PATCH v3 00/19] Allow architectures to override __READ_ONCE()
Hi all,

This is version three of the patches I previously posted here:

  v1: https://lore.kernel.org/lkml/20191108170120.22331-1-will at kernel.org/
  v2: https://lore.kernel.org/r/20200630173734.14057-1-will at kernel.org

Changes since v2 include:

  * Actually add the barrier in READ_ONCE() for Alpha!
  * Implement Alpha's smp_load_acquire() using __READ_ONCE(), rather than
    the other way around.
  * Further untangling of header files
  * Use CONFIG_LTO instead of CONFIG_CLANG_LTO

I have booted this on arm64, and build-tested as follows:

  - arm64   allnoconfig, defconfig (also bisected) and allmodconfig
  - arm32   allnoconfig, defconfig and allmodconfig
  - x86_64  allnoconfig, defconfig and allmodconfig
  - alpha   defconfig, defconfig+CONFIG_SMP=y
  - riscv64 defconfig
  - powerpc64 defconfig
  - s390    defconfig
  - sparc32 defconfig, defconfig+CONFIG_SMP=y
  - sparc64 defconfig

Cheers,

Will

Cc: Joel Fernandes <joelaf at google.com>
Cc: Sami Tolvanen <samitolvanen at google.com>
Cc: Nick Desaulniers <ndesaulniers at google.com>
Cc: Kees Cook <keescook at chromium.org>
Cc: Marco Elver <elver at google.com>
Cc: "Paul E. McKenney" <paulmck at kernel.org>
Cc: Matt Turner <mattst88 at gmail.com>
Cc: Ivan Kokshaysky <ink at jurassic.park.msu.ru>
Cc: Richard Henderson <rth at twiddle.net>
Cc: Peter Zijlstra <peterz at infradead.org>
Cc: Alan Stern <stern at rowland.harvard.edu>
Cc: "Michael S. Tsirkin" <mst at redhat.com>
Cc: Jason Wang <jasowang at redhat.com>
Cc: Arnd Bergmann <arnd at arndb.de>
Cc: Boqun Feng <boqun.feng at gmail.com>
Cc: Catalin Marinas <catalin.marinas at arm.com>
Cc: Mark Rutland <mark.rutland at arm.com>
Cc: linux-arm-kernel at lists.infradead.org
Cc: linux-alpha at vger.kernel.org
Cc: virtualization at lists.linux-foundation.org
Cc: kernel-team at android.com

--->8

SeongJae Park (1):
  Documentation/barriers/kokr: Remove references to
    [smp_]read_barrier_depends()

Will Deacon (18):
  tools: bpf: Use local copy of headers including uapi/linux/filter.h
  compiler.h: Split {READ,WRITE}_ONCE definitions out into rwonce.h
  asm/rwonce: Allow __READ_ONCE to be overridden by the architecture
  alpha: Override READ_ONCE() with barriered implementation
  asm/rwonce: Remove smp_read_barrier_depends() invocation
  asm/rwonce: Don't pull <asm/barrier.h> into 'asm-generic/rwonce.h'
  vhost: Remove redundant use of read_barrier_depends() barrier
  alpha: Replace smp_read_barrier_depends() usage with smp_[r]mb()
  locking/barriers: Remove definitions for [smp_]read_barrier_depends()
  Documentation/barriers: Remove references to
    [smp_]read_barrier_depends()
  tools/memory-model: Remove smp_read_barrier_depends() from informal doc
  include/linux: Remove smp_read_barrier_depends() from comments
  checkpatch: Remove checks relating to [smp_]read_barrier_depends()
  arm64: Reduce the number of header files pulled into vmlinux.lds.S
  arm64: alternatives: Split up alternative.h
  arm64: cpufeatures: Add capability for LDAPR instruction
  arm64: alternatives: Remove READ_ONCE() usage during patch operation
  arm64: lto: Strengthen READ_ONCE() to acquire when CONFIG_LTO=y

 .../RCU/Design/Requirements/Requirements.rst | 2 +-
 Documentation/memory-barriers.txt | 156 +---------
 .../translations/ko_KR/memory-barriers.txt | 146 +--------
 arch/alpha/include/asm/atomic.h | 16 +-
 arch/alpha/include/asm/barrier.h | 59 +---
 arch/alpha/include/asm/pgtable.h | 10 +-
 arch/alpha/include/asm/rwonce.h | 35 +++
 arch/arm/include/asm/vdso/gettimeofday.h | 1 +
 arch/arm64/Kconfig | 3 +
 arch/arm64/include/asm/alternative-macros.h | 276 ++++++++++++++++++
 arch/arm64/include/asm/alternative.h | 267 +----------------
 arch/arm64/include/asm/cpucaps.h | 3 +-
 arch/arm64/include/asm/insn.h | 3 +-
 arch/arm64/include/asm/kernel-pgtable.h | 2 +-
 arch/arm64/include/asm/memory.h | 11 +-
 arch/arm64/include/asm/rwonce.h | 63 ++++
 arch/arm64/include/asm/uaccess.h | 1 +
 .../include/asm/vdso/compat_gettimeofday.h | 1 +
 arch/arm64/include/asm/vdso/gettimeofday.h | 1 +
 arch/arm64/kernel/alternative.c | 7 +-
 arch/arm64/kernel/cpufeature.c | 10 +
 arch/arm64/kernel/entry.S | 1 +
 arch/arm64/kernel/vdso/Makefile | 2 +-
 arch/arm64/kernel/vdso32/Makefile | 2 +-
 arch/arm64/kernel/vmlinux.lds.S | 1 -
 arch/arm64/kvm/hyp-init.S | 1 +
 arch/riscv/include/asm/vdso/gettimeofday.h | 1 +
 drivers/vhost/vhost.c | 5 -
 include/asm-generic/Kbuild | 1 +
 include/asm-generic/barrier.h | 19 +-
 include/asm-generic/rwonce.h | 80 +++++
 include/linux/compiler.h | 83 +-----
 include/linux/nospec.h | 2 +
 include/linux/percpu-refcount.h | 2 +-
 include/linux/ptr_ring.h | 2 +-
 mm/memory.c | 2 +-
 scripts/checkpatch.pl | 9 +-
 tools/bpf/Makefile | 3 +-
 tools/include/uapi/linux/filter.h | 90 ++++++
 .../Documentation/explanation.txt | 26 +-
 40 files changed, 636 insertions(+), 769 deletions(-)
 create mode 100644 arch/alpha/include/asm/rwonce.h
 create mode 100644 arch/arm64/include/asm/alternative-macros.h
 create mode 100644 arch/arm64/include/asm/rwonce.h
 create mode 100644 include/asm-generic/rwonce.h
 create mode 100644 tools/include/uapi/linux/filter.h

-- 
2.27.0.383.g050319c2ae-goog
Will Deacon
2020-Jul-10 16:51 UTC
[PATCH v3 01/19] tools: bpf: Use local copy of headers including uapi/linux/filter.h
Pulling header files directly out of the kernel sources for inclusion in userspace programs is highly error prone, not least because it bypasses the kbuild infrastructure entirely and so may end up referencing other header files that have not been generated. Subsequent patches will cause compiler.h to pull in the ungenerated asm/rwonce.h file via filter.h, breaking the build for tools/bpf: | $ make -C tools/bpf | make: Entering directory '/linux/tools/bpf' | CC bpf_jit_disasm.o | LINK bpf_jit_disasm | CC bpf_dbg.o | In file included from /linux/include/uapi/linux/filter.h:9, | from /linux/tools/bpf/bpf_dbg.c:41: | /linux/include/linux/compiler.h:247:10: fatal error: asm/rwonce.h: No such file or directory | #include <asm/rwonce.h> | ^~~~~~~~~~~~~~ | compilation terminated. | make: *** [Makefile:61: bpf_dbg.o] Error 1 | make: Leaving directory '/linux/tools/bpf' Take a copy of the installed version of linux/filter.h (i.e. the one created by the 'headers_install' target) into tools/include/uapi/linux/ and adjust the BPF tool Makefile to reference the local include directories instead of those in the main source tree. Cc: Masahiro Yamada <yamada.masahiro at socionext.com> Acked-by: Alexei Starovoitov <ast at kernel.org> Suggested-by: Daniel Borkmann <daniel at iogearbox.net> Reported-by: Xiao Yang <ice_yangxiao at 163.com> Signed-off-by: Will Deacon <will at kernel.org> --- tools/bpf/Makefile | 3 +- tools/include/uapi/linux/filter.h | 90 +++++++++++++++++++++++++++++++ 2 files changed, 92 insertions(+), 1 deletion(-) create mode 100644 tools/include/uapi/linux/filter.h diff --git a/tools/bpf/Makefile b/tools/bpf/Makefile index 6df1850f8353..8a69258fd8aa 100644 --- a/tools/bpf/Makefile +++ b/tools/bpf/Makefile @@ -9,7 +9,8 @@ MAKE = make INSTALL ?= install CFLAGS += -Wall -O2 -CFLAGS += -D__EXPORTED_HEADERS__ -I$(srctree)/include/uapi -I$(srctree)/include +CFLAGS += -D__EXPORTED_HEADERS__ -I$(srctree)/tools/include/uapi \ + -I$(srctree)/tools/include # This will work when bpf is built in tools env. where srctree # isn't set and when invoked from selftests build, where srctree diff --git a/tools/include/uapi/linux/filter.h b/tools/include/uapi/linux/filter.h new file mode 100644 index 000000000000..eaef459e7bd4 --- /dev/null +++ b/tools/include/uapi/linux/filter.h @@ -0,0 +1,90 @@ +/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ +/* + * Linux Socket Filter Data Structures + */ + +#ifndef __LINUX_FILTER_H__ +#define __LINUX_FILTER_H__ + + +#include <linux/types.h> +#include <linux/bpf_common.h> + +/* + * Current version of the filter code architecture. + */ +#define BPF_MAJOR_VERSION 1 +#define BPF_MINOR_VERSION 1 + +/* + * Try and keep these values and structures similar to BSD, especially + * the BPF code definitions which need to match so you can share filters + */ + +struct sock_filter { /* Filter block */ + __u16 code; /* Actual filter code */ + __u8 jt; /* Jump true */ + __u8 jf; /* Jump false */ + __u32 k; /* Generic multiuse field */ +}; + +struct sock_fprog { /* Required for SO_ATTACH_FILTER. */ + unsigned short len; /* Number of filter blocks */ + struct sock_filter *filter; +}; + +/* ret - BPF_K and BPF_X also apply */ +#define BPF_RVAL(code) ((code) & 0x18) +#define BPF_A 0x10 + +/* misc */ +#define BPF_MISCOP(code) ((code) & 0xf8) +#define BPF_TAX 0x00 +#define BPF_TXA 0x80 + +/* + * Macros for filter block array initializers. 
+ */ +#ifndef BPF_STMT +#define BPF_STMT(code, k) { (unsigned short)(code), 0, 0, k } +#endif +#ifndef BPF_JUMP +#define BPF_JUMP(code, k, jt, jf) { (unsigned short)(code), jt, jf, k } +#endif + +/* + * Number of scratch memory words for: BPF_ST and BPF_STX + */ +#define BPF_MEMWORDS 16 + +/* RATIONALE. Negative offsets are invalid in BPF. + We use them to reference ancillary data. + Unlike introduction new instructions, it does not break + existing compilers/optimizers. + */ +#define SKF_AD_OFF (-0x1000) +#define SKF_AD_PROTOCOL 0 +#define SKF_AD_PKTTYPE 4 +#define SKF_AD_IFINDEX 8 +#define SKF_AD_NLATTR 12 +#define SKF_AD_NLATTR_NEST 16 +#define SKF_AD_MARK 20 +#define SKF_AD_QUEUE 24 +#define SKF_AD_HATYPE 28 +#define SKF_AD_RXHASH 32 +#define SKF_AD_CPU 36 +#define SKF_AD_ALU_XOR_X 40 +#define SKF_AD_VLAN_TAG 44 +#define SKF_AD_VLAN_TAG_PRESENT 48 +#define SKF_AD_PAY_OFFSET 52 +#define SKF_AD_RANDOM 56 +#define SKF_AD_VLAN_TPID 60 +#define SKF_AD_MAX 64 + +#define SKF_NET_OFF (-0x100000) +#define SKF_LL_OFF (-0x200000) + +#define BPF_NET_OFF SKF_NET_OFF +#define BPF_LL_OFF SKF_LL_OFF + +#endif /* __LINUX_FILTER_H__ */ -- 2.27.0.383.g050319c2ae-goog
Will Deacon
2020-Jul-10 16:51 UTC
[PATCH v3 02/19] compiler.h: Split {READ,WRITE}_ONCE definitions out into rwonce.h
In preparation for allowing architectures to define their own implementation of the READ_ONCE() macro, move the generic {READ,WRITE}_ONCE() definitions out of the unwieldy 'linux/compiler.h' file and into a new 'rwonce.h' header under 'asm-generic'. Acked-by: Paul E. McKenney <paulmck at kernel.org> Signed-off-by: Will Deacon <will at kernel.org> --- include/asm-generic/Kbuild | 1 + include/asm-generic/barrier.h | 2 +- include/asm-generic/rwonce.h | 91 +++++++++++++++++++++++++++++++++++ include/linux/compiler.h | 83 +------------------------------- 4 files changed, 95 insertions(+), 82 deletions(-) create mode 100644 include/asm-generic/rwonce.h diff --git a/include/asm-generic/Kbuild b/include/asm-generic/Kbuild index 44ec80e70518..74b0612601dd 100644 --- a/include/asm-generic/Kbuild +++ b/include/asm-generic/Kbuild @@ -45,6 +45,7 @@ mandatory-y += pci.h mandatory-y += percpu.h mandatory-y += pgalloc.h mandatory-y += preempt.h +mandatory-y += rwonce.h mandatory-y += sections.h mandatory-y += serial.h mandatory-y += shmparam.h diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h index 2eacaf7d62f6..8116744bb82c 100644 --- a/include/asm-generic/barrier.h +++ b/include/asm-generic/barrier.h @@ -13,7 +13,7 @@ #ifndef __ASSEMBLY__ -#include <linux/compiler.h> +#include <asm/rwonce.h> #ifndef nop #define nop() asm volatile ("nop") diff --git a/include/asm-generic/rwonce.h b/include/asm-generic/rwonce.h new file mode 100644 index 000000000000..92cc2f223cb3 --- /dev/null +++ b/include/asm-generic/rwonce.h @@ -0,0 +1,91 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Prevent the compiler from merging or refetching reads or writes. The + * compiler is also forbidden from reordering successive instances of + * READ_ONCE and WRITE_ONCE, but only when the compiler is aware of some + * particular ordering. One way to make the compiler aware of ordering is to + * put the two invocations of READ_ONCE or WRITE_ONCE in different C + * statements. + * + * These two macros will also work on aggregate data types like structs or + * unions. + * + * Their two major use cases are: (1) Mediating communication between + * process-level code and irq/NMI handlers, all running on the same CPU, + * and (2) Ensuring that the compiler does not fold, spindle, or otherwise + * mutilate accesses that either do not require ordering or that interact + * with an explicit memory barrier or atomic instruction that provides the + * required ordering. + */ +#ifndef __ASM_GENERIC_RWONCE_H +#define __ASM_GENERIC_RWONCE_H + +#ifndef __ASSEMBLY__ + +#include <linux/compiler_types.h> +#include <linux/kasan-checks.h> +#include <linux/kcsan-checks.h> + +#include <asm/barrier.h> + +/* + * Use __READ_ONCE() instead of READ_ONCE() if you do not require any + * atomicity or dependency ordering guarantees. Note that this may result + * in tears! 
+ */ +#define __READ_ONCE(x) (*(const volatile __unqual_scalar_typeof(x) *)&(x)) + +#define __READ_ONCE_SCALAR(x) \ +({ \ + __unqual_scalar_typeof(x) __x = __READ_ONCE(x); \ + smp_read_barrier_depends(); \ + (typeof(x))__x; \ +}) + +#define READ_ONCE(x) \ +({ \ + compiletime_assert_rwonce_type(x); \ + __READ_ONCE_SCALAR(x); \ +}) + +#define __WRITE_ONCE(x, val) \ +do { \ + *(volatile typeof(x) *)&(x) = (val); \ +} while (0) + +#define WRITE_ONCE(x, val) \ +do { \ + compiletime_assert_rwonce_type(x); \ + __WRITE_ONCE(x, val); \ +} while (0) + +static __no_sanitize_or_inline +unsigned long __read_once_word_nocheck(const void *addr) +{ + return __READ_ONCE(*(unsigned long *)addr); +} + +/* + * Use READ_ONCE_NOCHECK() instead of READ_ONCE() if you need to load a + * word from memory atomically but without telling KASAN/KCSAN. This is + * usually used by unwinding code when walking the stack of a running process. + */ +#define READ_ONCE_NOCHECK(x) \ +({ \ + unsigned long __x; \ + compiletime_assert(sizeof(x) == sizeof(__x), \ + "Unsupported access size for READ_ONCE_NOCHECK()."); \ + __x = __read_once_word_nocheck(&(x)); \ + smp_read_barrier_depends(); \ + (typeof(x))__x; \ +}) + +static __no_kasan_or_inline +unsigned long read_word_at_a_time(const void *addr) +{ + kasan_check_read(addr, 1); + return *(unsigned long *)addr; +} + +#endif /* __ASSEMBLY__ */ +#endif /* __ASM_GENERIC_RWONCE_H */ diff --git a/include/linux/compiler.h b/include/linux/compiler.h index 204e76856435..718b4357af32 100644 --- a/include/linux/compiler.h +++ b/include/linux/compiler.h @@ -230,28 +230,6 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val, # define __UNIQUE_ID(prefix) __PASTE(__PASTE(__UNIQUE_ID_, prefix), __LINE__) #endif -/* - * Prevent the compiler from merging or refetching reads or writes. The - * compiler is also forbidden from reordering successive instances of - * READ_ONCE and WRITE_ONCE, but only when the compiler is aware of some - * particular ordering. One way to make the compiler aware of ordering is to - * put the two invocations of READ_ONCE or WRITE_ONCE in different C - * statements. - * - * These two macros will also work on aggregate data types like structs or - * unions. - * - * Their two major use cases are: (1) Mediating communication between - * process-level code and irq/NMI handlers, all running on the same CPU, - * and (2) Ensuring that the compiler does not fold, spindle, or otherwise - * mutilate accesses that either do not require ordering or that interact - * with an explicit memory barrier or atomic instruction that provides the - * required ordering. - */ -#include <asm/barrier.h> -#include <linux/kasan-checks.h> -#include <linux/kcsan-checks.h> - /** * data_race - mark an expression as containing intentional data races * @@ -272,65 +250,6 @@ void ftrace_likely_update(struct ftrace_likely_data *f, int val, __v; \ }) -/* - * Use __READ_ONCE() instead of READ_ONCE() if you do not require any - * atomicity or dependency ordering guarantees. Note that this may result - * in tears! 
- */ -#define __READ_ONCE(x) (*(const volatile __unqual_scalar_typeof(x) *)&(x)) - -#define __READ_ONCE_SCALAR(x) \ -({ \ - __unqual_scalar_typeof(x) __x = __READ_ONCE(x); \ - smp_read_barrier_depends(); \ - (typeof(x))__x; \ -}) - -#define READ_ONCE(x) \ -({ \ - compiletime_assert_rwonce_type(x); \ - __READ_ONCE_SCALAR(x); \ -}) - -#define __WRITE_ONCE(x, val) \ -do { \ - *(volatile typeof(x) *)&(x) = (val); \ -} while (0) - -#define WRITE_ONCE(x, val) \ -do { \ - compiletime_assert_rwonce_type(x); \ - __WRITE_ONCE(x, val); \ -} while (0) - -static __no_sanitize_or_inline -unsigned long __read_once_word_nocheck(const void *addr) -{ - return __READ_ONCE(*(unsigned long *)addr); -} - -/* - * Use READ_ONCE_NOCHECK() instead of READ_ONCE() if you need to load a - * word from memory atomically but without telling KASAN/KCSAN. This is - * usually used by unwinding code when walking the stack of a running process. - */ -#define READ_ONCE_NOCHECK(x) \ -({ \ - unsigned long __x; \ - compiletime_assert(sizeof(x) == sizeof(__x), \ - "Unsupported access size for READ_ONCE_NOCHECK()."); \ - __x = __read_once_word_nocheck(&(x)); \ - smp_read_barrier_depends(); \ - (typeof(x))__x; \ -}) - -static __no_kasan_or_inline -unsigned long read_word_at_a_time(const void *addr) -{ - kasan_check_read(addr, 1); - return *(unsigned long *)addr; -} - #endif /* __KERNEL__ */ /* @@ -414,4 +333,6 @@ static inline void *offset_to_ptr(const int *off) */ #define prevent_tail_call_optimization() mb() +#include <asm/rwonce.h> + #endif /* __LINUX_COMPILER_H */ -- 2.27.0.383.g050319c2ae-goog
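To illustrate what the split buys, here is a minimal, hypothetical consumer (the struct and functions below are invented for the example and are not part of the series): a low-level header can now pick up the {READ,WRITE}_ONCE() machinery from <asm/rwonce.h> without dragging in the whole of <linux/compiler.h>, mirroring what the asm-generic/barrier.h hunk above does.

/* Hypothetical low-level header, for illustration only. */
#include <asm/rwonce.h>

struct demo_flag {
	int raised;
};

static inline int demo_flag_test(const struct demo_flag *f)
{
	/* READ_ONCE() is usable here without <linux/compiler.h>. */
	return READ_ONCE(f->raised);
}

static inline void demo_flag_raise(struct demo_flag *f)
{
	WRITE_ONCE(f->raised, 1);
}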
Will Deacon
2020-Jul-10 16:51 UTC
[PATCH v3 03/19] asm/rwonce: Allow __READ_ONCE to be overridden by the architecture
The meat and potatoes of READ_ONCE() is defined by the __READ_ONCE()
macro, which uses a volatile cast in an attempt to avoid tearing of
byte, halfword, word and double-word accesses. Allow this to be
overridden by the architecture code in the case that things like memory
barriers are also required.

Acked-by: Paul E. McKenney <paulmck at kernel.org>
Signed-off-by: Will Deacon <will at kernel.org>
---
 include/asm-generic/rwonce.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/include/asm-generic/rwonce.h b/include/asm-generic/rwonce.h
index 92cc2f223cb3..f9dfa88fc04d 100644
--- a/include/asm-generic/rwonce.h
+++ b/include/asm-generic/rwonce.h
@@ -33,7 +33,9 @@
  * atomicity or dependency ordering guarantees. Note that this may result
  * in tears!
  */
+#ifndef __READ_ONCE
 #define __READ_ONCE(x) (*(const volatile __unqual_scalar_typeof(x) *)&(x))
+#endif
 
 #define __READ_ONCE_SCALAR(x)						\
 ({									\
-- 
2.27.0.383.g050319c2ae-goog
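For a concrete picture of how the new #ifndef hook is meant to be consumed, here is a sketch of a hypothetical arch/foo/include/asm/rwonce.h (the "foo" architecture is made up; Alpha, in the next patch, is the real user, and the mb() here merely stands in for whatever ordering a given architecture needs):

/*
 * Hypothetical arch/foo/include/asm/rwonce.h -- a sketch only, not part of
 * the series. The architecture supplies its own __READ_ONCE() and then pulls
 * in the generic header, whose fallback definition is skipped thanks to the
 * #ifndef guard added by this patch.
 */
#ifndef __ASM_RWONCE_H
#define __ASM_RWONCE_H

#include <asm/barrier.h>

#define __READ_ONCE(x)							\
({									\
	__unqual_scalar_typeof(x) __x =					\
		(*(const volatile typeof(__x) *)&(x));			\
	mb();	/* arch-specific ordering requirement goes here */	\
	(typeof(x))__x;							\
})

#include <asm-generic/rwonce.h>

#endif /* __ASM_RWONCE_H */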
Will Deacon
2020-Jul-10 16:51 UTC
[PATCH v3 04/19] alpha: Override READ_ONCE() with barriered implementation
Rather than relying on the core code to use smp_read_barrier_depends()
as part of the READ_ONCE() definition, instead override __READ_ONCE()
in the Alpha code so that it generates the required mb() and then
implement smp_load_acquire() using the new macro to avoid redundant
back-to-back barriers from the generic implementation.

Acked-by: Paul E. McKenney <paulmck at kernel.org>
Signed-off-by: Will Deacon <will at kernel.org>
---
 arch/alpha/include/asm/barrier.h | 59 +++-----------------------------
 arch/alpha/include/asm/rwonce.h | 35 +++++++++++++++++++
 2 files changed, 40 insertions(+), 54 deletions(-)
 create mode 100644 arch/alpha/include/asm/rwonce.h

diff --git a/arch/alpha/include/asm/barrier.h b/arch/alpha/include/asm/barrier.h
index 92ec486a4f9e..c56bfffc9918 100644
--- a/arch/alpha/include/asm/barrier.h
+++ b/arch/alpha/include/asm/barrier.h
@@ -2,64 +2,15 @@
 #ifndef __BARRIER_H
 #define __BARRIER_H
 
-#include <asm/compiler.h>
-
 #define mb() __asm__ __volatile__("mb": : :"memory")
 #define rmb() __asm__ __volatile__("mb": : :"memory")
 #define wmb() __asm__ __volatile__("wmb": : :"memory")
 
-/**
- * read_barrier_depends - Flush all pending reads that subsequents reads
- * depend on.
- *
- * No data-dependent reads from memory-like regions are ever reordered
- * over this barrier. All reads preceding this primitive are guaranteed
- * to access memory (but not necessarily other CPUs' caches) before any
- * reads following this primitive that depend on the data return by
- * any of the preceding reads. This primitive is much lighter weight than
- * rmb() on most CPUs, and is never heavier weight than is
- * rmb().
- *
- * These ordering constraints are respected by both the local CPU
- * and the compiler.
- *
- * Ordering is not guaranteed by anything other than these primitives,
- * not even by data dependencies. See the documentation for
- * memory_barrier() for examples and URLs to more information.
- *
- * For example, the following code would force ordering (the initial
- * value of "a" is zero, "b" is one, and "p" is "&a"):
- *
- * <programlisting>
- *	CPU 0				CPU 1
- *
- *	b = 2;
- *	memory_barrier();
- *	p = &b;				q = p;
- *					read_barrier_depends();
- *					d = *q;
- * </programlisting>
- *
- * because the read of "*q" depends on the read of "p" and these
- * two reads are separated by a read_barrier_depends(). However,
- * the following code, with the same initial values for "a" and "b":
- *
- * <programlisting>
- *	CPU 0				CPU 1
- *
- *	a = 2;
- *	memory_barrier();
- *	b = 3;				y = b;
- *					read_barrier_depends();
- *					x = a;
- * </programlisting>
- *
- * does not enforce ordering, since there is no data dependency between
- * the read of "a" and the read of "b". Therefore, on some CPUs, such
- * as Alpha, "y" could be set to 3 and "x" to 0. Use rmb()
- * in cases like this where there are no data dependencies.
- */
-#define read_barrier_depends() __asm__ __volatile__("mb": : :"memory")
+#define __smp_load_acquire(p)						\
+({									\
+	compiletime_assert_atomic_type(*p);				\
+	__READ_ONCE(*p);						\
+})
 
 #ifdef CONFIG_SMP
 #define __ASM_SMP_MB "\tmb\n"
diff --git a/arch/alpha/include/asm/rwonce.h b/arch/alpha/include/asm/rwonce.h
new file mode 100644
index 000000000000..35542bcf92b3
--- /dev/null
+++ b/arch/alpha/include/asm/rwonce.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2019 Google LLC.
+ */
+#ifndef __ASM_RWONCE_H
+#define __ASM_RWONCE_H
+
+#ifdef CONFIG_SMP
+
+#include <asm/barrier.h>
+
+/*
+ * Alpha is apparently daft enough to reorder address-dependent loads
+ * on some CPU implementations. Knock some common sense into it with
+ * a memory barrier in READ_ONCE().
+ *
+ * For the curious, more information about this unusual reordering is
+ * available in chapter 15 of the "perfbook":
+ *
+ *   https://kernel.org/pub/linux/kernel/people/paulmck/perfbook/perfbook.html
+ *
+ */
+#define __READ_ONCE(x)							\
+({									\
+	__unqual_scalar_typeof(x) __x =					\
+		(*(volatile typeof(__x) *)(&(x)));			\
+	mb();								\
+	(typeof(x))__x;							\
+})
+
+#endif /* CONFIG_SMP */
+
+#include <asm-generic/rwonce.h>
+
+#endif /* __ASM_RWONCE_H */
-- 
2.27.0.383.g050319c2ae-goog
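As a reminder of the pattern this barrier exists to protect, here is a small, illustrative publish/consume example (the types and names are invented for the example): without the mb() now implied by READ_ONCE() on Alpha, the dependent load in the reader could observe pre-initialisation data.

/* Illustrative only: the address-dependency pattern that Alpha can break. */
struct item {
	int val;
};

static struct item init_item = { .val = 0 };
static struct item *pub = &init_item;

/* Writer: initialise the new item, then publish a pointer to it. */
static void writer(struct item *new)
{
	new->val = 42;
	smp_wmb();		/* order the data before the pointer */
	WRITE_ONCE(pub, new);
}

/* Reader: READ_ONCE() now implies mb() on Alpha (with CONFIG_SMP=y), so
 * the dependent load of q->val is guaranteed to observe the writer's 42. */
static int reader(void)
{
	struct item *q = READ_ONCE(pub);

	return q->val;
}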
Will Deacon
2020-Jul-10 16:51 UTC
[PATCH v3 05/19] asm/rwonce: Remove smp_read_barrier_depends() invocation
Alpha overrides __READ_ONCE() directly, so there's no need to use smp_read_barrier_depends() in the core code. This also means that __READ_ONCE() can be relied upon to provide dependency ordering. Acked-by: Paul E. McKenney <paulmck at kernel.org> Signed-off-by: Will Deacon <will at kernel.org> --- include/asm-generic/rwonce.h | 19 ++++--------------- 1 file changed, 4 insertions(+), 15 deletions(-) diff --git a/include/asm-generic/rwonce.h b/include/asm-generic/rwonce.h index f9dfa88fc04d..cc810f1f18ca 100644 --- a/include/asm-generic/rwonce.h +++ b/include/asm-generic/rwonce.h @@ -30,24 +30,16 @@ /* * Use __READ_ONCE() instead of READ_ONCE() if you do not require any - * atomicity or dependency ordering guarantees. Note that this may result - * in tears! + * atomicity. Note that this may result in tears! */ #ifndef __READ_ONCE #define __READ_ONCE(x) (*(const volatile __unqual_scalar_typeof(x) *)&(x)) #endif -#define __READ_ONCE_SCALAR(x) \ -({ \ - __unqual_scalar_typeof(x) __x = __READ_ONCE(x); \ - smp_read_barrier_depends(); \ - (typeof(x))__x; \ -}) - #define READ_ONCE(x) \ ({ \ compiletime_assert_rwonce_type(x); \ - __READ_ONCE_SCALAR(x); \ + __READ_ONCE(x); \ }) #define __WRITE_ONCE(x, val) \ @@ -74,12 +66,9 @@ unsigned long __read_once_word_nocheck(const void *addr) */ #define READ_ONCE_NOCHECK(x) \ ({ \ - unsigned long __x; \ - compiletime_assert(sizeof(x) == sizeof(__x), \ + compiletime_assert(sizeof(x) == sizeof(unsigned long), \ "Unsupported access size for READ_ONCE_NOCHECK()."); \ - __x = __read_once_word_nocheck(&(x)); \ - smp_read_barrier_depends(); \ - (typeof(x))__x; \ + (typeof(x))__read_once_word_nocheck(&(x)); \ }) static __no_kasan_or_inline -- 2.27.0.383.g050319c2ae-goog
Will Deacon
2020-Jul-10 16:51 UTC
[PATCH v3 06/19] asm/rwonce: Don't pull <asm/barrier.h> into 'asm-generic/rwonce.h'
Now that 'smp_read_barrier_depends()' has gone the way of the Norwegian Blue, drop the inclusion of <asm/barrier.h> in 'asm-generic/rwonce.h'. This requires fixups to some architecture vdso headers which were previously relying on 'asm/barrier.h' coming in via 'linux/compiler.h'. Signed-off-by: Will Deacon <will at kernel.org> --- arch/arm/include/asm/vdso/gettimeofday.h | 1 + arch/arm64/include/asm/vdso/compat_gettimeofday.h | 1 + arch/arm64/include/asm/vdso/gettimeofday.h | 1 + arch/riscv/include/asm/vdso/gettimeofday.h | 1 + include/asm-generic/rwonce.h | 2 -- include/linux/nospec.h | 2 ++ 6 files changed, 6 insertions(+), 2 deletions(-) diff --git a/arch/arm/include/asm/vdso/gettimeofday.h b/arch/arm/include/asm/vdso/gettimeofday.h index 36dc18553ed8..1b207cf07697 100644 --- a/arch/arm/include/asm/vdso/gettimeofday.h +++ b/arch/arm/include/asm/vdso/gettimeofday.h @@ -7,6 +7,7 @@ #ifndef __ASSEMBLY__ +#include <asm/barrier.h> #include <asm/errno.h> #include <asm/unistd.h> #include <asm/vdso/cp15.h> diff --git a/arch/arm64/include/asm/vdso/compat_gettimeofday.h b/arch/arm64/include/asm/vdso/compat_gettimeofday.h index b6907ae78e53..bcf7649999a4 100644 --- a/arch/arm64/include/asm/vdso/compat_gettimeofday.h +++ b/arch/arm64/include/asm/vdso/compat_gettimeofday.h @@ -7,6 +7,7 @@ #ifndef __ASSEMBLY__ +#include <asm/barrier.h> #include <asm/unistd.h> #include <asm/errno.h> diff --git a/arch/arm64/include/asm/vdso/gettimeofday.h b/arch/arm64/include/asm/vdso/gettimeofday.h index afba6ba332f8..127fa63893e2 100644 --- a/arch/arm64/include/asm/vdso/gettimeofday.h +++ b/arch/arm64/include/asm/vdso/gettimeofday.h @@ -7,6 +7,7 @@ #ifndef __ASSEMBLY__ +#include <asm/barrier.h> #include <asm/unistd.h> #define VDSO_HAS_CLOCK_GETRES 1 diff --git a/arch/riscv/include/asm/vdso/gettimeofday.h b/arch/riscv/include/asm/vdso/gettimeofday.h index c8e818688ec1..3099362d9f26 100644 --- a/arch/riscv/include/asm/vdso/gettimeofday.h +++ b/arch/riscv/include/asm/vdso/gettimeofday.h @@ -4,6 +4,7 @@ #ifndef __ASSEMBLY__ +#include <asm/barrier.h> #include <asm/unistd.h> #include <asm/csr.h> #include <uapi/linux/time.h> diff --git a/include/asm-generic/rwonce.h b/include/asm-generic/rwonce.h index cc810f1f18ca..cd0302746fb4 100644 --- a/include/asm-generic/rwonce.h +++ b/include/asm-generic/rwonce.h @@ -26,8 +26,6 @@ #include <linux/kasan-checks.h> #include <linux/kcsan-checks.h> -#include <asm/barrier.h> - /* * Use __READ_ONCE() instead of READ_ONCE() if you do not require any * atomicity. Note that this may result in tears! diff --git a/include/linux/nospec.h b/include/linux/nospec.h index 0c5ef54fd416..c1e79f72cd89 100644 --- a/include/linux/nospec.h +++ b/include/linux/nospec.h @@ -5,6 +5,8 @@ #ifndef _LINUX_NOSPEC_H #define _LINUX_NOSPEC_H + +#include <linux/compiler.h> #include <asm/barrier.h> struct task_struct; -- 2.27.0.383.g050319c2ae-goog
Will Deacon
2020-Jul-10 16:51 UTC
[PATCH v3 07/19] vhost: Remove redundant use of read_barrier_depends() barrier
Since commit 76ebbe78f739 ("locking/barriers: Add implicit smp_read_barrier_depends() to READ_ONCE()"), there is no need to use smp_read_barrier_depends() outside of the Alpha architecture code. Unfortunately, there is precisely _one_ user in the vhost code, and there isn't an obvious READ_ONCE() access making the barrier redundant. However, on closer inspection (thanks, Jason), it appears that vring synchronisation between the producer and consumer occurs via the 'avail_idx' field, which is followed up by an rmb() in vhost_get_vq_desc(), making the read_barrier_depends() redundant on Alpha. Jason says: | I'm also confused about the barrier here, basically in driver side | we did: | | 1) allocate pages | 2) store pages in indirect->addr | 3) smp_wmb() | 4) increase the avail idx (somehow a tail pointer of vring) | | in vhost we did: | | 1) read avail idx | 2) smp_rmb() | 3) read indirect->addr | 4) read from indirect->addr | | It looks to me even the data dependency barrier is not necessary | since we have rmb() which is sufficient for us to the correct | indirect->addr and driver are not expected to do any writing to | indirect->addr after avail idx is increased Remove the redundant barrier invocation. Suggested-by: Jason Wang <jasowang at redhat.com> Acked-by: Paul E. McKenney <paulmck at kernel.org> Signed-off-by: Will Deacon <will at kernel.org> --- drivers/vhost/vhost.c | 5 ----- 1 file changed, 5 deletions(-) diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c index d7b8df3edffc..74d135ee7e26 100644 --- a/drivers/vhost/vhost.c +++ b/drivers/vhost/vhost.c @@ -2092,11 +2092,6 @@ static int get_indirect(struct vhost_virtqueue *vq, return ret; } iov_iter_init(&from, READ, vq->indirect, ret, len); - - /* We will use the result as an address to read from, so most - * architectures only need a compiler barrier here. */ - read_barrier_depends(); - count = len / sizeof desc; /* Buffers are chained via a 16 bit next field, so * we can have at most 2^16 of these. */ -- 2.27.0.383.g050319c2ae-goog
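To make the ordering argument above easier to follow, here is a schematic of the producer/consumer handshake Jason describes; the structure and field names are simplified stand-ins invented for the example, not the real vring layout or the vhost code.

/* Schematic only: why the existing rmb() already orders the reads. */
struct demo_vring {
	void	*indirect_addr;	/* written by the guest driver */
	u16	 avail_idx;	/* "tail pointer" of the ring  */
};

/* Guest driver (producer). */
static void demo_publish(struct demo_vring *r, void *pages, u16 idx)
{
	r->indirect_addr = pages;		/* 1-2) fill in the buffer   */
	smp_wmb();				/* 3) data before index      */
	WRITE_ONCE(r->avail_idx, idx + 1);	/* 4) publish the new index  */
}

/* vhost (consumer), cf. the rmb() in vhost_get_vq_desc(). */
static void *demo_consume(struct demo_vring *r, u16 last_idx)
{
	if (READ_ONCE(r->avail_idx) == last_idx)	/* 1) nothing new    */
		return NULL;
	smp_rmb();			/* 2) pairs with the smp_wmb() above */
	return r->indirect_addr;	/* 3-4) safe to read via this pointer */
}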
Will Deacon
2020-Jul-10 16:51 UTC
[PATCH v3 08/19] alpha: Replace smp_read_barrier_depends() usage with smp_[r]mb()
In preparation for removing smp_read_barrier_depends() altogether, move the Alpha code over to using smp_rmb() and smp_mb() directly. Acked-by: Paul E. McKenney <paulmck at kernel.org> Signed-off-by: Will Deacon <will at kernel.org> --- arch/alpha/include/asm/atomic.h | 16 ++++++++-------- arch/alpha/include/asm/pgtable.h | 10 +++++----- mm/memory.c | 2 +- 3 files changed, 14 insertions(+), 14 deletions(-) diff --git a/arch/alpha/include/asm/atomic.h b/arch/alpha/include/asm/atomic.h index 2144530d1428..2f8f7e54792f 100644 --- a/arch/alpha/include/asm/atomic.h +++ b/arch/alpha/include/asm/atomic.h @@ -16,10 +16,10 @@ /* * To ensure dependency ordering is preserved for the _relaxed and - * _release atomics, an smp_read_barrier_depends() is unconditionally - * inserted into the _relaxed variants, which are used to build the - * barriered versions. Avoid redundant back-to-back fences in the - * _acquire and _fence versions. + * _release atomics, an smp_mb() is unconditionally inserted into the + * _relaxed variants, which are used to build the barriered versions. + * Avoid redundant back-to-back fences in the _acquire and _fence + * versions. */ #define __atomic_acquire_fence() #define __atomic_post_full_fence() @@ -70,7 +70,7 @@ static inline int atomic_##op##_return_relaxed(int i, atomic_t *v) \ ".previous" \ :"=&r" (temp), "=m" (v->counter), "=&r" (result) \ :"Ir" (i), "m" (v->counter) : "memory"); \ - smp_read_barrier_depends(); \ + smp_mb(); \ return result; \ } @@ -88,7 +88,7 @@ static inline int atomic_fetch_##op##_relaxed(int i, atomic_t *v) \ ".previous" \ :"=&r" (temp), "=m" (v->counter), "=&r" (result) \ :"Ir" (i), "m" (v->counter) : "memory"); \ - smp_read_barrier_depends(); \ + smp_mb(); \ return result; \ } @@ -123,7 +123,7 @@ static __inline__ s64 atomic64_##op##_return_relaxed(s64 i, atomic64_t * v) \ ".previous" \ :"=&r" (temp), "=m" (v->counter), "=&r" (result) \ :"Ir" (i), "m" (v->counter) : "memory"); \ - smp_read_barrier_depends(); \ + smp_mb(); \ return result; \ } @@ -141,7 +141,7 @@ static __inline__ s64 atomic64_fetch_##op##_relaxed(s64 i, atomic64_t * v) \ ".previous" \ :"=&r" (temp), "=m" (v->counter), "=&r" (result) \ :"Ir" (i), "m" (v->counter) : "memory"); \ - smp_read_barrier_depends(); \ + smp_mb(); \ return result; \ } diff --git a/arch/alpha/include/asm/pgtable.h b/arch/alpha/include/asm/pgtable.h index 162c17b2631f..660b14ce1317 100644 --- a/arch/alpha/include/asm/pgtable.h +++ b/arch/alpha/include/asm/pgtable.h @@ -277,9 +277,9 @@ extern inline pte_t pte_mkdirty(pte_t pte) { pte_val(pte) |= __DIRTY_BITS; retur extern inline pte_t pte_mkyoung(pte_t pte) { pte_val(pte) |= __ACCESS_BITS; return pte; } /* - * The smp_read_barrier_depends() in the following functions are required to - * order the load of *dir (the pointer in the top level page table) with any - * subsequent load of the returned pmd_t *ret (ret is data dependent on *dir). + * The smp_rmb() in the following functions are required to order the load of + * *dir (the pointer in the top level page table) with any subsequent load of + * the returned pmd_t *ret (ret is data dependent on *dir). * * If this ordering is not enforced, the CPU might load an older value of * *ret, which may be uninitialized data. 
See mm/memory.c:__pte_alloc for @@ -293,7 +293,7 @@ extern inline pte_t pte_mkyoung(pte_t pte) { pte_val(pte) |= __ACCESS_BITS; retu extern inline pmd_t * pmd_offset(pud_t * dir, unsigned long address) { pmd_t *ret = (pmd_t *) pud_page_vaddr(*dir) + ((address >> PMD_SHIFT) & (PTRS_PER_PAGE - 1)); - smp_read_barrier_depends(); /* see above */ + smp_rmb(); /* see above */ return ret; } #define pmd_offset pmd_offset @@ -303,7 +303,7 @@ extern inline pte_t * pte_offset_kernel(pmd_t * dir, unsigned long address) { pte_t *ret = (pte_t *) pmd_page_vaddr(*dir) + ((address >> PAGE_SHIFT) & (PTRS_PER_PAGE - 1)); - smp_read_barrier_depends(); /* see above */ + smp_rmb(); /* see above */ return ret; } #define pte_offset_kernel pte_offset_kernel diff --git a/mm/memory.c b/mm/memory.c index 87ec87cdc1ff..e1f2c730d8bb 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -437,7 +437,7 @@ int __pte_alloc(struct mm_struct *mm, pmd_t *pmd) * of a chain of data-dependent loads, meaning most CPUs (alpha * being the notable exception) will already guarantee loads are * seen in-order. See the alpha page table accessors for the - * smp_read_barrier_depends() barriers in page table walking code. + * smp_rmb() barriers in page table walking code. */ smp_wmb(); /* Could be smp_wmb__xxx(before|after)_spin_lock */ -- 2.27.0.383.g050319c2ae-goog
Will Deacon
2020-Jul-10 16:51 UTC
[PATCH v3 09/19] locking/barriers: Remove definitions for [smp_]read_barrier_depends()
There are no remaining users of [smp_]read_barrier_depends(), so remove it from the generic implementation of 'barrier.h'. Acked-by: Paul E. McKenney <paulmck at kernel.org> Signed-off-by: Will Deacon <will at kernel.org> --- include/asm-generic/barrier.h | 17 ----------------- 1 file changed, 17 deletions(-) diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h index 8116744bb82c..fec97dc34de7 100644 --- a/include/asm-generic/barrier.h +++ b/include/asm-generic/barrier.h @@ -46,10 +46,6 @@ #define dma_wmb() wmb() #endif -#ifndef read_barrier_depends -#define read_barrier_depends() do { } while (0) -#endif - #ifndef __smp_mb #define __smp_mb() mb() #endif @@ -62,10 +58,6 @@ #define __smp_wmb() wmb() #endif -#ifndef __smp_read_barrier_depends -#define __smp_read_barrier_depends() read_barrier_depends() -#endif - #ifdef CONFIG_SMP #ifndef smp_mb @@ -80,10 +72,6 @@ #define smp_wmb() __smp_wmb() #endif -#ifndef smp_read_barrier_depends -#define smp_read_barrier_depends() __smp_read_barrier_depends() -#endif - #else /* !CONFIG_SMP */ #ifndef smp_mb @@ -98,10 +86,6 @@ #define smp_wmb() barrier() #endif -#ifndef smp_read_barrier_depends -#define smp_read_barrier_depends() do { } while (0) -#endif - #endif /* CONFIG_SMP */ #ifndef __smp_store_mb @@ -196,7 +180,6 @@ do { \ #define virt_mb() __smp_mb() #define virt_rmb() __smp_rmb() #define virt_wmb() __smp_wmb() -#define virt_read_barrier_depends() __smp_read_barrier_depends() #define virt_store_mb(var, value) __smp_store_mb(var, value) #define virt_mb__before_atomic() __smp_mb__before_atomic() #define virt_mb__after_atomic() __smp_mb__after_atomic() -- 2.27.0.383.g050319c2ae-goog
Will Deacon
2020-Jul-10 16:51 UTC
[PATCH v3 10/19] Documentation/barriers: Remove references to [smp_]read_barrier_depends()
The [smp_]read_barrier_depends() barrier macros no longer exist as part of the Linux memory model, so remove all references to them from the Documentation/ directory. Although this is fairly mechanical on the whole, we drop the "CACHE COHERENCY" section entirely from 'memory-barriers.txt' as it doesn't make any sense now that the dependency barriers have been removed. Acked-by: Paul E. McKenney <paulmck at kernel.org> Signed-off-by: Will Deacon <will at kernel.org> --- .../RCU/Design/Requirements/Requirements.rst | 2 +- Documentation/memory-barriers.txt | 156 +----------------- 2 files changed, 9 insertions(+), 149 deletions(-) diff --git a/Documentation/RCU/Design/Requirements/Requirements.rst b/Documentation/RCU/Design/Requirements/Requirements.rst index 75b8ca007a11..50d5c43c48b0 100644 --- a/Documentation/RCU/Design/Requirements/Requirements.rst +++ b/Documentation/RCU/Design/Requirements/Requirements.rst @@ -463,7 +463,7 @@ again without disrupting RCU readers. This guarantee was only partially premeditated. DYNIX/ptx used an explicit memory barrier for publication, but had nothing resembling ``rcu_dereference()`` for subscription, nor did it have anything -resembling the ``smp_read_barrier_depends()`` that was later subsumed +resembling the dependency-ordering barrier that was later subsumed into ``rcu_dereference()`` and later still into ``READ_ONCE()``. The need for these operations made itself known quite suddenly at a late-1990s meeting with the DEC Alpha architects, back in the days when diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt index eaabc3134294..4e55aba3eb4a 100644 --- a/Documentation/memory-barriers.txt +++ b/Documentation/memory-barriers.txt @@ -553,12 +553,12 @@ There are certain things that the Linux kernel memory barriers do not guarantee: DATA DEPENDENCY BARRIERS (HISTORICAL) ------------------------------------- -As of v4.15 of the Linux kernel, an smp_read_barrier_depends() was -added to READ_ONCE(), which means that about the only people who -need to pay attention to this section are those working on DEC Alpha -architecture-specific code and those working on READ_ONCE() itself. -For those who need it, and for those who are interested in the history, -here is the story of data-dependency barriers. +As of v4.15 of the Linux kernel, an smp_mb() was added to READ_ONCE() for +DEC Alpha, which means that about the only people who need to pay attention +to this section are those working on DEC Alpha architecture-specific code +and those working on READ_ONCE() itself. For those who need it, and for +those who are interested in the history, here is the story of +data-dependency barriers. The usage requirements of data dependency barriers are a little subtle, and it's not always obvious that they're needed. To illustrate, consider the @@ -2708,144 +2708,6 @@ the properties of the memory window through which devices are accessed and/or the use of any special device communication instructions the CPU may have. -CACHE COHERENCY ---------------- - -Life isn't quite as simple as it may appear above, however: for while the -caches are expected to be coherent, there's no guarantee that that coherency -will be ordered. This means that while changes made on one CPU will -eventually become visible on all CPUs, there's no guarantee that they will -become apparent in the same order on those other CPUs. 
- - -Consider dealing with a system that has a pair of CPUs (1 & 2), each of which -has a pair of parallel data caches (CPU 1 has A/B, and CPU 2 has C/D): - - : - : +--------+ - : +---------+ | | - +--------+ : +--->| Cache A |<------->| | - | | : | +---------+ | | - | CPU 1 |<---+ | | - | | : | +---------+ | | - +--------+ : +--->| Cache B |<------->| | - : +---------+ | | - : | Memory | - : +---------+ | System | - +--------+ : +--->| Cache C |<------->| | - | | : | +---------+ | | - | CPU 2 |<---+ | | - | | : | +---------+ | | - +--------+ : +--->| Cache D |<------->| | - : +---------+ | | - : +--------+ - : - -Imagine the system has the following properties: - - (*) an odd-numbered cache line may be in cache A, cache C or it may still be - resident in memory; - - (*) an even-numbered cache line may be in cache B, cache D or it may still be - resident in memory; - - (*) while the CPU core is interrogating one cache, the other cache may be - making use of the bus to access the rest of the system - perhaps to - displace a dirty cacheline or to do a speculative load; - - (*) each cache has a queue of operations that need to be applied to that cache - to maintain coherency with the rest of the system; - - (*) the coherency queue is not flushed by normal loads to lines already - present in the cache, even though the contents of the queue may - potentially affect those loads. - -Imagine, then, that two writes are made on the first CPU, with a write barrier -between them to guarantee that they will appear to reach that CPU's caches in -the requisite order: - - CPU 1 CPU 2 COMMENT - =============== =============== ======================================- u == 0, v == 1 and p == &u, q == &u - v = 2; - smp_wmb(); Make sure change to v is visible before - change to p - <A:modify v=2> v is now in cache A exclusively - p = &v; - <B:modify p=&v> p is now in cache B exclusively - -The write memory barrier forces the other CPUs in the system to perceive that -the local CPU's caches have apparently been updated in the correct order. But -now imagine that the second CPU wants to read those values: - - CPU 1 CPU 2 COMMENT - =============== =============== ======================================- ... - q = p; - x = *q; - -The above pair of reads may then fail to happen in the expected order, as the -cacheline holding p may get updated in one of the second CPU's caches while -the update to the cacheline holding v is delayed in the other of the second -CPU's caches by some other cache event: - - CPU 1 CPU 2 COMMENT - =============== =============== ======================================- u == 0, v == 1 and p == &u, q == &u - v = 2; - smp_wmb(); - <A:modify v=2> <C:busy> - <C:queue v=2> - p = &v; q = p; - <D:request p> - <B:modify p=&v> <D:commit p=&v> - <D:read p> - x = *q; - <C:read *q> Reads from v before v updated in cache - <C:unbusy> - <C:commit v=2> - -Basically, while both cachelines will be updated on CPU 2 eventually, there's -no guarantee that, without intervention, the order of update will be the same -as that committed on CPU 1. - - -To intervene, we need to interpolate a data dependency barrier or a read -barrier between the loads (which as of v4.15 is supplied unconditionally -by the READ_ONCE() macro). 
This will force the cache to commit its -coherency queue before processing any further requests: - - CPU 1 CPU 2 COMMENT - =============== =============== ======================================- u == 0, v == 1 and p == &u, q == &u - v = 2; - smp_wmb(); - <A:modify v=2> <C:busy> - <C:queue v=2> - p = &v; q = p; - <D:request p> - <B:modify p=&v> <D:commit p=&v> - <D:read p> - smp_read_barrier_depends() - <C:unbusy> - <C:commit v=2> - x = *q; - <C:read *q> Reads from v after v updated in cache - - -This sort of problem can be encountered on DEC Alpha processors as they have a -split cache that improves performance by making better use of the data bus. -While most CPUs do imply a data dependency barrier on the read when a memory -access depends on a read, not all do, so it may not be relied on. - -Other CPUs may also have split caches, but must coordinate between the various -cachelets for normal memory accesses. The semantics of the Alpha removes the -need for hardware coordination in the absence of memory barriers, which -permitted Alpha to sport higher CPU clock rates back in the day. However, -please note that (again, as of v4.15) smp_read_barrier_depends() should not -be used except in Alpha arch-specific code and within the READ_ONCE() macro. - - CACHE COHERENCY VS DMA ---------------------- @@ -3009,10 +2871,8 @@ caches with the memory coherence system, thus making it seem like pointer changes vs new data occur in the right order. The Alpha defines the Linux kernel's memory model, although as of v4.15 -the Linux kernel's addition of smp_read_barrier_depends() to READ_ONCE() -greatly reduced Alpha's impact on the memory model. - -See the subsection on "Cache Coherency" above. +the Linux kernel's addition of smp_mb() to READ_ONCE() on Alpha greatly +reduced its impact on the memory model. VIRTUAL MACHINE GUESTS -- 2.27.0.383.g050319c2ae-goog
Will Deacon
2020-Jul-10 16:51 UTC
[PATCH v3 11/19] Documentation/barriers/kokr: Remove references to [smp_]read_barrier_depends()
From: SeongJae Park <sj38.park at gmail.com> This commit translates commit ("Documentation/barriers: Remove references to [smp_]read_barrier_depends()") into Korean. Signed-off-by: SeongJae Park <sjpark at amazon.de> Reviewed-by: Yunjae Lee <lyj7694 at gmail.com> Signed-off-by: Will Deacon <will at kernel.org> --- .../translations/ko_KR/memory-barriers.txt | 146 +----------------- 1 file changed, 3 insertions(+), 143 deletions(-) diff --git a/Documentation/translations/ko_KR/memory-barriers.txt b/Documentation/translations/ko_KR/memory-barriers.txt index 34d041d68f78..a1f772ef622c 100644 --- a/Documentation/translations/ko_KR/memory-barriers.txt +++ b/Documentation/translations/ko_KR/memory-barriers.txt @@ -577,7 +577,7 @@ ACQUIRE ? ?? ?????? ?? ???? ???? RELEASE ??? ??? ??? (???) ----------------------------- -??? ?? v4.15 ????, smp_read_barrier_depends() ? READ_ONCE() ? +??? ?? v4.15 ????, smp_mb() ? DEC Alpha ? READ_ONCE() ??? ??????, ?? ? ??? ??? ???? ?? ???? DEC Alpha ???? ?? ??? ??? ???? READ_ONCE() ??? ??? ??? ??? ?????. ?? ??? ??, ??? ??? ?? ?? ??? ??, ?? ??? ??? @@ -2664,144 +2664,6 @@ CPU ??? ????? ???? ?????? ????? ?? ????. -?? ??? ------------ - -??? ?? ??? ???? ??? ???? ????: ???? ???? ??? -?????, ? ???? ???? ??? ??? ??? ????. ? CPU ?? -???? ?? ??? ?????? ???? ?? CPU ?? ???? ???, ?? -CPU ???? ?? ??? ??? ? ??? ??? ??? ????. - - -??? CPU (1 & 2) ? ?? ??, ? CPU ? ??? ??? ??(CPU 1 ? A/B ?, -CPU 2 ? C/D ? ????)? ??? ???? ?? ???? ???? ??? -???: - - : - : +--------+ - : +---------+ | | - +--------+ : +--->| Cache A |<------->| | - | | : | +---------+ | | - | CPU 1 |<---+ | | - | | : | +---------+ | | - +--------+ : +--->| Cache B |<------->| | - : +---------+ | | - : | Memory | - : +---------+ | System | - +--------+ : +--->| Cache C |<------->| | - | | : | +---------+ | | - | CPU 2 |<---+ | | - | | : | +---------+ | | - +--------+ : +--->| Cache D |<------->| | - : +---------+ | | - : +--------+ - : - -? ???? ??? ?? ??? ??? ??? ???: - - (*) ??? ????? ?? A, ?? C ?? ???? ??? ? ??; - - (*) ??? ????? ?? B, ?? D ?? ???? ??? ? ??; - - (*) CPU ??? ??? ??? ???? ??, ?? ??? - ?? ????? - ???? ???? ??? ??? ??? ?? ?? - ???? ?? ??? - ??? ?? ?? ??? ??? ? ??; - - (*) ? ??? ???? ??? ???? ???? ??? ?? ?? ??? - ????? ? ??????? ?? ??; - - (*) ? ??? ?? ??? ?? ???? ??? ???? ??? ??? ???? - ???? ???, ?? ??????? ? ??? ??? ??? ?? ? ?? - ???? ???. - -??, ??? CPU ?? ??? ?? ?????? ????, ?? CPU ? ??? -??? ??? ?????? ???? ???? ?? ? ????? ??? ?? -???? ???? ??? ??? ???: - - CPU 1 CPU 2 COMMENT - =============== =============== ======================================- u == 0, v == 1 and p == &u, q == &u - v = 2; - smp_wmb(); v ? ??? p ? ?? ?? ?? ?? - ??? ? - <A:modify v=2> v ? ?? ?? A ? ????? ??? - p = &v; - <B:modify p=&v> p ? ?? ?? B ? ????? ??? - -???? ?? ??? ???? CPU 1 ? ??? ??? ??? ???? ? ??? -???? ?? CPU ?? ???? ????. ???, ?? ??? CPU ? ? ??? -??? ?? ??? ??? ???: - - CPU 1 CPU 2 COMMENT - =============== =============== ======================================- ... - q = p; - x = *q; - -?? ??? ?? ?????? ??? ??? ???? ?? ? ???, ??? CPU -? ? ??? ?? ?? ???? ??? v ? ?? ?? ????? ?? ???? -????? ???? ??, p ? ?? ?? ????? ??? CPU ? ?? ??? -???? ????? ? ?? ?????. - - CPU 1 CPU 2 COMMENT - =============== =============== ======================================- u == 0, v == 1 and p == &u, q == &u - v = 2; - smp_wmb(); - <A:modify v=2> <C:busy> - <C:queue v=2> - p = &v; q = p; - <D:request p> - <B:modify p=&v> <D:commit p=&v> - <D:read p> - x = *q; - <C:read *q> ??? ???? ?? ?? v ? ?? - <C:unbusy> - <C:commit v=2> - -?????, ??? 
???? ?? CPU 2 ? ?????? ???? ? ????, -??? ?? ???, ????? ??? CPU 1 ?? ???? ??? ??? -???? ??? ????. - - -??? ???? ???, ??? ??? ???? ?? ???? ?? ?????? -??? ??? ??? (v4.15 ??? READ_ONCE() ???? ?? ?????? -??? ???). ??? ???? ??? ?? ??? ???? ?? ??? ?? -????? ???? ???. - - CPU 1 CPU 2 COMMENT - =============== =============== ======================================- u == 0, v == 1 and p == &u, q == &u - v = 2; - smp_wmb(); - <A:modify v=2> <C:busy> - <C:queue v=2> - p = &v; q = p; - <D:request p> - <B:modify p=&v> <D:commit p=&v> - <D:read p> - smp_read_barrier_depends() - <C:unbusy> - <C:commit v=2> - x = *q; - <C:read *q> ??? ???? ? v ? ?? - - -?? ??? ??? DEC Alpha ?? ??????? ??? ? ???, ??? -??? ??? ? ? ? ??? ??? ??? ? ??, ??? ??? ??? ?? -?????. ???? CPU ? ??? ?? ?????? ??? ???? ?? ?? -?????? ?????? ??? ??? ???? ???????, ??? ??? -??? ??? ??? ???? ????. - -?? CPU ?? ??? ??? ??? ?? ? ???, ?? CPU ?? ??? ??? -???? ???? ? ??? ??? ??? ??? ??? ???. Alpha ? ?? -?? ??? ?? ??? (semantic) ? ?????? ??? ???? ????? -???? ??? ??? ?? ??? ???? ?? ???, ?? Alpha ? ??? -? ?? CPU ?? ??? ?? ? ?? ????. ???, (?? ????, v4.15 -?????) Alpha ???? ?? ??? READ_ONCE() ??? ????? ????? -smp_read_barrier_depends() ? ???? ??? ?? ????? ????. - - ?? ??? VS DMA ------------------ @@ -2962,10 +2824,8 @@ Alpha CPU ? ?? ??? ??? ??? ??? ??? ??? ???? ??? ??? ??? ???? ?? ?????. ??? ??? ??? ??? ??? Alpha ? ???? ????????, v4.15 -??? ??? ??? READ_ONCE() ?? smp_read_barrier_depends() ? ???? -Alpha ? ??? ???? ???? ?? ???? ????. - -?? "?? ???" ????? ?????. +??? Alpha ? READ_ONCE() ?? ?? smp_mb() ? ????? ??? ???? +Alpha ? ???? ?? ???????. ?? ?? ??? -- 2.27.0.383.g050319c2ae-goog
Will Deacon
2020-Jul-10 16:51 UTC
[PATCH v3 12/19] tools/memory-model: Remove smp_read_barrier_depends() from informal doc
smp_read_barrier_depends() has gone the way of mmiowb() and so many esoteric memory barriers before it. Drop the two mentions of this deceased barrier from the LKMM informal explanation document. Acked-by: Alan Stern <stern at rowland.harvard.edu> Acked-by: Paul E. McKenney <paulmck at kernel.org> Signed-off-by: Will Deacon <will at kernel.org> --- .../Documentation/explanation.txt | 26 +++++++++---------- 1 file changed, 12 insertions(+), 14 deletions(-) diff --git a/tools/memory-model/Documentation/explanation.txt b/tools/memory-model/Documentation/explanation.txt index e91a2eb19592..01adf9e0ebac 100644 --- a/tools/memory-model/Documentation/explanation.txt +++ b/tools/memory-model/Documentation/explanation.txt @@ -1122,12 +1122,10 @@ maintain at least the appearance of FIFO order. In practice, this difficulty is solved by inserting a special fence between P1's two loads when the kernel is compiled for the Alpha architecture. In fact, as of version 4.15, the kernel automatically -adds this fence (called smp_read_barrier_depends() and defined as -nothing at all on non-Alpha builds) after every READ_ONCE() and atomic -load. The effect of the fence is to cause the CPU not to execute any -po-later instructions until after the local cache has finished -processing all the stores it has already received. Thus, if the code -was changed to: +adds this fence after every READ_ONCE() and atomic load on Alpha. The +effect of the fence is to cause the CPU not to execute any po-later +instructions until after the local cache has finished processing all +the stores it has already received. Thus, if the code was changed to: P1() { @@ -1146,14 +1144,14 @@ READ_ONCE() or another synchronization primitive rather than accessed directly. The LKMM requires that smp_rmb(), acquire fences, and strong fences -share this property with smp_read_barrier_depends(): They do not allow -the CPU to execute any po-later instructions (or po-later loads in the -case of smp_rmb()) until all outstanding stores have been processed by -the local cache. In the case of a strong fence, the CPU first has to -wait for all of its po-earlier stores to propagate to every other CPU -in the system; then it has to wait for the local cache to process all -the stores received as of that time -- not just the stores received -when the strong fence began. +share this property: They do not allow the CPU to execute any po-later +instructions (or po-later loads in the case of smp_rmb()) until all +outstanding stores have been processed by the local cache. In the +case of a strong fence, the CPU first has to wait for all of its +po-earlier stores to propagate to every other CPU in the system; then +it has to wait for the local cache to process all the stores received +as of that time -- not just the stores received when the strong fence +began. And of course, none of this matters for any architecture other than Alpha. -- 2.27.0.383.g050319c2ae-goog
Will Deacon
2020-Jul-10 16:51 UTC
[PATCH v3 13/19] include/linux: Remove smp_read_barrier_depends() from comments
smp_read_barrier_depends() doesn't exist any more, so reword the two comments that mention it to refer to "dependency ordering" instead. Acked-by: Paul E. McKenney <paulmck at kernel.org> Signed-off-by: Will Deacon <will at kernel.org> --- include/linux/percpu-refcount.h | 2 +- include/linux/ptr_ring.h | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/include/linux/percpu-refcount.h b/include/linux/percpu-refcount.h index 22d9d183950d..87d8a38bdea1 100644 --- a/include/linux/percpu-refcount.h +++ b/include/linux/percpu-refcount.h @@ -155,7 +155,7 @@ static inline bool __ref_is_percpu(struct percpu_ref *ref, * between contaminating the pointer value, meaning that * READ_ONCE() is required when fetching it. * - * The smp_read_barrier_depends() implied by READ_ONCE() pairs + * The dependency ordering from the READ_ONCE() pairs * with smp_store_release() in __percpu_ref_switch_to_percpu(). */ percpu_ptr = READ_ONCE(ref->percpu_count_ptr); diff --git a/include/linux/ptr_ring.h b/include/linux/ptr_ring.h index 417db0a79a62..808f9d3ee546 100644 --- a/include/linux/ptr_ring.h +++ b/include/linux/ptr_ring.h @@ -107,7 +107,7 @@ static inline int __ptr_ring_produce(struct ptr_ring *r, void *ptr) return -ENOSPC; /* Make sure the pointer we are storing points to a valid data. */ - /* Pairs with smp_read_barrier_depends in __ptr_ring_consume. */ + /* Pairs with the dependency ordering in __ptr_ring_consume. */ smp_wmb(); WRITE_ONCE(r->queue[r->producer++], ptr); -- 2.27.0.383.g050319c2ae-goog
Will Deacon
2020-Jul-10 16:51 UTC
[PATCH v3 14/19] checkpatch: Remove checks relating to [smp_]read_barrier_depends()
The [smp_]read_barrier_depends() macros no longer exist, so we don't need to deal with them in the checkpatch script. Acked-by: Paul E. McKenney <paulmck at kernel.org> Signed-off-by: Will Deacon <will at kernel.org> --- scripts/checkpatch.pl | 9 +-------- 1 file changed, 1 insertion(+), 8 deletions(-) diff --git a/scripts/checkpatch.pl b/scripts/checkpatch.pl index 4c820607540b..8032f80c5bc7 100755 --- a/scripts/checkpatch.pl +++ b/scripts/checkpatch.pl @@ -5903,8 +5903,7 @@ sub process { my $barriers = qr{ mb| rmb| - wmb| - read_barrier_depends + wmb }x; my $barrier_stems = qr{ mb__before_atomic| @@ -5953,12 +5952,6 @@ sub process { } } -# check for smp_read_barrier_depends and read_barrier_depends - if (!$file && $line =~ /\b(smp_|)read_barrier_depends\s*\(/) { - WARN("READ_BARRIER_DEPENDS", - "$1read_barrier_depends should only be used in READ_ONCE or DEC Alpha code\n" . $herecurr); - } - # check of hardware specific defines if ($line =~ m@^.\s*\#\s*if.*\b(__i386__|__powerpc64__|__sun__|__s390x__)\b@ && $realfile !~ m at include/asm-@) { CHK("ARCH_DEFINES", -- 2.27.0.383.g050319c2ae-goog
Will Deacon
2020-Jul-10 16:51 UTC
[PATCH v3 15/19] arm64: Reduce the number of header files pulled into vmlinux.lds.S
Although vmlinux.lds.S smells like an assembly file and is compiled with __ASSEMBLY__ defined, it's actually just fed to the preprocessor to create our linker script. This means that any assembly macros defined by headers that it includes will result in a helpful link error: | aarch64-linux-gnu-ld:./arch/arm64/kernel/vmlinux.lds:1: syntax error In preparation for an arm64-private asm/rwonce.h implementation, which will end up pulling assembly macros into linux/compiler.h, reduce the number of headers we include directly and transitively in vmlinux.lds.S Signed-off-by: Will Deacon <will at kernel.org> --- arch/arm64/include/asm/kernel-pgtable.h | 2 +- arch/arm64/include/asm/memory.h | 11 ++++++----- arch/arm64/include/asm/uaccess.h | 1 + arch/arm64/kernel/entry.S | 1 + arch/arm64/kernel/vmlinux.lds.S | 1 - arch/arm64/kvm/hyp-init.S | 1 + 6 files changed, 10 insertions(+), 7 deletions(-) diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h index 3bf626f6fe0c..329fb15f6bac 100644 --- a/arch/arm64/include/asm/kernel-pgtable.h +++ b/arch/arm64/include/asm/kernel-pgtable.h @@ -8,7 +8,7 @@ #ifndef __ASM_KERNEL_PGTABLE_H #define __ASM_KERNEL_PGTABLE_H -#include <linux/pgtable.h> +#include <asm/pgtable-hwdef.h> #include <asm/sparsemem.h> /* diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h index a1871bb32bb1..9d4bf58cf7b3 100644 --- a/arch/arm64/include/asm/memory.h +++ b/arch/arm64/include/asm/memory.h @@ -10,11 +10,8 @@ #ifndef __ASM_MEMORY_H #define __ASM_MEMORY_H -#include <linux/compiler.h> #include <linux/const.h> #include <linux/sizes.h> -#include <linux/types.h> -#include <asm/bug.h> #include <asm/page-def.h> /* @@ -157,11 +154,15 @@ #endif #ifndef __ASSEMBLY__ -extern u64 vabits_actual; -#define PAGE_END (_PAGE_END(vabits_actual)) #include <linux/bitops.h> +#include <linux/compiler.h> #include <linux/mmdebug.h> +#include <linux/types.h> +#include <asm/bug.h> + +extern u64 vabits_actual; +#define PAGE_END (_PAGE_END(vabits_actual)) extern s64 physvirt_offset; extern s64 memstart_addr; diff --git a/arch/arm64/include/asm/uaccess.h b/arch/arm64/include/asm/uaccess.h index bc5c7b091152..8d7c466f809b 100644 --- a/arch/arm64/include/asm/uaccess.h +++ b/arch/arm64/include/asm/uaccess.h @@ -19,6 +19,7 @@ #include <linux/string.h> #include <asm/cpufeature.h> +#include <asm/mmu.h> #include <asm/ptrace.h> #include <asm/memory.h> #include <asm/extable.h> diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S index 5304d193c79d..b668aad3b762 100644 --- a/arch/arm64/kernel/entry.S +++ b/arch/arm64/kernel/entry.S @@ -15,6 +15,7 @@ #include <asm/assembler.h> #include <asm/asm-offsets.h> #include <asm/asm_pointer_auth.h> +#include <asm/bug.h> #include <asm/cpufeature.h> #include <asm/errno.h> #include <asm/esr.h> diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S index 5423ffe0a987..ec8e894684a7 100644 --- a/arch/arm64/kernel/vmlinux.lds.S +++ b/arch/arm64/kernel/vmlinux.lds.S @@ -10,7 +10,6 @@ #include <asm-generic/vmlinux.lds.h> #include <asm/cache.h> #include <asm/kernel-pgtable.h> -#include <asm/thread_info.h> #include <asm/memory.h> #include <asm/page.h> diff --git a/arch/arm64/kvm/hyp-init.S b/arch/arm64/kvm/hyp-init.S index 6e6ed5581eed..076544393c3c 100644 --- a/arch/arm64/kvm/hyp-init.S +++ b/arch/arm64/kvm/hyp-init.S @@ -6,6 +6,7 @@ #include <linux/linkage.h> +#include <asm/alternative.h> #include <asm/assembler.h> #include <asm/kvm_arm.h> #include <asm/kvm_mmu.h> -- 
2.27.0.383.g050319c2ae-goog
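
To make the failure mode above concrete, here is a minimal sketch (the header
and macro names are made up and are not part of the patch). vmlinux.lds.S is
only preprocessed, never assembled, so an assembler macro pulled in from a
header survives into the generated linker script:

	/* hypothetical asm/widget.h, reachable from linux/compiler.h */
	#ifdef __ASSEMBLY__
	.macro	widget_nop
		nop
	.endm
	#endif

Because the linker script is built with __ASSEMBLY__ defined, the ".macro"
block is emitted verbatim into vmlinux.lds, and ld stops at the first token it
cannot parse -- the "syntax error" quoted in the commit message. Trimming the
include list keeps such macros out of the preprocessed output.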
Will Deacon
2020-Jul-10 16:52 UTC
[PATCH v3 16/19] arm64: alternatives: Split up alternative.h
asm/alternative.h contains both the macros needed to use alternatives, as well the type definitions and function prototypes for applying them. Split the header in two, so that alternatives can be used from core header files such as linux/compiler.h without the risk of circular includes Signed-off-by: Will Deacon <will at kernel.org> --- arch/arm64/include/asm/alternative-macros.h | 276 ++++++++++++++++++++ arch/arm64/include/asm/alternative.h | 267 +------------------ arch/arm64/include/asm/insn.h | 3 +- 3 files changed, 279 insertions(+), 267 deletions(-) create mode 100644 arch/arm64/include/asm/alternative-macros.h diff --git a/arch/arm64/include/asm/alternative-macros.h b/arch/arm64/include/asm/alternative-macros.h new file mode 100644 index 000000000000..8f4e4b60e72a --- /dev/null +++ b/arch/arm64/include/asm/alternative-macros.h @@ -0,0 +1,276 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +#ifndef __ASM_ALTERNATIVE_MACROS_H +#define __ASM_ALTERNATIVE_MACROS_H + +#include <asm/cpucaps.h> + +#define ARM64_CB_PATCH ARM64_NCAPS + +/* A64 instructions are always 32 bits. */ +#define AARCH64_INSN_SIZE 4 + +#ifndef __ASSEMBLY__ + +#include <linux/stringify.h> + +#define ALTINSTR_ENTRY(feature) \ + " .word 661b - .\n" /* label */ \ + " .word 663f - .\n" /* new instruction */ \ + " .hword " __stringify(feature) "\n" /* feature bit */ \ + " .byte 662b-661b\n" /* source len */ \ + " .byte 664f-663f\n" /* replacement len */ + +#define ALTINSTR_ENTRY_CB(feature, cb) \ + " .word 661b - .\n" /* label */ \ + " .word " __stringify(cb) "- .\n" /* callback */ \ + " .hword " __stringify(feature) "\n" /* feature bit */ \ + " .byte 662b-661b\n" /* source len */ \ + " .byte 664f-663f\n" /* replacement len */ + +/* + * alternative assembly primitive: + * + * If any of these .org directive fail, it means that insn1 and insn2 + * don't have the same length. This used to be written as + * + * .if ((664b-663b) != (662b-661b)) + * .error "Alternatives instruction length mismatch" + * .endif + * + * but most assemblers die if insn1 or insn2 have a .inst. This should + * be fixed in a binutils release posterior to 2.25.51.0.2 (anything + * containing commit 4e4d08cf7399b606 or c1baaddf8861). + * + * Alternatives with callbacks do not generate replacement instructions. + */ +#define __ALTERNATIVE_CFG(oldinstr, newinstr, feature, cfg_enabled) \ + ".if "__stringify(cfg_enabled)" == 1\n" \ + "661:\n\t" \ + oldinstr "\n" \ + "662:\n" \ + ".pushsection .altinstructions,\"a\"\n" \ + ALTINSTR_ENTRY(feature) \ + ".popsection\n" \ + ".subsection 1\n" \ + "663:\n\t" \ + newinstr "\n" \ + "664:\n\t" \ + ".previous\n\t" \ + ".org . - (664b-663b) + (662b-661b)\n\t" \ + ".org . - (662b-661b) + (664b-663b)\n" \ + ".endif\n" + +#define __ALTERNATIVE_CFG_CB(oldinstr, feature, cfg_enabled, cb) \ + ".if "__stringify(cfg_enabled)" == 1\n" \ + "661:\n\t" \ + oldinstr "\n" \ + "662:\n" \ + ".pushsection .altinstructions,\"a\"\n" \ + ALTINSTR_ENTRY_CB(feature, cb) \ + ".popsection\n" \ + "663:\n\t" \ + "664:\n\t" \ + ".endif\n" + +#define _ALTERNATIVE_CFG(oldinstr, newinstr, feature, cfg, ...) \ + __ALTERNATIVE_CFG(oldinstr, newinstr, feature, IS_ENABLED(cfg)) + +#define ALTERNATIVE_CB(oldinstr, cb) \ + __ALTERNATIVE_CFG_CB(oldinstr, ARM64_CB_PATCH, 1, cb) +#else + +#include <asm/assembler.h> + +.macro altinstruction_entry orig_offset alt_offset feature orig_len alt_len + .word \orig_offset - . + .word \alt_offset - . 
+ .hword \feature + .byte \orig_len + .byte \alt_len +.endm + +.macro alternative_insn insn1, insn2, cap, enable = 1 + .if \enable +661: \insn1 +662: .pushsection .altinstructions, "a" + altinstruction_entry 661b, 663f, \cap, 662b-661b, 664f-663f + .popsection + .subsection 1 +663: \insn2 +664: .previous + .org . - (664b-663b) + (662b-661b) + .org . - (662b-661b) + (664b-663b) + .endif +.endm + +/* + * Alternative sequences + * + * The code for the case where the capability is not present will be + * assembled and linked as normal. There are no restrictions on this + * code. + * + * The code for the case where the capability is present will be + * assembled into a special section to be used for dynamic patching. + * Code for that case must: + * + * 1. Be exactly the same length (in bytes) as the default code + * sequence. + * + * 2. Not contain a branch target that is used outside of the + * alternative sequence it is defined in (branches into an + * alternative sequence are not fixed up). + */ + +/* + * Begin an alternative code sequence. + */ +.macro alternative_if_not cap + .set .Lasm_alt_mode, 0 + .pushsection .altinstructions, "a" + altinstruction_entry 661f, 663f, \cap, 662f-661f, 664f-663f + .popsection +661: +.endm + +.macro alternative_if cap + .set .Lasm_alt_mode, 1 + .pushsection .altinstructions, "a" + altinstruction_entry 663f, 661f, \cap, 664f-663f, 662f-661f + .popsection + .subsection 1 + .align 2 /* So GAS knows label 661 is suitably aligned */ +661: +.endm + +.macro alternative_cb cb + .set .Lasm_alt_mode, 0 + .pushsection .altinstructions, "a" + altinstruction_entry 661f, \cb, ARM64_CB_PATCH, 662f-661f, 0 + .popsection +661: +.endm + +/* + * Provide the other half of the alternative code sequence. + */ +.macro alternative_else +662: + .if .Lasm_alt_mode==0 + .subsection 1 + .else + .previous + .endif +663: +.endm + +/* + * Complete an alternative code sequence. + */ +.macro alternative_endif +664: + .if .Lasm_alt_mode==0 + .previous + .endif + .org . - (664b-663b) + (662b-661b) + .org . - (662b-661b) + (664b-663b) +.endm + +/* + * Callback-based alternative epilogue + */ +.macro alternative_cb_end +662: +.endm + +/* + * Provides a trivial alternative or default sequence consisting solely + * of NOPs. The number of NOPs is chosen automatically to match the + * previous case. + */ +.macro alternative_else_nop_endif +alternative_else + nops (662b-661b) / AARCH64_INSN_SIZE +alternative_endif +.endm + +#define _ALTERNATIVE_CFG(insn1, insn2, cap, cfg, ...) \ + alternative_insn insn1, insn2, cap, IS_ENABLED(cfg) + +.macro user_alt, label, oldinstr, newinstr, cond +9999: alternative_insn "\oldinstr", "\newinstr", \cond + _asm_extable 9999b, \label +.endm + +/* + * Generate the assembly for UAO alternatives with exception table entries. + * This is complicated as there is no post-increment or pair versions of the + * unprivileged instructions, and USER() only works for single instructions. 
+ */ +#ifdef CONFIG_ARM64_UAO + .macro uao_ldp l, reg1, reg2, addr, post_inc + alternative_if_not ARM64_HAS_UAO +8888: ldp \reg1, \reg2, [\addr], \post_inc; +8889: nop; + nop; + alternative_else + ldtr \reg1, [\addr]; + ldtr \reg2, [\addr, #8]; + add \addr, \addr, \post_inc; + alternative_endif + + _asm_extable 8888b,\l; + _asm_extable 8889b,\l; + .endm + + .macro uao_stp l, reg1, reg2, addr, post_inc + alternative_if_not ARM64_HAS_UAO +8888: stp \reg1, \reg2, [\addr], \post_inc; +8889: nop; + nop; + alternative_else + sttr \reg1, [\addr]; + sttr \reg2, [\addr, #8]; + add \addr, \addr, \post_inc; + alternative_endif + + _asm_extable 8888b,\l; + _asm_extable 8889b,\l; + .endm + + .macro uao_user_alternative l, inst, alt_inst, reg, addr, post_inc + alternative_if_not ARM64_HAS_UAO +8888: \inst \reg, [\addr], \post_inc; + nop; + alternative_else + \alt_inst \reg, [\addr]; + add \addr, \addr, \post_inc; + alternative_endif + + _asm_extable 8888b,\l; + .endm +#else + .macro uao_ldp l, reg1, reg2, addr, post_inc + USER(\l, ldp \reg1, \reg2, [\addr], \post_inc) + .endm + .macro uao_stp l, reg1, reg2, addr, post_inc + USER(\l, stp \reg1, \reg2, [\addr], \post_inc) + .endm + .macro uao_user_alternative l, inst, alt_inst, reg, addr, post_inc + USER(\l, \inst \reg, [\addr], \post_inc) + .endm +#endif + +#endif /* __ASSEMBLY__ */ + +/* + * Usage: asm(ALTERNATIVE(oldinstr, newinstr, feature)); + * + * Usage: asm(ALTERNATIVE(oldinstr, newinstr, feature, CONFIG_FOO)); + * N.B. If CONFIG_FOO is specified, but not selected, the whole block + * will be omitted, including oldinstr. + */ +#define ALTERNATIVE(oldinstr, newinstr, ...) \ + _ALTERNATIVE_CFG(oldinstr, newinstr, __VA_ARGS__, 1) + +#endif /* __ASM_ALTERNATIVE_MACROS_H */ diff --git a/arch/arm64/include/asm/alternative.h b/arch/arm64/include/asm/alternative.h index 12f0eb56a1cc..a38b92e11811 100644 --- a/arch/arm64/include/asm/alternative.h +++ b/arch/arm64/include/asm/alternative.h @@ -2,17 +2,13 @@ #ifndef __ASM_ALTERNATIVE_H #define __ASM_ALTERNATIVE_H -#include <asm/cpucaps.h> -#include <asm/insn.h> - -#define ARM64_CB_PATCH ARM64_NCAPS +#include <asm/alternative-macros.h> #ifndef __ASSEMBLY__ #include <linux/init.h> #include <linux/types.h> #include <linux/stddef.h> -#include <linux/stringify.h> struct alt_instr { s32 orig_offset; /* offset to original instruction */ @@ -35,264 +31,5 @@ void apply_alternatives_module(void *start, size_t length); static inline void apply_alternatives_module(void *start, size_t length) { } #endif -#define ALTINSTR_ENTRY(feature) \ - " .word 661b - .\n" /* label */ \ - " .word 663f - .\n" /* new instruction */ \ - " .hword " __stringify(feature) "\n" /* feature bit */ \ - " .byte 662b-661b\n" /* source len */ \ - " .byte 664f-663f\n" /* replacement len */ - -#define ALTINSTR_ENTRY_CB(feature, cb) \ - " .word 661b - .\n" /* label */ \ - " .word " __stringify(cb) "- .\n" /* callback */ \ - " .hword " __stringify(feature) "\n" /* feature bit */ \ - " .byte 662b-661b\n" /* source len */ \ - " .byte 664f-663f\n" /* replacement len */ - -/* - * alternative assembly primitive: - * - * If any of these .org directive fail, it means that insn1 and insn2 - * don't have the same length. This used to be written as - * - * .if ((664b-663b) != (662b-661b)) - * .error "Alternatives instruction length mismatch" - * .endif - * - * but most assemblers die if insn1 or insn2 have a .inst. This should - * be fixed in a binutils release posterior to 2.25.51.0.2 (anything - * containing commit 4e4d08cf7399b606 or c1baaddf8861). 
- * - * Alternatives with callbacks do not generate replacement instructions. - */ -#define __ALTERNATIVE_CFG(oldinstr, newinstr, feature, cfg_enabled) \ - ".if "__stringify(cfg_enabled)" == 1\n" \ - "661:\n\t" \ - oldinstr "\n" \ - "662:\n" \ - ".pushsection .altinstructions,\"a\"\n" \ - ALTINSTR_ENTRY(feature) \ - ".popsection\n" \ - ".subsection 1\n" \ - "663:\n\t" \ - newinstr "\n" \ - "664:\n\t" \ - ".previous\n\t" \ - ".org . - (664b-663b) + (662b-661b)\n\t" \ - ".org . - (662b-661b) + (664b-663b)\n" \ - ".endif\n" - -#define __ALTERNATIVE_CFG_CB(oldinstr, feature, cfg_enabled, cb) \ - ".if "__stringify(cfg_enabled)" == 1\n" \ - "661:\n\t" \ - oldinstr "\n" \ - "662:\n" \ - ".pushsection .altinstructions,\"a\"\n" \ - ALTINSTR_ENTRY_CB(feature, cb) \ - ".popsection\n" \ - "663:\n\t" \ - "664:\n\t" \ - ".endif\n" - -#define _ALTERNATIVE_CFG(oldinstr, newinstr, feature, cfg, ...) \ - __ALTERNATIVE_CFG(oldinstr, newinstr, feature, IS_ENABLED(cfg)) - -#define ALTERNATIVE_CB(oldinstr, cb) \ - __ALTERNATIVE_CFG_CB(oldinstr, ARM64_CB_PATCH, 1, cb) -#else - -#include <asm/assembler.h> - -.macro altinstruction_entry orig_offset alt_offset feature orig_len alt_len - .word \orig_offset - . - .word \alt_offset - . - .hword \feature - .byte \orig_len - .byte \alt_len -.endm - -.macro alternative_insn insn1, insn2, cap, enable = 1 - .if \enable -661: \insn1 -662: .pushsection .altinstructions, "a" - altinstruction_entry 661b, 663f, \cap, 662b-661b, 664f-663f - .popsection - .subsection 1 -663: \insn2 -664: .previous - .org . - (664b-663b) + (662b-661b) - .org . - (662b-661b) + (664b-663b) - .endif -.endm - -/* - * Alternative sequences - * - * The code for the case where the capability is not present will be - * assembled and linked as normal. There are no restrictions on this - * code. - * - * The code for the case where the capability is present will be - * assembled into a special section to be used for dynamic patching. - * Code for that case must: - * - * 1. Be exactly the same length (in bytes) as the default code - * sequence. - * - * 2. Not contain a branch target that is used outside of the - * alternative sequence it is defined in (branches into an - * alternative sequence are not fixed up). - */ - -/* - * Begin an alternative code sequence. - */ -.macro alternative_if_not cap - .set .Lasm_alt_mode, 0 - .pushsection .altinstructions, "a" - altinstruction_entry 661f, 663f, \cap, 662f-661f, 664f-663f - .popsection -661: -.endm - -.macro alternative_if cap - .set .Lasm_alt_mode, 1 - .pushsection .altinstructions, "a" - altinstruction_entry 663f, 661f, \cap, 664f-663f, 662f-661f - .popsection - .subsection 1 - .align 2 /* So GAS knows label 661 is suitably aligned */ -661: -.endm - -.macro alternative_cb cb - .set .Lasm_alt_mode, 0 - .pushsection .altinstructions, "a" - altinstruction_entry 661f, \cb, ARM64_CB_PATCH, 662f-661f, 0 - .popsection -661: -.endm - -/* - * Provide the other half of the alternative code sequence. - */ -.macro alternative_else -662: - .if .Lasm_alt_mode==0 - .subsection 1 - .else - .previous - .endif -663: -.endm - -/* - * Complete an alternative code sequence. - */ -.macro alternative_endif -664: - .if .Lasm_alt_mode==0 - .previous - .endif - .org . - (664b-663b) + (662b-661b) - .org . - (662b-661b) + (664b-663b) -.endm - -/* - * Callback-based alternative epilogue - */ -.macro alternative_cb_end -662: -.endm - -/* - * Provides a trivial alternative or default sequence consisting solely - * of NOPs. 
The number of NOPs is chosen automatically to match the - * previous case. - */ -.macro alternative_else_nop_endif -alternative_else - nops (662b-661b) / AARCH64_INSN_SIZE -alternative_endif -.endm - -#define _ALTERNATIVE_CFG(insn1, insn2, cap, cfg, ...) \ - alternative_insn insn1, insn2, cap, IS_ENABLED(cfg) - -.macro user_alt, label, oldinstr, newinstr, cond -9999: alternative_insn "\oldinstr", "\newinstr", \cond - _asm_extable 9999b, \label -.endm - -/* - * Generate the assembly for UAO alternatives with exception table entries. - * This is complicated as there is no post-increment or pair versions of the - * unprivileged instructions, and USER() only works for single instructions. - */ -#ifdef CONFIG_ARM64_UAO - .macro uao_ldp l, reg1, reg2, addr, post_inc - alternative_if_not ARM64_HAS_UAO -8888: ldp \reg1, \reg2, [\addr], \post_inc; -8889: nop; - nop; - alternative_else - ldtr \reg1, [\addr]; - ldtr \reg2, [\addr, #8]; - add \addr, \addr, \post_inc; - alternative_endif - - _asm_extable 8888b,\l; - _asm_extable 8889b,\l; - .endm - - .macro uao_stp l, reg1, reg2, addr, post_inc - alternative_if_not ARM64_HAS_UAO -8888: stp \reg1, \reg2, [\addr], \post_inc; -8889: nop; - nop; - alternative_else - sttr \reg1, [\addr]; - sttr \reg2, [\addr, #8]; - add \addr, \addr, \post_inc; - alternative_endif - - _asm_extable 8888b,\l; - _asm_extable 8889b,\l; - .endm - - .macro uao_user_alternative l, inst, alt_inst, reg, addr, post_inc - alternative_if_not ARM64_HAS_UAO -8888: \inst \reg, [\addr], \post_inc; - nop; - alternative_else - \alt_inst \reg, [\addr]; - add \addr, \addr, \post_inc; - alternative_endif - - _asm_extable 8888b,\l; - .endm -#else - .macro uao_ldp l, reg1, reg2, addr, post_inc - USER(\l, ldp \reg1, \reg2, [\addr], \post_inc) - .endm - .macro uao_stp l, reg1, reg2, addr, post_inc - USER(\l, stp \reg1, \reg2, [\addr], \post_inc) - .endm - .macro uao_user_alternative l, inst, alt_inst, reg, addr, post_inc - USER(\l, \inst \reg, [\addr], \post_inc) - .endm -#endif - -#endif /* __ASSEMBLY__ */ - -/* - * Usage: asm(ALTERNATIVE(oldinstr, newinstr, feature)); - * - * Usage: asm(ALTERNATIVE(oldinstr, newinstr, feature, CONFIG_FOO)); - * N.B. If CONFIG_FOO is specified, but not selected, the whole block - * will be omitted, including oldinstr. - */ -#define ALTERNATIVE(oldinstr, newinstr, ...) \ - _ALTERNATIVE_CFG(oldinstr, newinstr, __VA_ARGS__, 1) - +#endif /* __ASSEMBLY__ */ #endif /* __ASM_ALTERNATIVE_H */ diff --git a/arch/arm64/include/asm/insn.h b/arch/arm64/include/asm/insn.h index 0bc46149e491..01da70ba2fb9 100644 --- a/arch/arm64/include/asm/insn.h +++ b/arch/arm64/include/asm/insn.h @@ -10,8 +10,7 @@ #include <linux/build_bug.h> #include <linux/types.h> -/* A64 instructions are always 32 bits. */ -#define AARCH64_INSN_SIZE 4 +#include <asm/alternative.h> #ifndef __ASSEMBLY__ /* -- 2.27.0.383.g050319c2ae-goog
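
As a usage sketch of the split (the capability name below is made up), a core
header can now emit an alternative-patched sequence by including only the
macros, without dragging in the struct alt_instr definition or the patching
prototypes from asm/alternative.h:

	#include <asm/alternative-macros.h>

	static inline void relax_or_nop(void)
	{
		/* default instruction, patched to a NOP when the cap is set */
		asm volatile(ALTERNATIVE("yield", "nop", ARM64_HAS_MADE_UP_CAP)
			     ::: "memory");
	}

Both instructions are a single 32-bit A64 encoding, which satisfies the
"exactly the same length" rule documented in the header.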
Will Deacon
2020-Jul-10 16:52 UTC
[PATCH v3 17/19] arm64: cpufeatures: Add capability for LDAPR instruction
Armv8.3 introduced the LDAPR instruction, which provides weaker memory
ordering semantics than LDAR (RCpc vs RCsc). Generally, we provide an
RCsc implementation when implementing the Linux memory model, but LDAPR
can be used as a useful alternative to dependency ordering, particularly
when the compiler is capable of breaking the dependencies.

Since LDAPR is not available on all CPUs, add a cpufeature to detect it at
runtime and allow the instruction to be used with alternative code
patching.

Signed-off-by: Will Deacon <will at kernel.org>
---
 arch/arm64/Kconfig               |  3 +++
 arch/arm64/include/asm/cpucaps.h |  3 ++-
 arch/arm64/kernel/cpufeature.c   | 10 ++++++++++
 3 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 66dc41fd49f2..e1073210e70b 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1409,6 +1409,9 @@ config ARM64_PAN
 	 The feature is detected at runtime, and will remain as a 'nop'
 	 instruction if the cpu does not implement the feature.
 
+config AS_HAS_LDAPR
+	def_bool $(as-instr,.arch_extension rcpc)
+
 config ARM64_LSE_ATOMICS
 	bool
 	default ARM64_USE_LSE_ATOMICS
diff --git a/arch/arm64/include/asm/cpucaps.h b/arch/arm64/include/asm/cpucaps.h
index d7b3bb0cb180..3ff0103d4dfd 100644
--- a/arch/arm64/include/asm/cpucaps.h
+++ b/arch/arm64/include/asm/cpucaps.h
@@ -62,7 +62,8 @@
 #define ARM64_HAS_GENERIC_AUTH			52
 #define ARM64_HAS_32BIT_EL1			53
 #define ARM64_BTI				54
+#define ARM64_HAS_LDAPR				55
 
-#define ARM64_NCAPS				55
+#define ARM64_NCAPS				56
 
 #endif /* __ASM_CPUCAPS_H */
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 9fae0efc80c1..498bd9a7f1bc 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -2058,6 +2058,16 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
 		.sign = FTR_UNSIGNED,
 	},
 #endif
+	{
+		.desc = "RCpc load-acquire (LDAPR)",
+		.capability = ARM64_HAS_LDAPR,
+		.type = ARM64_CPUCAP_SYSTEM_FEATURE,
+		.sys_reg = SYS_ID_AA64ISAR1_EL1,
+		.sign = FTR_UNSIGNED,
+		.field_pos = ID_AA64ISAR1_LRCPC_SHIFT,
+		.matches = has_cpuid_feature,
+		.min_field_value = 1,
+	},
 	{},
 };
-- 
2.27.0.383.g050319c2ae-goog
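
Besides the asm-alternative user added in the final patch, a capability
declared this way can also be tested from C; a small sketch (this caller does
not exist in the series):

	if (cpus_have_const_cap(ARM64_HAS_LDAPR))
		pr_debug("RCpc load-acquire (LDAPR) usable on all CPUs\n");

cpus_have_const_cap() becomes a static branch once the capabilities have been
finalised, so such checks are cheap at runtime.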
Will Deacon
2020-Jul-10 16:52 UTC
[PATCH v3 18/19] arm64: alternatives: Remove READ_ONCE() usage during patch operation
In preparation for patching the internals of READ_ONCE() itself, replace
its usage on the alternatives patching path with a volatile variable
instead.

Signed-off-by: Will Deacon <will at kernel.org>
---
 arch/arm64/kernel/alternative.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/alternative.c b/arch/arm64/kernel/alternative.c
index d1757ef1b1e7..87bca8d44084 100644
--- a/arch/arm64/kernel/alternative.c
+++ b/arch/arm64/kernel/alternative.c
@@ -21,7 +21,8 @@
 #define ALT_ORIG_PTR(a)		__ALT_PTR(a, orig_offset)
 #define ALT_REPL_PTR(a)		__ALT_PTR(a, alt_offset)
 
-static int all_alternatives_applied;
+/* Volatile, as we may be patching the guts of READ_ONCE() */
+static volatile int all_alternatives_applied;
 
 static DECLARE_BITMAP(applied_alternatives, ARM64_NCAPS);
 
@@ -217,7 +218,7 @@ static int __apply_alternatives_multi_stop(void *unused)
 
 	/* We always have a CPU 0 at this point (__init) */
 	if (smp_processor_id()) {
-		while (!READ_ONCE(all_alternatives_applied))
+		while (!all_alternatives_applied)
 			cpu_relax();
 		isb();
 	} else {
@@ -229,7 +230,7 @@ static int __apply_alternatives_multi_stop(void *unused)
 		BUG_ON(all_alternatives_applied);
 		__apply_alternatives(&region, false, remaining_capabilities);
 		/* Barriers provided by the cache flushing */
-		WRITE_ONCE(all_alternatives_applied, 1);
+		all_alternatives_applied = 1;
 	}
 
 	return 0;
-- 
2.27.0.383.g050319c2ae-goog
Will Deacon
2020-Jul-10 16:52 UTC
[PATCH v3 19/19] arm64: lto: Strengthen READ_ONCE() to acquire when CONFIG_LTO=y
When building with LTO, there is an increased risk of the compiler converting an address dependency headed by a READ_ONCE() invocation into a control dependency and consequently allowing for harmful reordering by the CPU. Ensure that such transformations are harmless by overriding the generic READ_ONCE() definition with one that provides acquire semantics when building with LTO. Signed-off-by: Will Deacon <will at kernel.org> --- arch/arm64/include/asm/rwonce.h | 63 +++++++++++++++++++++++++++++++ arch/arm64/kernel/vdso/Makefile | 2 +- arch/arm64/kernel/vdso32/Makefile | 2 +- 3 files changed, 65 insertions(+), 2 deletions(-) create mode 100644 arch/arm64/include/asm/rwonce.h diff --git a/arch/arm64/include/asm/rwonce.h b/arch/arm64/include/asm/rwonce.h new file mode 100644 index 000000000000..d78eb4cb795b --- /dev/null +++ b/arch/arm64/include/asm/rwonce.h @@ -0,0 +1,63 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (C) 2020 Google LLC. + */ +#ifndef __ASM_RWONCE_H +#define __ASM_RWONCE_H + +#ifdef CONFIG_LTO + +#include <linux/compiler_types.h> +#include <asm/alternative-macros.h> + +#ifndef BUILD_VDSO + +#ifdef CONFIG_AS_HAS_LDAPR +#define __LOAD_RCPC(sfx, regs...) \ + ALTERNATIVE( \ + "ldar" #sfx "\t" #regs, \ + ".arch_extension rcpc\n" \ + "ldapr" #sfx "\t" #regs, \ + ARM64_HAS_LDAPR) +#else +#define __LOAD_RCPC(sfx, regs...) "ldar" #sfx "\t" #regs +#endif /* CONFIG_AS_HAS_LDAPR */ + +#define __READ_ONCE(x) \ +({ \ + typeof(&(x)) __x = &(x); \ + int atomic = 1; \ + union { __unqual_scalar_typeof(*__x) __val; char __c[1]; } __u; \ + switch (sizeof(x)) { \ + case 1: \ + asm volatile(__LOAD_RCPC(b, %w0, %1) \ + : "=r" (*(__u8 *)__u.__c) \ + : "Q" (*__x) : "memory"); \ + break; \ + case 2: \ + asm volatile(__LOAD_RCPC(h, %w0, %1) \ + : "=r" (*(__u16 *)__u.__c) \ + : "Q" (*__x) : "memory"); \ + break; \ + case 4: \ + asm volatile(__LOAD_RCPC(, %w0, %1) \ + : "=r" (*(__u32 *)__u.__c) \ + : "Q" (*__x) : "memory"); \ + break; \ + case 8: \ + asm volatile(__LOAD_RCPC(, %0, %1) \ + : "=r" (*(__u64 *)__u.__c) \ + : "Q" (*__x) : "memory"); \ + break; \ + default: \ + atomic = 0; \ + } \ + atomic ? (typeof(*__x))__u.__val : (*(volatile typeof(__x))__x);\ +}) + +#endif /* !BUILD_VDSO */ +#endif /* CONFIG_LTO */ + +#include <asm-generic/rwonce.h> + +#endif /* __ASM_RWONCE_H */ diff --git a/arch/arm64/kernel/vdso/Makefile b/arch/arm64/kernel/vdso/Makefile index 45d5cfe46429..60df97f2e7de 100644 --- a/arch/arm64/kernel/vdso/Makefile +++ b/arch/arm64/kernel/vdso/Makefile @@ -28,7 +28,7 @@ ldflags-y := -shared -nostdlib -soname=linux-vdso.so.1 --hash-style=sysv \ $(btildflags-y) -T ccflags-y := -fno-common -fno-builtin -fno-stack-protector -ffixed-x18 -ccflags-y += -DDISABLE_BRANCH_PROFILING +ccflags-y += -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO CFLAGS_REMOVE_vgettimeofday.o = $(CC_FLAGS_FTRACE) -Os $(CC_FLAGS_SCS) $(GCC_PLUGINS_CFLAGS) KBUILD_CFLAGS += $(DISABLE_LTO) diff --git a/arch/arm64/kernel/vdso32/Makefile b/arch/arm64/kernel/vdso32/Makefile index d88148bef6b0..4fdf3754a058 100644 --- a/arch/arm64/kernel/vdso32/Makefile +++ b/arch/arm64/kernel/vdso32/Makefile @@ -43,7 +43,7 @@ cc32-as-instr = $(call try-run,\ # As a result we set our own flags here. 
# KBUILD_CPPFLAGS and NOSTDINC_FLAGS from top-level Makefile -VDSO_CPPFLAGS := -D__KERNEL__ -nostdinc -isystem $(shell $(CC_COMPAT) -print-file-name=include) +VDSO_CPPFLAGS := -DBUILD_VDSO -D__KERNEL__ -nostdinc -isystem $(shell $(CC_COMPAT) -print-file-name=include) VDSO_CPPFLAGS += $(LINUXINCLUDE) # Common C and assembly flags -- 2.27.0.383.g050319c2ae-goog
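
A hedged illustration of the transformation the commit message is defending
against (simplified, not taken from the kernel sources; 'a', 'b' and 'ptr' are
made-up globals). With whole-program visibility, an LTO compiler that can
prove the pointer only ever holds one of two known addresses may turn an
address dependency into a control dependency:

	/* writer */
	WRITE_ONCE(a.x, 1);
	smp_store_release(&ptr, &a);

	/* reader, as written: address dependency from READ_ONCE() */
	struct foo *p = READ_ONCE(ptr);
	r = p->x;

	/* reader, after a hypothetical LTO transformation */
	struct foo *p = READ_ONCE(ptr);
	if (p == &a)
		r = a.x;	/* load no longer depends on the value of p;  */
	else			/* the CPU may satisfy it speculatively before */
		r = b.x;	/* the load of ptr, missing the store to a.x   */

Promoting READ_ONCE() to a load-acquire (LDAPR, or LDAR as a fallback) makes
the outcome independent of whether the compiler preserves the dependency chain.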
Nick Desaulniers
2020-Jul-10 17:06 UTC
[PATCH v3 06/19] asm/rwonce: Don't pull <asm/barrier.h> into 'asm-generic/rwonce.h'
On Fri, Jul 10, 2020 at 9:52 AM Will Deacon <will at kernel.org> wrote:> > Now that 'smp_read_barrier_depends()' has gone the way of the Norwegian > Blue, drop the inclusion of <asm/barrier.h> in 'asm-generic/rwonce.h'. > > This requires fixups to some architecture vdso headers which were > previously relying on 'asm/barrier.h' coming in via 'linux/compiler.h'. > > Signed-off-by: Will Deacon <will at kernel.org> > --- > arch/arm/include/asm/vdso/gettimeofday.h | 1 + > arch/arm64/include/asm/vdso/compat_gettimeofday.h | 1 + > arch/arm64/include/asm/vdso/gettimeofday.h | 1 + > arch/riscv/include/asm/vdso/gettimeofday.h | 1 + > include/asm-generic/rwonce.h | 2 -- > include/linux/nospec.h | 2 ++ > 6 files changed, 6 insertions(+), 2 deletions(-) > > diff --git a/arch/arm/include/asm/vdso/gettimeofday.h b/arch/arm/include/asm/vdso/gettimeofday.h > index 36dc18553ed8..1b207cf07697 100644 > --- a/arch/arm/include/asm/vdso/gettimeofday.h > +++ b/arch/arm/include/asm/vdso/gettimeofday.h > @@ -7,6 +7,7 @@ > > #ifndef __ASSEMBLY__ > > +#include <asm/barrier.h> > #include <asm/errno.h> > #include <asm/unistd.h> > #include <asm/vdso/cp15.h> > diff --git a/arch/arm64/include/asm/vdso/compat_gettimeofday.h b/arch/arm64/include/asm/vdso/compat_gettimeofday.h > index b6907ae78e53..bcf7649999a4 100644 > --- a/arch/arm64/include/asm/vdso/compat_gettimeofday.h > +++ b/arch/arm64/include/asm/vdso/compat_gettimeofday.h > @@ -7,6 +7,7 @@ > > #ifndef __ASSEMBLY__ > > +#include <asm/barrier.h> > #include <asm/unistd.h> > #include <asm/errno.h> > > diff --git a/arch/arm64/include/asm/vdso/gettimeofday.h b/arch/arm64/include/asm/vdso/gettimeofday.h > index afba6ba332f8..127fa63893e2 100644 > --- a/arch/arm64/include/asm/vdso/gettimeofday.h > +++ b/arch/arm64/include/asm/vdso/gettimeofday.h > @@ -7,6 +7,7 @@ > > #ifndef __ASSEMBLY__ > > +#include <asm/barrier.h> > #include <asm/unistd.h> > > #define VDSO_HAS_CLOCK_GETRES 1 > diff --git a/arch/riscv/include/asm/vdso/gettimeofday.h b/arch/riscv/include/asm/vdso/gettimeofday.h > index c8e818688ec1..3099362d9f26 100644 > --- a/arch/riscv/include/asm/vdso/gettimeofday.h > +++ b/arch/riscv/include/asm/vdso/gettimeofday.h > @@ -4,6 +4,7 @@ > > #ifndef __ASSEMBLY__ > > +#include <asm/barrier.h> > #include <asm/unistd.h> > #include <asm/csr.h> > #include <uapi/linux/time.h> > diff --git a/include/asm-generic/rwonce.h b/include/asm-generic/rwonce.h > index cc810f1f18ca..cd0302746fb4 100644 > --- a/include/asm-generic/rwonce.h > +++ b/include/asm-generic/rwonce.h > @@ -26,8 +26,6 @@ > #include <linux/kasan-checks.h> > #include <linux/kcsan-checks.h> > > -#include <asm/barrier.h> > - > /* > * Use __READ_ONCE() instead of READ_ONCE() if you do not require any > * atomicity. Note that this may result in tears! > diff --git a/include/linux/nospec.h b/include/linux/nospec.h > index 0c5ef54fd416..c1e79f72cd89 100644 > --- a/include/linux/nospec.h > +++ b/include/linux/nospec.h > @@ -5,6 +5,8 @@ > > #ifndef _LINUX_NOSPEC_H > #define _LINUX_NOSPEC_H > + > +#include <linux/compiler.h>The other hunks LGTM, but this one is a little more curious to me. Can you walk me through this addition?> #include <asm/barrier.h> > > struct task_struct; > -- > 2.27.0.383.g050319c2ae-goog >-- Thanks, ~Nick Desaulniers
Peter Zijlstra
2020-Jul-13 10:34 UTC
[PATCH 00/18] Allow architectures to override __READ_ONCE()
On Fri, Jul 10, 2020 at 05:51:44PM +0100, Will Deacon wrote:> SeongJae Park (1): > Documentation/barriers/kokr: Remove references to > [smp_]read_barrier_depends() > > Will Deacon (18): > tools: bpf: Use local copy of headers including uapi/linux/filter.h > compiler.h: Split {READ,WRITE}_ONCE definitions out into rwonce.h > asm/rwonce: Allow __READ_ONCE to be overridden by the architecture > alpha: Override READ_ONCE() with barriered implementation > asm/rwonce: Remove smp_read_barrier_depends() invocation > asm/rwonce: Don't pull <asm/barrier.h> into 'asm-generic/rwonce.h' > vhost: Remove redundant use of read_barrier_depends() barrier > alpha: Replace smp_read_barrier_depends() usage with smp_[r]mb() > locking/barriers: Remove definitions for [smp_]read_barrier_depends() > Documentation/barriers: Remove references to > [smp_]read_barrier_depends() > tools/memory-model: Remove smp_read_barrier_depends() from informal > doc > include/linux: Remove smp_read_barrier_depends() from comments > checkpatch: Remove checks relating to [smp_]read_barrier_depends() > arm64: Reduce the number of header files pulled into vmlinux.lds.S > arm64: alternatives: Split up alternative.h > arm64: cpufeatures: Add capability for LDAPR instruction > arm64: alternatives: Remove READ_ONCE() usage during patch operation > arm64: lto: Strengthen READ_ONCE() to acquire when CONFIG_LTO=yAcked-by: Peter Zijlstra (Intel) <peterz at infradead.org>
Michael S. Tsirkin
2020-Jul-13 11:27 UTC
[PATCH v3 07/19] vhost: Remove redundant use of read_barrier_depends() barrier
On Fri, Jul 10, 2020 at 05:51:51PM +0100, Will Deacon wrote:> Since commit 76ebbe78f739 ("locking/barriers: Add implicit > smp_read_barrier_depends() to READ_ONCE()"), there is no need to use > smp_read_barrier_depends() outside of the Alpha architecture code. > > Unfortunately, there is precisely _one_ user in the vhost code, and > there isn't an obvious READ_ONCE() access making the barrier > redundant. However, on closer inspection (thanks, Jason), it appears > that vring synchronisation between the producer and consumer occurs via > the 'avail_idx' field, which is followed up by an rmb() in > vhost_get_vq_desc(), making the read_barrier_depends() redundant on > Alpha. > > Jason says: > > | I'm also confused about the barrier here, basically in driver side > | we did: > | > | 1) allocate pages > | 2) store pages in indirect->addr > | 3) smp_wmb() > | 4) increase the avail idx (somehow a tail pointer of vring) > | > | in vhost we did: > | > | 1) read avail idx > | 2) smp_rmb() > | 3) read indirect->addr > | 4) read from indirect->addr > | > | It looks to me even the data dependency barrier is not necessary > | since we have rmb() which is sufficient for us to the correct > | indirect->addr and driver are not expected to do any writing to > | indirect->addr after avail idx is increased > > Remove the redundant barrier invocation. > > Suggested-by: Jason Wang <jasowang at redhat.com> > Acked-by: Paul E. McKenney <paulmck at kernel.org> > Signed-off-by: Will Deacon <will at kernel.org>I agree Acked-by: Michael S. Tsirkin <mst at redhat.com> Pls merge with the rest of the patchset.> --- > drivers/vhost/vhost.c | 5 ----- > 1 file changed, 5 deletions(-) > > diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c > index d7b8df3edffc..74d135ee7e26 100644 > --- a/drivers/vhost/vhost.c > +++ b/drivers/vhost/vhost.c > @@ -2092,11 +2092,6 @@ static int get_indirect(struct vhost_virtqueue *vq, > return ret; > } > iov_iter_init(&from, READ, vq->indirect, ret, len); > - > - /* We will use the result as an address to read from, so most > - * architectures only need a compiler barrier here. */ > - read_barrier_depends(); > - > count = len / sizeof desc; > /* Buffers are chained via a 16 bit next field, so > * we can have at most 2^16 of these. */ > -- > 2.27.0.383.g050319c2ae-goog
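
The ordering Jason describes maps onto the usual publish/consume pattern; a
simplified sketch (field and variable names are illustrative, not the actual
vhost/virtio code):

	/* driver (producer) */
	desc[head].addr = buf_pa;	/* 2) fill the indirect table       */
	smp_wmb();			/* 3) order it before the publish   */
	vq->avail->idx = avail + 1;	/* 4) publish via avail idx         */

	/* vhost (consumer) */
	avail = vq->avail->idx;		/* 1) read avail idx                */
	smp_rmb();			/* 2) pairs with the smp_wmb()      */
	addr = desc[head].addr;		/* 3) now safe to read, and         */
	payload = read_from(addr);	/* 4) to dereference                */

Since the smp_rmb() already orders the reads of the descriptor contents after
the read of avail_idx, and the driver does not touch the descriptors after
publishing, the extra read_barrier_depends() between steps 3 and 4 adds
nothing, which is why the patch can drop it.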
boqun.feng at gmail.com
2020-Jul-13 12:23 UTC
[PATCH v3 02/19] compiler.h: Split {READ, WRITE}_ONCE definitions out into rwonce.h
On Fri, Jul 10, 2020 at 05:51:46PM +0100, Will Deacon wrote:> In preparation for allowing architectures to define their own > implementation of the READ_ONCE() macro, move the generic > {READ,WRITE}_ONCE() definitions out of the unwieldy 'linux/compiler.h' > file and into a new 'rwonce.h' header under 'asm-generic'. > > Acked-by: Paul E. McKenney <paulmck at kernel.org> > Signed-off-by: Will Deacon <will at kernel.org> > --- > include/asm-generic/Kbuild | 1 + > include/asm-generic/barrier.h | 2 +- > include/asm-generic/rwonce.h | 91 +++++++++++++++++++++++++++++++++++ > include/linux/compiler.h | 83 +------------------------------- > 4 files changed, 95 insertions(+), 82 deletions(-) > create mode 100644 include/asm-generic/rwonce.h > > diff --git a/include/asm-generic/Kbuild b/include/asm-generic/Kbuild > index 44ec80e70518..74b0612601dd 100644 > --- a/include/asm-generic/Kbuild > +++ b/include/asm-generic/Kbuild > @@ -45,6 +45,7 @@ mandatory-y += pci.h > mandatory-y += percpu.h > mandatory-y += pgalloc.h > mandatory-y += preempt.h > +mandatory-y += rwonce.h > mandatory-y += sections.h > mandatory-y += serial.h > mandatory-y += shmparam.h > diff --git a/include/asm-generic/barrier.h b/include/asm-generic/barrier.h > index 2eacaf7d62f6..8116744bb82c 100644 > --- a/include/asm-generic/barrier.h > +++ b/include/asm-generic/barrier.h > @@ -13,7 +13,7 @@ > > #ifndef __ASSEMBLY__ > > -#include <linux/compiler.h> > +#include <asm/rwonce.h> > > #ifndef nop > #define nop() asm volatile ("nop") > diff --git a/include/asm-generic/rwonce.h b/include/asm-generic/rwonce.h > new file mode 100644 > index 000000000000..92cc2f223cb3 > --- /dev/null > +++ b/include/asm-generic/rwonce.h > @@ -0,0 +1,91 @@ > +/* SPDX-License-Identifier: GPL-2.0 */ > +/* > + * Prevent the compiler from merging or refetching reads or writes. The > + * compiler is also forbidden from reordering successive instances of > + * READ_ONCE and WRITE_ONCE, but only when the compiler is aware of some > + * particular ordering. One way to make the compiler aware of ordering is to > + * put the two invocations of READ_ONCE or WRITE_ONCE in different C > + * statements. > + * > + * These two macros will also work on aggregate data types like structs or > + * unions. > + * > + * Their two major use cases are: (1) Mediating communication between > + * process-level code and irq/NMI handlers, all running on the same CPU, > + * and (2) Ensuring that the compiler does not fold, spindle, or otherwise > + * mutilate accesses that either do not require ordering or that interact > + * with an explicit memory barrier or atomic instruction that provides the > + * required ordering. > + */ > +#ifndef __ASM_GENERIC_RWONCE_H > +#define __ASM_GENERIC_RWONCE_H > + > +#ifndef __ASSEMBLY__ > + > +#include <linux/compiler_types.h> > +#include <linux/kasan-checks.h> > +#include <linux/kcsan-checks.h> > + > +#include <asm/barrier.h> > + > +/* > + * Use __READ_ONCE() instead of READ_ONCE() if you do not require any > + * atomicity or dependency ordering guarantees. Note that this may result > + * in tears! > + */ > +#define __READ_ONCE(x) (*(const volatile __unqual_scalar_typeof(x) *)&(x)) > + > +#define __READ_ONCE_SCALAR(x) \ > +({ \ > + __unqual_scalar_typeof(x) __x = __READ_ONCE(x); \ > + smp_read_barrier_depends(); \ > + (typeof(x))__x; \ > +}) > + > +#define READ_ONCE(x) \ > +({ \ > + compiletime_assert_rwonce_type(x); \Does it make sense if we also move the definition of this compile time assertion into rwonce.h too? 
Regards,
Boqun

> +	__READ_ONCE_SCALAR(x);					\
> +})
> +
[...]
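
For reference, the assertion Boqun is asking about is defined in
linux/compiler_types.h at this point in the series; from memory it reads
roughly as follows (treat the exact wording as approximate):

	#define compiletime_assert_rwonce_type(t)					\
		compiletime_assert(__native_word(t) || sizeof(t) == sizeof(long long),	\
			"Unsupported access size for {READ,WRITE}_ONCE().")

so moving it into rwonce.h would also mean making sure __native_word() and
compiletime_assert() remain visible to users of that header, which may be
part of the trade-off here.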
Pavel Machek
2020-Jul-28 20:40 UTC
[PATCH v3 19/19] arm64: lto: Strengthen READ_ONCE() to acquire when CONFIG_LTO=y
On Fri 2020-07-10 17:52:03, Will Deacon wrote:> When building with LTO, there is an increased risk of the compiler > converting an address dependency headed by a READ_ONCE() invocation > into a control dependency and consequently allowing for harmful > reordering by the CPU. > > Ensure that such transformations are harmless by overriding the generic > READ_ONCE() definition with one that provides acquire semantics when > building with LTO.Traditionally, READ_ONCE had only effects on compiler optimalizations, not on special semantics of the load instruction. Do you have example how LTO optimalizations break the code? Should some documentation be added? Because I believe users will need to understand what is going on there. It is not LTO-only problem and it is not arm64-only problem, right? Best regards, Pavel> +#ifdef CONFIG_AS_HAS_LDAPR > +#define __LOAD_RCPC(sfx, regs...) \ > + ALTERNATIVE( \ > + "ldar" #sfx "\t" #regs, \ > + ".arch_extension rcpc\n" \ > + "ldapr" #sfx "\t" #regs, \ > + ARM64_HAS_LDAPR) > +#else > +#define __LOAD_RCPC(sfx, regs...) "ldar" #sfx "\t" #regs > +#endif /* CONFIG_AS_HAS_LDAPR */ > + > +#define __READ_ONCE(x) \ > +({ \ > + typeof(&(x)) __x = &(x); \ > + int atomic = 1; \ > + union { __unqual_scalar_typeof(*__x) __val; char __c[1]; } __u; \ > + switch (sizeof(x)) { \ > + case 1: \ > + asm volatile(__LOAD_RCPC(b, %w0, %1) \ > + : "=r" (*(__u8 *)__u.__c) \ > + : "Q" (*__x) : "memory"); \ > + break; \ > + case 2: \ > + asm volatile(__LOAD_RCPC(h, %w0, %1) \ > + : "=r" (*(__u16 *)__u.__c) \ > + : "Q" (*__x) : "memory"); \ > + break; \ > + case 4: \ > + asm volatile(__LOAD_RCPC(, %w0, %1) \ > + : "=r" (*(__u32 *)__u.__c) \ > + : "Q" (*__x) : "memory"); \ > + break; \ > + case 8: \ > + asm volatile(__LOAD_RCPC(, %0, %1) \ > + : "=r" (*(__u64 *)__u.__c) \ > + : "Q" (*__x) : "memory"); \ > + break; \ > + default: \ > + atomic = 0; \ > + } \ > + atomic ? (typeof(*__x))__u.__val : (*(volatile typeof(__x))__x);\ > +}) > + > +#endif /* !BUILD_VDSO */ > +#endif /* CONFIG_LTO */ > + > +#include <asm-generic/rwonce.h> > + > +#endif /* __ASM_RWONCE_H */ > diff --git a/arch/arm64/kernel/vdso/Makefile b/arch/arm64/kernel/vdso/Makefile > index 45d5cfe46429..60df97f2e7de 100644 > --- a/arch/arm64/kernel/vdso/Makefile > +++ b/arch/arm64/kernel/vdso/Makefile > @@ -28,7 +28,7 @@ ldflags-y := -shared -nostdlib -soname=linux-vdso.so.1 --hash-style=sysv \ > $(btildflags-y) -T > > ccflags-y := -fno-common -fno-builtin -fno-stack-protector -ffixed-x18 > -ccflags-y += -DDISABLE_BRANCH_PROFILING > +ccflags-y += -DDISABLE_BRANCH_PROFILING -DBUILD_VDSO > > CFLAGS_REMOVE_vgettimeofday.o = $(CC_FLAGS_FTRACE) -Os $(CC_FLAGS_SCS) $(GCC_PLUGINS_CFLAGS) > KBUILD_CFLAGS += $(DISABLE_LTO) > diff --git a/arch/arm64/kernel/vdso32/Makefile b/arch/arm64/kernel/vdso32/Makefile > index d88148bef6b0..4fdf3754a058 100644 > --- a/arch/arm64/kernel/vdso32/Makefile > +++ b/arch/arm64/kernel/vdso32/Makefile > @@ -43,7 +43,7 @@ cc32-as-instr = $(call try-run,\ > # As a result we set our own flags here. > > # KBUILD_CPPFLAGS and NOSTDINC_FLAGS from top-level Makefile > -VDSO_CPPFLAGS := -D__KERNEL__ -nostdinc -isystem $(shell $(CC_COMPAT) -print-file-name=include) > +VDSO_CPPFLAGS := -DBUILD_VDSO -D__KERNEL__ -nostdinc -isystem $(shell $(CC_COMPAT) -print-file-name=include) > VDSO_CPPFLAGS += $(LINUXINCLUDE) > > # Common C and assembly flags > -- > 2.27.0.383.g050319c2ae-goog-- (english) http://www.livejournal.com/~pavelmachek (cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html