search for: __smp_store_mb

Displaying 12 results from an estimated 25 matches for "__smp_store_mb".

2015 Dec 31
0
[PATCH v2 31/32] sh: support a 2-byte smp_store_mb
...diff --git a/arch/sh/include/asm/barrier.h b/arch/sh/include/asm/barrier.h
index f887c64..0cc5735 100644
--- a/arch/sh/include/asm/barrier.h
+++ b/arch/sh/include/asm/barrier.h
@@ -32,7 +32,15 @@
 #define ctrl_barrier()	__asm__ __volatile__ ("nop;nop;nop;nop;nop;nop;nop;nop")
 #endif
-#define __smp_store_mb(var, value) do { (void)xchg(&var, value); } while (0)
+#define __smp_store_mb(var, value) do {			\
+	if (sizeof(var) != 4 && sizeof(var) != 1) {	\
+		WRITE_ONCE(var, value);			\
+		__smp_mb();				\
+	} else {					\
+		(void)xchg(&var, value);		\
+	}						\
+} while (0)
+
 #define smp_store_mb(var,...
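sh has xchg() only for 4-byte and 1-byte operands, so for other sizes the patch falls back to a plain store followed by a full barrier, which gives the same ordering guarantee. A minimal user-space sketch of the two equivalent strategies, using C11 atomics in place of the kernel's xchg()/WRITE_ONCE()/__smp_mb() (function names here are illustrative, not kernel API):

	#include <stdatomic.h>

	static _Atomic int flag;

	/* Strategy 1: an atomic exchange is itself a full barrier. */
	static void store_mb_via_xchg(int value)
	{
		(void)atomic_exchange(&flag, value);
	}

	/* Strategy 2: plain store, then a full fence -- same guarantee,
	 * usable when no suitably sized exchange instruction exists
	 * (e.g. 2-byte operands on sh). */
	static void store_mb_via_fence(int value)
	{
		atomic_store_explicit(&flag, value, memory_order_relaxed);
		atomic_thread_fence(memory_order_seq_cst);
	}

The xchg form stays the default where possible because on many architectures a locked exchange is cheaper than a separate full fence.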
2016 Jan 19
1
virtio pull for 4.5 (was Re: [PULL] virtio: barrier rework+fixes)
...with this pull request.

> If there's an issue, pls let me know!

It just wasn't pulled earlier because I wasn't 100% sure I wanted the extra indirection. Oh well, pulled now.

One question:

- the arch/sh/ part of the patch looks dubious. Why does it do that

	#define smp_store_mb(var, value) __smp_store_mb(var, value)

  despite the commit log saying it's done by asm-generic?

I haven't pushed out yet, my allmodconfig sanity-check build is still going..

	Linus
2016 Jan 21
0
[PATCH] sh: fix smp_store_mb for !SMP
...ls let me know.

 arch/sh/include/asm/barrier.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/arch/sh/include/asm/barrier.h b/arch/sh/include/asm/barrier.h
index f887c64..8a84e05 100644
--- a/arch/sh/include/asm/barrier.h
+++ b/arch/sh/include/asm/barrier.h
@@ -33,7 +33,6 @@
 #endif
 #define __smp_store_mb(var, value) do { (void)xchg(&var, value); } while (0)
-#define smp_store_mb(var, value) __smp_store_mb(var, value)
 #include <asm-generic/barrier.h>
--
MST
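The deleted line matters because asm-generic/barrier.h already provides smp_store_mb(), forwarding to the arch's __smp_store_mb() only on SMP builds; sh's unconditional define overrode the cheaper !SMP fallback, forcing an xchg even on UP kernels. A condensed sketch of the generic logic (paraphrased from the v4.5-era include/asm-generic/barrier.h; check the tree for the exact text):

	#ifdef CONFIG_SMP
	#ifndef smp_store_mb
	#define smp_store_mb(var, value)  __smp_store_mb(var, value)
	#endif
	#else	/* !CONFIG_SMP */
	#ifndef smp_store_mb
	#define smp_store_mb(var, value)  do { WRITE_ONCE(var, value); barrier(); } while (0)
	#endif
	#endif	/* CONFIG_SMP */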
2017 Oct 27
1
[PATCH v6] x86: use lock+addl for smp_mb()
...fine __smp_mb() asm volatile("lock; addl $0,-4(%%esp)" ::: "memory", "cc")
+#else
+#define __smp_mb() asm volatile("lock; addl $0,-4(%%rsp)" ::: "memory", "cc")
+#endif
 #define __smp_rmb() dma_rmb()
 #define __smp_wmb() barrier()
 #define __smp_store_mb(var, value) do { (void)xchg(&var, value); } while (0)

diff --git a/tools/virtio/ringtest/main.h b/tools/virtio/ringtest/main.h
index 90b0133..5706e07 100644
--- a/tools/virtio/ringtest/main.h
+++ b/tools/virtio/ringtest/main.h
@@ -110,11 +110,15 @@ static inline void busy_wait(void)
 	barrier(...
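The patch keys the operand register on word size (%esp on 32-bit, %rsp on 64-bit). A stand-alone user-space rendering of the same barrier (a sketch assuming x86-64 and a GCC-style compiler; the function name is illustrative):

	/* Full memory barrier via a locked no-op add to the stack, as in
	 * the patch above. The -132 offset keeps the store clear of the
	 * 128-byte System V red zone, which user-space code (unlike the
	 * kernel, built with -mno-red-zone) must not clobber; the
	 * in-kernel variant can use -4(%rsp) directly. */
	static inline void smp_mb_lock_addl(void)
	{
		asm volatile("lock; addl $0,-132(%%rsp)" ::: "memory", "cc");
	}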
2015 Dec 30
46
[PATCH 00/34] arch: barrier cleanup + __smp_XXX barriers for virt
...re. Finally, the following patches put the __smp_XXX APIs to work for virt:

-. Patches 29-31 convert virtio and xen drivers to use the __smp_XXX APIs.
   The xen patches are untested; the virtio ones have been tested on x86.
-. Patches 33-34 teach virtio to use __smp_load_acquire/__smp_store_release/__smp_store_mb.
   This is what started all this work. Tested on x86.

The patchset has been in linux-next for a bit, so far without issues.

Michael S. Tsirkin (34):
  Documentation/memory-barriers.txt: document __smb_mb()
  asm-generic: guard smp_store_release/load_acquire
  ia64: rename nop->iosapic_nop...
2015 Dec 31
54
[PATCH v2 00/34] arch: barrier cleanup + barriers for virt
Changes since v1:
- replaced my asm-generic patch with an equivalent patch already in tip
- added wrappers with a virt_ prefix for better code annotation, as suggested by David Miller
- dropped XXX in patch names as this makes vger choke; Cc'd all relevant mailing lists on all patches (not personal email, as the list becomes too long then)

I parked this in vhost tree for now, but the...
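The virt_ wrappers are one-to-one aliases for the __smp_ ops, unconditionally: a guest talking to a hypervisor needs real barriers even in a !SMP kernel. Condensed from the asm-generic/barrier.h additions in this series (paraphrased shape):

	#define virt_mb()			__smp_mb()
	#define virt_rmb()			__smp_rmb()
	#define virt_wmb()			__smp_wmb()
	#define virt_store_mb(var, value)	__smp_store_mb(var, value)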
2016 Jan 10
0
[PATCH v3 27/41] x86: define __smp_xxx
...se /* !SMP */
-#define smp_mb() barrier()
-#define smp_rmb() barrier()
-#define smp_wmb() barrier()
-#define smp_store_mb(var, value) do { WRITE_ONCE(var, value); barrier(); } while (0)
-#endif /* SMP */
+#define __smp_mb() mb()
+#define __smp_rmb() dma_rmb()
+#define __smp_wmb() barrier()
+#define __smp_store_mb(var, value) do { (void)xchg(&var, value); } while (0)

 #if defined(CONFIG_X86_PPRO_FENCE)
@@ -50,31 +43,31 @@
  * model and we should fall back to full barriers.
  */
-#define smp_store_release(p, v) \
+#define __smp_store_release(p, v) \
 do { \
 	compiletime_assert_atom...
2016 Jan 27
0
[PATCH v4 5/5] x86: drop mfence in favor of lock+addl
..."mfence":::"memory")
@@ -30,7 +30,7 @@
 #endif
 #define dma_wmb() barrier()
-#define __smp_mb() mb()
+#define __smp_mb() asm volatile("lock; addl $0,-4(%%esp)" ::: "memory", "cc")
 #define __smp_rmb() dma_rmb()
 #define __smp_wmb() barrier()
 #define __smp_store_mb(var, value) do { (void)xchg(&var, value); } while (0)
--
MST
2016 Jan 10
48
[PATCH v3 00/41] arch: barrier cleanup + barriers for virt
Changes since v2:
- extended checkpatch tests for barriers, and added patches teaching it to warn about incorrect usage of barriers (__smp_xxx barriers are for use by asm-generic code only); this should help prevent misuse by arch code, and addresses comments by Russell King
- patched more instances of xen to use virt_ barriers, as suggested by Stefano Stabellini
- implemented a 2-byte xchg on sh...
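On a CPU whose atomics only cover word (and possibly byte) operands, a 2-byte xchg is conventionally emulated with a compare-and-swap loop on the aligned 32-bit word containing the halfword. A generic C11 sketch of that technique (illustrative only, not sh's actual llsc/gUSA implementation; little-endian layout assumed, and the pointer cast is not strictly portable C):

	#include <stdint.h>
	#include <stdatomic.h>

	static uint16_t xchg_u16(uint16_t *ptr, uint16_t new)
	{
		uintptr_t addr = (uintptr_t)ptr;
		/* The aligned 32-bit word that contains *ptr. */
		_Atomic uint32_t *word = (_Atomic uint32_t *)(addr & ~(uintptr_t)3);
		unsigned int shift = (addr & 2) * 8;	/* 0 or 16 (little-endian) */
		uint32_t mask = (uint32_t)0xffff << shift;
		uint32_t old, repl;

		old = atomic_load(word);
		do {
			/* Splice the new halfword into the containing word. */
			repl = (old & ~mask) | ((uint32_t)new << shift);
		} while (!atomic_compare_exchange_weak(word, &old, repl));

		return (uint16_t)((old & mask) >> shift);
	}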
2016 Jan 27
6
[PATCH v4 0/5] x86: faster smp_mb()+documentation tweaks
mb() typically uses mfence on modern x86, but a micro-benchmark shows that it's 2 to 3 times slower than the lock; addl we use on older CPUs. So we really should use the locked variant everywhere, except that the Intel manual says that clflush is only ordered by mfence, so we can't. Note: some callers of clflush seem to assume sfence will order it, so there could be existing bugs around...
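The 2-3x figure is straightforward to reproduce. A minimal rdtsc-based micro-benchmark (a sketch assuming x86-64 and GCC; back-to-back barriers in a tight loop, so it measures barrier throughput rather than any real workload):

	#include <stdio.h>
	#include <x86intrin.h>	/* __rdtsc() */

	#define ITERS 10000000UL

	int main(void)
	{
		unsigned long i;
		unsigned long long t0, t1;

		t0 = __rdtsc();
		for (i = 0; i < ITERS; i++)
			asm volatile("mfence" ::: "memory");
		t1 = __rdtsc();
		printf("mfence:    %.1f cycles/op\n", (double)(t1 - t0) / ITERS);

		t0 = __rdtsc();
		for (i = 0; i < ITERS; i++)
			/* -132(%rsp): stay clear of the 128-byte red zone. */
			asm volatile("lock; addl $0,-132(%%rsp)" ::: "memory", "cc");
		t1 = __rdtsc();
		printf("lock addl: %.1f cycles/op\n", (double)(t1 - t0) / ITERS);
		return 0;
	}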
2016 Jan 13
3
[PULL] virtio: barrier rework+fixes
The following changes since commit afd2ff9b7e1b367172f18ba7f693dfb62bdcb2dc:

  Linux 4.4 (2016-01-10 15:01:32 -0800)

are available in the git repository at:

  git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git tags/for_linus

for you to fetch changes up to 43e361f23c49dbddf74f56ddf6cdd85c5dbff6da:

  checkpatch: add virt barriers (2016-01-12 20:47:08 +0200)
2019 Nov 08
15
[PATCH 00/13] Finish off [smp_]read_barrier_depends()
Hi all,

Although [smp_]read_barrier_depends() became part of READ_ONCE() in commit 76ebbe78f739 ("locking/barriers: Add implicit smp_read_barrier_depends() to READ_ONCE()"), it still limps on in the Linux memory model with the sinister hope of attracting innocent new users so that it becomes impossible to remove altogether. Let's strike before it's too late: there's only...
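For context, the pattern smp_read_barrier_depends() served is a dependent load on the consumer side of pointer publication; since commit 76ebbe78f739 that ordering is implied by READ_ONCE() itself. A C11 sketch of the consumer side (hypothetical struct; memory_order_consume is in practice promoted to acquire by compilers):

	#include <stdatomic.h>
	#include <stddef.h>

	struct foo {
		int val;
	};

	static struct foo *_Atomic published;

	/* The read of p->val must not be reordered before the read of
	 * 'published'. Only Alpha ever needed an explicit barrier between
	 * the two loads; READ_ONCE() (and C11 consume/acquire) supplies it. */
	static int consume_one(void)
	{
		struct foo *p = atomic_load_explicit(&published,
						     memory_order_consume);
		if (!p)
			return -1;
		return p->val;	/* dependency-ordered after the pointer load */
	}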