search for: __atomic_seq_cst

Displaying 18 results from an estimated 18 matches for "__atomic_seq_cst".

2018 Jan 25
2
[PATCH net-next 12/12] tools/virtio: fix smp_mb on x86
...#if defined(__x86_64__) || defined(__i386__) -#define smp_mb() asm volatile("lock; addl $0,-128(%%rsp)" ::: "memory", "cc") +#define smp_mb() asm volatile("lock; addl $0,-132(%%rsp)" ::: "memory", "cc") #else /* * Not using __ATOMIC_SEQ_CST since gcc docs say they are only synchronized -- MST
2018 Jan 25
2
[PATCH net-next 12/12] tools/virtio: fix smp_mb on x86
...#if defined(__x86_64__) || defined(__i386__) -#define smp_mb() asm volatile("lock; addl $0,-128(%%rsp)" ::: "memory", "cc") +#define smp_mb() asm volatile("lock; addl $0,-132(%%rsp)" ::: "memory", "cc") #else /* * Not using __ATOMIC_SEQ_CST since gcc docs say they are only synchronized -- MST
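The two hits above are the same cross-posted patch. For context, here is a minimal sketch of the idiom it touches, limited to x86-64 where the quoted offsets matter (this is not the actual tools/virtio/ringtest/main.h): a full barrier built as a locked add of zero to a dummy stack slot. Moving the operand from -128(%rsp) to -132(%rsp) takes it off the last word of the 128-byte System V red zone and places it just below it.

  /*
   * Sketch only: smp_mb() as a locked no-op read-modify-write of a dummy
   * stack slot. Any locked instruction is a full memory barrier on x86;
   * -132(%rsp) lies entirely below the 128-byte red zone, so the dummy
   * operand no longer overlaps bytes a leaf function may use for live data
   * (as -128(%rsp) did).
   */
  #if defined(__x86_64__)
  #define smp_mb() asm volatile("lock; addl $0,-132(%%rsp)" ::: "memory", "cc")
  #else
  /* Portable fallback, as in the quoted header. */
  #define smp_mb() __sync_synchronize()
  #endif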
2013 Mar 17
2
[LLVMdev] Running cross compiled binaries for ARM on gem5
...rary call yet CV_XADD(refcount, 1); ^~~~~~~~~~~~~~~~~~~~ /home/silky/VecProject/opencv/OpenCVInstall/arm/include/opencv2/core/operations.hpp:61:38: note: expanded from macro 'CV_XADD' #define CV_XADD(addr, delta) __c11_atomic_fetch_add((_Atomic(int)*)(addr), (delta), __ATOMIC_SEQ_CST) Could someone please suggest what I am missing here, or what the error indicates? Thank you all. -- View this message in context: http://llvm.1065342.n5.nabble.com/Running-cross-compiled-binaries-for-ARM-on-gem5-tp55767p56023.html Sent from the LLVM - Dev mailing list archive at Nabble.com.
2013 Mar 18
0
[LLVMdev] Running cross compiled binaries for ARM on gem5
...refcount, 1); > ^~~~~~~~~~~~~~~~~~~~ > > /home/silky/VecProject/opencv/OpenCVInstall/arm/include/opencv2/core/operations.hpp:61:38: > note: expanded from macro 'CV_XADD' > #define CV_XADD(addr, delta) > __c11_atomic_fetch_add((_Atomic(int)*)(addr), (delta), __ATOMIC_SEQ_CST) > This is odd. This atomic is implemented in CGAtomic.cpp, but it's being lowered as a library call because "UseLibcall" is true: bool UseLibcall = (Size != Align || getContext().toBits(sizeChars) > MaxInlineWidthInBits); I don't think it should in y...
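The CGAtomic.cpp explanation above is easier to follow next to a self-contained version of the construct. A sketch, assuming clang (the __c11_* builtins are clang-specific); whether the fetch-add stays inline or is lowered to an __atomic_* library call is decided by the quoted UseLibcall check, i.e. by the operand's size and alignment versus the target's MaxInlineWidthInBits:

  /* Sketch of the CV_XADD construct: a seq_cst fetch-and-add through clang's
   * __c11_atomic_fetch_add builtin, including the cast from a plain int* to
   * _Atomic(int)* that the OpenCV macro performs. */
  #include <stdio.h>

  #define CV_XADD_SKETCH(addr, delta) \
          __c11_atomic_fetch_add((_Atomic(int) *)(addr), (delta), __ATOMIC_SEQ_CST)

  static int refcount = 1;

  int main(void)
  {
          int old = CV_XADD_SKETCH(&refcount, 1);   /* returns the previous value */
          printf("refcount: %d -> %d\n", old, refcount);
          return 0;
  }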
2016 Jan 14
2
RFC: non-temporal fencing in LLVM IR
...isks unexpected coherence miss problems, though they would >> probably be very rare. But they would be very surprising if they did occur. >> > > Today's LLVM already emits 'lock or %eax, (%esp)' for 'fence > seq_cst'/__sync_synchronize/__atomic_thread_fence(__ATOMIC_SEQ_CST) when > targeting 32-bit x86 machines which do not support mfence. What > instruction sequence should we be using instead? > Do they have non-temporal accesses in the ISA? On Wed, Jan 13, 2016 at 10:59 AM, Tim Northover <t.p.northover at gmail.com> >> wrote: >> >...
2016 Jan 14
2
RFC: non-temporal fencing in LLVM IR
...though they would >>>> probably be very rare. But they would be very surprising if they did occur. >>>> >>> >>> Today's LLVM already emits 'lock or %eax, (%esp)' for 'fence >>> seq_cst'/__sync_synchronize/__atomic_thread_fence(__ATOMIC_SEQ_CST) when >>> targeting 32-bit x86 machines which do not support mfence. What >>> instruction sequence should we be using instead? >>> >> >> Do they have non-temporal accesses in the ISA? >> > > I thought not but there appear to be instructions like m...
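For reference, the three spellings named in this exchange all request the same full sequentially consistent fence at the source level; the 'lock or ... (%esp)' sequence in the quote is what the backend falls back to on 32-bit x86 targets without mfence (i.e. without SSE2). A minimal sketch:

  #include <stdatomic.h>

  /* Equivalent ways of asking for a seq_cst fence; on a 32-bit x86 target
   * that lacks mfence the compiler lowers each of these to a locked
   * read-modify-write of a stack slot instead. */
  void fence_c11(void)     { atomic_thread_fence(memory_order_seq_cst); }
  void fence_builtin(void) { __atomic_thread_fence(__ATOMIC_SEQ_CST); }
  void fence_legacy(void)  { __sync_synchronize(); }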
2017 Oct 27
1
[PATCH v6] x86: use lock+addl for smp_mb()
...n.h +++ b/tools/virtio/ringtest/main.h @@ -110,11 +110,15 @@ static inline void busy_wait(void) barrier(); } +#if defined(__x86_64__) || defined(__i386__) +#define smp_mb() asm volatile("lock; addl $0,-128(%%rsp)" ::: "memory", "cc") +#else /* * Not using __ATOMIC_SEQ_CST since gcc docs say they are only synchronized * with other __ATOMIC_SEQ_CST calls. */ #define smp_mb() __sync_synchronize() +#endif /* * This abuses the atomic builtins for thread fences, and -- MST
2017 Oct 27
1
[PATCH v6] x86: use lock+addl for smp_mb()
...n.h +++ b/tools/virtio/ringtest/main.h @@ -110,11 +110,15 @@ static inline void busy_wait(void) barrier(); } +#if defined(__x86_64__) || defined(__i386__) +#define smp_mb() asm volatile("lock; addl $0,-128(%%rsp)" ::: "memory", "cc") +#else /* * Not using __ATOMIC_SEQ_CST since gcc docs say they are only synchronized * with other __ATOMIC_SEQ_CST calls. */ #define smp_mb() __sync_synchronize() +#endif /* * This abuses the atomic builtins for thread fences, and -- MST
2013 Mar 18
2
[LLVMdev] Running cross compiled binaries for ARM on gem5
...^~~~~~~~~~~~~~~~~~~~ > /home/silky/VecProject/opencv/OpenCVInstall/arm/include/opencv2/core/operations.hpp:61:38: > note: expanded from macro 'CV_XADD' > #define CV_XADD(addr, delta) > __c11_atomic_fetch_add((_Atomic(int)*)(addr), (delta), > __ATOMIC_SEQ_CST) > > > This is odd. This atomic is implemented in CGAtomic.cpp, but it's > being lowered as a library call because "UseLibcall" is true: > > bool UseLibcall = (Size != Align || > getContext().toBits(sizeChars) > > MaxInlineWidthInBits)...
2016 Jan 20
0
[PATCH] tools/virtio: use virt_xxx barriers
...rier() +#define virt_wmb() barrier() +/* Atomic store should be enough, but gcc generates worse code in that case. */ +#define virt_store_mb(var, value) do { \ + typeof(var) virt_store_mb_value = (value); \ + __atomic_exchange(&(var), &virt_store_mb_value, &virt_store_mb_value, \ + __ATOMIC_SEQ_CST); \ + barrier(); \ +} while (0); /* Weak barriers should be used. If not - it's a bug */ -# define rmb() abort() -# define wmb() abort() +# define mb() abort() +# define rmb() abort() +# define wmb() abort() #else #error Please fill in barrier macros #endif diff --git a/tools/virtio/linux/c...
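A standalone sketch of the virt_store_mb() idea in that diff: a store that doubles as a full barrier, done with the generic __atomic_exchange builtin, which compilers turn into a single xchg on x86 (implicitly locked). The alternative, a plain seq_cst atomic store, is what the patch comment calls worse code — gcc at the time emitted it as mov plus mfence. Macro and variable names below are illustrative, not the tools/virtio ones.

  /* Sketch: store `value` into `var` and act as a full memory barrier.
   * __atomic_exchange(ptr, val, ret, order) stores *val into *ptr and writes
   * the previous contents to *ret; here the old value is simply discarded
   * into the same temporary. */
  #define store_mb_sketch(var, value) do {                                \
          typeof(var) store_mb_tmp = (value);                             \
          __atomic_exchange(&(var), &store_mb_tmp, &store_mb_tmp,         \
                            __ATOMIC_SEQ_CST);                            \
  } while (0)

  static unsigned int head;
  static inline void publish_head(unsigned int v) { store_mb_sketch(head, v); }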
2018 Jan 26
0
[PATCH net-next 12/12] tools/virtio: fix smp_mb on x86
..."lock; addl $0,-128(%%rsp)" ::: "memory", "cc") Just wonder did "rsp" work for __i386__ ? Thanks > +#define smp_mb() asm volatile("lock; addl $0,-132(%%rsp)" ::: "memory", "cc") > #else > /* > * Not using __ATOMIC_SEQ_CST since gcc docs say they are only synchronized
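The question is fair: %rsp does not exist in 32-bit mode, so an #if covering both __x86_64__ and __i386__ cannot use it verbatim there. A hypothetical per-arch split (offsets illustrative; i386 has no red zone to step around):

  #if defined(__x86_64__)
  #define smp_mb() asm volatile("lock; addl $0,-132(%%rsp)" ::: "memory", "cc")
  #elif defined(__i386__)
  /* 32-bit mode only has %esp, and no red zone below it. */
  #define smp_mb() asm volatile("lock; addl $0,-4(%%esp)" ::: "memory", "cc")
  #endif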
2016 Jan 20
0
[PATCH] tools/virtio: use virt_xxx barriers
...rier() +#define virt_wmb() barrier() +/* Atomic store should be enough, but gcc generates worse code in that case. */ +#define virt_store_mb(var, value) do { \ + typeof(var) virt_store_mb_value = (value); \ + __atomic_exchange(&(var), &virt_store_mb_value, &virt_store_mb_value, \ + __ATOMIC_SEQ_CST); \ + barrier(); \ +} while (0); /* Weak barriers should be used. If not - it's a bug */ -# define rmb() abort() -# define wmb() abort() +# define mb() abort() +# define rmb() abort() +# define wmb() abort() #else #error Please fill in barrier macros #endif diff --git a/tools/virtio/linux/c...
2013 Mar 11
0
[LLVMdev] Running cross compiled binaries for ARM on gem5
Hi Silky, If I got it correctly, you seem to be trying to run a bare-metal image on your model, but you compile with linux-gnueabi GCC. I don't know if that will make a difference, but I'd try to use a none-eabi GCC toolchain and set the -target armv7a-none-eabi just in case. On 10 March 2013 00:26, Silky Arora <silkyar at umich.edu> wrote: > Most of the search results talk about
2016 Jan 14
4
RFC: non-temporal fencing in LLVM IR
I agree with Tim's assessment for ARM. That's interesting; I wasn't previously aware of that instruction. My understanding is that Alpha would have the same problem for normal loads. I'm all in favor of more systematic handling of the fences associated with x86 non-temporal accesses. AFAICT, nontemporal loads and stores seem to have different fencing rules on x86, none of them
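Since the thread keeps returning to the fences associated with x86 non-temporal accesses, here is a small sketch of the producer-side pattern under discussion (names and the release flag are illustrative; compile with SSE2 enabled):

  #include <emmintrin.h>   /* _mm_stream_si32; pulls in _mm_sfence too */
  #include <stdatomic.h>

  static int payload;
  static atomic_int ready;

  void produce(int v)
  {
          _mm_stream_si32(&payload, v);   /* non-temporal (write-combining) store */
          _mm_sfence();                   /* movnt stores are weakly ordered, so
                                             fence before publishing the flag */
          atomic_store_explicit(&ready, 1, memory_order_release);
  }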
2013 Mar 10
2
[LLVMdev] Running cross compiled binaries for ARM on gem5
Hi, I am trying to optimize some benchmarks using LLVM and run them on gem5 simulator (built for ARM). I am using Sourcery Codebench cross-compiler for ARM on my x86 machine. My steps up till now have been using the following commands. 1. clang -static -emit-llvm -march=armv7-a -mfloat-abi=soft -target arm-elf a.cpp -c -integrated-as \
2016 Jan 21
1
[PATCH] tools/virtio: add ringtest utilities
...u_relax() asm ("rep; nop" ::: "memory") +#else +#define cpu_relax() assert(0) +#endif + +extern bool do_relax; + +static inline void busy_wait(void) +{ + if (do_relax) + cpu_relax(); + else + /* prevent compiler from removing busy loops */ + barrier(); +} + +/* + * Not using __ATOMIC_SEQ_CST since gcc docs say they are only synchronized + * with other __ATOMIC_SEQ_CST calls. + */ +#define smp_mb() __sync_synchronize() + +/* + * This abuses the atomic builtins for thread fences, and + * adds a compiler barrier. + */ +#define smp_release() do { \ + barrier(); \ + __atomic_thread_fe...
2016 Jan 21
1
[PATCH] tools/virtio: add ringtest utilities
...u_relax() asm ("rep; nop" ::: "memory") +#else +#define cpu_relax() assert(0) +#endif + +extern bool do_relax; + +static inline void busy_wait(void) +{ + if (do_relax) + cpu_relax(); + else + /* prevent compiler from removing busy loops */ + barrier(); +} + +/* + * Not using __ATOMIC_SEQ_CST since gcc docs say they are only synchronized + * with other __ATOMIC_SEQ_CST calls. + */ +#define smp_mb() __sync_synchronize() + +/* + * This abuses the atomic builtins for thread fences, and + * adds a compiler barrier. + */ +#define smp_release() do { \ + barrier(); \ + __atomic_thread_fe...
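The ringtest excerpt is cut off inside smp_release(); below is a hedged sketch of the barrier family it appears to define. The exact memory-order argument of __atomic_thread_fence is not visible in the snippet, so __ATOMIC_RELEASE/__ATOMIC_ACQUIRE are assumptions matching the macro names.

  #define barrier()       asm volatile("" ::: "memory")

  /* Full barrier via the legacy builtin, for the reason given in the quoted
   * comment: gcc documents __ATOMIC_SEQ_CST fences as synchronizing only
   * with other __ATOMIC_SEQ_CST operations. */
  #define smp_mb()        __sync_synchronize()

  /* "Abuses the atomic builtins for thread fences, and adds a compiler
   * barrier" (quoting the excerpt); the memory orders are assumed. */
  #define smp_release()   do { barrier(); __atomic_thread_fence(__ATOMIC_RELEASE); } while (0)
  #define smp_acquire()   do { __atomic_thread_fence(__ATOMIC_ACQUIRE); barrier(); } while (0)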
2016 Jan 15
3
RFC: non-temporal fencing in LLVM IR
...rare. But they would be very surprising if they > did occur. > > > Today's LLVM already emits 'lock or %eax, (%esp)' for > 'fence > seq_cst'/__sync_synchronize/__atomic_thread_fence(__ATOMIC_SEQ_CST) > when targeting 32-bit x86 machines which do not > support mfence. What instruction sequence should we > be using instead? > > > Do they have non-temporal accesses in the ISA? > > > I thought not but t...