search for: __atomic_load

Displaying 7 results from an estimated 7 matches for "__atomic_load".

2016 Sep 02 · 2 replies · call_once and TSan
...TERCEPTOR(call_once, o) {
>   __tsan_acquire_release(o);
>   REAL(call_once)(o);
> }
>
> That will have some performance impact. If we hardcode the "fully
> initialized" value, then we can eliminate the additional overhead:
>
> INTERCEPTOR(call_once, o) {
>   if (__atomic_load(o, acquire) == FULLY_INITIALIZED) {
>     __tsan_acquire(o);
>     return;
>   }
>   __tsan_acquire_release(o);
>   REAL(call_once)(o);
> }

Unfortunately, the first fast-path check is inlined and cannot be intercepted. We can only intercept the inner call to __call_once. But how w...
2016 Sep 01 · 2 replies · call_once and TSan
Hi, I'm trying to write a TSan interceptor for the C++11 call_once function. There are currently false positive reports, because the inner __call_once function is located in the (non-instrumented) libcxx library, and on macOS we can't expect the users to build their own instrumented libcxx. TSan already supports pthread_once and dispatch_once by having interceptors that re-implement the
2016 Sep 02 · 2 replies · call_once and TSan
...
>>> REAL(call_once)(o);
>>> }
>>>
>>> That will have some performance impact. If we hardcode the "fully
>>> initialized" value, then we can eliminate the additional overhead:
>>>
>>> INTERCEPTOR(call_once, o) {
>>>   if (__atomic_load(o, acquire) == FULLY_INITIALIZED) {
>>>     __tsan_acquire(o);
>>>     return;
>>>   }
>>>   __tsan_acquire_release(o);
>>>   REAL(call_once)(o);
>>> }
>>
>> Unfortunately, the first fast-path check is inlined and cannot be intercepted. We...
2016 Jan 27 · 7 replies · Adding sanity to the Atomics implementation
...-- always emit llvm atomic ops. Except for one case: clang will still need to emit library calls itself for data not aligned naturally for its size. The LLVM atomic instructions currently will not handle unaligned data, but unaligned data is allowed for the four "slab of memory" builtins (__atomic_load, __atomic_store, __atomic_compare_exchange, and __atomic_exchange). A3) In LLVM, add "align" attributes to cmpxchg and atomicrmw, and allow specifying "align" values for "load atomic" and "store atomic" (where the attribute currently exists but cannot be use...
2016 Jan 28 · 0 replies · Adding sanity to the Atomics implementation
...ic ops.
> Except for one case: clang will still need to emit library calls
> itself for data not aligned naturally for its size. The LLVM atomic
> instructions currently will not handle unaligned data, but unaligned
> data is allowed for the four "slab of memory" builtins (__atomic_load,
> __atomic_store, __atomic_compare_exchange, and __atomic_exchange).
>
> A3) In LLVM, add "align" attributes to cmpxchg and atomicrmw, and
> allow specifying "align" values for "load atomic" and "store atomic"
> (where the attribute current...
2016 Sep 02 · 2 replies · call_once and TSan
...;>>>> >>>>> That will have some performance impact. If we hardcode the "fully >>>>> initialized" value, then we can eliminate the additional overhead: >>>>> >>>>> INTERCEPTOR(call_once, o) { >>>>> if (__atomic_load(o, acquire) == FULLY_INITIALIZED) { >>>>> __tsan_acquire(o); >>>>> return; >>>>> } >>>>> __tsan_acquire_release(o); >>>>> REAL(call_once)(o); >>>>> } >>>> >>>> Unfortunately, the f...
2016 Jan 31 · 2 replies · Adding sanity to the Atomics implementation
...c ops.
> Except for one case: clang will still need to emit library calls
> itself for data not aligned naturally for its size. The LLVM atomic
> instructions currently will not handle unaligned data, but unaligned
> data is allowed for the four "slab of memory" builtins
> (__atomic_load, __atomic_store, __atomic_compare_exchange, and
> __atomic_exchange).
>
> A3) In LLVM, add "align" attributes to cmpxchg and atomicrmw, and
> allow specifying "align" values for "load atomic" and "store atomic"
> (where the attribute cur...