search for: ldrex

Displaying 20 results from an estimated 25 matches for "ldrex".

2012 Aug 16
3
[LLVMdev] error: instruction requires: thumb2
...et=arm) with the following command:
> clang -march=armv7-a -mfloat-abi=soft -ccc-host-triple arm-none-linux-gnueabi -integrated-as main.c -o main.o -c
and get error message:
-------------------------------------------------------
main.c:9:9: error: instruction requires: thumb2
        "ldrex %[oldValue], [%[ptr], #0]\n" // oldValue = *ptr
        ^
<inline asm>:1:2: note: instantiated into assembly here
        ldrex r6, [r4, #0]
        ^
main.c:11:3: error: instruction requires: thumb2
        "strexeq %[failed], %[newValue], [%[ptr], #0]\n"
        ^
<inlin...
2012 Aug 16
2
[LLVMdev] error: instruction requires: thumb2
...rch=armv7-a -mfloat-abi=soft -ccc-host-triple arm-none-linux-gnueabi -integrated-as main.c -o main.o -c
>>
>> and get error message:
>>
>> -------------------------------------------------------
>> main.c:9:9: error: instruction requires: thumb2
>>         "ldrex %[oldValue], [%[ptr], #0]\n" // oldValue = *ptr
>>         ^
>> <inline asm>:1:2: note: instantiated into assembly here
>>         ldrex r6, [r4, #0]
>>         ^
>> main.c:11:3: error: instruction requires: thumb2
>>         "strexeq %[failed], %[new...
2012 Aug 16
0
[LLVMdev] error: instruction requires: thumb2
...
> clang -march=armv7-a -mfloat-abi=soft -ccc-host-triple arm-none-linux-gnueabi -integrated-as main.c -o main.o -c
>
> and get error message:
>
> -------------------------------------------------------
> main.c:9:9: error: instruction requires: thumb2
>         "ldrex %[oldValue], [%[ptr], #0]\n" // oldValue = *ptr
>         ^
> <inline asm>:1:2: note: instantiated into assembly here
>         ldrex r6, [r4, #0]
>         ^
> main.c:11:3: error: instruction requires: thumb2
>         "strexeq %[failed], %[newValue], [%[ptr], #0]...
2012 Aug 16
0
[LLVMdev] error: instruction requires: thumb2
Sure. Use legal ARM mode syntax for the instruction. Specifically, there is no offset immediate for the ARM mode LDREX instruction. It's illegal syntax to supply one, even if it's zero. -Jim On Aug 16, 2012, at 2:36 PM, Lei Zhao <leizhao833 at gmail.com> wrote: > It works. But a follow-up question: why do I have to compile it to thumb mode in order to pass the compilation? Is there a way to make...
2014 May 10
6
[LLVMdev] Replacing Platform Specific IR Codes with Generic Implementation and Introducing Macro Facilities
On 10 May 2014, at 13:53, Tim Northover <t.p.northover at gmail.com> wrote: > It doesn't make sense for everything though, particularly if you want > target-specific IR to simply not exist. What would you map ARM's > "ldrex" to on x86? This isn't a great example. Having load-linked / store-conditional in the IR would make a number of transforms related to atomics easier. We currently can't correctly model the weak compare-and-exchange from the C[++]11 memory model and we generate terrible code for a n...
2014 May 10
4
[LLVMdev] Replacing Platform Specific IR Codes with Generic Implementation and Introducing Macro Facilities
Hi, This might sound a bit controversial at this stage of maturity for LLVM. Can the community consider deprecating architecture-specific features which have come into the IR and replacing them with more generic IR codes? Also, some form of powerful macro facility which supports platform-specific macros, which can be used to expand generic IRs to a set of IRs which might have equivalent results and
2014 May 10
2
[LLVMdev] Replacing Platform Specific IR Codes with Generic Implementation and Introducing Macro Facilities
...om> wrote: > Actually, I really agree there. I considered it recently, but decided > to leave it as an intrinsic for now (the new IR expansion pass happens > after most optimisations so there wouldn't be much benefit, but if we > did it earlier and the mid-end understood what an ldrex/strex meant, I > could see code getting much better). > > Load linked would be fairly easy (perhaps even written as "load > linked", a minor extension to "load atomic"). Store conditional would > be a bigger change since stores don't return anything at the mo...
2013 Mar 15
1
Re: [PATCH 6/9] tools: memshr: arm64 support
...ned(__arm__)
> +#if defined(__arm__)
>  static inline void atomic_inc(uint32_t *v)
>  {
>      unsigned long tmp;
>      int result;
>
> -    __asm__ __volatile__("@ atomic_add\n"
> +    __asm__ __volatile__("@ atomic_inc\n"
> "1:     ldrex   %0, [%3]\n"
> "       add     %0, %0, #1\n"
> "       strex   %1, %0, [%3]\n"
> @@ -130,7 +121,7 @@ static inline void atomic_dec(uint32_t *v)
>      unsigned long tmp;
>      int result;
>
> -    __asm__ __volatile__("@ atomic_sub\n&q...
2013 Feb 03
2
[LLVMdev] A bug in LLVM-GCC 4.2 with inlining __exchange_and_add
...447a              add r2, pc
00000036 f8ddc018  ldr.w ip, [sp, #24]
0000003a 3004      adds r0, #4
0000003c 9001      str r0, [sp, #4]
0000003e 6808      ldr r0, [r1, #0]
00000040 f1020108  add.w r1, r2, #8    @ 0x8
00000044 f8cc1000  str.w r1, [ip]
00000048 f3bf8f5a  dmb ishst
0000004c 9901      ldr r1, [sp, #4]
0000004e e8512f00  ldrex r2, [r1]
00000052 9200      str r2, [sp, #0]
00000054 441a      add r2, r3
00000056 e8412c00  strex ip, r2, [r1]
0000005a f1bc0f00  cmp.w ip, #0    @ 0x0
0000005e d1f6      bne.n 0x4e
... What happens in the code between 4e and 5e is an atomic check of a variable by the inlined __exchange_and_add. The problem is...
2015 Apr 08
2
[LLVMdev] __sync_add_and_fetch in objc block for global variable on ARM
Hello community, I ran into a bug in a multithreaded environment, in Objective-C code which uses dispatch_async and a block; __sync_add_and_fetch increments a global variable. But with many threads (> 5), the variable gets damaged after __sync_add_and_fetch ... int32_t count = 0; ... int main(int argc, char *argv[]) {    for (i = 1; i < 32; ++i) {      ...         char* name;        
2014 May 10
2
[LLVMdev] Replacing Platform Specific IR Codes with Generic Implementation and Introducing Macro Facilities
...ntly the expansion happens post-ISel (emitAtomicBinary and > friends building the control flow and MachineInstrs directly). > > This moves it to before ISel but still late in the pipeline (actually, > you could even put it earlier: I didn't because of fears of opaque > @llvm.arm.ldrex intrinsics pessimising mid-end optimisations). > Strictly earlier than what happens now, and a reasonable > stepping-stone to generic load-linked instructions or intrinsics. The problem is that the optimisations that we're most interested in should be done by the mid-level optimisers and...
2015 Apr 09
2
[LLVMdev] __sync_add_and_fetch in objc block for global variable on ARM
...h in objc block for global variable on ARM
> From: t.p.northover at gmail.com
> To: alexey.perevalov at hotmail.com
> CC: llvmdev at cs.uiuc.edu
>
>> in disas I see dmb ish instruction, but I don't know is it enough.
>
> There should be 2 dmb instructions: one before the ldrex/strex loop
> and one after. But I wouldn't expect dropping one to actually cause a
> problem in the code you posted.

Yes, there are two dmb's =>
    0x00008ed8 <+224>: dmb ish
    0x00008edc <+228>: movw r1, #10800    ; 0x2a30
    0x00008ee0 <+232>: movt ...
2016 May 10
2
Atomic LL/SC loops in llvm
...on page A3-121. A Store-Exclusive > instruction to the same address clears the tag. > > And: > > The value of a in this assignment is IMPLEMENTATION DEFINED, between a > minimum value of 3 and a maximum value of 11. For example, in an > implementation where a is 4, a successful LDREX of address 0x000341B4 gives > a tag value of bits[31:4] of the address, giving 0x000341B. This means that > the four words of memory from 0x000341B0 to 0x000341BF are tagged for > exclusive access. > The size of the tagged memory block is called the Exclusives Reservation > Granule....
2018 Jun 13
12
RFC: Atomic LL/SC loops in LLVM revisited
# RFC: Atomic LL/SC loops in LLVM revisited ## Summary This proposal gives a brief overview of the challenges of lowering to LL/SC loops and details the approach I am taking for RISC-V. Beyond getting feedback on that work, my intention is to find consensus on moving other backends towards a similar approach and sharing common code where feasible. Scroll down to 'Questions' for a summary
2011 Apr 14
1
Bug#618616: arm build failure with latest binutils - usr/klibc/syscalls/_exit.S:29: Error: .size expression does not evaluate to a constant
tags 618616 pending stop

On Wed, 16 Mar 2011, Loïc Minier wrote:
> I've fixed this in Ubuntu with the attached patch, but didn't find
> where to upstream it; since you're a klibc upstream developer and since
> it probably already affects Debian, I figured it was probably best to
> send it here :-)

thank you; applied after review by hpa and pushed out to klibc git.
2013 Feb 22
48
[PATCH v3 00/46] initial arm v8 (64-bit) support
This round implements all of the review comments from V2 and all patches are now acked. Unless there are any objections I intend to apply later this morning. Ian.
2016 May 10
4
Atomic LL/SC loops in llvm
So, taking PR25526 as context, and after reading the internet, it seems to me that creating an atomic LL/SC loop at the IR level -- as is the modern way to do it in LLVM -- is effectively impossible to do correctly. Nor, for that matter, was it correct to create such a loop at isel time, as the implementation prior to r205525 did (and, as some targets still do, not having been updated yet to use
2014 May 29
4
[LLVMdev] Proposal: "load linked" and "store conditional" atomic instructions
...effectively an "asm volatile" for optimisation purposes, which is very heavy-handed for LLVM's other optimisations.
   - Still need target hooks to create the calls, because intrinsics don't get type lowered and so you can't write a fully generic one (e.g. an i64 ldrex on ARM needs to return { i32, i32 }).
4. Change the cmpxchg operation to return (say) { iN, i1 } where the second value indicates success.
   - Probably good idea anyway as part of support for weak compare-exchange operations.
   - Doesn't actually help this issue much: it'...
2011 Mar 10
3
[LLVMdev] Building VMKit
I tried to build VMKit on an ARM device today (a Sheevaplug - armv5te) (native, not cross compiled), and got this error: llvm[3]: Building LLVM assembly with /home/debio/build/vmkit-build/vmkit/lib/Mvm/Runtime/LLVMAssembly.ll /home/debio/build/vmkit-build/vmkit/lib/Mvm/Runtime/LLVMAssembly64.ll ExpandIntegerResult #0: 0x16fbf88: i64,ch = AtomicCmpSwap 0x16e8d84, 0x16fbf00, 0x16fc3c8,
2012 Mar 09
10
[PATCH 0 of 9] (v2) arm: SMP boot
This patch series implements SMP boot for arch/arm, as far as getting all CPUs up and running the idle loop. Changes from v1:
- moved barriers out of loop in udelay()
- dropped broken GIC change in favour of explanatory comment
- made the increment of ready_cpus atomic (I couldn't move the increment to before signalling the next CPU because the PT switch has to happen between