Displaying 12 results from an estimated 12 matches for "larx".
2008 Jul 10
2
[LLVMdev] Implementing llvm.atomic.cmp.swap.i32 on PowerPC
...moves the register into the FPSCR.
MTFSF,
+ /// ATOMIC_LOAD_ADD, ATOMIC_CMP_SWAP, ATOMIC_SWAP - These
+ /// correspond to the llvm.atomic.load.add, llvm.atomic.cmp.swap
+ /// and llvm.atomic.swap intrinsics.
+ ATOMIC_LOAD_ADD, ATOMIC_CMP_SWAP, ATOMIC_SWAP,
+
/// LARX = This corresponds to PPC l{w|d}arx instruction: load and
/// reserve indexed. This is used to implement atomic operations.
LARX,
@@ -160,10 +165,6 @@
/// indexed. This is used to implement atomic operations.
STCX,
- /// CMP_UNRESERVE = Test for equality and "...
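The LARX/STCX nodes above are the reservation primitives behind all of these
atomics; below is a hedged sketch of the lwarx/stwcx. retry loop that a 32-bit
compare-and-swap boils down to on PowerPC, written as GNU C++ inline assembly.
The helper name cmp_swap_i32 is invented for illustration and is not part of
the patch.

    // Returns the value observed at *ptr; the swap happened iff it equals expected.
    static inline int cmp_swap_i32(int *ptr, int expected, int desired) {
      int old;
      __asm__ __volatile__(
          "1: lwarx   %0, 0, %2  \n\t"   // load word and reserve the address
          "   cmpw    %0, %3     \n\t"   // compare against the expected value
          "   bne-    2f         \n\t"   // mismatch: leave the loop
          "   stwcx.  %4, 0, %2  \n\t"   // store conditional on the reservation
          "   bne-    1b         \n\t"   // reservation lost: retry
          "2:"
          : "=&r"(old), "+m"(*ptr)
          : "r"(ptr), "r"(expected), "r"(desired)
          : "cc", "memory");
      return old;
    }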
2008 Jul 08
0
[LLVMdev] Implementing llvm.atomic.cmp.swap.i32 on PowerPC
PPCTargetLowering::EmitInstrWithCustomInserter has a reference
to the current MachineFunction for other purposes. Can you use
MachineFunction::getRegInfo instead?
Dan
On Jul 8, 2008, at 1:56 PM, Gary Benson wrote:
> Would it be acceptable to change MachineInstr::getRegInfo from private
> to public so I can use it from
> PPCTargetLowering::EmitInstrWithCustomInserter?
>
>
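A minimal sketch of Dan's suggestion, assuming the 2008-era LLVM signatures
used in this thread; the body is illustrative, not the actual patch.

    MachineBasicBlock *
    PPCTargetLowering::EmitInstrWithCustomInserter(MachineInstr *MI,
                                                   MachineBasicBlock *BB) {
      // Reach MachineRegisterInfo through the enclosing MachineFunction
      // rather than widening the visibility of MachineInstr::getRegInfo.
      MachineFunction *F = BB->getParent();
      MachineRegisterInfo &RegInfo = F->getRegInfo();
      unsigned TmpReg = RegInfo.createVirtualRegister(&PPC::GPRCRegClass);
      // ... TmpReg then feeds the BuildMI calls that expand MI ...
      return BB;
    }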
2008 Jul 11
2
[LLVMdev] Implementing llvm.atomic.cmp.swap.i32 on PowerPC
...moves the register into the FPSCR.
MTFSF,
+ /// ATOMIC_LOAD_ADD, ATOMIC_CMP_SWAP, ATOMIC_SWAP - These
+ /// correspond to the llvm.atomic.load.add, llvm.atomic.cmp.swap
+ /// and llvm.atomic.swap intrinsics.
+ ATOMIC_LOAD_ADD, ATOMIC_CMP_SWAP, ATOMIC_SWAP,
+
/// LARX = This corresponds to PPC l{w|d}arx instruction: load and
/// reserve indexed. This is used to implement atomic operations.
LARX,
@@ -160,10 +165,6 @@
/// indexed. This is used to implement atomic operations.
STCX,
- /// CMP_UNRESERVE = Test for equality and "...
2008 Jul 11
0
[LLVMdev] Implementing llvm.atomic.cmp.swap.i32 on PowerPC
Hi Gary,
This does not patch cleanly for me (PPCISelLowering.cpp). Can you
prepare an updated patch?
Thanks,
Evan
On Jul 10, 2008, at 11:45 AM, Gary Benson wrote:
> Cool, that worked. New patch attached...
>
> Cheers,
> Gary
>
> Evan Cheng wrote:
>> Just cast both values to const TargetRegisterClass*.
>>
>> Evan
>>
>> On Jul 10, 2008, at 7:36
2008 Jul 10
0
[LLVMdev] Implementing llvm.atomic.cmp.swap.i32 on PowerPC
Just cast both values to const TargetRegisterClass*.
Evan
On Jul 10, 2008, at 7:36 AM, Gary Benson wrote:
> Evan Cheng wrote:
>> How about?
>>
>> const TargetRegisterClass *RC = is64Bit ? &PPC::GPRCRegClass :
>> &PPC::G8RCRegClass;
>> unsigned TmpReg = RegInfo.createVirtualRegister(RC);
>
> I tried something like that yesterday:
>
> const
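A hedged sketch of the cast Evan describes at the top of this message, with
both arms forced to the common base pointer type so the conditional expression
compiles; variable names and the register-class choice simply mirror the
quoted snippet.

    // The two register-class objects are distinct generated types, so ?: has
    // no common type for them until both arms are cast explicitly.
    const TargetRegisterClass *RC =
        is64Bit ? (const TargetRegisterClass *)&PPC::GPRCRegClass
                : (const TargetRegisterClass *)&PPC::G8RCRegClass;
    unsigned TmpReg = RegInfo.createVirtualRegister(RC);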
2008 Jul 10
2
[LLVMdev] Implementing llvm.atomic.cmp.swap.i32 on PowerPC
Evan Cheng wrote:
> How about?
>
> const TargetRegisterClass *RC = is64Bit ? &PPC::GPRCRegClass :
> &PPC::G8RCRegClass;
> unsigned TmpReg = RegInfo.createVirtualRegister(RC);
I tried something like that yesterday:
const TargetRegisterClass *RC =
is64bit ? &PPC::GPRCRegClass : &PPC::G8RCRegClass;
but I kept getting this error no matter how I arranged it:
2008 Jun 30
0
[LLVMdev] Implementing llvm.atomic.cmp.swap.i32 on PowerPC
You need to insert new basic blocks and update CFG to accomplish this.
There is a hackish way to do this right now. Add a pseudo instruction
to represent this operation and mark it usesCustomDAGSchedInserter.
This means the intrinsic is mapped to a single (pseudo) node. But it
is then expanded into instructions that can span multiple basic
blocks. See
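A rough sketch of the block surgery such an expansion involves, assuming the
2008-era MachineFunction API; the helper name is hypothetical and the BuildMI
calls for the actual larx/stcx. sequence are omitted.

    // Split control flow into entry -> loop -> exit so the expanded pseudo
    // can branch back on a failed stcx. and fall through on success.
    static MachineBasicBlock *SplitForAtomicLoop(MachineBasicBlock *BB) {
      MachineFunction *F = BB->getParent();
      const BasicBlock *LLVM_BB = BB->getBasicBlock();
      MachineFunction::iterator It = BB;   // insertion point: right after BB
      ++It;

      MachineBasicBlock *loopMBB = F->CreateMachineBasicBlock(LLVM_BB);
      MachineBasicBlock *exitMBB = F->CreateMachineBasicBlock(LLVM_BB);
      F->insert(It, loopMBB);
      F->insert(It, exitMBB);

      // Whatever used to follow BB now follows exitMBB.
      exitMBB->transferSuccessors(BB);
      BB->addSuccessor(loopMBB);
      loopMBB->addSuccessor(loopMBB);    // stcx. failed: retry
      loopMBB->addSuccessor(exitMBB);    // stcx. succeeded: fall through

      // ... BuildMI(...) the larx / compare / stcx. instructions into loopMBB ...
      return exitMBB;
    }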
2008 Jul 09
2
[LLVMdev] Implementing llvm.atomic.cmp.swap.i32 on PowerPC
...moves the register into the FPSCR.
MTFSF,
+ /// ATOMIC_LOAD_ADD, ATOMIC_CMP_SWAP, ATOMIC_SWAP - These
+ /// correspond to the llvm.atomic.load.add, llvm.atomic.cmp.swap
+ /// and llvm.atomic.swap intrinsics.
+ ATOMIC_LOAD_ADD, ATOMIC_CMP_SWAP, ATOMIC_SWAP,
+
/// LARX = This corresponds to PPC l{w|d}arx instruction: load and
/// reserve indexed. This is used to implement atomic operations.
LARX,
@@ -160,10 +165,6 @@
/// indexed. This is used to implement atomic operations.
STCX,
- /// CMP_UNRESERVE = Test for equality and "...
2008 Jul 08
2
[LLVMdev] Implementing llvm.atomic.cmp.swap.i32 on PowerPC
Would it be acceptable to change MachineInstr::getRegInfo from private
to public so I can use it from PPCTargetLowering::EmitInstrWithCustomInserter?
Cheers,
Gary
Evan Cheng wrote:
> Look for createVirtualRegister. These are examples in
> PPCISelLowering.cpp.
>
> Evan
> On Jul 8, 2008, at 8:24 AM, Gary Benson wrote:
>
> > Hi Evan,
> >
> > Evan Cheng wrote:
2008 Jun 30
2
[LLVMdev] Implementing llvm.atomic.cmp.swap.i32 on PowerPC
Chris Lattner wrote:
> On Jun 27, 2008, at 8:27 AM, Gary Benson wrote:
> > def CMP_UNRESw : Pseudo<(outs), (ins GPRC:$rA, GPRC:$rB, i32imm:$label),
> >                          "cmpw $rA, $rB\n\tbne- La${label}_exit",
> >                          [(PPCcmp_unres GPRC:$rA, GPRC:$rB, imm:$label)]>;
> > }
> >
> > ...and
2008 Jul 02
2
[LLVMdev] Implementing llvm.atomic.cmp.swap.i32 on PowerPC
...moves the register into the FPSCR.
MTFSF,
+ /// ATOMIC_LOAD_ADD, ATOMIC_CMP_SWAP, ATOMIC_SWAP - These
+ /// correspond to the llvm.atomic.load.add, llvm.atomic.cmp.swap
+ /// and llvm.atomic.swap intrinsics.
+ ATOMIC_LOAD_ADD, ATOMIC_CMP_SWAP, ATOMIC_SWAP,
+
/// LARX = This corresponds to PPC l{w|d}arx instruction: load and
/// reserve indexed. This is used to implement atomic operations.
LARX,
@@ -160,10 +165,6 @@
/// indexed. This is used to implement atomic operations.
STCX,
- /// CMP_UNRESERVE = Test for equality and "...
2011 Dec 06
4
[LLVMdev] Comments on the bundle proposal
...oblivious to
bundles isn't well defined. If this means that a pass should be able to
run as if the bundles weren't present, then such a pass can easily damage
the "bundling". One could think of cases, where certain instruction sequences
should always stay together (idioms using larx/stcx. on PPC come to my mind).
If, on the other hand, it means that an "oblivious" pass should see a
bundle as if it was a single instruction, then the proposed solution
would not work, but it could be a reasonable requirement regardless.
Another approach would be to have a pseudo-instru...