similar to: [LLVMdev] [Query] Programming Register Allocation

Displaying 20 results from an estimated 2000 matches similar to: "[LLVMdev] [Query] Programming Register Allocation"

2010 Aug 29
1
[LLVMdev] [Query] Programming Register Allocation
Thanks for the information. I still don't know how to partition the virtual registers into their different register classes. For instance, I have the function below, which iterates over the instructions, but I don't know how to write the function which returns the register class of each register. void RAOptimal::Gather(MachineFunction &Fn) { // Gather just iterates over the blocks,
2010 Aug 29
0
[LLVMdev] [Query] Programming Register Allocation
On Sat, Aug 28, 2010 at 16:20:42 -0400, Jeff Kunkel wrote: > What I need to know is how to access the machine register classes. Also, I > need to know which virtual register is to be mapped into each specific > register class. I assume there is type information on the registers. I need > to know how to access it. MachineRegisterInfo::getRegClass will give you the TargetRegisterClass
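As a rough illustration of the MachineRegisterInfo::getRegClass call mentioned in the reply above, here is a minimal C++ sketch that groups a function's virtual registers by register class. It is written against a recent LLVM tree rather than the 2010 one from the thread, and the partitionByClass name is made up for this example.

    #include "llvm/CodeGen/MachineFunction.h"
    #include "llvm/CodeGen/MachineRegisterInfo.h"
    #include "llvm/CodeGen/Register.h"
    #include "llvm/CodeGen/TargetRegisterInfo.h"
    #include <map>
    #include <vector>

    using namespace llvm;

    // Group every virtual register in Fn by the TargetRegisterClass it was
    // created with. Assumes all vregs carry a register class (true for
    // SelectionDAG-generated code).
    static std::map<const TargetRegisterClass *, std::vector<Register>>
    partitionByClass(MachineFunction &Fn) {
      MachineRegisterInfo &MRI = Fn.getRegInfo();
      std::map<const TargetRegisterClass *, std::vector<Register>> Partition;
      for (unsigned I = 0, E = MRI.getNumVirtRegs(); I != E; ++I) {
        Register Reg = Register::index2VirtReg(I);
        Partition[MRI.getRegClass(Reg)].push_back(Reg);
      }
      return Partition;
    }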
2010 Feb 04
2
[LLVMdev] Integrated instruction scheduling/register allocation
A more pressing need is a pre-regalloc scheduler that can switch modes to balance reducing latency vs. reducing register pressure. The problem with the current approach is that the scheduler is locked into one mode or the other. For x86, it generally makes sense to schedule for low register pressure. That is, until you are dealing with a block that is explicitly SSE code in 64-bit mode. In that case,
2012 Jun 18
0
[LLVMdev] Is cross-compiling for ARM on x86 with llvm/Clang possible?
On Sat, Jun 16, 2012 at 20:20:23 +0900, Journeyer J. Joh wrote: > I wonder if llvm/Clang can compile C or C++ for ARM on x86. Yes. I use clang -emit-llvm -ccc-host-triple arm-unknown-linux-gnu -I /..arm../include/ to generate LLVM bitcode files for ARM. llc then automagically knows to generate ARM assembly, and ARM binutils take it from there. > If cross-compiling is supported,
2010 Feb 06
1
[LLVMdev] Integrated instruction scheduling/register allocation
On Feb 5, 2010, at 2:01 AM, Gergö Barany wrote: > On Thu, Feb 04, 2010 at 13:59:08 -0800, Evan Cheng wrote: >> A more pressing need is a pre-regalloc scheduler that can switch modes to >> balance reducing latency vs. reducing register pressure. > > Right. I'm actually working on implementing a variant of IPS (Goodman and > Hsu, Code scheduling and register allocation
2010 Feb 05
0
[LLVMdev] Integrated instruction scheduling/register allocation
On Thu, Feb 04, 2010 at 13:59:08 -0800, Evan Cheng wrote: > A more pressing need is a pre-regalloc scheduler that can switch modes to > balance reducing latency vs. reducing register pressure. Right. I'm actually working on implementing a variant of IPS (Goodman and Hsu, Code scheduling and register allocation in large basic blocks, http://doi.acm.org/10.1145/55364.55407) based on the
2012 Jun 16
4
[LLVMdev] Is cross-compiling for ARM on x86 with llvm/Clang possible?
Hello list, I wonder if llvm/Clang can compile C or C++ for ARM on x86. http://comments.gmane.org/gmane.comp.compilers.clang.devel/8896 The discussion above answered 'NO' to my question, meaning Clang was not yet able to cross-compile for ARM on x86. Is that answer still correct? I saw somewhere that Clang supports ARM on Darwin only. Then is the cross compiling
2010 Nov 05
4
[LLVMdev] Basic block liveouts
Is there an easy way to obtain all live-out variables of a basic block? Live-ins can be found for each MachineBasicBlock, but I can only find live-outs for the whole function, at MachineRegisterInfo. Do I need to compute them manually?
2010 Nov 05
0
[LLVMdev] Basic block liveouts
Because I feel bad for giving a non-answer: an easy way to find whether a virtual register is live after the basic block is, while iterating over the virtual registers, to check whether the virtual register's "next" use exists outside of the basic block. For instance: std::vector<unsigned> findLiveOut( MachineBasicBlock * mbb ) { std::vector<unsigned> liveout; for(
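Filling in that idea only approximately, here is what the check might look like against a recent MachineRegisterInfo API. The findLiveOut name comes from the post; the body is illustrative, and the result ignores values that are merely live-through the block without being defined in it.

    #include "llvm/CodeGen/MachineBasicBlock.h"
    #include "llvm/CodeGen/MachineFunction.h"
    #include "llvm/CodeGen/MachineInstr.h"
    #include "llvm/CodeGen/MachineRegisterInfo.h"
    #include "llvm/CodeGen/Register.h"
    #include <vector>

    using namespace llvm;

    // A vreg is treated as live-out of MBB if it is defined in MBB and
    // some use of it sits in a different block.
    std::vector<Register> findLiveOut(MachineBasicBlock *MBB) {
      std::vector<Register> LiveOut;
      MachineRegisterInfo &MRI = MBB->getParent()->getRegInfo();
      for (unsigned I = 0, E = MRI.getNumVirtRegs(); I != E; ++I) {
        Register Reg = Register::index2VirtReg(I);
        // Skip registers that are never defined in this block.
        bool DefinedHere = false;
        for (MachineInstr &Def : MRI.def_instructions(Reg))
          if (Def.getParent() == MBB)
            DefinedHere = true;
        if (!DefinedHere)
          continue;
        // If any use lives in another block, the value escapes MBB.
        for (MachineInstr &Use : MRI.use_instructions(Reg))
          if (Use.getParent() != MBB) {
            LiveOut.push_back(Reg);
            break;
          }
      }
      return LiveOut;
    }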
2010 Nov 26
2
[LLVMdev] ARM Instruction Constraint DestReg!=SrcReg patch?
Hi, Paul Curtis wrote: > If you read the Arm Architecture document for ARMv5, it states for MUL: > > "Operand restriction: Specifying the same register for <Rd> and <Rm> was > previously described as producing UNPREDICTABLE results. There is no > restriction in ARMv6, and it is believed all relevant ARMv4 and ARMv5 > implementations do not require this
2012 May 04
3
[LLVMdev] how compile subproject
Hello, is it possible to compile just a subproject? For example, just llc or lli? Cheers. Beckert.
2012 Jun 21
1
[LLVMdev] LLVM stack
Hello Everyone, Would you please send me any links to documentation on the LLVM stack? I am particularly interested in knowing how each instruction in an LLVM bitcode file (.ll file) affects its stack. To be specific, is it possible to map an LLVM program as operations on a stack? Thanks, Amruth
2012 Aug 02
1
[LLVMdev] Question about arm thumb2 code generation
Thanks Andrew for the answer. I would like to generate code for Cortex-A9 that doesn't use NEON for FP computation but VFPv3-D16 instead. I've tried some combinations of -mattr=+neon,-neonfp,+vfp3,+d16 but couldn't get the ".fpu vfpv3-d16" directive generated in the assembly file. Do you know how to make it happen? Best Regards Seb From: Andrew Trick [mailto:atrick at apple.com] Sent:
2012 Aug 08
1
[LLVMdev] Creating DAGs
All, I apologize if this is an inappropriate question for this mailing list. If so, please recommend an appropriate place to post the question. I'm also somewhat new to LLVM, so I could have some pretty fundamental misunderstandings about what I am trying to do. I have searched for information on the llvm.org website (user's guide, programmer's manual and doxygen documentation),
2015 Feb 11
2
[LLVMdev] deleting or replacing a MachineInst
This seems like a very natural approach, but I am probably having trouble with iterator invalidation. However, looking at other peephole optimizer passes, I couldn't see how to do this: #define BUILD_INS(opcode, new_reg, i) \ BuildMI(*MBB, MBBI, MBBI->getDebugLoc(), TII->get(X86::opcode)) \ .addReg(X86::new_reg, kill).addImm(i) for
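For the iterator-invalidation worry in this thread, a common pattern is to advance the basic-block iterator before erasing the matched instruction. The sketch below is only illustrative: ShouldReplace, NewOpcode, DstReg, SrcReg and Imm are placeholders for the poster's pattern-specific details, while BuildMI and eraseFromParent are the actual LLVM calls.

    #include "llvm/ADT/STLExtras.h"
    #include "llvm/CodeGen/MachineFunction.h"
    #include "llvm/CodeGen/MachineInstrBuilder.h"
    #include "llvm/CodeGen/Register.h"
    #include "llvm/CodeGen/TargetInstrInfo.h"

    using namespace llvm;

    static void replaceInBlock(MachineBasicBlock &MBB, const TargetInstrInfo *TII,
                               function_ref<bool(const MachineInstr &)> ShouldReplace,
                               unsigned NewOpcode, Register DstReg,
                               Register SrcReg, int64_t Imm) {
      for (MachineBasicBlock::iterator MBBI = MBB.begin(), E = MBB.end();
           MBBI != E;) {
        MachineInstr &MI = *MBBI;
        ++MBBI; // step past MI first, so erasing it below stays safe
        if (!ShouldReplace(MI))
          continue;
        // Insert the replacement right before the old instruction...
        BuildMI(MBB, MI, MI.getDebugLoc(), TII->get(NewOpcode), DstReg)
            .addReg(SrcReg)
            .addImm(Imm);
        // ...then unlink and delete the old one.
        MI.eraseFromParent();
      }
    }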
2011 May 26
2
[LLVMdev] Need advice on writing scheduling pass
Hi, thank you for your explanations. In order to get pre-RA scheduling, I would need something like: - LiveVars - PhiElim - TwoAddr - LiveIntervals - Coalescing - Scheduler (new) - SlotIndexing - LiveIntervals2 (new) - RegAlloc My question then is, is it really so difficult to create the live intervals information, with modifications to the original algorithm, or even from scratch?
2007 Apr 12
8
[LLVMdev] Regalloc Refactoring
Chris Lattner wrote: > On Thu, 12 Apr 2007, David Greene wrote: >> As I work toward improving LLVM register allocation, I've >> come across the need to do some refactoring. > > cool. :) One request: Evan is currently out on vacation until Monday. > This is an area that he is very interested in and will want to chime in > on. Please don't start anything
2015 Feb 11
2
[LLVMdev] deleting or replacing a MachineInst
I'm writing a peephole pass and I'm done with the X86_64 instruction level detail work. But I'm having difficulty with the basic block surgery of replacing the old MachineInst. The peephole pass gets called per MachineFunction and then iterates over each MachineBasicBlock and in turn over each MachineInst. When it finds an instruction which should be replaced, it builds a new
2012 Sep 10
5
[LLVMdev] Minimum Array Size
Hello, clang currently seems to generate the same code for both: double something_a(char A[const static 256]) { ... } and for: double something_b(char (*const A)) { ... } even though in the first case the programmer has told us that the array A is at least 256 bytes in length (and, thus, will not be null). Do we currently have a way to pass this information to LLVM? Thanks again, Hal
2011 Nov 09
3
[LLVMdev] Alternate instruction sequences
I was wondering, is there any way to express in the IR that an instruction/instruction sequence/basic block/region/function/module/whatever is an alternate version of another? e.g. let's keep things simple and say that I have an instruction I. An optimization pass reads it and might be able to produce one or more functionally equivalent instructions I1, ..., In without really being able