similar to: [Pipeliner] MachinePipeliner TargetInstrInfo hooks need more information?

Displaying 20 results from an estimated 300 matches similar to: "[Pipeliner] MachinePipeliner TargetInstrInfo hooks need more information?"

2019 Jul 15
2
MachinePipeliner refactoring
Hi Brendan (and friends of MachinePipeliner, +llvm-dev for openness), Over the past week or so I've been attempting to extend the MachinePipeliner to support different idioms of code generation. To make this a bit more concrete, there are two areas where the currently generated code could be improved depending on architecture: 1) The epilog blocks peel off the final iterations in reverse
2019 Jul 15
1
MachinePipeliner refactoring
Hi James: Personally, I like the idea of refactoring and more abstraction, but unfortunately I don't know enough about the edge cases either. BTW: the prototype is still causing quite a few assertions on PowerPC - some nodes are not generated in the correct order. Best, Jinsong Ji (纪金松), PhD. XL/LLVM on Power Compiler Development E-mail: jji at us.ibm.com From: James Molloy <james at
2019 Jul 16
2
MachinePipeliner refactoring
Hi James, I also think that refactoring the code generation part is a great idea. That code is very complicated and difficult to maintain. I've wanted to rewrite that code for a long time, but have just never gotten around to it. There are quite a few edge cases to handle (at least in the current code). I'll take a deeper look at your patch. The abstractions that you mention, Stage and Block, are good
2018 Jun 08
4
[RFC] Porting MachinePipeliner to AArch64+SVE
Hi, I am extending LLVM for HPC applications. As one of those efforts, I am trying to make the MachinePipeliner available in an AArch64 + Scalable Vector Extension (SVE) environment. The MachinePipeliner is currently used only by the Hexagon CPU. Since it is a very portable implementation, I think it will work for many CPUs just by adding a little code (see Code [2]). The current MachinePipeliner is written on
2020 Sep 07
2
[EXTERNAL] RE: Machinepipeliner interface. shouldIgnoreForPipelining, actually not ignoring.
Hi James, Having not worked on this for circa one year I've gone and refreshed my memory. We have a pretty capable implementation of swing modulo scheduling downstream, distinct from the MachinePipeliner implementation. Historically, MachinePipeliner had very tight coupling between the finding of a suitable schedule and emitting the code that adheres to that schedule. I spent quite a bit of
2020 Sep 09
2
[EXTERNAL] RE: Machinepipeliner interface. shouldIgnoreForPipelining, actually not ignoring.
Hi James, One last thing - is your target upstream, or are you working on a downstream target? Cheers, James On Tue, 8 Sep 2020 at 23:02, Nagurne, James <j-nagurne at ti.com> wrote: > I greatly appreciate you going back to gather that intel, James. It actually helps my understanding of the whole pipeliner puzzle quite a bit! > I did identify, like you, that the
2020 Sep 03
1
[EXTERNAL] RE: Machinepipeliner interface. shouldIgnoreForPipelining, actually not ignoring.
Hi James, Adding Hendrik, who has taken over ownership of the downstream code involved. I can also add background about the rationale, if that helps? It was added to ignore induction-variable update code (scalar code) that is rewritten when we unroll / peel the prolog and epilog anyway. Targets like Hexagon or PPC with dedicated loop-control instructions for pipelined loops don't need this, but
2020 Jun 01
2
Machinepipeliner interface. shouldIgnoreForPipelining, actually not ignoring.
Hi all, I think there is a mistake in the MachinePipeliner interface. In TargetInstrInfo.h, the class PipelinerLoopInfo has a function "bool shouldIgnoreForPipelining(const MachineInstr *MI)". The description says that if this function returns true for a given MachineInstr, it will not be pipelined. However, in reality it is not ignored and is still being considered for
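To make the contract described in that snippet concrete, here is a small self-contained C++ sketch. MachineInstr and PipelinerLoopInfo below are minimal stand-ins rather than the real LLVM classes from TargetInstrInfo.h, and MyTargetLoopInfo is a hypothetical target; only the quoted hook is shown, and (per the report above) returning true currently only keeps the instruction out of the emitted code rather than out of schedule consideration.

  #include <set>

  struct MachineInstr {};  // stand-in for llvm::MachineInstr, for illustration only

  struct PipelinerLoopInfo {  // stand-in for the hook class declared in TargetInstrInfo.h
    virtual ~PipelinerLoopInfo() = default;
    // Contract as documented: return true for an instruction the pipeliner
    // should not pipeline (e.g. an induction-variable update).
    virtual bool shouldIgnoreForPipelining(const MachineInstr *MI) const = 0;
  };

  // Hypothetical target implementation: ignore whatever loop-control
  // instructions the target recorded when it analyzed the loop.
  struct MyTargetLoopInfo : PipelinerLoopInfo {
    std::set<const MachineInstr *> LoopControl;
    bool shouldIgnoreForPipelining(const MachineInstr *MI) const override {
      return LoopControl.count(MI) != 0;
    }
  };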
2017 May 25
3
Some questions about software pipeline in LLVM 4.0.0
Hi, I have some questions about the implementation of software pipelining in MachinePipeliner.cpp. First, in the Hexagon backend, between the MachinePipeliner and the regalloc pass there are some other passes such as PHI elimination, two-address, and register coalescing, which may change or insert instructions like 'copy' in the MBB, and the SWP kernel loop may be destroyed by these passes. Why not put
2020 Sep 02
2
[EXTERNAL] Re: Machinepipeliner interface. shouldIgnoreForPipelining, actually not ignoring.
Sorry to bring this thread from 3 months ago back, but I'm running into this issue too. I do see that shouldIgnore is not called in the MachinePipeliner; however, James' comment doesn't really resolve the issue or make the story any clearer. My summary of the comment is: "Hexagon and PPC9 do not need to ignore any instructions. However, in the case that you do, such as when the indvar update is
2018 Jul 24
2
Software pipeline using LLVM
Hi all, I want to generate assembly code using Swing Modulo Scheduling in LLVM for many ALUs (these may be adders, multipliers, ...). I need some help with how I can do that: which command do I run? Also, if possible, I would like more information about the scheduling and the register allocation..., which pass is responsible for that, and which LLVM version supports Swing Modulo Scheduling. Thank you. Regards
2017 Jun 01
1
Some questions about software pipeline in LLVM 4.0.0
Hi - I replied to the original sender only by mistake. Sorry about that. When we started working on the pipeliner, and added it before the scheduler, we were also concerned that the scheduler or other passes would undo the work of the pipeliner. The initial thought was that we would add information (using metadata or some other way, like you've suggested) to the basic block to tell the
2015 Mar 19
2
[LLVMdev] RFC: Loop versioning for LICM
Hi Ashutosh, > On Mar 16, 2015, at 9:06 PM, Nema, Ashutosh <Ashutosh.Nema at amd.com> wrote: > > Hi Adam, > > From: Adam Nemet [mailto:anemet at apple.com] > Sent: Wednesday, March 11, 2015 10:48 AM > To: Nema, Ashutosh > Cc: llvmdev at cs.uiuc.edu > Subject: Re: [LLVMdev] RFC: Loop
2015 Mar 20
2
[LLVMdev] RFC: Loop versioning for LICM
> On Mar 19, 2015, at 9:46 PM, Nema, Ashutosh <Ashutosh.Nema at amd.com> wrote: > > Thanks Adam for your reply. > > From: Adam Nemet [mailto:anemet at apple.com] > Sent: Friday, March 20, 2015 3:23 AM > To: Nema, Ashutosh > Cc: Hal Finkel; Philip Reames; llvmdev at cs.uiuc.edu > Subject:
2018 Dec 11
2
Implement VLIW Backend on LLVM (Assembler Related Questions)
Hi paulr, Thank you for your response :) Hi Krzysztof, This is really helpful! Thank you for your guidance!! I would like to trace Hexagon's LLVM implementation. I am very interested in how Hexagon implements instruction pattern matching, instruction scheduling, and register allocation; could you give me some suggestions or a reading list to help me understand Hexagon's LLVM
2015 Mar 24
3
[LLVMdev] RFC: Loop versioning for LICM
> On Mar 20, 2015, at 8:02 PM, Nema, Ashutosh <Ashutosh.Nema at amd.com> wrote: > > > Yes, this is what I was proposing above and here ;): > Thanks Adam for confirming :) NP :). > > > No, not hasLoopInvariantStore but hasAccessToLoopInvariantAddress. > It's only for invariant stores [not loads]. Using 'hasLoopInvariantStore' (or a name with invariant store)
2017 Jan 05
3
LLVMTargetMachine with optimization level passed from clang.
I want the optimization to be turned on at -O1 and above. In my case, it is a target-independent back-end pass (e.g., MachinePipeliner). On 2017-01-04 18:10, Mehdi Amini wrote: >> On Jan 4, 2017, at 4:03 PM, Sumanth Gundapaneni via llvm-dev >> <llvm-dev at lists.llvm.org> wrote: >> >> I see that BackendUtil.cpp in Clang creates the TargetMachine with >> the
2015 Mar 11
2
[LLVMdev] RFC: Loop versioning for LICM
> On Mar 5, 2015, at 10:33 PM, Nema, Ashutosh <Ashutosh.Nema at amd.com> wrote: > > > I am about to post the patches to make LAA suitable for Loop Distribution. As you will hopefully find, this will make LAA more generic. I will cc you on the patches. > > Sure Adam. > > RuntimeCheckEmitter > "RuntimeCheckEmitter::addRuntimeCheck" > While creating
2017 Jan 06
2
LLVMTargetMachine with optimization level passed from clang.
getOptLevel() gets the level from the TargetMachine, which is created by BackendUtil in Clang with either "Default", "None", or "Aggressive". There is no correspondence for "Less". This boils down to: if I pass "-O1", the TargetMachine is created with CodeGenOpt::Default. I am available on IRC @ sgundapa. -----Original Message----- From:
2017 Jan 06
3
LLVMTargetMachine with optimization level passed from clang.
Here is a problem scenario. I want to enable a backend pass at -O2 or above: if (TM->getOptLevel() >= CodeGenOpt::Default) addPass(&xxxxx); This pass will be run at -O1 too, since Clang creates the TargetMachine with CodeGenOpt::Default for -O1. --Sumanth G -----Original Message----- From: mehdi.amini at apple.com Sent: Friday, January 6, 2017
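To make the mapping described in this exchange concrete, here is a standalone sketch (not LLVM code: the CodeGenOptLevel enum mirrors LLVM's None/Less/Default/Aggressive ordering, and levelFromClangO is a hypothetical reconstruction of the frontend behaviour described above). It shows why a ">= CodeGenOpt::Default" gate also fires at -O1 when the frontend collapses -O1 and -O2 onto Default:

  #include <cassert>

  enum class CodeGenOptLevel { None = 0, Less = 1, Default = 2, Aggressive = 3 };

  // Hypothetical reconstruction of the frontend mapping the thread describes:
  // -O0 -> None, -O3 -> Aggressive, everything else (including -O1) -> Default.
  static CodeGenOptLevel levelFromClangO(unsigned O) {
    switch (O) {
    case 0:  return CodeGenOptLevel::None;
    case 3:  return CodeGenOptLevel::Aggressive;
    default: return CodeGenOptLevel::Default;
    }
  }

  int main() {
    // The gate quoted above ("enable my pass at -O2 or above")...
    auto gate = [](CodeGenOptLevel L) { return L >= CodeGenOptLevel::Default; };
    // ...is also satisfied at -O1, which is the problem being reported.
    assert(gate(levelFromClangO(1)) && gate(levelFromClangO(2)));
    return 0;
  }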