Displaying 20 results from an estimated 1000 matches similar to: "[RFC] Porting MachinePipeliner to AArch64+SVE"
2018 Jul 24
2
Software pipeline using LLVM
Hi all,
I want to generate assembly code using Swing Modulo Scheduling in LLVM for a machine with many ALUs (they could be adders, multipliers, ...). I need some help with how I can do that: which command should I run?
Also, if possible, some more information about the scheduling and the register allocation ..., which pass is responsible for that, and which LLVM versions support Swing Modulo Scheduling.
Thank you.
Regards
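For what it's worth, a minimal sketch (not from this thread, and with version-dependent details) of how a backend typically wires the SMS implementation, the MachinePipeliner pass, into its pipeline before register allocation; in-tree the pass can also be toggled with llc's -enable-pipeliner option. The class name MyTargetPassConfig is a placeholder:

// Sketch only: registering the SMS-based MachinePipeliner from a target's
// TargetPassConfig, modeled on what in-tree targets such as PowerPC do.
// Hook and type names may differ slightly across LLVM versions.
#include "llvm/CodeGen/Passes.h"
#include "llvm/CodeGen/TargetPassConfig.h"
#include "llvm/IR/LegacyPassManager.h"
#include "llvm/Support/CodeGen.h"

using namespace llvm;

class MyTargetPassConfig : public TargetPassConfig {
public:
  MyTargetPassConfig(LLVMTargetMachine &TM, legacy::PassManagerBase &PM)
      : TargetPassConfig(TM, PM) {}

  void addPreRegAlloc() override {
    // Run software pipelining (Swing Modulo Scheduling) before regalloc.
    if (getOptLevel() >= CodeGenOpt::Default)
      addPass(&MachinePipelinerID);
  }
};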
2020 Sep 09
2
[EXTERNAL] RE: Machinepipeliner interface. shouldIgnoreForPipelining, actually not ignoring.
Hi James,
One last thing - is your target upstream? or are you working on a
downstream target?
Cheers,
James
On Tue, 8 Sep 2020 at 23:02, Nagurne, James <j-nagurne at ti.com> wrote:
> I greatly appreciate you going back to gather that intel, James. It
> actually helps my understanding of the whole pipeliner puzzle quite a bit!
>
>
>
> I did identify, like you, that the
2020 Nov 06
2
Loop-vectorizer prototype for the EPI Project based on the RISC-V Vector Extension (Scalable vectors)
On 11/6/20 12:39 PM, Sjoerd Meijer wrote:
Hello Simon,
Thanks for your replies, very useful. And yes, thanks for the example and making the target differences clear:
; Some examples:
; RISC-V V & VE(*):
; %mask = (splat i1 1)
; %evl = min(256, %n - %i)
; MVE/SVE :
; %mask = get.active.lane.mask(%i, %n)
; %evl = call @llvm.vscale()
; AVX:
; %mask = icmp (%i + (seq
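The excerpt above is truncated; as a hedged illustration of the MVE/SVE flavour it describes, a vectorizer could materialize %mask via the llvm.get.active.lane.mask intrinsic roughly as below. Only the intrinsic comes from the excerpt; the helper and its parameters are illustrative:

// Illustrative sketch: emitting %mask = get.active.lane.mask(%i, %n) with
// IRBuilder, as an MVE/SVE-style vectorizer might. Function and variable
// names are placeholders, not taken from the prototype being discussed.
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Intrinsics.h"

using namespace llvm;

static Value *emitActiveLaneMask(IRBuilder<> &B, Value *Index,
                                 Value *TripCount, ElementCount VF) {
  // <VF x i1> mask type; VF may be scalable (e.g. <vscale x 4 x i1> for SVE).
  Type *MaskTy = VectorType::get(B.getInt1Ty(), VF);
  return B.CreateIntrinsic(Intrinsic::get_active_lane_mask,
                           {MaskTy, Index->getType()}, {Index, TripCount});
}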
2020 Sep 07
2
[EXTERNAL] RE: Machinepipeliner interface. shouldIgnoreForPipelining, actually not ignoring.
Hi James,
Having not worked on this for circa one year I've gone and refreshed my
memory.
We have a pretty capable implementation of swing modulo scheduling
downstream, distinct from the MachinePipeliner implementation.
Historically, MachinePipeliner had very tight coupling between the finding
of a suitable schedule and emitting the code that adheres to that schedule.
I spent quite a bit of
2019 May 10
2
[Pipeliner] MachinePipeliner TargetInstrInfo hooks need more information?
Hello,
I'm working on integrating the MachinePipeliner.cpp pass into our VLIW
backend, and so far we've managed to get it working with some nice
speedups.
Unlike Hexagon however, our backend doesn't generate hardware loop
instructions and so all our loops are a combination of induction
variables, comparisons and branches. So when it came to implementing
reduceLoopCount for our
2020 Sep 03
1
[EXTERNAL] RE: Machinepipeliner interface. shouldIgnoreForPipelining, actually not ignoring.
Hi James,
Adding Hendrik, who has taken over ownership of the downstream code
involved.
I can also add background about the rationale, if that helps? It was added
to ignore induction variable update code (scalar code) that is rewritten
when we unroll / peel the prolog epilog anyway.
Targets like Hexagon or PPC with dedicated loop control instructions for
pipelined loops don't need this, but
2020 Sep 02
2
[EXTERNAL] Re: Machinepipeliner interface. shouldIgnoreForPipelining, actually not ignoring.
Sorry to bring this thread from 3 months ago back, but I’m running into this issue too.
I do see that shouldIgnore is not called in the MachinePipeliner, however, James’ comment doesn’t really resolve the issue or make the story any clearer.
My summary of the comment is: “Hexagon and PPC9 do not need to ignore any instructions. However, in the case that you do, such as when the indvar update is
2019 Jul 15
2
MachinePipeliner refactoring
Hi Brendan (and friends of MachinePipeliner, +llvm-dev for openness),
Over the past week or so I've been attempting to extend the
MachinePipeliner to support different idioms of code generation. To make
this a bit more concrete, there are two areas where the currently generated
code could be improved depending on architecture:
1) The epilog blocks peel off the final iterations in reverse
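For readers new to the terminology used in these threads, here is a conceptual sketch (plain C++, not the pass's actual output) of the prolog/kernel/epilog shape SMS produces for a loop body split into three stages A, B, C; the epilog is the code that peels off the final iterations:

// Conceptual sketch only: a sequential rendering of a software-pipelined
// loop with three stages in flight. The original loop runs A(i); B(i); C(i)
// for i in [0, n); the pipelined form overlaps stages of adjacent iterations.
#include <cstdio>

static void A(int i) { std::printf("A(%d)\n", i); }
static void B(int i) { std::printf("B(%d)\n", i); }
static void C(int i) { std::printf("C(%d)\n", i); }

void pipelined(int n) {           // assumes n >= 2 for brevity
  A(0);                           // prolog: start iteration 0
  A(1); B(0);                     // prolog: start iteration 1, continue 0
  for (int i = 2; i < n; ++i) {   // kernel: three iterations in flight
    A(i); B(i - 1); C(i - 2);
  }
  B(n - 1); C(n - 2);             // epilog: drain the final iterations
  C(n - 1);
}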
2020 Nov 06
4
Loop-vectorizer prototype for the EPI Project based on the RISC-V Vector Extension (Scalable vectors)
On 11/6/20 8:49 AM, Roger Ferrer Ibáñez wrote:
Hi Sjoerd,
Trying to remember how everything fits together here, but could get.active.lane.mask not create the %mask of the VP intrinsics? Or in other words, in the vectoriser, who's producing the %mask and %evl that are consumed by the VP intrinsics?
I'm not sure what the best way would be here. I'm thinking about the Loop Vectorizer. I imagine
2019 Jul 15
1
MachinePipeliner refactoring
Hi James:
Personally, I like the idea of refactoring and more abstraction,
But unfortunately, I don't know enough about the edge cases either.
BTW: the prototype is still causing quite a few assertions in PowerPC -
some nodes are not generated in the correct order.
Best,
Jinsong Ji (纪金松), PhD.
XL/LLVM on Power Compiler Development
E-mail: jji at us.ibm.com
From: James Molloy <james at
2017 Jun 01
1
Some questions about software pipeline in LLVM 4.0.0
Hi - I replied to the original sender only by mistake. Sorry about that.
When we started working on the pipeliner, and added it before the scheduler,
we also were concerned that the scheduler or other passes would undo the
work of the pipeliner. The initial thought was that we would add information
(using metadata or some other way like you've suggested) to the basic block
to tell the
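The mechanism is left open in the excerpt; purely as an illustration of the metadata idea, a pipelined loop's latch branch could carry a loop-ID hint that later passes check before rescheduling it. The hint string below is invented for this sketch, not an existing LLVM loop attribute:

// Hypothetical sketch of the "add information using metadata" idea from the
// excerpt. "my.loop.pipelined" is an invented hint name; llvm.loop metadata
// itself is real and is attached to the loop latch's branch instruction.
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Metadata.h"

using namespace llvm;

static void markLoopAsPipelined(BranchInst *LatchBranch) {
  LLVMContext &Ctx = LatchBranch->getContext();
  Metadata *Name = MDString::get(Ctx, "my.loop.pipelined");
  MDNode *Hint = MDNode::get(Ctx, Name);
  // Loop-ID nodes are self-referential: operand 0 points at the node itself.
  SmallVector<Metadata *, 2> Ops = {nullptr, Hint};
  MDNode *LoopID = MDNode::getDistinct(Ctx, Ops);
  LoopID->replaceOperandWith(0, LoopID);
  LatchBranch->setMetadata(LLVMContext::MD_loop, LoopID);
}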
2020 Jun 01
2
Machinepipeliner interface. shouldIgnoreForPipelining, actually not ignoring.
Hi all,
I think there is a mistake in the MachinePipeliner interface. In
TargetInstrInfo.h, in the class PipelinerLoopInfo, there is a function
"bool shouldIgnoreForPipelining(const MachineInstr *MI)". The
description says that if this function returns true for a given
MachineInstr it will not be pipelined.
However in reality it is not ignored and is being considered for
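A minimal sketch, assuming only the shouldIgnoreForPipelining signature quoted above from TargetInstrInfo.h, of what a target-side PipelinerLoopInfo might look like; the class and member names are illustrative, and the interface's other pure-virtual hooks (omitted here) must also be implemented in a real target:

// Sketch only: a downstream PipelinerLoopInfo that asks the pipeliner to
// ignore the loop-control compare and the induction-variable update. Only
// shouldIgnoreForPipelining is shown; the remaining pure-virtual hooks of
// PipelinerLoopInfo must also be overridden before this class is usable.
#include "llvm/CodeGen/MachineInstr.h"
#include "llvm/CodeGen/TargetInstrInfo.h"

using namespace llvm;

namespace {
class MyPipelinerLoopInfo : public TargetInstrInfo::PipelinerLoopInfo {
  MachineInstr *LoopCompare;   // illustrative: loop-exit compare
  MachineInstr *IndVarUpdate;  // illustrative: induction-variable update

public:
  MyPipelinerLoopInfo(MachineInstr *Cmp, MachineInstr *Update)
      : LoopCompare(Cmp), IndVarUpdate(Update) {}

  bool shouldIgnoreForPipelining(const MachineInstr *MI) const override {
    // Request that loop-control code be left out of the pipelined schedule;
    // whether the scheduler honors this is exactly what the thread debates.
    return MI == LoopCompare || MI == IndVarUpdate;
  }
};
} // end anonymous namespace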
2019 Jul 16
2
MachinePipeliner refactoring
Hi James,
I also think that refactoring the code generation part is a great idea. That code is very complicated and difficult to maintain. I’ve wanted to rewrite that code for a long time, but just have never got to it. There are quite a few edge cases to handle (at least in the current code). I’ll take a deeper look at your patch. The abstractions that you mention, Stage and Block, are good
2017 May 25
3
Some questions about software pipeline in LLVM 4.0.0
Hi,
I have some questions about the implementation of Software pipeline in MachinePipeliner.cpp.
First, in the Hexagon backend, between the MachinePipeliner and the regalloc pass there are some other passes, like PHI elimination, two-address, and register coalescing, which may change or insert instructions like 'copy' in the MBB, and the SWP kernel loop may be destroyed by these passes.
Why not put
2020 Nov 02
2
Loop-vectorizer prototype for the EPI Project based on the RISC-V Vector Extension (Scalable vectors)
Hi all,
At the Barcelona Supercomputing Center, we have been working on an
end-to-end vectorizer using scalable vectors for RISC-V Vector extension
in context of the EPI Project
<https://www.european-processor-initiative.eu/accelerator/>. We earlier
shared a demo of our prototype implementation
(https://repo.hca.bsc.es/epic/z/9eYRIF, see below) with the folks
involved with LLVM
2012 Jun 12
3
[LLVMdev] DFAPacketizer with StateTrans != 0 Assertion
Hi Ivan,
The assertion was happening because I wasn't checking after the first
attempt failed. The first packet was failing and so it was ended, and
then the packetizer attempted to add it to the next packet without
checking for available resources. However this highlights probably the
real problem - my packetizer is unable to find resources for the first
instruction, or any of my
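The check described as missing can be expressed with the DFAPacketizer's resource queries (shown with current LLVM signatures; the 2012-era API differed slightly). The helper and its control flow are illustrative:

// Illustrative sketch of "check before adding to the next packet": close the
// current packet, reset the DFA, and only then reserve resources for MI.
#include "llvm/CodeGen/DFAPacketizer.h"
#include "llvm/CodeGen/MachineInstr.h"
#include <cassert>

using namespace llvm;

static void addToPacketOrStartNew(DFAPacketizer &ResourceTracker,
                                  MachineInstr &MI) {
  if (!ResourceTracker.canReserveResources(MI)) {
    // End the current packet: put the DFA back into the empty-packet state.
    ResourceTracker.clearResources();
    // If MI does not fit even in an empty packet, its itinerary gives it no
    // functional unit -- the situation behind the assertion in this thread.
    assert(ResourceTracker.canReserveResources(MI) &&
           "no functional unit can issue this instruction");
  }
  ResourceTracker.reserveResources(MI);
}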
2013 Feb 18
0
[LLVMdev] DFAPacketizer
Hi Anshu,
Would there be any interest in extending this algorithm to handle more extensive models, such as VLIW scheduling based on FUs and bundle space, i.e. handling multiple stages?
I might do it and commit, if there is acceptance and guidance...
Jonas
From: Anshuman Dasgupta [mailto:adasgupt at codeaurora.org]
Sent: Tuesday, February 12, 2013 4:47 PM
2013 Feb 12
2
[LLVMdev] DFAPacketizer
Hi Jonas,
> It is interesting to find this in the ARM backend, considering your
answer.
The ARM backend doesn't use the DFA packetizer. It's only used by
Hexagon. At this point, there is no plan to address this in the DFA
packetizer since none of the supported targets need the functionality.
Thanks
-Anshu
---
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
2013 Feb 12
0
[LLVMdev] DFAPacketizer
Hi,
I looked a bit through the mail archives and found this question answered in Oct 2011 (see below). It is interesting to find this in the ARM backend, considering your answer. Can you give more information: for example, is this a temporary deficiency in the DFAPacketizer? What is the IIC_iMOVi itinerary doing below?
Thanks,
Jonas
Thu Oct 6 15:11:25 CDT 2011:
Hello Hal.
> Is there
2013 Feb 11
2
[LLVMdev] DFAPacketizer
Jonas,
At this point, the DFA packetizer models a simple VLIW architecture and
does not accommodate multiple stages. That's the reason for the behavior
you're seeing.
-Anshu
---
Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum,
hosted by The Linux Foundation
From: llvmdev-bounces at cs.uiuc.edu [mailto:llvmdev-bounces at cs.uiuc.edu]
On Behalf Of Jonas