Wojciech Daniło
2012-Nov-17 21:09 UTC
[LLVMdev] Dynamic optimization passes in LLVM-based compiler
Thank you for your response :)
I know that an LLVM Pass was designed to transform IR, but let's focus on an
example - an LLVM Pass is a function that transforms some set of inputs into
outputs. It can transform IR into, let's say, a graph of strongly connected
components, and then other passes can use it (that data - not IR) to
generate other data OR to manipulate the IR.

So why can I not create passes that would need data generated by other
passes (i.e. a graph loaded from disk) and then transform it into LLVM IR?
I do not see any difference between these cases.
Am I wrong?

2012/11/17 David Blaikie <dblaikie at gmail.com>
> On Sat, Nov 17, 2012 at 4:44 AM, Wojciech Daniło
> <wojtek.danilo.ml at gmail.com> wrote:
> > Hi!
> > I'm new to LLVM, but I've read tons of articles. I want to implement my
> > own compiler and I came across a big problem.
> > I have several questions that I cannot answer myself:
> >
> > 1) If I'm writing a custom compiler, do I have to "hardcode" the passes
> > it uses (like in the Kaleidoscope example:
> > http://llvm.org/docs/tutorial/LangImpl4.html), or do I have to generate
> > LLVM IR and then use the 'opt' tool to run selected passes on the
> > generated code?
> > I think the solution with opt is not quite good, because the opt tool
> > has to parse the LLVM IR (or BC) input file, which is not needed,
> > because we are generating it, so we already had it in memory.
> > Maybe there is another, better solution allowing for enabling and
> > disabling passes in a custom compiler with argument options like in opt?
>
> I believe Clang just hardcodes passes. If a user wants to experiment with
> different pass options, they can use the option to generate LLVM bitcode
> from Clang and then pass that to opt themselves.
>
> > 2) I want to write a compiler that does NOT generate LLVM IR on its own;
> > it should simply run one of the available module passes, and such a pass
> > will generate LLVM IR.
> > The motivation behind this decision is that I want to have a graph (a
> > serialized C++ structure) as the compiler input, and I want to load this
> > graph as a pass, run other passes (which will modify this graph), and
> > then run a "conversion module pass", which will convert this graph into
> > LLVM IR.
> > Additionally, I want to be able to read several formats, and because of
> > that I want to load this graph as a pass. (This pass will of course be
> > grouped with other "load passes".)
>
> LLVM's pass system is for IR transformations only. Anything else you want
> to do you'll have to build separately/in front of LLVM. Once your other
> system generates IR, then you can pass it to LLVM.
>
> > Could you please tell me what would be the best (most flexible and easy)
> > solution to do this, keeping in mind the first question?
> >
> > I have an idea for a solution (which does not work completely) - the
> > idea is to create a compiler which will initialize the base module and
> > will do nothing at all. Then I can use the opt tool with my module
> > passes, which will load and modify the graph and convert it to LLVM IR
> > (with IRBuilder) - the problem is whether opt can be run without an
> > input file and whether it will handle this situation correctly.
> >
> > I was researching for a very long time and I have not found any good
> > answer for these problems.
> > I would be very thankful for any help!
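(For question 1, a common alternative to shelling out to opt is to link the
LLVM libraries and run the pipeline on the in-memory module directly. Below
is only a rough sketch against the LLVM 3.x legacy PassManager /
PassManagerBuilder API of the time; optimizeModule and its OptLevel
parameter are made-up names, not anything from this thread.)

    // Hypothetical sketch, LLVM 3.x era headers and (legacy) pass manager.
    #include "llvm/Module.h"
    #include "llvm/PassManager.h"
    #include "llvm/Analysis/Verifier.h"
    #include "llvm/Transforms/IPO/PassManagerBuilder.h"

    // Run a standard -O style pipeline on a module we already hold in
    // memory, so nothing has to be written to bitcode and re-parsed by opt.
    void optimizeModule(llvm::Module &M, unsigned OptLevel) {
      llvm::PassManagerBuilder Builder;       // mirrors clang/opt -O pipelines
      Builder.OptLevel = OptLevel;            // e.g. taken from a -O flag

      llvm::PassManager PM;                   // module-level pass manager
      Builder.populateModulePassManager(PM);  // hardcoded, but tunable
      PM.add(llvm::createVerifierPass());     // sanity-check the result
      PM.run(M);                              // mutates the in-memory module
    }

Your own passes can be PM.add()'ed into the same PassManager in whatever
order a command-line option dictates, which is essentially what opt does
internally after parsing its input.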
David Blaikie
2012-Nov-17 21:19 UTC
[LLVMdev] Dynamic optimization passes in LLVM-based compiler
On Sat, Nov 17, 2012 at 1:09 PM, Wojciech Daniło
<wojtek.danilo.ml at gmail.com> wrote:
> Thank you for your response :)
> I know that an LLVM Pass was designed to transform IR, but let's focus on
> an example - an LLVM Pass is a function that transforms some set of inputs
> into outputs. It can transform IR into, let's say, a graph of strongly
> connected components, and then other passes can use it (that data - not
> IR) to generate other data OR to manipulate the IR.
>
> So why can I not create passes that would need data generated by other
> passes (i.e. a graph loaded from disk) and then transform it into LLVM IR?
> I do not see any difference between these cases.
> Am I wrong?

A little. That would be stretching the concepts/machinery of LLVM a little
bit far, probably.

A few minor corrections:

Transformations in the LLVM sense are always IR to IR.
When you talk about SCCs & the like, those are analyses - an Analysis never
modifies the IR, it only computes values from the IR it's given.
Transformations then depend on (& invalidate) analyses to decide what
transformations to perform.

What you're proposing is an analysis that doesn't analyze the IR at all
(because there is none) - it loads information from an external source.
There is one example of that (though I'm not sure if it's phrased as an
Analysis) that I can think of in LLVM today: profile-guided optimization.
The profile must be loaded from some external source, references built up
to the IR, and then Transformations can depend on this information when
choosing how to optimize.

Effectively your graph transformations would exist purely as analyses -
transforming non-IR data from pass to pass until you reached some
transformation that would transform null IR into the actual IR represented
by the graph from the analyses.

It's not really going to give you a lot of value compared to just building
your own graph transformation pipeline & then producing IR at the end of
that.

To come back to your original question: "I want to write a compiler that
does NOT generate LLVM IR on its own, it should simply run one of the
available module passes and such a pass will generate LLVM IR" - why do you
want to do this? You're just going to have to write the graph-to-IR
transformation sooner or later anyway. Why not do it as the first step &
then do IR-level optimizations? (I'm not saying there's no reason to do
this, I'm just wondering what /your/ reasons are)

- David
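(A minimal sketch of the shape described above, against the LLVM 3.x pass
API of the time: an ImmutablePass that only carries externally loaded data,
plus a module pass that requires it and emits IR. GraphData,
loadGraphFromDisk and the pass names are made up for illustration.)

    // Hypothetical sketch against the LLVM 3.x (legacy) pass API.
    #include "llvm/Pass.h"
    #include "llvm/Module.h"

    namespace {

    struct GraphData { /* the deserialized C++ structure */ };

    // Stub; a real loader would deserialize the on-disk format.
    static GraphData loadGraphFromDisk() { return GraphData(); }

    // An "analysis" that never looks at the IR: it just holds external data.
    struct GraphLoader : public llvm::ImmutablePass {
      static char ID;
      GraphData Graph;
      GraphLoader() : ImmutablePass(ID), Graph(loadGraphFromDisk()) {}
    };
    char GraphLoader::ID = 0;

    // A module pass that depends on the loaded graph and fills the module.
    struct GraphToIR : public llvm::ModulePass {
      static char ID;
      GraphToIR() : ModulePass(ID) {}

      virtual void getAnalysisUsage(llvm::AnalysisUsage &AU) const {
        AU.addRequired<GraphLoader>();          // pull in the external data
      }

      virtual bool runOnModule(llvm::Module &M) {
        GraphData &G = getAnalysis<GraphLoader>().Graph;
        (void)G;  // ... walk G and build functions into M with IRBuilder ...
        return true;                            // the module was modified
      }
    };
    char GraphToIR::ID = 0;

    // Registration so the passes are also usable from opt via -load.
    static llvm::RegisterPass<GraphLoader>
        X("graph-load", "Load serialized graph (analysis only)", false, true);
    static llvm::RegisterPass<GraphToIR>
        Y("graph-to-ir", "Convert loaded graph to LLVM IR", false, false);

    } // end anonymous namespace

    // Typical in-process use:
    //   llvm::PassManager PM;
    //   PM.add(new GraphLoader());
    //   PM.add(new GraphToIR());
    //   PM.run(M);   // M starts out empty; GraphToIR fills it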
Wojciech Daniło
2012-Nov-17 21:56 UTC
[LLVMdev] Dynamic optimization passes in LLVM-based compiler
> > I know that LLVM Pass was designed to transform IR, but let's focus on
> > an example - an LLVM Pass is a function that transforms some set of
> > inputs into outputs. It can transform IR into, let's say, a graph of
> > strongly connected components, and then other passes can use it (that
> > data - not IR) to generate other data OR to manipulate the IR.
> >
> > So why can I not create passes that would need data generated by other
> > passes (i.e. a graph loaded from disk) and then transform it into LLVM
> > IR? I do not see any difference between these cases.
> > Am I wrong?
>
> A little. That would be stretching the concepts/machinery of LLVM a little
> bit far, probably.
>
> A few minor corrections:
>
> Transformations in the LLVM sense are always IR to IR.
> When you talk about SCCs & the like, those are analyses - an Analysis
> never modifies the IR, it only computes values from the IR it's given.
> Transformations then depend on (& invalidate) analyses to decide what
> transformations to perform.

You are right, my nomenclature was wrong - I want to write analysis passes
and one transformation pass generating LLVM IR.

> What you're proposing is an analysis that doesn't analyze the IR at all
> (because there is none) - it loads information from an external source.
> There is one example of that (though I'm not sure if it's phrased as an
> Analysis) that I can think of in LLVM today: profile-guided optimization.
> The profile must be loaded from some external source, references built up
> to the IR, and then Transformations can depend on this information when
> choosing how to optimize.
>
> Effectively your graph transformations would exist purely as analyses -
> transforming non-IR data from pass to pass until you reached some
> transformation that would transform null IR into the actual IR represented
> by the graph from the analyses.
>
> It's not really going to give you a lot of value compared to just building
> your own graph transformation pipeline & then producing IR at the end of
> that.

It allows me to use the LLVM pass dependency manager - with analysis groups
etc. I would have to write exactly the same thing the other way, I think.

> To come back to your original question: "I want to write a compiler that
> does NOT generate LLVM IR on its own, it should simply run one of the
> available module passes and such a pass will generate LLVM IR" - why do
> you want to do this? You're just going to have to write the graph-to-IR
> transformation sooner or later anyway. Why not do it as the first step &
> then do IR-level optimizations? (I'm not saying there's no reason to do
> this, I'm just wondering what /your/ reasons are)

The answer is simple - the graph loaded from disk contains a lot more
information than the generated IR, so I want to do some transformations at
the beginning. (There are other reasons, but this one is among the biggest.)
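(Since analysis groups came up: a rough sketch of how several "load passes"
for different input formats could implement one interface, so consumers
depend only on the interface. This follows the out-of-tree RegisterPass /
RegisterAnalysisGroup style from the "Writing an LLVM Pass" documentation of
that era; GraphSource, FileGraphSource and the pass names are made up.)

    // Hypothetical sketch of an analysis group, legacy LLVM 3.x pass API.
    #include "llvm/Pass.h"

    namespace {

    struct GraphData { /* the deserialized graph */ };

    // The interface all loaders implement; consumers depend only on this.
    struct GraphSource {
      static char ID;                  // analysis group ID
      virtual ~GraphSource() {}
      virtual GraphData &getGraph() = 0;
    };
    char GraphSource::ID = 0;

    // One concrete loader (another one could read a different format).
    struct FileGraphSource : public llvm::ImmutablePass, public GraphSource {
      static char ID;
      GraphData Graph;
      FileGraphSource() : ImmutablePass(ID) {}
      virtual GraphData &getGraph() { return Graph; }

      // Needed because of multiple inheritance: lets the pass manager find
      // the GraphSource interface inside this pass object.
      virtual void *getAdjustedAnalysisPointer(const void *PI) {
        if (PI == &GraphSource::ID)
          return (GraphSource *)this;
        return this;
      }
    };
    char FileGraphSource::ID = 0;

    // Register the group, the loader, and mark the loader as the default.
    static llvm::RegisterAnalysisGroup<GraphSource> A("Graph Source");
    static llvm::RegisterPass<FileGraphSource>
        B("graph-from-file", "Load graph from a serialized file", false, true);
    static llvm::RegisterAnalysisGroup<GraphSource, true> C(B);

    } // end anonymous namespace

    // A graph transformation pass would then just say, in getAnalysisUsage:
    //   AU.addRequired<GraphSource>();
    // and in runOnModule:
    //   GraphData &G = getAnalysis<GraphSource>().getGraph();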