Xin Tong Utoronto
2011-Mar-31 21:35 UTC
[LLVMdev] GSOC Adaptive Compilation Framework for LLVM JIT Compiler
On Tue, Mar 29, 2011 at 4:45 PM, Eric Christopher <echristo at apple.com> wrote:

>> Project Outline:
>>
>> Currently, the LLVM JIT serves as a management layer for the executed LLVM IR: it manages the compiled code and calls the LLVM code generator to do the real work. The code generator offers several optimization levels, and depending on how much optimization it is asked to do, the time it takes can vary significantly. The adaptive compilation mechanism should detect when a method is getting hot and compile or recompile it at an appropriate optimization level, and this should happen transparently to the running application. To keep track of how many times a JITed function is called, instrumentation code is inserted into the function's LLVM bitcode before it is sent to the code generator. This code increments a counter each time the function is called, and when the counter reaches a threshold the function gives control back to the LLVM JIT. The JIT then looks at the hotness of all the methods, finds the one that triggered the recompilation threshold, and can raise its optimization level using the algorithm below or some other algorithm developed later.
>>
>> IF (getCompilationCount(method) > 50 in the last 100 samples)
>>     Recompile at Aggressive
>> ELSE
>>     Recompile at the next optimization level
>>
>> Even though the invocation counting introduces a few extra instructions into the binary, the advantages of adaptive optimization should far outweigh that cost. Note that the adaptive compilation framework I propose here is orthogonal to LLVM's profile-guided optimizations: profile-guided optimization uses profiling or other external information to decide how to optimize the code, whereas the adaptive compilation framework is concerned with which level of optimization to apply, not with how the optimizations themselves are performed.
>
> So, one way that current projects use the JIT is via getPointerToFunction(), which returns an address that can then be casted and called with the appropriate arguments. The compile task itself is often done on a separate thread. How would you deal with the updating problem in the calling application? What sort of use cases for the JIT have you looked at so far?

I assume the updating problem means the problem that arises when a method gets recompiled. Here is an algorithm to deal with that. Say A calls B. When B gets recompiled, we patch the beginning of B's code with "br helper". Then, when A calls B, B branches to the helper and the helper patches the "br B" in A with "br newB". Since we don't know all the callers of B, we have to wait until they call B to find out who they are and patch them one by one. The helper can get the address of the "br B" in A from the link register or from some designated registers or memory locations. For newly compiled code, the address of newB can be used directly. There is another problem with recompilation: obsolete methods (methods that have recompiled copies) need to be recycled. To do that, we keep a "br helper" in place of the old method and reclaim the old method body.
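For concreteness, here is a minimal sketch of the entry-block counter instrumentation described in the outline above, written against a recent LLVM C++ API (the 2011-era headers and JIT interfaces differ in detail). The per-function counter global, the threshold handling, and the runtime hook __jit_function_hot are assumptions made up for illustration, not existing LLVM facilities:

#include "llvm/IR/Constants.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/GlobalVariable.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Module.h"
#include "llvm/Transforms/Utils/BasicBlockUtils.h"

using namespace llvm;

// Insert "if (++counter == Threshold) call __jit_function_hot()" at the entry
// of F, before F is handed to the code generator.
void insertInvocationCounter(Function &F, uint64_t Threshold) {
  Module *M = F.getParent();
  LLVMContext &Ctx = M->getContext();
  IntegerType *I64 = Type::getInt64Ty(Ctx);

  // One internal counter global per instrumented function (hypothetical naming).
  auto *Counter = new GlobalVariable(*M, I64, /*isConstant=*/false,
                                     GlobalValue::InternalLinkage,
                                     ConstantInt::get(I64, 0),
                                     F.getName() + ".invocations");

  // Runtime hook the JIT would expose; the name is an assumption, not an LLVM API.
  FunctionCallee Hook =
      M->getOrInsertFunction("__jit_function_hot", Type::getVoidTy(Ctx));

  // counter = counter + 1 at the top of the entry block.
  BasicBlock &Entry = F.getEntryBlock();
  IRBuilder<> B(&Entry, Entry.getFirstInsertionPt());
  Value *Old = B.CreateLoad(I64, Counter, "inv");
  Value *New = B.CreateAdd(Old, ConstantInt::get(I64, 1), "inv.next");
  B.CreateStore(New, Counter);

  // When the threshold is hit (exactly once), give control back to the JIT.
  Value *IsHot = B.CreateICmpEQ(New, ConstantInt::get(I64, Threshold));
  Instruction *ThenTerm =
      SplitBlockAndInsertIfThen(IsHot, &*B.GetInsertPoint(), /*Unreachable=*/false);
  IRBuilder<>(ThenTerm).CreateCall(Hook);
}

The hook is where the JIT would record the hotness sample and queue the function for recompilation.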
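And here is a small, self-contained reading of the level-raising rule quoted above, loosely modeled on LLVM's None/Less/Default/Aggressive code generation levels; the class name and the sample-window bookkeeping are illustrative assumptions rather than existing JIT code:

#include <algorithm>
#include <cstddef>
#include <deque>
#include <map>
#include <string>

// Optimization levels, loosely modeled on LLVM's code generation levels.
enum class OptLevel { None, Less, Default, Aggressive };

class RecompilationPolicy {
  std::deque<std::string> Samples;        // the last Window hotness samples
  std::map<std::string, OptLevel> Levels; // current level of each function
  static constexpr std::size_t Window = 100;

public:
  // Called whenever an instrumented function reports itself as hot.
  void recordSample(const std::string &Fn) {
    Samples.push_back(Fn);
    if (Samples.size() > Window)
      Samples.pop_front();
  }

  // Decide the level for the function whose counter just crossed the threshold.
  OptLevel nextLevel(const std::string &Fn) {
    std::size_t Hits = static_cast<std::size_t>(
        std::count(Samples.begin(), Samples.end(), Fn));
    OptLevel Cur = Levels.count(Fn) ? Levels[Fn] : OptLevel::None;
    OptLevel New =
        Hits > 50 ? OptLevel::Aggressive
                  : static_cast<OptLevel>(
                        std::min(static_cast<int>(Cur) + 1,
                                 static_cast<int>(OptLevel::Aggressive)));
    Levels[Fn] = New;
    return New;
  }
};

Tuning then amounts to varying the window size and the 50-sample cutoff, which is the kind of experiment planned for weeks 10 - 13 below.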
As for use cases, the LLVM JIT is used as an execution engine for a number of ported languages, for example the JIT compiler for PHP from the 2008 GSoC. There are also people using the LLVM JIT for industry work: https://llvm.org/svn/llvm-project/www-pubs/trunk/2010-01-Wennborg-Thesis.pdf. As LLVM grows more and more powerful, the LLVM JIT will become more and more attractive to language designers and implementers, and I think that is one of the most important reasons we need an adaptive compilation framework. The framework can also work together with LLVM's profile-guided optimizations to make the LLVM JIT a much faster execution engine.

>> This is a relatively small project and does not involve a lot of coding, but a good portion of the time will be spent benchmarking, tuning and experimenting with different algorithms, e.g. what the algorithm should be for raising the compilation level when a method's recompilation threshold is reached, whether that algorithm can itself be made adaptive, etc. Therefore, my timeline for the project is as follows.
>>
>> Week 1
>> Benchmark the current LLVM JIT compiler, measuring the compilation-speed differences between the different levels of compilation. This information is needed to understand why one heuristic will outperform another.
>>
>> Week 10 - 13
>> Benchmarking, tuning and experimenting with different recompilation algorithms. Typical benchmarking test cases would be
>
> What do you have in mind for benchmarking? Which of the jitted problems were you looking at, or just running large programs through lli and that interface? (Which isn't threaded and therefore doesn't have the problems I mentioned above - it has other problems.)

Widely known benchmarks, such as SPEC CPU, would be good candidates. In addition to these benchmarks, we may want to introduce some tests specific to just-in-time compilers: ones where a small portion of the methods takes up 80%+ of the time, ones where all the methods spend about the same amount of time, and ones in between the two.

>> Week 14
>> Test and organize code. Documentation.
>
> As a general note, all of these things would need to be done during the project along with incremental changes made to the repository (on a branch if possible).
>
>> Overall Goals:
>>
>> My main goal at the end of the summer is to have an automated profiling and adaptive compilation framework for LLVM. Even though the performance improvements are still unclear at this point, I believe this adaptive compilation framework will give noticeable performance benefits, as the current JIT compilation is either too simple to produce reasonably fast code or too expensive to apply to all functions.
>
> My comments above aside, I think this is a great idea for a project. It is aggressive, so the amount of time you put in will likely be larger than for a scaled-back project.

From the questions you asked, I now understand why this project might take more time than I originally anticipated. Thank you.

- Xin

> -eric

--
Kind Regards
Xin Tong
Eric Christopher
2011-Mar-31 22:47 UTC
[LLVMdev] GSOC Adaptive Compilation Framework for LLVM JIT Compiler
>> So, one way that current projects use the JIT is via getPointerToFunction(), which returns an address that can then be casted and called with the appropriate arguments. The compile task itself is often done on a separate thread. How would you deal with the updating problem in the calling application? What sort of use cases for the JIT have you looked at so far?
>
> I assume the updating problem means the problem that arises when a method gets recompiled. Here is an algorithm to deal with that. Say A calls B. When B gets recompiled, we patch the beginning of B's code with "br helper". Then, when A calls B, B branches to the helper and the helper patches the "br B" in A with "br newB". Since we don't know all the callers of B, we have to wait until they call B to find out who they are and patch them one by one. The helper can get the address of the "br B" in A from the link register or from some designated registers or memory locations. For newly compiled code, the address of newB can be used directly. There is another problem with recompilation: obsolete methods (methods that have recompiled copies) need to be recycled. To do that, we keep a "br helper" in place of the old method and reclaim the old method body.

This all assumes that you have control over where the parent (for lack of a better term) calls the function you're compiling. This method of replacement only works when you call a stub in place of the function - since the JIT owns the stub you'll have a relocation to replace. If you are giving out an actual address that is the real start of the function, this won't work since you'll have no way of updating it.

Just some food for thought.

> As for use cases, the LLVM JIT is used as an execution engine for a number of ported languages, for example the JIT compiler for PHP from the 2008 GSoC. There are also people using the LLVM JIT for industry work: https://llvm.org/svn/llvm-project/www-pubs/trunk/2010-01-Wennborg-Thesis.pdf. As LLVM grows more and more powerful, the LLVM JIT will become more and more attractive to language designers and implementers, and I think that is one of the most important reasons we need an adaptive compilation framework. The framework can also work together with LLVM's profile-guided optimizations to make the LLVM JIT a much faster execution engine.

Heh. Not quite what I meant by use cases :) I meant some "ways that you expect people will take the address of the code you are providing to run".

>> What do you have in mind for benchmarking? Which of the jitted problems were you looking at, or just running large programs through lli and that interface? (Which isn't threaded and therefore doesn't have the problems I mentioned above - it has other problems.)
>
> Widely known benchmarks, such as SPEC CPU, would be good candidates. In addition to these benchmarks, we may want to introduce some tests specific to just-in-time compilers: ones where a small portion of the methods takes up 80%+ of the time, ones where all the methods spend about the same amount of time, and ones in between the two.

*nod* You'll also want to test short-lifetime code to make certain that you aren't regressing its performance too much as well.

> From the questions you asked, I now understand why this project might take more time than I originally anticipated. Thank you.

Thanks for looking into it. I think it's a great idea for a GSoC project.

-eric
Xin Tong Utoronto
2011-Apr-01 03:06 UTC
[LLVMdev] GSOC Adaptive Compilation Framework for LLVM JIT Compiler
On Thu, Mar 31, 2011 at 6:47 PM, Eric Christopher <echristo at apple.com> wrote:

>>> So, one way that current projects use the JIT is via getPointerToFunction(), which returns an address that can then be casted and called with the appropriate arguments. The compile task itself is often done on a separate thread. How would you deal with the updating problem in the calling application? What sort of use cases for the JIT have you looked at so far?
>>
>> I assume the updating problem means the problem that arises when a method gets recompiled. Here is an algorithm to deal with that. Say A calls B. When B gets recompiled, we patch the beginning of B's code with "br helper". Then, when A calls B, B branches to the helper and the helper patches the "br B" in A with "br newB". Since we don't know all the callers of B, we have to wait until they call B to find out who they are and patch them one by one. The helper can get the address of the "br B" in A from the link register or from some designated registers or memory locations. For newly compiled code, the address of newB can be used directly. There is another problem with recompilation: obsolete methods (methods that have recompiled copies) need to be recycled. To do that, we keep a "br helper" in place of the old method and reclaim the old method body.
>
> This all assumes that you have control over where the parent (for lack of a better term) calls the function you're compiling. This method of replacement only works when you call a stub in place of the function - since the JIT owns the stub you'll have a relocation to replace. If you are giving out an actual address that is the real start of the function, this won't work since you'll have no way of updating it.
>
> Just some food for thought.

No, we will always have control over where the parent calls the functions that we are recompiling, as explained in the example below.

Original code:

  Binary for A:       Binary for B:
  ...                 ...
  br B                ...
  ...                 ...

After B is recompiled, we patch the entry of B with "br helper":

  Binary for A:       Binary for B:       Binary for Recompiled B:
  ...                 br helper           ...
  br B                ...                 ...
  ...                 ...                 ...

Now when the parent A calls B, B branches to the helper. We get the address of the "br B" in A from the return address pushed onto the stack or saved in the link register, and the helper then patches the call site in A.

After patching:

  Binary for A:       Binary for B:       Binary for Recompiled B:
  ...                 br helper           ...
  br Recompiled B     ...                 ...
  ...                 ...                 ...
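The scheme above assumes every call site can eventually be found and patched. For the case Eric raises, where the client simply keeps the raw address returned by getPointerToFunction(), the alternative is the JIT-owned stub he mentions. Here is a minimal model of that indirection in plain C++; the names and the function-pointer slot are illustrative assumptions, since a real JIT patches the stub's machine code rather than swapping a pointer:

#include <atomic>
#include <cstdio>

using FnPtr = long (*)(long);

// The JIT-owned "stub": a slot the client calls through instead of holding
// the raw address produced by code generation.
struct CallSlot {
  std::atomic<FnPtr> Target;
  long call(long Arg) { return Target.load(std::memory_order_acquire)(Arg); }
};

// Two generations of the same function: the cheap first compile and the
// version produced after recompiling at a higher optimization level.
static long squareV1(long X) { return X * X; }
static long squareV2(long X) { return X * X; }

int main() {
  CallSlot Square;
  Square.Target.store(squareV1);

  // Client code keeps the slot, not squareV1's address.
  std::printf("%ld\n", Square.call(6));

  // "Recompilation": the JIT swaps the slot. Existing callers pick up the
  // new code on their next call without any patching of their own code.
  Square.Target.store(squareV2, std::memory_order_release);
  std::printf("%ld\n", Square.call(6));
  return 0;
}

The cost of the slot is an extra indirect call on every invocation, which call-site patching avoids once the callers have been updated.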
>>> What do you have in mind for benchmarking? Which of the jitted problems were you looking at, or just running large programs through lli and that interface? (Which isn't threaded and therefore doesn't have the problems I mentioned above - it has other problems.)
>>
>> Widely known benchmarks, such as SPEC CPU, would be good candidates. In addition to these benchmarks, we may want to introduce some tests specific to just-in-time compilers: ones where a small portion of the methods takes up 80%+ of the time, ones where all the methods spend about the same amount of time, and ones in between the two.
>
> *nod* You'll also want to test short-lifetime code to make certain that you aren't regressing its performance too much as well.
>
>> From the questions you asked, I now understand why this project might take more time than I originally anticipated. Thank you.
>
> Thanks for looking into it. I think it's a great idea for a GSoC project.
>
> -eric

--
Kind Regards
Xin Tong
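For reference, the "small portion of the methods takes 80%+ of the time" shape mentioned in the benchmarking discussion above can be approximated by a tiny driver like the following; the numbers and names are arbitrary illustrations rather than an actual test from the project:

#include <cstdio>

static volatile long Sink; // keeps the loops from being optimized away

static void hot(long N)  { for (long I = 0; I < N; ++I) Sink = Sink + I; }
static void cold(long N) { for (long I = 0; I < N; ++I) Sink = Sink - I; }

int main() {
  // 80% of the calls land in hot(); cold() stands in for the long tail of
  // methods that are never worth recompiling at a higher level.
  for (int I = 0; I < 10000; ++I) {
    if (I % 5 != 0)
      hot(1000);
    else
      cold(1000);
  }
  std::printf("sink = %ld\n", static_cast<long>(Sink));
  return 0;
}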