Terry Guo via llvm-dev
2020-Jun-16 15:39 UTC
[llvm-dev] Need help on JIT compilation speed
Hi there,

I am trying to JIT a rather large wasm bytecode program to x86 native code and am running into a JIT compilation time issue. In the first stage, I translate the wasm bytecode into a single LLVM IR module that ends up with 927 functions, and then use MCJIT to generate x86 code from it. Applying several optimization passes to this big IR module and generating the native code takes a pretty long time. What should I do to shorten the compilation time? Is it possible to compile this single big IR module with MCJIT in parallel? Is the OrcV2 JIT faster than MCJIT? Can the 'concurrent compilation' feature mentioned on the OrcV2 webpage help here? Thanks in advance for any advice.

This is how I set up the optimization passes:

LLVMAddBasicAliasAnalysisPass(comp_ctx->pass_mgr);
LLVMAddPromoteMemoryToRegisterPass(comp_ctx->pass_mgr);
LLVMAddInstructionCombiningPass(comp_ctx->pass_mgr);
LLVMAddJumpThreadingPass(comp_ctx->pass_mgr);
LLVMAddConstantPropagationPass(comp_ctx->pass_mgr);
LLVMAddReassociatePass(comp_ctx->pass_mgr);
LLVMAddGVNPass(comp_ctx->pass_mgr);
LLVMAddCFGSimplificationPass(comp_ctx->pass_mgr);

This is how I apply the passes to my single IR module (which includes the 927 functions):

if (comp_ctx->optimize) {
    LLVMInitializeFunctionPassManager(comp_ctx->pass_mgr);
    for (i = 0; i < comp_ctx->func_ctx_count; i++)
        LLVMRunFunctionPassManager(comp_ctx->pass_mgr,
                                    comp_ctx->func_ctxes[i]->func);
}

BR,
Terry
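P.S. For reference, a minimal sketch of an alternative setup using the C++ legacy pass-manager API rather than the C bindings above (the optimizeModule name and the -O1 level are assumptions for illustration, not part of the code in question). Letting PassManagerBuilder populate a standard function-level pipeline makes it easy to dial the optimization level down when compile time dominates, and the sketch pairs doInitialization() with the doFinalization() call that the C snippet above omits.

#include "llvm/IR/LegacyPassManager.h"
#include "llvm/IR/Module.h"
#include "llvm/Transforms/IPO/PassManagerBuilder.h"

using namespace llvm;

// Run a standard -O1 function-level pipeline over every defined function in M.
// M is assumed to be the single big module built from the wasm bytecode.
void optimizeModule(Module &M) {
    legacy::FunctionPassManager FPM(&M);

    PassManagerBuilder PMB;
    PMB.OptLevel = 1; // cheaper than O2/O3; raise it if runtime matters more than compile time
    PMB.populateFunctionPassManager(FPM);

    FPM.doInitialization();
    for (Function &F : M)
        if (!F.isDeclaration())
            FPM.run(F);
    FPM.doFinalization(); // matching finalize step, missing from the C snippet above
}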
Praveen Velliengiri via llvm-dev
2020-Jun-16 16:12 UTC
[llvm-dev] Need help on JIT compilation speed
Hi Terry,

CC'ing Lang Hames; he is the best person to answer this.

In general, ORCv2 is the new and stable JIT environment. To get faster compilation you can use lazy compilation in ORCv2, which defers compiling each function until it is first called and so interleaves compile time with execution time. You can also use ORCv2's concurrent compilation support for a further speedup. Additionally, there is a newer ORCv2 feature called "speculative compilation" which yields good results on a set of benchmarks; if you are interested, please try it out. We would like to have some benchmark numbers from your use case. :)

To try these out, have a look at the ExecutionEngine examples directory in the LLVM source tree.

I hope this helps.
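As a rough starting point, here is a sketch using the C++ ORCv2 API (the thread count and the buildConcurrentJIT helper name are only illustrative) of building an LLJIT instance with a small pool of compile threads. With a non-zero number of compile threads, ORC dispatches compilation work to the pool; lazy compilation (LLLazyJIT with addLazyIRModule) additionally defers each function's compilation until its first call.

#include "llvm/ExecutionEngine/Orc/LLJIT.h"
#include "llvm/ExecutionEngine/Orc/ThreadSafeModule.h"
#include "llvm/Support/Error.h"

using namespace llvm;
using namespace llvm::orc;

// Build an LLJIT with a pool of compile threads and hand it one
// ThreadSafeModule (e.g. the module produced from the wasm bytecode).
Expected<std::unique_ptr<LLJIT>> buildConcurrentJIT(ThreadSafeModule TSM) {
    auto J = LLJITBuilder()
                 .setNumCompileThreads(4) // compile jobs are dispatched to 4 worker threads
                 .create();
    if (!J)
        return J.takeError();

    if (Error Err = (*J)->addIRModule(std::move(TSM)))
        return std::move(Err);

    // Symbols are compiled when they are looked up; with multiple compile
    // threads, independent compilations can proceed in parallel.
    return J;
}

For concurrency and laziness to pay off on a module like yours, it also helps to split the single 927-function module into smaller per-function or per-group modules, so the compile threads have independent units of work.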
Terry Guo via llvm-dev
2020-Jun-18 01:47 UTC
[llvm-dev] Need help on JIT compilation speed
Hi Praveen,

Thanks for your help. I will follow your suggestions and get back to you if I make some progress.

BR,
Terry