Baris Aktemur
2012-Oct-12 18:14 UTC
[LLVMdev] Dynamically loading native code generated from LLVM IR
On 12 Eki 2012, at 20:00, Jim Grosbach wrote:

> On Oct 12, 2012, at 7:07 AM, Baris Aktemur <baris.aktemur at ozyegin.edu.tr> wrote:
>
>> Dear Tim,
>>
>>> The JIT sounds like it does almost exactly what you want. LLVM's JIT
>>> isn't a classical lightweight, dynamic one like you'd see for
>>> JavaScript or Java. All it really does is produce a native .o file in
>>> memory, take care of the relocations for you and then jump into it (or
>>> provide you with a function-pointer). Is there any other reason you
>>> want to avoid it?
>>
>> Based on the experiments I ran, the JIT version runs significantly slower than the code compiled to native. But according to your explanation, this shouldn't happen. I wonder why I observed the performance difference.
>
> Did you compile the native version with any optimizations enabled?

Yes. When I dump the IR, I get the same output as "clang -O3". Are the back-end optimizations enabled separately?

>> Thank you.
>>
>> -Baris Aktemur
>>
>> _______________________________________________
>> LLVM Developers mailing list
>> LLVMdev at cs.uiuc.edu         http://llvm.cs.uiuc.edu
>> http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev
Jim Grosbach
2012-Oct-12 18:17 UTC
[LLVMdev] Dynamically loading native code generated from LLVM IR
On Oct 12, 2012, at 11:14 AM, Baris Aktemur <baris.aktemur at ozyegin.edu.tr> wrote:

> On 12 Eki 2012, at 20:00, Jim Grosbach wrote:
>
>> On Oct 12, 2012, at 7:07 AM, Baris Aktemur <baris.aktemur at ozyegin.edu.tr> wrote:
>>
>>> Based on the experiments I ran, the JIT version runs significantly slower than the code compiled to native. But according to your explanation, this shouldn't happen. I wonder why I observed the performance difference.
>>
>> Did you compile the native version with any optimizations enabled?
>
> Yes. When I dump the IR, I get the same output as "clang -O3". Are the back-end optimizations enabled separately?

Yes, but it's more the code generation model that I suspect is the issue. Specifically, you're likely using SelectionDAGISel for static compilation and FastISel for JIT compilation. The latter generates code very quickly, as the name implies, but the quality of that code is generally pretty poor compared to the static compiler's.
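[Editor's note: Jim's distinction can be made concrete. The sketch below is not from the thread; it assumes the LLVM 3.x-era C++ ExecutionEngine API, in which `EngineBuilder::setOptLevel` is the knob that steers the JIT away from FastISel and toward the optimizing SelectionDAGISel path. It will not compile without the LLVM development libraries of that era.]

```cpp
// Sketch (era-specific assumption): with the old JIT, a higher
// CodeGenOpt level selects the SelectionDAGISel code path instead of
// FastISel, which is what the static compiler uses at -O3.
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/JIT.h"   // links in the old JIT
#include "llvm/Module.h"
#include "llvm/Support/TargetSelect.h"
#include <string>

llvm::ExecutionEngine *createOptimizingJIT(llvm::Module *M) {
  llvm::InitializeNativeTarget();       // required before creating a JIT
  std::string Err;
  llvm::ExecutionEngine *EE =
      llvm::EngineBuilder(M)
          .setErrorStr(&Err)
          .setEngineKind(llvm::EngineKind::JIT)
          .setOptLevel(llvm::CodeGenOpt::Aggressive) // avoid FastISel
          .create();
  return EE;                            // null on failure; Err says why
}
```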
Baris Aktemur
2012-Oct-12 18:28 UTC
[LLVMdev] Dynamically loading native code generated from LLVM IR
On 12 Eki 2012, at 21:14, Baris Aktemur wrote:

> On 12 Eki 2012, at 20:00, Jim Grosbach wrote:
>
>> Did you compile the native version with any optimizations enabled?
>
> Yes. When I dump the IR, I get the same output as "clang -O3". Are the back-end optimizations enabled separately?

Sorry, I misunderstood the question. I compiled the native version with optimizations enabled, using "clang -shared -fPIC -O3". In the version that uses the JIT, I build the IR, then run over it the same passes that "opt -O3" runs, to obtain optimized IR. After these passes run, I call ExecutionEngine::getPointerToFunction().
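[Editor's note: the pipeline Baris describes, replicating "opt -O3" in-process before JITing, is typically set up with `PassManagerBuilder`. This is a hedged sketch against the LLVM 3.x-era pass-manager API, not the thread author's actual code, and it needs the LLVM libraries to build.]

```cpp
// Sketch (era-specific assumption): approximate what "opt -O3" runs
// over a module, using the legacy PassManagerBuilder interface.
#include "llvm/Module.h"
#include "llvm/PassManager.h"
#include "llvm/Transforms/IPO.h"
#include "llvm/Transforms/IPO/PassManagerBuilder.h"

void optimizeLikeO3(llvm::Module &M) {
  llvm::PassManagerBuilder Builder;
  Builder.OptLevel = 3;                                  // mirror -O3
  Builder.Inliner = llvm::createFunctionInliningPass();  // -O3 inlines
  llvm::PassManager PM;                                  // module passes
  Builder.populateModulePassManager(PM);
  PM.run(M);                            // M now holds the optimized IR
}
```

Note that this only covers the IR-level (middle-end) passes; as discussed above, the back-end instruction selection is configured separately on the execution engine.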
Baris Aktemur
2012-Oct-17 16:56 UTC
[LLVMdev] Dynamically loading native code generated from LLVM IR
Dear Jim,

On 12 Eki 2012, at 21:17, Jim Grosbach wrote:

> Yes, but it's more the code generation model that I suspect is the issue. Specifically, you're likely using SelectionDAGISel for static compilation and FastISel for JIT compilation. The latter generates code very quickly, as the name implies, but the quality of that code is generally pretty poor compared to the static compiler's.

Is there an option I can pass to the (MC)JITer to force it to use SelectionDAGISel? I'm also curious which passes/algorithms are used when I set the MCJIT option to true and the opt level to Aggressive, e.g.:

  engineBuilder.setUseMCJIT(true);
  engineBuilder.setOptLevel(llvm::CodeGenOpt::Aggressive);

I adapted lli.cpp to use MCJIT in my code. I get better performance now -- close to statically compiled native code, but still not exactly the same (about 10% slower).

Thank you.

-Baris Aktemur
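[Editor's note: for context, an MCJIT setup along the lines of the lli.cpp adaptation Baris describes might look as follows. This is a hedged sketch against the 2012-era API (`setUseMCJIT` has since been removed; MCJIT later became the default and was itself superseded by ORC). The function name "myfunc" is purely illustrative.]

```cpp
// Sketch (era-specific assumptions): create an MCJIT-backed engine at
// an aggressive codegen level and look up a compiled function.
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/Module.h"
#include "llvm/Support/TargetSelect.h"
#include <string>

void *jitCompile(llvm::Module *M) {
  llvm::InitializeNativeTarget();
  llvm::InitializeNativeTargetAsmPrinter(); // MCJIT emits real object code
  std::string Err;
  llvm::ExecutionEngine *EE =
      llvm::EngineBuilder(M)
          .setErrorStr(&Err)
          .setUseMCJIT(true)                           // 2012-era flag
          .setOptLevel(llvm::CodeGenOpt::Aggressive)   // SelectionDAGISel
          .create();
  if (!EE)
    return 0;
  // "myfunc" is a hypothetical function assumed to exist in M.
  return EE->getPointerToFunction(M->getFunction("myfunc"));
}
```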