While compiling some sources, translating from my compiler's IR to LLVM using the C++ API takes 2.5 seconds. If the resulting LLVM module is dumped as LLVM assembler, the file is 240,000 lines long. Generating LLVM code is fast.

However, generating the native code is quite slow: 33 seconds. I force native code generation by calling ExecutionEngine::getPointerToFunction for each function in the module.

This is on x86/Windows/MinGW. The only pass is TargetData, so no fancy optimizations.

I don't think that a static compiler (llvm-gcc, for instance) needs so much time to generate unoptimized native code for a similarly sized module. Is there something special about the JIT that makes it so slow?

-- Óscar
On Aug 25, 2009, at 1:40 PM, Óscar Fuentes wrote:

> While compiling some sources, translating from my compiler's IR to LLVM
> using the C++ API requires 2.5 seconds. If the resulting LLVM module is
> dumped as LLVM assembler, the file is 240,000 lines long. Generating
> LLVM code is fast.
>
> However, generating the native code is quite slow: 33 seconds. I force
> native code generation calling ExecutionEngine::getPointerToFunction for
> each function on the module.
>
> This is on x86/Windows/MinGW. The only pass is TargetData, so no fancy
> optimizations.
>
> I don't think that a static compiler (llvm-gcc, for instance) needs so
> much time for generating unoptimized native code for a similarly sized
> module. Is there something special about the JIT that makes it so slow?

The JIT uses the entire code generator, which uses N^2 algorithms etc. in some cases. If you care about compile time, I'd strongly suggest using the "local" register allocator and the "-fast" mode. This is what we do for -O0 compiles and it is much, much faster than the defaults. However, you get worse-performing code out of the compiler.

-Chris
On Wed, Aug 26, 2009 at 1:10 AM, Óscar Fuentes <ofv at wanadoo.es> wrote:

> While compiling some sources, translating from my compiler's IR to LLVM
> using the C++ API requires 2.5 seconds. If the resulting LLVM module is
> dumped as LLVM assembler, the file is 240,000 lines long. Generating
> LLVM code is fast.
>
> However, generating the native code is quite slow: 33 seconds. I force
> native code generation calling ExecutionEngine::getPointerToFunction for
> each function on the module.
>
> This is on x86/Windows/MinGW. The only pass is TargetData, so no fancy
> optimizations.
>
> I don't think that a static compiler (llvm-gcc, for instance) needs so
> much time for generating unoptimized native code for a similarly sized
> module. Is there something special about the JIT that makes it so slow?

For comparison, how long does it take to write the whole thing out as native assembler? What optimization level are you using for code generation?

-Eli
Chris Lattner <clattner at apple.com> writes:

> The JIT uses the entire code generator, which uses N^2 algorithms etc.
> in some cases. If you care about compile time, I'd strongly suggest
> using the "local" register allocator and the "-fast" mode. This is
> what we do for -O0 compiles and it is much much faster than the
> defaults.

Okay, I'll do that if some day I figure out how to pass those options to the JIT :-)

> However, you get worse performing code out of the compiler.

This affects the quality of register allocation and instruction selection, but optimization passes (inlining, mem2reg, etc.) are still effective, aren't they?

Thanks.

-- Óscar
Eli Friedman <eli.friedman at gmail.com> writes:

> On Wed, Aug 26, 2009 at 1:10 AM, Óscar Fuentes <ofv at wanadoo.es> wrote:
>> While compiling some sources, translating from my compiler's IR to LLVM
>> using the C++ API requires 2.5 seconds. If the resulting LLVM module is
>> dumped as LLVM assembler, the file is 240,000 lines long. Generating
>> LLVM code is fast.
>>
>> However, generating the native code is quite slow: 33 seconds. I force
>> native code generation calling ExecutionEngine::getPointerToFunction for
>> each function on the module.
>>
>> This is on x86/Windows/MinGW. The only pass is TargetData, so no fancy
>> optimizations.
>>
>> I don't think that a static compiler (llvm-gcc, for instance) needs so
>> much time for generating unoptimized native code for a similarly sized
>> module. Is there something special about the JIT that makes it so slow?
>
> For comparison, how long does it take to write the whole thing out as
> native assembler?

What kind of metric is this? How are string manipulation and I/O a better indication than the number of LLVM assembly lines generated, or the ratio (LLVM IR generation time / native code generation time)?

> What optimization level are you using for code
> generation?

As explained in the original post, there are no optimizations whatsoever. After reading Chris' message, my only hope is to disable the non-linear stuff and still get decent native code.

-- Óscar