Prakash Prabhu
2008-May-27 21:47 UTC
[LLVMdev] JIT question: Inner workings of getPointerToFunction()
Hi,

I was just reading through the Kaleidoscope tutorial (which is very well written and understandable, thanks!), hoping to get a glimpse of how the JIT works and what optimizations are done at run time. I am curious how LLVM's JIT dynamically generates native code from bitcode at run time and then runs that code. (My question is also somewhat more general: how does any JIT system translate some form of low-level IR, which is presumably just data to the JIT, into native code that is actually made executable at run time?) Specifically, in the following code snippet (from the tutorial), how does getPointerToFunction() actually generate native code for the function LF, and how does the call through FP succeed as if FPtr were a pointer to statically compiled code?

  // JIT the function, returning a function pointer.
  void *FPtr = TheExecutionEngine->getPointerToFunction(LF);

  // Cast it to the right type (takes no arguments, returns a double) so we
  // can call it as a native function.
  double (*FP)() = (double (*)())FPtr;

I took a look at getPointerToFunction() and it seems to call materializeFunction (is this the run-time code generator?), where most of the work is done. It would be great if you could point out a good starting place in the source, and any relevant documentation, for understanding the whole JIT'ing process (I have read the paper about Jello). Also, are there any dynamic optimizations currently done using the JIT?

Thanks for your time!

- Prakash
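To make the general question concrete, here is a minimal sketch of the mechanism any JIT relies on at the lowest level: obtain memory the OS will let you execute, copy native instructions into it, then call through a cast function pointer. This is not LLVM code; it assumes an x86-64 POSIX system, and the hard-coded bytes are simply a function that returns 42.

  #include <sys/mman.h>
  #include <cstring>
  #include <cstdio>

  int main() {
    // x86-64 machine code for:  mov eax, 42 ; ret
    unsigned char Code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

    // 1. Ask the OS for memory that may be both written and executed.
    //    (Some hardened systems forbid W+X mappings; there you map the
    //    page writable first and mprotect() it to executable afterwards.)
    void *Mem = mmap(0, sizeof(Code), PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (Mem == MAP_FAILED)
      return 1;

    // 2. "Code generation": copy the native instructions into that memory.
    std::memcpy(Mem, Code, sizeof(Code));

    // 3. Treat the raw address as a function, exactly like the cast of
    //    FPtr in the tutorial snippet above.
    int (*FP)() = (int (*)())Mem;
    std::printf("%d\n", FP());   // prints 42

    munmap(Mem, sizeof(Code));
    return 0;
  }

LLVM's JIT does the same thing in the end, except that the bytes placed into executable memory come from running the target code generator on the LLVM IR for LF rather than from a hard-coded array.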
Evan Cheng
2008-May-28 00:03 UTC
[LLVMdev] JIT question: Inner workings of getPointerToFunction()
On May 27, 2008, at 2:47 PM, Prakash Prabhu wrote:

> Specifically, in the following code snippet (from the tutorial), how does
> getPointerToFunction() actually generate native code for the function LF,
> and how does the call through FP succeed as if FPtr were a pointer to
> statically compiled code?
> [...]
> Also, are there any dynamic optimizations currently done using the JIT?

JIT::getPointerToFunction() calls runJITOnFunction(F), which starts the whole code generation process. The LLVM JIT doesn't do any dynamic optimizations at this point.

Evan
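A rough structural sketch of the flow described above, assuming only the overall shape (IRFunction, Body and ToyJIT are stand-ins, not LLVM API): getPointerToFunction() consults a cache of already-emitted functions and invokes the code generator (runJITOnFunction in LLVM's case) only on the first request.

  #include <cstdio>
  #include <map>
  #include <string>

  // Stand-in for llvm::Function.  In this toy, "code generation" just
  // hands back a pointer to a host function implementing the body.
  struct IRFunction {
    std::string Name;
    double (*Body)();
  };

  class ToyJIT {
    std::map<IRFunction *, void *> GlobalAddress;  // already-emitted code

    // Stand-in for the real code generator entry point.  In LLVM this is
    // where instruction selection, register allocation and machine-code
    // emission into executable memory actually happen.
    void *runJITOnFunction(IRFunction *F) {
      std::printf("codegen for %s\n", F->Name.c_str());
      return (void *)F->Body;
    }

  public:
    // Compile on first request, then return the cached entry point.
    void *getPointerToFunction(IRFunction *F) {
      void *&Addr = GlobalAddress[F];
      if (!Addr)
        Addr = runJITOnFunction(F);
      return Addr;
    }
  };

  static double AnonExprBody() { return 4.2; }

  int main() {
    IRFunction LF = { "anon_expr", &AnonExprBody };
    ToyJIT TheExecutionEngine;

    // Same cast-and-call pattern as in the tutorial.
    void *FPtr = TheExecutionEngine.getPointerToFunction(&LF);
    double (*FP)() = (double (*)())FPtr;
    std::printf("%f\n", FP());                     // triggers "codegen"

    FP = (double (*)())TheExecutionEngine.getPointerToFunction(&LF);
    std::printf("%f\n", FP());                     // served from the cache
    return 0;
  }

In the real JIT the "codegen" step emits actual machine code into executable memory, as in the earlier sketch, and the resulting address is cached so later requests and call sites reuse it.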