Paul J. Lucas
2012-Nov-06 15:43 UTC
[LLVMdev] Using LLVM to serialize object state -- and performance
Thanks for responding. Sorry for the delay in my reply, but I was dealing with hurricane Sandy. Anyway....

My software build produces libmylib.so. The JIT'd function only calls external C functions in libmylib.so and not other JIT'd functions. The C functions are simple thunks that call constructors. For example, given:

    class BinaryNode : public Node {
    public:
      BinaryNode( Node *left, Node *right );
      // ...
    };

there exists a C thunk:

    void* T_BinaryNode_new_2Pv( void *left, void *right ) {
      return new BinaryNode( (Node*)left, (Node*)right );
    }

The JIT'd function is just a sequence of such calls to thunks to build up an object tree.

The idea is to generate LLVM code, write it out to disk, and terminate the current program's process; then, at some later time, start a new process for the program, read the previously generated LLVM code back from disk, and call the JIT'd function, which reconstitutes the state of the tree just as it was.

Elsewhere in my code, I keep a set of llvm::Function*'s, one for each thunk. For each function, I use ExecutionEngine::addGlobalMapping() to bind the Function* to the actual thunk. The binding does use Module::getFunction(). Oddly, on Mac OS X, I only have to do this when my program is creating the LLVM code; on Linux, I also have to do it when my program is reading the LLVM code back in and trying to execute it.

Hopefully, I've explained this better. You then later wrote:

> The default JITMemoryManager implementation uses sys::DynamicLibrary::SearchForAddressOfSymbol to find the function. If you know all of the names and addresses of the functions that will need to be resolved, you can provide a custom memory manager implementation to optimize this external function resolution.

Based on my clarification, is this still the best course of action?

- Paul

On Oct 26, 2012, at 5:32 PM, "Kaylor, Andrew" <andrew.kaylor at intel.com> wrote:

> I'm not sure I have a clear picture of what you're JIT'ing. If any of the JIT'ed functions call other JIT'ed functions, it may be difficult to find all the dependencies of a function and recreate them correctly on a subsequent load. Even if the JIT'ed functions only call non-JIT'ed functions, I think you'd need some confidence that the addresses of the called functions weren't being moved.
>
> It's possible that what you're considering would work, but I don't think it's a scenario that the JIT intends to support.
>
> It would be possible, however, to use the MCJIT engine and cache its results. It requires some modifications to the MCJIT engine, but nothing major (I know because my team has a patch in the works to do this, but it's blocked by some other things at the moment). MCJIT generates complete object images and then uses RuntimeDyld to load them. If you had a hook to save the generated object, you could use RuntimeDyld directly to load it later. There are other ways to generate the object image (i.e., without MCJIT), but I'm not sure it would be easier.
>
> You basically just need to grab the Buffer that MCJIT::emitObject() has after it calls PM.run() and Buffer->flush() but before it passes it to Dyld.loadObject(). If you prefer, you could copy what MCJIT does and move it somewhere in your own code. There's not a lot to it.
>
> -Andy
>
> -----Original Message-----
> From: llvmdev-bounces at cs.uiuc.edu [mailto:llvmdev-bounces at cs.uiuc.edu] On Behalf Of Paul J. Lucas
> Sent: Friday, October 26, 2012 4:17 PM
> To: llvmdev at cs.uiuc.edu List
> Subject: [LLVMdev] Using LLVM to serialize object state -- and performance
>
> I have a legacy C++ application that constructs a tree of C++ objects (an iterator tree to implement a query language). I am trying to use LLVM to "serialize" the state of this tree to disk for later loading and execution (or to "compile" it to disk, if you prefer).
>
> Each of the C++ iterator objects now has a codegen() member function that adds to the LLVM code of an llvm::Function. The LLVM code generated is a sequence of instructions that set up the arguments for and call the constructor of each C++ object. (I am using C "thunks" that provide a C API to LLVM to make C++ class constructor calls.) Hence, all the LLVM code taken together into a single "reconstitute" function is mostly a sequence of "call" instructions with a few "store" and "getelementptr" instructions here and there -- fairly straightforward LLVM code.
>
> I then write the LLVM IR code out to disk and, at some later time, read it back in with ParseIR(), do getPointerToFunction(), execute that function, and the C++ iterator tree has been reconstituted.
>
> This all works, but the JIT compile step is *slow*. For a sequence of about 8000 LLVM instructions (most of which are "call"), it takes several seconds to execute.
>
> It occurred to me that I don't really want JIT compiling. I really want to compile the LLVM code to machine code and write that to disk so that when I read it back, I can just run it. The "reconstitute" function is only ever run once per query invocation, so there's no benefit from JIT compiling it, since it will never be run a second or subsequent time.
>
> Questions:
>
> * Is what I'm doing with LLVM a "reasonable" thing to do with LLVM?
> * If so, how can I speed it up? By generating machine code? If so, how?
>
> I've looked at the source for llc, but that apparently only generates assembly source code, not object code.
>
> - Paul
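For readers unfamiliar with the legacy-JIT API under discussion, the thunk binding Paul describes can be pictured with the minimal sketch below. It is an illustration only, not Paul's actual code: the ExecutionEngine and Module pointers, the helper name bindThunk(), and the reuse of the thunk from his example are all assumptions, and the header paths reflect the pre-3.3 layout of that era.

    #include "llvm/ExecutionEngine/ExecutionEngine.h"
    #include "llvm/Module.h"  // pre-3.3 header layout

    // The real C thunk exported by libmylib.so (from the example above).
    extern "C" void* T_BinaryNode_new_2Pv( void *left, void *right );

    // Sketch only: bind the IR declaration of a thunk to its real address.
    void bindThunk( llvm::ExecutionEngine *engine, llvm::Module *module ) {
      // Find the declaration that the generated "call" instructions refer to...
      if ( llvm::Function *fn = module->getFunction( "T_BinaryNode_new_2Pv" ) )
        // ...and map it directly to the address of the C thunk in libmylib.so.
        engine->addGlobalMapping( fn, (void*)&T_BinaryNode_new_2Pv );
    }

With such a mapping in place, the JIT should not need to fall back to searching loaded libraries for the symbol, which is the SearchForAddressOfSymbol lookup quoted above.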
Kaylor, Andrew
2012-Nov-06 19:49 UTC
[LLVMdev] Using LLVM to serialize object state -- and performance
Hi Paul,

I think you may have gone beyond what I understand in how the legacy JIT code works. It looks like the call to addGlobalMapping should short-circuit the named function lookup that I described, but I can't account for why it behaves differently on Mac vs. Linux. I still don't understand how the external pointers persist between writing and reading, but it sounds like you have that worked out somehow.

Are you writing LLVM IR to disk or machine code?

If I'm not being helpful, feel free to give up on trying to explain things to me.

-Andy
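For reference, the custom memory manager that Andy suggested in the passage Paul quotes above would hook this same lookup: the legacy JITMemoryManager resolves unknown externals through getPointerToNamedFunction(), which by default falls back to sys::DynamicLibrary::SearchForAddressOfSymbol. A rough sketch of overriding just that hook follows; the class name, the thunk table, and how it is populated are assumptions, the remaining pure-virtual members of JITMemoryManager are omitted, and exact signatures varied across LLVM 3.x releases.

    #include "llvm/ExecutionEngine/JITMemoryManager.h"  // pre-3.3 header layout
    #include "llvm/Support/ErrorHandling.h"
    #include <map>
    #include <string>

    // Sketch only: resolve known thunk names from a table built by the host
    // program instead of searching all loaded libraries for each symbol.
    class ThunkResolvingMemoryManager : public llvm::JITMemoryManager {
      std::map<std::string, void*> thunks_;  // thunk name -> address in libmylib.so

    public:
      void addThunk( const std::string &name, void *addr ) { thunks_[ name ] = addr; }

      // Called by the JIT for externals it cannot otherwise resolve.
      virtual void *getPointerToNamedFunction( const std::string &name,
                                               bool abortOnFailure = true ) {
        std::map<std::string, void*>::const_iterator it = thunks_.find( name );
        if ( it != thunks_.end() )
          return it->second;
        if ( abortOnFailure )
          llvm::report_fatal_error( "unresolved external function: " + name );
        return 0;
      }

      // The memory-allocation members of JITMemoryManager (code/data section
      // allocation, etc.) would still need real implementations or forwarding
      // to the default manager; they are omitted from this sketch.
    };

The engine would then be created with something like EngineBuilder::setJITMemoryManager() so that this manager is consulted instead of the default one.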
Paul J. Lucas
2012-Nov-06 21:51 UTC
[LLVMdev] Using LLVM to serialize object state -- and performance
On Nov 6, 2012, at 11:49 AM, "Kaylor, Andrew" <andrew.kaylor at intel.com> wrote:

> I think you may have gone beyond what I understand in how the legacy JIT code works. It looks like the call to addGlobalMapping should short-circuit the named function lookup that I described ...

Well, I first look for the function by name and, if I don't find it, then I call addGlobalMapping().

> Are you writing LLVM IR to disk or machine code?

Currently IR. How can I write machine code?

- Paul
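To make the "machine code" option concrete: outside the JIT, a Module is normally lowered to a native object file through a TargetMachine. The sketch below is an illustration against the LLVM C++ API of roughly this era (~3.2), not code from this thread; header paths and signatures shifted between releases, and the function name, module pointer, and output path are assumptions (it also assumes the module's target triple is set).

    #include "llvm/Module.h"                 // pre-3.3 header layout
    #include "llvm/PassManager.h"
    #include "llvm/Support/FormattedStream.h"
    #include "llvm/Support/TargetRegistry.h"
    #include "llvm/Support/TargetSelect.h"
    #include "llvm/Support/raw_ostream.h"
    #include "llvm/Target/TargetMachine.h"

    // Sketch only: lower a Module to a native object file on disk.
    bool emitObjectFile( llvm::Module *module, const char *path ) {
      llvm::InitializeNativeTarget();
      llvm::InitializeNativeTargetAsmPrinter();

      std::string err;
      std::string triple = module->getTargetTriple();
      const llvm::Target *target = llvm::TargetRegistry::lookupTarget( triple, err );
      if ( !target )
        return false;

      llvm::TargetOptions options;
      llvm::TargetMachine *tm = target->createTargetMachine( triple, "", "", options );

      llvm::raw_fd_ostream out( path, err, llvm::raw_fd_ostream::F_Binary );
      if ( !err.empty() )
        return false;
      llvm::formatted_raw_ostream fout( out );

      llvm::PassManager pm;
      // addPassesToEmitFile() returns true if the target cannot emit object files.
      if ( tm->addPassesToEmitFile( pm, fout, llvm::TargetMachine::CGFT_ObjectFile ) )
        return false;
      pm.run( *module );
      return true;
    }

The resulting object file could then be loaded later without re-running codegen, which is essentially the idea behind the MCJIT result-caching approach Andy describes above.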
Paul J. Lucas
2012-Nov-07 15:12 UTC
[LLVMdev] Using LLVM to serialize object state -- and performance
On Nov 6, 2012, at 11:49 AM, "Kaylor, Andrew" <andrew.kaylor at intel.com> wrote:

> I think you may have gone beyond what I understand in how the legacy JIT code works. It looks like the call to addGlobalMapping should short-circuit the named function lookup that I described ...

Well, I first look for the function by name and, if I don't find it, then I call addGlobalMapping(). But that's not where the time is going. Here:

https://dl.dropbox.com/u/46791180/callgraph.pdf

is a call graph generated by kcachegrind. I still don't understand all the numbers (and this PDF seems not to include commas where it should), but if you look at the left fork, the bottom two ovals, "Schedule..." is called 16K times and "setHeightToAtLeas..." is called 37K times. On the right fork, "RAGreed..." is called 35K times.

Those are far too many calls to *anything* for a simple sequence of "call" LLVM instructions. Something seems horribly wrong.

- Paul