Below is an outline of various usage models for MCJIT that I put together based on conversations at last month's LLVM Developer Meeting. If you're using or thinking about using MCJIT and your use case doesn't seem to fit in one of the categories below, then either I didn't talk to you or I didn't understand what you're doing.

In any case, I'd like to see this get worked into a shape suitable for inclusion in the LLVM documentation. I imagine it serving as a guide both to those who are new to using MCJIT and to those who are developing and maintaining MCJIT. If you're using MCJIT, the latter (yes, the latter) case is particularly important to you right now: having your use case properly represented in this document is the best way to ensure that it is adequately considered when changes are made to MCJIT and when the decision is made as to when we are ready to deprecate the old JIT engine (probably in the 3.5 release, BTW).

So here's what I'm asking for: if you are currently using MCJIT or considering using MCJIT, please find the use case that best fits your program and comment on how well the outline describes it. If you understand what I'm saying below but you see something that is missing, please let me know. If you aren't sure what I'm saying, or you don't know how MCJIT might address your particular issues, please let me know that too. If you think my outline is too sketchy and you need me to elaborate before you can provide meaningful feedback, please let me know about that. If you think it's the best piece of documentation you've read all year and you can't wait to read it again, that's good information too.

Thanks in advance for any and all feedback.

-Andy

------------------------------------------------------------------------------------------

Models for MCJIT use

1. Interactive dynamic code generation
- user types code which is compiled as needed for execution
- example: Kaleidoscope
- compilation speed probably isn't critical
- use one MCJIT instance with many modules (sketched in code after this outline)
- create new modules on compilation
- MCJIT handles linking between modules
- external references still need prototypes
- we can at least provide a module pass to automate it
- memory overhead may be an issue, but MCJIT can fix that
- see model 2 for a pre-defined library
- if processing a large script, pre-optimize before passing modules to MCJIT

2. Code generation for external target execution
- client generates code to be injected into an external process
- example: LLDB expression evaluation
- target may be another local process or a remote one
- target architecture may not match host architecture
- may use one or more instances of MCJIT (client preference)
- MCJIT handles address remapping on request
- custom memory manager handles code/data transfer
- speed/memory requirements may vary

3. Large pre-defined module compilation and execution
- code/IR is loaded from disk and prepared for execution
- example: Intel(R) OpenCL SDK
- compilation speed matters but isn't critical
- initial startup time is somewhat important
- execution speed is critical
- memory consumption isn't an issue
- tool integration may be important
- use one MCJIT instance with multiple (but usually few) modules
- use object caching for commonly used code (see the ObjectCache sketch after this outline)
- for very large, sparsely used libraries, pre-link modules
- object and archive support may be useful
4. Hot function replacement
- client uses MCJIT to optimize frequently executed code
- example: WebKit
- compilation time is not critical
- execution speed is critical
- steady state memory consumption is very important
- client handles pre-JIT interpretation/execution
- MCJIT instances may be created as needed
- custom memory manager transfers code memory ownership after compilation
- MCJIT instance is deleted when no longer needed
- client handles function replacement and lifetime management

5. On demand "one-time" execution
- client provides a library of code which is used by small, disposable functions
- example: database query?
- initial load time isn't important
- execution time is critical
- if library code is fixed, load it as a shared library
- if library code must be generated, use a separate instance of MCJIT to hold the library
- this instance can support multiple modules
- use a custom memory manager to link with functions in this module
- object caching and archive support may be useful in this case
- if inlining/LTO is more important than compile time, keep the library in an IR module and pre-link just before invoking MCJIT
- create one instance of MCJIT as needed and destroy it after execution
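To make the model 1 flow concrete, here is a minimal sketch of one MCJIT instance receiving modules as the user types code. It assumes a roughly 3.4/3.5-era C++ API (EngineBuilder::setUseMCJIT, SectionMemoryManager, finalizeObject); some names, such as getFunctionAddress versus the older getPointerToFunction, differ between releases, so treat it as illustrative rather than authoritative.

```cpp
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/ExecutionEngine/MCJIT.h"               // force-links the MCJIT component
#include "llvm/ExecutionEngine/SectionMemoryManager.h"
#include "llvm/IR/Module.h"
#include <string>

// Build one engine from the first module and keep handing it new modules as
// the user types more code.
llvm::ExecutionEngine *createEngine(llvm::Module *FirstModule) {
  std::string Err;
  llvm::ExecutionEngine *EE =
      llvm::EngineBuilder(FirstModule)
          .setErrorStr(&Err)
          .setUseMCJIT(true)   // the old JIT is still the default in this era
          .setMCJITMemoryManager(new llvm::SectionMemoryManager())
          .create();
  // EE now owns FirstModule; a null return means Err describes the failure.
  return EE;
}

void addAndRun(llvm::ExecutionEngine *EE, llvm::Module *NewModule,
               const std::string &EntryName) {
  // References from NewModule into earlier modules still need prototypes
  // (declarations) in NewModule; MCJIT resolves them at finalization.
  EE->addModule(NewModule);
  EE->finalizeObject();
  uint64_t Addr = EE->getFunctionAddress(EntryName);
  if (Addr)
    ((void (*)())Addr)();   // assumes a void() entry point, purely for illustration
}
```

And a sketch of the object caching mentioned under model 3, against the 3.4-era ObjectCache interface (notifyObjectCompiled/getObject with raw MemoryBuffer pointers; later releases changed these and the related MemoryBuffer signatures). It keeps compiled objects in memory for simplicity; a real client would write them to disk so they survive across runs.

```cpp
#include "llvm/ExecutionEngine/ObjectCache.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/MemoryBuffer.h"
#include <map>
#include <string>

// Returns a previously compiled object for a module instead of re-running
// codegen; MCJIT asks the cache before compiling and notifies it afterwards.
class InMemoryObjectCache : public llvm::ObjectCache {
  std::map<std::string, std::string> Cached;   // module id -> object bytes
public:
  virtual void notifyObjectCompiled(const llvm::Module *M,
                                    const llvm::MemoryBuffer *Obj) {
    Cached[M->getModuleIdentifier()] =
        std::string(Obj->getBufferStart(), Obj->getBufferEnd());
  }

  virtual llvm::MemoryBuffer *getObject(const llvm::Module *M) {
    std::map<std::string, std::string>::iterator I =
        Cached.find(M->getModuleIdentifier());
    if (I == Cached.end())
      return 0;   // not cached: MCJIT falls back to normal codegen
    // MCJIT takes ownership of the returned buffer.
    return llvm::MemoryBuffer::getMemBufferCopy(I->second, "cached-object");
  }
};
// Install with EE->setObjectCache(&Cache) before the first finalizeObject().
```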
On Dec 9, 2013, at 11:08 AM, Kaylor, Andrew <andrew.kaylor at intel.com> wrote:

> [... original outline snipped ...]
> 4. Hot function replacement
> [...]

This part LGTM.

Not sure if this is useful, but something that is also interesting in this case is that the LLVM IR never calls declared functions except for intrinsics. All function calls involve a pointer constant planted in the IR and then bitcast to the appropriate function pointer type. This implies that the client does all linking, and in WebKit's case it means that we rely on the patchpoint intrinsic for doing our own relocation magic, separate from RuntimeDyld.
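For readers who haven't seen this pattern, here is a rough sketch of emitting such a call with the C++ IRBuilder API: the callee's absolute address is planted as an integer constant and converted to a function pointer, so RuntimeDyld never sees an external symbol to resolve. The address, signature, and helper name are illustrative only, not WebKit's actual code.

```cpp
#include "llvm/IR/Constants.h"
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include <cstdint>

// Emit a call to a runtime helper whose address the client already knows,
// without creating a declared Function that would need relocation.
llvm::Value *emitKnownAddressCall(llvm::IRBuilder<> &B, llvm::Module &M,
                                  uint64_t CalleeAddr, llvm::Value *Arg) {
  llvm::LLVMContext &Ctx = M.getContext();
  llvm::Type *I64 = llvm::Type::getInt64Ty(Ctx);

  // i64(i64) signature, chosen only for the example.
  llvm::FunctionType *FTy = llvm::FunctionType::get(I64, I64, /*isVarArg=*/false);

  // Plant the raw address as a constant and turn it into a callable pointer.
  llvm::Constant *AddrC = llvm::ConstantInt::get(I64, CalleeAddr);
  llvm::Constant *Callee =
      llvm::ConstantExpr::getIntToPtr(AddrC, FTy->getPointerTo());

  return B.CreateCall(Callee, Arg, "helper.result");
}
```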
Thanks, Filip. That is useful information. LLDB does something similar for linking, although in their case I'd like to see that change, as it shouldn't be necessary. In any event, I do want MCJIT clients to recognize that they can do their own linking if there's a good reason to do so, and of course it's something to keep in mind as a possibility whenever we tinker with the MCJIT/RuntimeDyld implementation.

-Andy

From: Filip Pizlo [mailto:fpizlo at apple.com]
Sent: Monday, December 09, 2013 11:35 AM
To: Kaylor, Andrew
Cc: Dev
Subject: Re: [LLVMdev] [RFC] MCJIT usage models

[... quoted message snipped ...]
There are also the uses in this sort of thing: http://en.wikipedia.org/wiki/Gallium3D, which covers a bit of 5 but also 3.

-eric

On Mon, Dec 9, 2013 at 11:08 AM, Kaylor, Andrew <andrew.kaylor at intel.com> wrote:
> [... original outline snipped ...]
Hi Andy, this looks great.

I would echo what some of the other people are saying: an all-LLVM JIT compiler (i.e. something for executing entire source programs, with no baseline interpreter or other JIT engine) is missing from the list, and I think it's a common enough case to be worth including or mentioning as a non-goal. The main thing about that use case that feels not covered by the others is fast "-O0" compilation speed; I'm not saying that the burden of improving that should be on MCJIT or you, but it'd be nice to know to what extent it could be relied on.

I'm not sure if this is a necessary consequence, but I think it's natural with an all-LLVM JIT to have your standard library be in LLVM as well, which is maybe similar to #3/#5. I got a somewhat-hacky version of cross-module inlining working (it seems possible to make it non-hacky, but that would involve some refactoring of the LLVM inlining code), which means that the stdlib can stay fixed (you don't have to put new IR in it to do inlining) and thus can get cached, potentially lessening the importance of stdlib compile time.

About lazy compilation, I'm still of the opinion that that's better handled outside of MCJIT. For the people asking for it, would it be enough to have a wrapper around MCJIT that automatically splits modules and adds stubs to do lazy compilation? I don't think that would be too hard to add, though it could make the compilation speed situation worse. Anyway, I think it's just too restrictive to bake this kind of functionality into MCJIT itself, especially now that there's patchpoint, which adds another dimension along which to customize function replacement. Plus, to truly compile things lazily, IR generation should probably also be done lazily, which makes the situation even more complicated.

On Mon, Dec 9, 2013 at 11:08 AM, Kaylor, Andrew <andrew.kaylor at intel.com> wrote:
> [... original outline snipped ...]
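As a very rough illustration of the wrapper idea above, here is a sketch of the compile-on-first-request half of it, with each function kept in its own module; real lazy compilation would also need call stubs (or patchpoints) so that jitted code can trigger compilation itself. The class and its behavior are hypothetical, not an existing LLVM API, and the engine calls assume the 3.4/3.5-era interface.

```cpp
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/IR/Module.h"
#include <map>
#include <string>

// Hypothetical wrapper: modules are registered up front but only handed to
// MCJIT the first time one of their functions is requested.
class LazyModuleSet {
  llvm::ExecutionEngine *EE;                      // one MCJIT instance, owned elsewhere
  std::map<std::string, llvm::Module *> Pending;  // function name -> uncompiled module
public:
  explicit LazyModuleSet(llvm::ExecutionEngine *Engine) : EE(Engine) {}

  void registerFunction(const std::string &Name, llvm::Module *M) {
    Pending[Name] = M;   // IR generation itself could also be deferred to here
  }

  uint64_t getAddress(const std::string &Name) {
    std::map<std::string, llvm::Module *>::iterator I = Pending.find(Name);
    if (I != Pending.end()) {
      EE->addModule(I->second);   // first request: give the module to MCJIT
      EE->finalizeObject();       // compile it (and anything else outstanding)
      Pending.erase(I);
    }
    return EE->getFunctionAddress(Name);   // 0 if the name is unknown
  }
};
```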
With Julia, we're obviously very much in the first use case. As you know, we pretty much have a working version of Julia on top of MCJIT, but there are still a few kinks to work out, which I'll talk about in a separate email.

One thing which I remember you asking at the BOF is what MCJIT currently can't do well that the old JIT did, so I'd like to offer up an example. With the old JIT, I used Clang to do dynamic code generation to interface to C++ easily. Now you might argue that is either a problem with Clang or rather a misuse of Clang, but I'd like to think that we should keep the tools as flexible as possible, so applications like this can emerge. With the old JIT, I'd incrementally compile functions as Clang added them to the Module, but with MCJIT that kind of stuff is rather tricky. What I ended up doing was having Clang emit into a shadow module that never gets codegen'd and, when a function is requested, pulling that function and its closure out of the shadow module into the current MCJIT module. Perhaps functionality like that should be more readily available in base LLVM, to be able to use MCJIT with clients not necessarily designed for use with MCJIT.

On Mon, Dec 9, 2013 at 1:08 PM, Kaylor, Andrew <andrew.kaylor at intel.com> wrote:
> [... original outline snipped ...]
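A rough sketch of the "pull a function out of the shadow module" step, using the cloning utilities as they looked in the 3.4/3.5 era (the CloneFunctionInto signature has changed since). Handling of the function's full closure (the globals and callees it references, which would also need declarations or copies in the destination module) is deliberately omitted, and the function names are illustrative.

```cpp
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/Module.h"
#include "llvm/Transforms/Utils/Cloning.h"
#include "llvm/Transforms/Utils/ValueMapper.h"

// Copy one function's body from a shadow module (never codegen'd) into a
// fresh module destined for MCJIT.
llvm::Function *pullIntoJITModule(llvm::Function *ShadowFn, llvm::Module *JITMod) {
  llvm::Function *NewFn = llvm::Function::Create(
      ShadowFn->getFunctionType(), ShadowFn->getLinkage(),
      ShadowFn->getName(), JITMod);

  // Map the shadow function's arguments onto the new ones before cloning.
  llvm::ValueToValueMapTy VMap;
  llvm::Function::arg_iterator NewArg = NewFn->arg_begin();
  for (llvm::Function::arg_iterator OldArg = ShadowFn->arg_begin(),
                                    End = ShadowFn->arg_end();
       OldArg != End; ++OldArg, ++NewArg) {
    NewArg->setName(OldArg->getName());
    VMap[&*OldArg] = &*NewArg;
  }

  llvm::SmallVector<llvm::ReturnInst *, 4> Returns;
  llvm::CloneFunctionInto(NewFn, ShadowFn, VMap,
                          /*ModuleLevelChanges=*/true, Returns);
  return NewFn;
}
```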
Hi Andy,

My use case is quite similar to what Keno described. I am using clang + JIT to dynamically compile C++ functions generated in response to user interaction. Generated functions may be unloaded or modified.

I would like to break down the old JIT code into three major parts:

1) The old JIT has its own code emitter, which duplicates code from lib/MC, does not generate debug info, and has other limitations.
2) The old JIT supports lazy compilation.
3) The old JIT has its own function-level "dynamic linker"/memory manager, supporting function replacement with low-overhead JMP stubs.

Now 1) is clearly a problem of code duplication. I'm not sure why a different emitter was created for the JIT, but it would seem possible to reuse the lib/MC code just like MCJIT does. 2) takes up much code all over the JIT and, at least for my use case, could just be removed. If 1) and 2) are solved and removed, we are left with only the (relatively small) "dynamic linker"/hack code.

I'd say the way this linker/loader works is a much better fit for use cases such as mine and Keno's than a classic linker like the ELF loader. The JIT linker works at the function level rather than the module level, and supports automatic stub generation and relinking management. We could wrap functions in modules for the ELF loader, handle the stubs ourselves, etc., but this is tricky code, as Keno said; it requires hard-learned knowledge and feels like teaching an elephant to dance. The ELF loader is really designed for very different requirements.

I'd like to see the old JIT's function-level "dynamic linker" code preserved somehow as a ready alternative to a "classic" linker, especially useful when used in combination with clang to dynamically run C/C++ functions.

Yaron

2013/12/10 Keno Fischer <kfischer at college.harvard.edu>:
> [... Keno's message and the original outline snipped ...]
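As a portable illustration of the stub idea (one level of indirection that the client retargets when a function is recompiled), leaving aside the machine-code JMP patching the old JIT actually performed, here is a sketch; the slot type and its fixed double(double) signature are purely hypothetical.

```cpp
#include <atomic>
#include <cstdint>

// One slot per replaceable function. Callers always go through the slot, so
// pointing it at a newly compiled body replaces the function without
// patching existing call sites.
struct FunctionSlot {
  std::atomic<uint64_t> Target;   // address of the current implementation

  explicit FunctionSlot(uint64_t InitialAddr) : Target(InitialAddr) {}

  // Called when MCJIT (or anything else) produces a new version.
  void replace(uint64_t NewAddr) { Target.store(NewAddr); }

  // Fixed double(double) signature for the example; real code would template
  // or generate this per signature.
  double call(double Arg) const {
    typedef double (*FnTy)(double);
    return reinterpret_cast<FnTy>(Target.load())(Arg);
  }
};
// The memory holding the old body can only be released once no thread can
// still be executing it; that lifetime problem is what "client handles
// function replacement and lifetime management" refers to in model 4.
```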
I have a personal interest in areas that are a mix of (1) and (6) usage, in particular where certain functions are specified on the command line but other functionality is added from an existing set of IR functions. In this context the goal is minimising "total response time", although the execution time is likely to be much larger than the compilation time. One of the important things is being able to do LTO, effectively inlining newly specified code into existing "library IR" and optimizing.

In terms of creating/destroying an MCJIT per REPL interaction vs having multiple modules with one MCJIT, I haven't really benchmarked this, but I suspect that multiple modules is the way to go, since it lets re-running a previous computation (to compare it against a new one) avoid all the recompilation of the old one. But I need to find time to think this through.

On Mon, Dec 9, 2013 at 7:08 PM, Kaylor, Andrew <andrew.kaylor at intel.com> wrote:
> [... original outline snipped ...]
--
cheers, dave tweed
high-performance computing and machine vision expert: david.tweed at gmail.com
"while having code so boring anyone can maintain it, use Python." -- attempted insult seen on slashdot
On 12/9/13 11:08 AM, Kaylor, Andrew wrote:
> Below is an outline of various usage models for MCJIT that I put together based on conversations at last month's LLVM Developer Meeting. [...]
> In any case, I'd like to see this get worked into a shape suitable for inclusion in the LLVM documentation. [...]

Thank you for doing this. It's a good step to take and will help both users and maintainers of the MCJIT infrastructure.

> 4. Hot function replacement
> [...]

This is our currently planned use. A couple of extra requirements:
- Linking of declared function names to specific addresses provided at generation time (e.g. getPointerToNamedFunction)
- Ability to place generated code at a specific address (either via allocation control, or relocation)
- Multiple compiler threads (using different instances of MCJIT) without underlying shared state protected by locks. (Mentioned elsewhere; just making it explicit for this use case.)
- Internal errors are cleanly reported to the API consumer, with internal state restored to a well-defined "safe" state. (I'm aware this is very much wishful thinking at the moment, but being able to recover from bad compiles would be very nice to have. We're likely to explore external sandboxing as well, but having good library support would be useful.)
- Debugging support:
  - IR is verified before optimization (by default or by option)
  - IR can be easily dumped during various optimization passes for debugging; assembly can be dumped

A clarification question:
- Do you see MCJIT having any role in inlining decisions in this mode? If so, there's a fair amount of extra support around inline decision tracking to support external lifetime policies. We don't strictly need this, but long term it would simplify our out-of-tree code substantially.

A few things that are currently on our wish list, but that we haven't actually gotten to yet:
- Accurate debug information (with full stack traces). This has been discussed previously on the list; not sure it would require much extra from the MCJIT infrastructure.
- Profile guided optimization (e.g. guarded inlining, type profiles for call sites, edge counters, etc.). We haven't gotten to the point of considering what parts of this would be in tree vs language specific and thus external. It's also unclear how much this would affect MCJIT directly.
- Inline call caching, likely using the patchpoint mechanism introduced by the WebKit guys.
- We've thrown around ideas of a compile server process. This would involve constraints similar to your (2). This hasn't made it past brainstorming yet.

Yours,
Philip
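On the first requirement (binding declared names to client-provided addresses), one way to do this with current MCJIT is to intercept symbol resolution in the memory manager rather than letting RuntimeDyld search the process. A sketch against the 3.4/3.5-era interface, where the hook is RTDyldMemoryManager::getSymbolAddress (older trees used getPointerToNamedFunction, and newer ones moved to separate symbol resolvers); the class name and its map are illustrative.

```cpp
#include "llvm/ExecutionEngine/SectionMemoryManager.h"
#include <cstdint>
#include <map>
#include <string>

// Resolves declared (external) functions to addresses the client registers
// up front, falling back to the default process-wide lookup otherwise.
class ClientSymbolResolver : public llvm::SectionMemoryManager {
  std::map<std::string, uint64_t> KnownSymbols;   // name -> runtime address
public:
  void provide(const std::string &Name, uint64_t Addr) {
    KnownSymbols[Name] = Addr;
  }

  virtual uint64_t getSymbolAddress(const std::string &Name) {
    std::map<std::string, uint64_t>::const_iterator I = KnownSymbols.find(Name);
    if (I != KnownSymbols.end())
      return I->second;
    // Default behavior: look the symbol up in the host process (dlsym-style).
    return llvm::SectionMemoryManager::getSymbolAddress(Name);
  }
};
// Pass an instance via EngineBuilder::setMCJITMemoryManager() so RuntimeDyld
// consults it when resolving the module's external references.
```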