Justin Holewinski
2012-Apr-28 14:30 UTC
[LLVMdev] [PATCH][RFC] Add llvm.codegen Intrinsic To Support Embedded LLVM IR Code Generation
On Sat, Apr 28, 2012 at 3:16 AM, Tobias Grosser <tobias at grosser.es> wrote:

> On 04/28/2012 10:25 AM, Yabin Hu wrote:
>> Hi Justin,
>>
>> Thanks very much for your comments.
>>
>> 2012/4/28 Justin Holewinski <justin.holewinski at gmail.com>
>>> On Fri, Apr 27, 2012 at 7:40 PM, Yabin Hu <yabin.hwu at gmail.com> wrote:
>>>> The attached patch adds a new intrinsic named "llvm.codegen" to support embedded LLVM IR code generation. The 'llvm.codegen' intrinsic uses the LLVM back ends to generate code for embedded LLVM IR strings. The code generation target can be the same as, or different from, that of the parent module.
>>>>
>>>> The original motivation for adding this intrinsic is to generate code for heterogeneous platforms. A test case in the patch demonstrates this: on an X86 host, we use the intrinsic to transform an embedded LLVM IR module into a string of PTX assembly. We can then employ a PTX execution engine (on a CUDA-capable GPU) to execute the newly generated assembly and copy back the result later.
>>>
>>> I have to admit, I'm not sold on this solution. First, there is no clear way to pass codegen flags to the back end. In PTX parlance, how would I embed an .ll file and compile to compute_13?
>>
>> We can handle this by providing a new argument (e.g. a string describing a properly configured TargetMachine) instead of, or in addition to, the arch string argument.
>
> I think we should discuss in general which additional information the back ends need and provide that information as parameters. We may want to do this on demand, once we agree on the general usefulness of this intrinsic.

Any solution would need to be able to handle feature flags (e.g. -mattr=+sm_20), as well as generic llc options (e.g. -regalloc=greedy). What happens when the options conflict with the original options passed to llc? The CodeGenIntrinsic pass would need to emulate all (or at least most) of llc, but in a way that doesn't interfere with llc's global state. Unfortunately, parameters like "regalloc=" are globals. To do this without massive LLVM changes, you may need to spawn another instance of llc as a separate process.

>>> Second, this adds a layer of obfuscation to the system. If I look at an .ll file, I expect to see all of the assembly in a reasonably clean syntax. If the device code is squashed into a constant array, it is much harder to read.
>
> I agree with Justin. The embedded code is not readable within the constant array. For debugging purposes, having the embedded module in separate files is better. I believe we can achieve this easily by adding a pass that extracts the embedded LLVM IR code into separate files.
>
>>> Is the motivation for the intrinsic simply to preserve the ability to pipe LLVM commands together on the command line, e.g. opt | llc? I really feel that the cleaner solution is to split the IR into separate files, each of which can be processed independently after initial generation.
>>
>> Yes, it is. Preserving that ability is the main benefit we get from this intrinsic. It means we needn't implement another compiler driver or JIT tool for our specific purpose. I agree with you that embedded LLVM IR harms the readability of the .ll file.
>
> I would like to add that embedding the device IR into the host IR fits very well into the LLVM code generation chain. It obviously makes running 'opt | llc' possible, but it also enables us to write optimizations that yield embedded GPU code.
>
> To write optimizations that yield embedded GPU code, we also looked into three other approaches:
>
> 1. Directly create embedded target code (e.g. PTX)
>
> This would mean the optimization pass extracts device code internally and directly generates the relevant target code. This approach would require our generic optimization pass to be linked directly against the specific target back end. This is an ugly layering violation and, in addition, it causes major trouble if the new optimization should be dynamically loaded.

I agree that this isn't desirable. The optimizer should never have to generate device code.

> 2. Extend the LLVM-IR files to support heterogeneous modules
>
> This would mean we extend LLVM-IR such that IR for different targets can be stored within a single IR file. This approach could be integrated nicely into the LLVM code generation flow and would yield readable LLVM-IR even for the device code. However, it adds another level of complexity to the LLVM-IR files and requires massive changes not only in the LLVM code base, but also in compilers built on top of LLVM-IR.
>
> 3. Generate two independent LLVM-IR files and pass them around together
>
> The host and device LLVM-IR modules could be kept in separate files. This has the benefit of being user readable and of not adding additional complexity to the LLVM-IR files themselves. However, separate files provide no information about how they are related: which files are kernel files, and how/where do they need to be loaded? This information could probably be put into metadata or hard coded into the generic compiler infrastructure, but that would require significant additional code. Another weakness of this approach is that the entire LLVM optimization chain is currently built under the assumption that a single file/module is passed around. This is most obvious with the 'opt | llc' idiom, but in general every existing tool would need to be adapted to handle multiple files and would possibly even need semantic knowledge about how to connect/use them together. Just running clang or dragonegg with -load GPGPUOptimizer.so would not be possible.
>
> All of the previous approaches require significant changes all over the code base and would cause trouble with loadable optimization passes. The intrinsic-based approach seems to address most of these problems.
>
> The intrinsic-based approach requires only small changes, restricted to LLVM itself. In particular, it works without changes to the established LLVM optimization chain: 'opt | llc' will work out of the box, but, more importantly, any LLVM-based compiler can directly load a GPGPUOptimizer.so file to gain a GPU-based accelerator. Besides the need to load some runtime library, no additional knowledge needs to be embedded in individual compiler implementations; all the logic of GPGPU code generation can remain within a single LLVM optimization pass. Another nice feature of the intrinsic is that the relation between host and device code is explicitly encoded in the LLVM-IR (through the llvm.codegen function calls). There is no need to put this information into individual tools and/or to carry it through metadata. Instead, the precise semantics are directly available in the LLVM-IR.

I just worry about the scalability of this approach. Once you embed the IR, no optimizer can touch it, so this potentially creates problems with pass scheduling. When you generate the IR, you want it to be fully optimized before embedding. Alternatively, you could invoke opt+llc when lowering the llvm.codegen intrinsic.

> Justin: With your proposed two-file approach, what changes would be needed to add e.g. GPGPU code generation support to clang/dragonegg or Haskell+LLVM? Can you see a way this can be done without large changes to each of these users?

To be fair, I'm not necessarily advocating the two-file approach. It has its shortcomings, too. But this is in some sense the crux of the problem. The intrinsic approach is clearly the path of least resistance, especially in the case of the GSoC project. However, I think a more long-term solution involves looking at this problem from the IR level. The current LLVM approach is "one arch in, one arch out". As far as I know, even ARM needs separate modules for ARM vs. Thumb (please correct me if I'm mistaken). Whether the tools are extended to support multiple outputs with some linking information, or the IR is extended to support something like per-function target triples, is a decision that would need to be addressed by the entire LLVM community.

>>>> The usage of this intrinsic is not limited to code generation for heterogeneous platforms. It can also help with many (run-time) optimization and security problems, even when the code generation target is the same as that of the parent module.
>>>
>>> How does this help run-time optimization?
>>
>> We implemented this intrinsic in the style of LLVM's garbage-collector-related intrinsics, which support various GC strategies. It can help if the ASMGenerator in the patch is revised to accept various optimization strategies provided by the user of this intrinsic. Then the intrinsic will do what the user wants to the input code string. When running the code with JIT tools like lli, we can choose an optimization strategy at run time. Though this isn't supported currently, we try to make the design as general as we can. The essential functionality of this intrinsic is that we take an input code string, transform it into a target-specific new one, and replace the call to the intrinsic.
>
> There may be uses like this, but I am not sure the llvm.codegen() intrinsic is the best way to implement them. Even though we made it generic and it can possibly be used in other ways, I suggest we currently focus on the use for heterogeneous computing. This is where it is needed today and where we can easily check that it does what we need.
>
>>>> Each call to the intrinsic has two arguments. One is the LLVM IR string. The other is the name of the target architecture. When running with tools like llc, lli, etc., this intrinsic first transforms the input LLVM IR string into a new string of assembly code for the target architecture. Then the call to the intrinsic is replaced by a pointer to the newly generated string. After this, we have in our module
>>>
>>> Is the Arch parameter to llvm.codegen really needed? Since codegen happens when lowering the intrinsic, the target architecture must be known. But if the target architecture is known, then it should be available in the triple for the embedded module.
>>
>> Yes. It is better that the target data is set correctly in the embedded module. It is the user's responsibility to do this.
>
> OK. Why don't we require the triple to be set and remove the arch parameter again?
>
> Tobi

--
Thanks,
Justin Holewinski
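To make the mechanism under discussion concrete, a host module using the proposed intrinsic might look roughly like the following. This is only an illustrative sketch based on the description in this thread, not the actual patch; the constant names and string contents are placeholders, and the snippet uses the typed-pointer IR syntax of the time.

```llvm
; Hypothetical host module. The device module travels as a plain
; string constant inside the host module.
@device_ir = private constant [4 x i8] c"...\00"   ; embedded LLVM IR text (placeholder)
@arch      = private constant [4 x i8] c"ptx\00"   ; code generation target

; Proposed intrinsic: takes the IR string and a target name, and is
; lowered during code generation to a pointer to the generated
; assembly string (e.g. PTX) for the embedded module.
declare i8* @llvm.codegen(i8* %ir, i8* %arch)

define void @host() {
entry:
  %ir  = getelementptr [4 x i8]* @device_ir, i32 0, i32 0
  %a   = getelementptr [4 x i8]* @arch, i32 0, i32 0
  ; After lowering, %ptx points to a constant PTX string that the
  ; host runtime can hand to a GPU execution engine.
  %ptx = call i8* @llvm.codegen(i8* %ir, i8* %a)
  ret void
}
```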
Tobias Grosser
2012-Apr-28 15:27 UTC
[LLVMdev] [PATCH][RFC] Add llvm.codegen Intrinsic To Support Embedded LLVM IR Code Generation
On 04/28/2012 04:30 PM, Justin Holewinski wrote:

>>> We can handle this by providing a new argument (e.g. a string describing a properly configured TargetMachine) instead of, or in addition to, the arch string argument.
>>
>> I think we should discuss in general which additional information the back ends need and provide that information as parameters. We may want to do this on demand, once we agree on the general usefulness of this intrinsic.
>
> Any solution would need to be able to handle feature flags (e.g. -mattr=+sm_20), as well as generic llc options (e.g. -regalloc=greedy). What happens when the options conflict with the original options passed to llc? The CodeGenIntrinsic pass would need to emulate all (or at least most) of llc, but in a way that doesn't interfere with llc's global state. Unfortunately, parameters like "regalloc=" are globals. To do this without massive LLVM changes, you may need to spawn another instance of llc as a separate process.

I think feature flags should not be a problem. The function createTargetMachine() takes a feature string. We can take this string as a parameter of the intrinsic and use it to parametrize the target machine. If needed, we can also add parameters to define the relocation model, mcpu, the code model, the optimization level or the target options. None of these parameters are influenced by the command-line options of the llc invocation, and they will, for now, be set to default values for the embedded code generation.

We should probably add the most important options now and add others on demand. Which options would you suggest adding initially? I suppose we need 1) the feature string and 2) mcpu. Is there anything else you would suggest?

regalloc= is different. It is global and consequently influences both host and device code generation. However, to me it is rather a debugging option. It is never set by clang, and targets provide a reasonable default based on the optimization level. I believe we can assume that for our use case it is not set. If it is ever really necessary to set the register allocator explicitly, the right solution would be to make regalloc a target option.

>> The intrinsic-based approach requires only small changes, restricted to LLVM itself. In particular, it works without changes to the established LLVM optimization chain: 'opt | llc' will work out of the box, but, more importantly, any LLVM-based compiler can directly load a GPGPUOptimizer.so file to gain a GPU-based accelerator. Besides the need to load some runtime library, no additional knowledge needs to be embedded in individual compiler implementations; all the logic of GPGPU code generation can remain within a single LLVM optimization pass. Another nice feature of the intrinsic is that the relation between host and device code is explicitly encoded in the LLVM-IR (through the llvm.codegen function calls). There is no need to put this information into individual tools and/or to carry it through metadata. Instead, the precise semantics are directly available in the LLVM-IR.
>
> I just worry about the scalability of this approach. Once you embed the IR, no optimizer can touch it, so this potentially creates problems with pass scheduling. When you generate the IR, you want it to be fully optimized before embedding. Alternatively, you could invoke opt+llc when lowering the llvm.codegen intrinsic.

Where do you see scalability problems?

I agree that the llvm.codegen intrinsic is limited to plain code generation, meaning it is an embedded llc. I do not expect any part of LLVM to be extended to reason about optimizing the embedded IR. The optimization that created this intrinsic is in charge of optimizing the embedded IR as needed. However, this is not a big problem: a generic LLVM-IR optimization pass can schedule the required optimizations as needed.

>> Justin: With your proposed two-file approach, what changes would be needed to add e.g. GPGPU code generation support to clang/dragonegg or Haskell+LLVM? Can you see a way this can be done without large changes to each of these users?
>
> To be fair, I'm not necessarily advocating the two-file approach. It has its shortcomings, too. But this is in some sense the crux of the problem. The intrinsic approach is clearly the path of least resistance, especially in the case of the GSoC project. However, I think a more long-term solution involves looking at this problem from the IR level. The current LLVM approach is "one arch in, one arch out". As far as I know, even ARM needs separate modules for ARM vs. Thumb (please correct me if I'm mistaken). Whether the tools are extended to support multiple outputs with some linking information, or the IR is extended to support something like per-function target triples, is a decision that would need to be addressed by the entire LLVM community.

I agree that future work can be useful here. However, before spending a large amount of time engineering a complex solution, I propose to start with the proposed light-weight approach. It is sufficient for our needs and will give us the experience and infrastructure that can help us choose and implement a more complex solution later on.

Tobi
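If the parameters discussed here were adopted, the intrinsic might grow additional string arguments for the subtarget configuration. Again, this is purely an illustrative sketch; the argument names and the exact signature are not part of the posted patch.

```llvm
; Hypothetical extended form: the triple is taken from the embedded
; module itself, while subtarget details arrive as explicit arguments,
; mirroring what llc's -mattr= and -mcpu= flags would supply.
declare i8* @llvm.codegen(i8* %ir, i8* %features, i8* %mcpu)

; e.g. %features = "+sm_20" and %mcpu = "sm_20" for a PTX target.
```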
Justin Holewinski
2012-Apr-28 23:21 UTC
[LLVMdev] [PATCH][RFC] Add llvm.codegen Intrinsic To Support Embedded LLVM IR Code Generation
On Sat, Apr 28, 2012 at 8:27 AM, Tobias Grosser <tobias at grosser.es> wrote:

> On 04/28/2012 04:30 PM, Justin Holewinski wrote:
>
>>>> We can handle this by providing a new argument (e.g. a string describing a properly configured TargetMachine) instead of, or in addition to, the arch string argument.
>>>
>>> I think we should discuss in general which additional information the back ends need and provide that information as parameters. We may want to do this on demand, once we agree on the general usefulness of this intrinsic.
>>
>> Any solution would need to be able to handle feature flags (e.g. -mattr=+sm_20), as well as generic llc options (e.g. -regalloc=greedy). What happens when the options conflict with the original options passed to llc? The CodeGenIntrinsic pass would need to emulate all (or at least most) of llc, but in a way that doesn't interfere with llc's global state. Unfortunately, parameters like "regalloc=" are globals. To do this without massive LLVM changes, you may need to spawn another instance of llc as a separate process.
>
> I think feature flags should not be a problem. The function createTargetMachine() takes a feature string. We can take this string as a parameter of the intrinsic and use it to parametrize the target machine. If needed, we can also add parameters to define the relocation model, mcpu, the code model, the optimization level or the target options. None of these parameters are influenced by the command-line options of the llc invocation, and they will, for now, be set to default values for the embedded code generation.
>
> We should probably add the most important options now and add others on demand. Which options would you suggest adding initially? I suppose we need 1) the feature string and 2) mcpu. Is there anything else you would suggest?
>
> regalloc= is different. It is global and consequently influences both host and device code generation. However, to me it is rather a debugging option. It is never set by clang, and targets provide a reasonable default based on the optimization level. I believe we can assume that for our use case it is not set. If it is ever really necessary to set the register allocator explicitly, the right solution would be to make regalloc a target option.

The regalloc= option was just an example of the types of flags that can be passed to llc, which are handled as global options instead of target options.

>>> The intrinsic-based approach requires only small changes, restricted to LLVM itself. In particular, it works without changes to the established LLVM optimization chain: 'opt | llc' will work out of the box, but, more importantly, any LLVM-based compiler can directly load a GPGPUOptimizer.so file to gain a GPU-based accelerator. Besides the need to load some runtime library, no additional knowledge needs to be embedded in individual compiler implementations; all the logic of GPGPU code generation can remain within a single LLVM optimization pass. Another nice feature of the intrinsic is that the relation between host and device code is explicitly encoded in the LLVM-IR (through the llvm.codegen function calls). There is no need to put this information into individual tools and/or to carry it through metadata. Instead, the precise semantics are directly available in the LLVM-IR.
>>
>> I just worry about the scalability of this approach. Once you embed the IR, no optimizer can touch it, so this potentially creates problems with pass scheduling. When you generate the IR, you want it to be fully optimized before embedding. Alternatively, you could invoke opt+llc when lowering the llvm.codegen intrinsic.
>
> Where do you see scalability problems?
>
> I agree that the llvm.codegen intrinsic is limited to plain code generation, meaning it is an embedded llc. I do not expect any part of LLVM to be extended to reason about optimizing the embedded IR. The optimization that created this intrinsic is in charge of optimizing the embedded IR as needed. However, this is not a big problem: a generic LLVM-IR optimization pass can schedule the required optimizations as needed.

The implicit assumption seems to be that the host code wants the device code as assembly text. What happens when you need to link the device binary and upload it separately? Think automatic SPU codegen on Cell. Is it up to the host program to invoke the other target's linker?

>>> Justin: With your proposed two-file approach, what changes would be needed to add e.g. GPGPU code generation support to clang/dragonegg or Haskell+LLVM? Can you see a way this can be done without large changes to each of these users?
>>
>> To be fair, I'm not necessarily advocating the two-file approach. It has its shortcomings, too. But this is in some sense the crux of the problem. The intrinsic approach is clearly the path of least resistance, especially in the case of the GSoC project. However, I think a more long-term solution involves looking at this problem from the IR level. The current LLVM approach is "one arch in, one arch out". As far as I know, even ARM needs separate modules for ARM vs. Thumb (please correct me if I'm mistaken). Whether the tools are extended to support multiple outputs with some linking information, or the IR is extended to support something like per-function target triples, is a decision that would need to be addressed by the entire LLVM community.
>
> I agree that future work can be useful here. However, before spending a large amount of time engineering a complex solution, I propose to start with the proposed light-weight approach. It is sufficient for our needs and will give us the experience and infrastructure that can help us choose and implement a more complex solution later on.

I agree that this approach is the best way to get short-term results, especially for the GSoC project.

> Tobi

--
Thanks,
Justin Holewinski