On Tuesday 30 December 2008 15:30:35 Chris Lattner wrote:
> On Dec 30, 2008, at 6:39 AM, Corbin Simpson wrote:
>>> However, the special instructions cannot directly be mapped to LLVM
>>> IR, like "min"; the conversion involves an 'extractelement' from the
>>> vector, creating a less-than compare, creating a 'select' instruction,
>>> and creating an 'insertelement' instruction.
>
> Using scalar operations obviously works, but will probably produce
> very inefficient code. One positive thing is that all target-specific
> operations of the supported vector ISAs (Altivec and SSE[1-4] currently)
> are exposed either through LLVM IR ops or through target-specific
> builtins/intrinsics. This means that you can get access to all the
> crazy SSE instructions, but it means that your codegen would have to
> handle this target-specific code generation.

I think Alex was referring here to an AOS layout, which is completely not
ready. The currently supported one is the SOA layout, which eliminates the
scalar operations.

> The direction we're going is to expose more and more vector operations
> in LLVM IR. For example, compares and select are currently being
> worked on, so you can do a comparison of two vectors which returns a
> vector of bools, and use that as the compare value of a select
> instruction (selecting between two vectors). This would allow
> implementing min and a variety of other operations, and is easier for
> the codegen to reassemble into a first-class min operation etc.
>
> I don't know what the status of this is; I think it is partially
> implemented but may not be complete yet.

Ah, that's good to know!

>>> I don't have experience with the new vector instructions in LLVM, and
>>> perhaps that's why it feels complicated to me to fold the swizzle and
>>> writemask.
>
> We have really good support for swizzling operations already with the
> shuffle_vector instruction.
> I'm not sure about writemask.

With SOA they're rarely used (essentially never, unless we "kill" a pixel):

  [4 x <4 x float>] { {xxxx, yyyy, zzzz, wwww}, {xxxx, yyyy, zzzz, wwww}, ... }

so with SOA both shuffles and writemasks come down to a simple selection of
the element within the array (whether that will be good or bad is yet to be
seen, based on the code in the GPU LLVM backends that we'll have).

> Sure, it would be very reasonable to make these target-specific
> builtins when targeting a GPU, the same way we have target-specific
> builtins for SSE.

Actually, currently the plan is to have essentially a "two pass" LLVM IR. I
wanted the first one to never lower any of the GPU instructions, so we'd
have intrinsics or maybe even just function calls like gallium.lit,
gallium.dot, gallium.noise and such. Then Gallium should query the driver to
figure out which instructions the GPU supports and run our custom LLVM
lowering pass that decomposes those into things the GPU supports.
Essentially I'd like to do as many of the complicated things in Gallium as
possible, to make the GPU LLVM backends in drivers as simple as possible.
This would help us make the pattern matching in the generator /a lot/ easier
(matching gallium.lit vs. the 9+ instructions it would be decomposed into)
and give us a more generic, GPU-independent layer above. But that hasn't
been done yet; I hope to be able to write that code while working on the
OpenCL implementation for Gallium.

z
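[For reference, a hedged sketch of the compare-and-select formulation of
"min" that Chris describes, written in LLVM IR; the syntax below follows
later LLVM releases, since vector select was only partially implemented at
the time of this thread:]

```llvm
; Element-wise min of two <4 x float> vectors using one vector compare
; plus one vector select, instead of four scalar
; extractelement/fcmp/select/insertelement sequences.
define <4 x float> @vec_min(<4 x float> %a, <4 x float> %b) {
  %lt  = fcmp olt <4 x float> %a, %b            ; <4 x i1>, a "vector of bools"
  %min = select <4 x i1> %lt, <4 x float> %a, <4 x float> %b
  ret <4 x float> %min
}
```

A codegen can then pattern-match this pair back into a single first-class
hardware min instruction.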
On Dec 30, 2008, at 3:03 PM, Zack Rusin wrote:
> On Tuesday 30 December 2008 15:30:35 Chris Lattner wrote:
>> On Dec 30, 2008, at 6:39 AM, Corbin Simpson wrote:
>>>> However, the special instructions cannot directly be mapped to LLVM
>>>> IR, like "min"; the conversion involves an 'extractelement' from the
>>>> vector, creating a less-than compare, creating a 'select' instruction,
>>>> and creating an 'insertelement' instruction.
>>
>> Using scalar operations obviously works, but will probably produce
>> very inefficient code. One positive thing is that all target-specific
>> operations of the supported vector ISAs (Altivec and SSE[1-4] currently)
>> are exposed either through LLVM IR ops or through target-specific
>> builtins/intrinsics. This means that you can get access to all the
>> crazy SSE instructions, but it means that your codegen would have to
>> handle this target-specific code generation.
>
> I think Alex was referring here to an AOS layout, which is completely
> not ready. The currently supported one is the SOA layout, which
> eliminates the scalar operations.

Ok!

>> Sure, it would be very reasonable to make these target-specific
>> builtins when targeting a GPU, the same way we have target-specific
>> builtins for SSE.
>
> Actually, currently the plan is to have essentially a "two pass" LLVM
> IR. I wanted the first one to never lower any of the GPU instructions,
> so we'd have intrinsics or maybe even just function calls like
> gallium.lit, gallium.dot, gallium.noise and such. Then Gallium should
> query the driver to figure out which instructions the GPU supports and
> run our custom LLVM lowering pass that decomposes those into things the
> GPU supports.

That makes a lot of sense.
Note that there is no reason to use actual LLVM intrinsics for this: naming
them "gallium.lit" is just as good as "llvm.gallium.lit", for example.

> Essentially I'd like to do as many of the complicated things in Gallium
> as possible, to make the GPU LLVM backends in drivers as simple as
> possible. This would help us make the pattern matching in the generator
> /a lot/ easier (matching gallium.lit vs. the 9+ instructions it would be
> decomposed into) and give us a more generic, GPU-independent layer
> above. But that hasn't been done yet; I hope to be able to write that
> code while working on the OpenCL implementation for Gallium.

Makes sense. For the more complex functions (e.g. texture lookup) you can
also just compile C code to LLVM IR and use the LLVM inliner to inline the
code if you prefer.

-Chris
Zack Rusin wrote:
>> Sure, it would be very reasonable to make these target-specific
>> builtins when targeting a GPU, the same way we have target-specific
>> builtins for SSE.
>
> Actually, currently the plan is to have essentially a "two pass" LLVM IR.
> I wanted the first one to never lower any of the GPU instructions, so
> we'd have intrinsics or maybe even just function calls like gallium.lit,
> gallium.dot, gallium.noise and such. Then Gallium should query the driver
> to figure out which instructions the GPU supports and run our custom LLVM
> lowering pass that decomposes those into things the GPU supports.
> Essentially I'd like to do as many of the complicated things in Gallium
> as possible, to make the GPU LLVM backends in drivers as simple as
> possible. This would help us make the pattern matching in the generator
> /a lot/ easier (matching gallium.lit vs. the 9+ instructions it would be
> decomposed into) and give us a more generic, GPU-independent layer above.
> But that hasn't been done yet; I hope to be able to write that code while
> working on the OpenCL implementation for Gallium.

Um, whichever. Honestly, I'm gonna do s/R300VS/R300FS/g on my current work,
commit it, and then forget about it for the next two months while I get a
pipe working. I've got a skeleton that does nothing, and I won't do
anything else until we're solid on how to proceed. I'm definitely not very
experienced in this area, so I defer to you all.

R300 Radeons have instructions that operate on vectors, and instructions
that operate only on the .w of each operand. I don't know how best to
represent them.

So far, the strange (read: non-LLVM) things seem to be:

- No pointers.
- No traditional load and store concepts.
- Only one type, v4f32.
- No modifiable stack, no frame pointers, no calling conventions.
- No variable-length loops.

I can tell you for sure that the ATI HLSL compiler unwinds and unrolls
everything, so that they don't have to deal with call and ret.
Other than that, I don't know how to handle this stuff.

~ C.
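[To make the "two pass" plan quoted above concrete, a hedged sketch of what
the first-pass IR could look like; the gallium.* names come from Zack's
mail, while the signatures are my assumption:]

```llvm
; First pass: GPU operations are left as opaque calls. A caps-driven
; lowering pass later either keeps a call (if the GPU has a native
; instruction for it) or decomposes it into simpler operations the
; backend can match.
declare <4 x float> @gallium.lit(<4 x float>)
declare float @gallium.dot(<4 x float>, <4 x float>)

define <4 x float> @shade(<4 x float> %src) {
  %lit = call <4 x float> @gallium.lit(<4 x float> %src)
  ret <4 x float> %lit
}
```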
Chris Lattner wrote:
> The direction we're going is to expose more and more vector operations in
> LLVM IR. For example, compares and select are currently being worked on,
> so you can do a comparison of two vectors which returns a vector of
> bools, and use that as the compare value of a select instruction
> (selecting between two vectors). This would allow implementing min and a
> variety of other operations and is easier for the codegen to reassemble
> into a first-class min operation etc.

With the motivation of making it easier for the codegen to reassemble the
first-class operations, do you also mean that there will be vector versions
of add, sub, and mul, which are usually supported by many vector GPUs?

Stephane Marchesin wrote:
> So what remains are chips that are natively vector GPUs. The question is
> more whether we'll be able to have llvm build up vector instructions from
> scalar ones

The reason why I started this thread was to look for some example code
doing this. Do we already have any backend in LLVM doing this? It seems not
easy to me.

Zack Rusin wrote:
> I think Alex was referring here to an AOS layout, which is completely not
> ready.
>
> Actually, currently the plan is to have essentially a "two pass" LLVM IR.
> I wanted the first one to never lower any of the GPU instructions, so
> we'd have intrinsics or maybe even just function calls like gallium.lit,
> gallium.dot, gallium.noise and such. Then Gallium should query the driver
> to figure out which instructions the GPU supports and run our custom LLVM
> lowering pass that decomposes those into things the GPU supports.

If I understand correctly, that is to say, Gallium will dynamically build a
lowering pass by querying the capabilities (the instructions supported by
the GPU)?
Instead, isn't it a better approach to have a lowering pass for each GPU
and have Gallium simply use it?

> Essentially I'd like to do as many of the complicated things in Gallium
> as possible, to make the GPU LLVM backends in drivers as simple as
> possible. This would help us make the pattern matching in the generator
> /a lot/ easier (matching gallium.lit vs. the 9+ instructions it would be
> decomposed into) and give us a more generic, GPU-independent layer above.
> But that hasn't been done yet; I hope to be able to write that code while
> working on the OpenCL implementation for Gallium.

This two-pass approach is what I am taking now to write the compiler for a
GPU (sorry, but I am not allowed to reveal the name). I don't work on
Gallium directly; I am writing a frontend which converts vs_3_0 to LLVM IR.
That's why I referenced both the SOA and AOS code. I think the NDA will
allow me (to be confirmed) to contribute only this frontend, but neither
the LLVM backend nor the lowering pass for this GPU.

What do you plan to do with the SOA and AOS paths in Gallium?

(1) Will they eventually be developed independently, so that for a
scalar/SIMD GPU the SOA path is used to generate LLVM IR, and for a vector
GPU the AOS path is used?

(2) At present the difference between the SOA and AOS paths is not only the
layout of the input data. The AOS path seems more complete to me, though
Zack has said that it's completely not ready and not used in Gallium. Is
there a plan to merge/add support for functions/branches and the LLVM IR
extract/insert/shuffle instructions to the SOA code?

By the way, is there any open source frontend which converts GLSL to LLVM
IR?

Alex.
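[On the add/sub/mul question: LLVM IR's arithmetic instructions are already
defined over vector types, so no separate vector opcodes are needed. A
minimal sketch:]

```llvm
; fmul/fadd applied directly to <4 x float> operands; the backend is
; free to select a single SIMD (or GPU vector) instruction for each,
; or a fused multiply-add for the pair.
define <4 x float> @mad(<4 x float> %a, <4 x float> %b, <4 x float> %c) {
  %prod = fmul <4 x float> %a, %b
  %sum  = fadd <4 x float> %prod, %c
  ret <4 x float> %sum
}
```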