Hi all,

I'm new on the list, so I want to say hello to everybody! I'm from Hungary and writing an LLVM backend for the Tile64 processor as my master's thesis. There's a lot of time pressure on me, so the thesis will probably describe a backend providing only an assembly printer, but development is likely to continue beyond the thesis.

For now, I've run into a very annoying problem while implementing the calling convention of the Tilera architecture. The ABI says that a struct can be passed in registers if it fits in them. Moreover, if a struct is passed in registers, it must be stored well aligned, i.e. just as it resides in memory. A padding register must be maintained before a double-word-aligned value if needed, and more than one value can be stored in a single register, e.g. two i16 values in an i32 register.

As I understand it, LLVM tries to decompose data types into smaller components in some circumstances. E.g. automatically decomposing a double into two i32 arguments is very useful for me because the processor has only i32 registers. However, this decomposition is a nightmare for structs that should be passed in registers. Speaking of function arguments, the problem can be mitigated by using a pointer tagged with the byval attribute and catching such an argument in a custom CC function. On the other hand, when a function should return a struct, byval can't be used.

Of course, there is no problem when sret demotion takes place, either automatically for structs that are too big or forced by the sret attribute. However, smaller structs get decomposed into their component elements by default when returned. I googled all day but, unfortunately, couldn't find a solution.

Is there any way to disable this feature of LLVM and get structures as they are when returning them?

Besides solutions, any suggestions and ideas are appreciated :)

All the best,
David
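To make the situation concrete, here is a minimal IR sketch (modern opaque-pointer syntax; all names hypothetical). The first function returns a small struct by value, which the backend decomposes into its component elements; the second shows the sret form, where the struct is instead returned through memory and no decomposition happens:

```llvm
; Hypothetical 4-byte struct of two i16 fields.
%struct.pair = type { i16, i16 }

; Returned by value: by default the backend sees this return as
; two separate i16 components, losing the struct layout.
define %struct.pair @get_pair() {
  ret %struct.pair { i16 1, i16 2 }
}

; With sret demotion the struct is returned through a hidden
; pointer argument, so it stays in one piece in memory.
define void @get_pair_sret(ptr sret(%struct.pair) %out) {
  store %struct.pair { i16 1, i16 2 }, ptr %out
  ret void
}
```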
Hi David,

> I'm new on the list, so I want to say hello to everybody!

Hello!

> I'm from Hungary and writing an LLVM backend for the Tile64 processor as my
> master's thesis. It's a big time pressure on me, so the thesis will
> probably describe a backend providing only an assembly printer, but
> development is likely to continue beyond the thesis.
>
> For now, I've run into a very annoying problem while implementing the
> calling convention of the Tilera architecture. The ABI says that a struct
> can be passed in registers if it fits in them. Moreover, if a struct
> is passed in registers, it must be stored well aligned, i.e. just
> as it resides in memory. A padding register must be maintained
> before a double-word-aligned value if needed, and more than one value
> can be stored in a single register, e.g. two i16 values in an i32 register.
>
> As I understand it, LLVM tries to decompose data types into smaller
> components in some circumstances.

Can you please explain more about what you are referring to here? LLVM itself shouldn't be changing function parameters or return types unless the function has local (internal) linkage (since in that case ABI requirements don't matter). But perhaps you mean that clang is producing IR with these kinds of transformations already in it? If so, that's normal: front-ends are required to produce ABI-conformant IR, so clang is probably producing IR for some other ABI (e.g. x86). If so, you will need to teach clang about your ABI.

Ciao,
Duncan.

> E.g. automatically decomposing a double into two i32 arguments is very
> useful for me because the processor has only i32 registers. However,
> this decomposition is a nightmare for structs that should be passed in
> registers. Speaking of function arguments, the problem can be mitigated
> by using a pointer tagged with the byval attribute and catching such an
> argument in a custom CC function. On the other hand, when a function
> should return a struct, byval can't be used.
>
> Of course, there is no problem when sret demotion takes place, either
> automatically for structs that are too big or forced by the sret
> attribute. However, smaller structs get decomposed into their component
> elements by default when returned. I googled all day but, unfortunately,
> couldn't find a solution.
>
> Is there any way to disable this feature of LLVM and get structures
> as they are when returning them?
>
> Besides solutions, any suggestions and ideas are appreciated :)
>
> All the best,
> David
> _______________________________________________
> LLVM Developers mailing list
> LLVMdev at cs.uiuc.edu http://llvm.cs.uiuc.edu
> http://lists.cs.uiuc.edu/mailman/listinfo/llvmdev
On Wednesday 02 May 2012 09:12:16 Duncan Sands wrote:
>> As I understand it, LLVM tries to decompose data types into smaller
>> components in some circumstances.
>
> Can you please explain more about what you are referring to here? LLVM itself
> shouldn't be changing function parameters or return types unless the
> function has local (internal) linkage (since in that case ABI requirements
> don't matter).

This is in the backend of LLVM itself. When converting the LLVM IR to its DAG representation prior to selection, CodeGen asks the target to take care of function parameters. Unfortunately, the only interface it presents for the target code to make that decision is a sequence of MVTs: iN, float, double, vNiM, vNfM. Structs are split into their component members with no indication that they were originally more than that.

This has affected a couple more people recently (including me):

http://lists.cs.uiuc.edu/pipermail/llvmdev/2012-March/048203.html
http://lists.cs.uiuc.edu/pipermail/cfe-commits/Week-of-Mon-20120326/055577.html

If this interface could be improved, I believe clang could simply apply a function to each of its QualTypes and produce an LLVM type which does the right thing. Without that improvement, clang will have to use a context-sensitive model to map the whole sequence of arguments. At least, that's the ARM situation. I'm not sure Ivan's can even be solved without an improved interface (well, he could probably co-opt byval pointers too, but that's Just Wrong).

This most recent one, I'm not sure about. Whether a struct can be mapped to a sane sequence of iN types probably hinges on the various alignment constraints and whether an argument can be split between registers and memory. (If a split is allowed, then you can probably use [N x iM] where the struct has size N*M and alignment M (assuming iM has alignment M); otherwise that would be wrong.)
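The [N x iM] idea above can be sketched in IR as follows (a hypothetical example, not from the thread): an 8-byte struct with 4-byte alignment is coerced to [2 x i32], so each element can be assigned to an i32 register or a memory slot without the backend losing track of the overall size and alignment:

```llvm
; Hypothetical 8-byte struct with 4-byte alignment.
%struct.s = type { i32, i16, i16 }

; The callee receives the struct coerced to [2 x i32]:
; two i32-sized pieces, each naturally aligned to 4 bytes.
define void @callee([2 x i32] %s.coerce) {
  ret void
}

; The caller loads the struct's bytes as [2 x i32] and passes
; that array instead of the struct type itself.
define void @caller(ptr %p) {
  %val = load [2 x i32], ptr %p, align 4
  call void @callee([2 x i32] %val)
  ret void
}
```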
And Juhasz David wrote:
> the problem can be mitigated by using a
> pointer tagged with the byval attribute and catching such an argument in a
> custom CC function.

That's the approach I've currently adopted for some of my work, but it's incomplete for my needs and I'm rather concerned about the performance of what does work: unless we reimplement mem2reg in the backend too, it introduces what amounts to an argument alloca with associated load/store very late on.

Tim.
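The byval workaround and its cost can be sketched like this (hypothetical names, modern opaque-pointer syntax): the caller must materialize the struct in a stack slot and pass a pointer to it, and it is exactly this alloca plus the stores filling it that amounts to the late-introduced memory traffic Tim is concerned about:

```llvm
; Hypothetical 8-byte struct passed via the byval workaround.
%struct.s = type { i32, i32 }

; The callee declares the argument byval, so a custom CC function
; can see the whole aggregate rather than decomposed pieces.
define void @callee(ptr byval(%struct.s) align 4 %s) {
  ret void
}

define void @caller(i32 %a, i32 %b) {
  ; The caller builds a stack copy of the struct...
  %tmp = alloca %struct.s, align 4
  %f0 = getelementptr inbounds %struct.s, ptr %tmp, i32 0, i32 0
  store i32 %a, ptr %f0, align 4
  %f1 = getelementptr inbounds %struct.s, ptr %tmp, i32 0, i32 1
  store i32 %b, ptr %f1, align 4
  ; ...and passes its address; the alloca/store traffic stays
  ; unless something like mem2reg cleans it up in the backend.
  call void @callee(ptr byval(%struct.s) align 4 %tmp)
  ret void
}
```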