Pertti Kellomäki
2008-Apr-03 07:31 UTC
[LLVMdev] Alias analysis and instruction level parallelism
Florian Brandner wrote:
> another option would be to save the GEP instructions and provide a mapping
> between them and the lowered arithmetic. it's not very clean but it would
> be safe, and probably work well.

In my opinion this seems like the obvious way to go. The information is
already there, so reverse engineering it back does not sound too clever.
Of course, if the reverse engineering would provide useful information
about pointers that do not originate from GEP instructions, then it might
be worthwhile, but my feeling is that at least for our purposes, GEPs
really are the interesting part. My understanding is that parallelizing C
compilers try to convert pointer arithmetic back into array accesses in
order to make sense of it, essentially turning it back into GEPs.
Certainly the dependence literature is heavily array based.
--
Pertti
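[To make the contrast above concrete, here is a small illustration in plain
C++ (not LLVM IR, and not taken from any compiler output). The two functions
perform the same stores, but only the first keeps the "element i of array a"
structure that a GEP, and dependence analysis, can use directly; the second
exposes only raw address arithmetic and has to be reverse-engineered back
into the first form.]

    #include <cstdint>

    void scale_indexed(int *a, int n) {
      for (int i = 0; i < n; ++i)
        a[i] = 2 * a[i];             // structured access: base plus index, like a GEP
    }

    void scale_lowered(int *a, int n) {
      std::uintptr_t base = reinterpret_cast<std::uintptr_t>(a);
      for (int i = 0; i < n; ++i) {
        int *p = reinterpret_cast<int *>(base + std::uintptr_t(i) * sizeof(int));
        *p = 2 * *p;                 // same access, but only raw address arithmetic is visible
      }
    }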
Dan Gohman
2008-Apr-03 19:14 UTC
[LLVMdev] Alias analysis and instruction level parallelism
On Apr 3, 2008, at 12:31 AM, Pertti Kellomäki wrote:
> Florian Brandner wrote:
>> another option would be to save the GEP instructions and provide a
>> mapping between them and the lowered arithmetic. it's not very clean
>> but it would be safe, and probably work well.
>
> In my opinion this seems like the obvious way to go.

I think this is trickier than it sounds; the reason GEPs are lowered is to
allow strength reduction and other passes to do transformations on them.
It would require those passes to know how to update the mapping.

Dan
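[A rough sketch of Dan's point, again in hand-written C++ rather than actual
pass output: loop strength reduction rewrites the lowered address
computation, so a mapping keyed to the original arithmetic instructions is
left dangling unless the pass knows to update it.]

    #include <cstdint>

    // Before strength reduction: the address is recomputed each iteration
    // as base + i * stride, which is the arithmetic a saved GEP mapping
    // would point at.
    void store_zero_before(int *a, int n) {
      std::uintptr_t base = reinterpret_cast<std::uintptr_t>(a);
      for (int i = 0; i < n; ++i)
        *reinterpret_cast<int *>(base + std::uintptr_t(i) * sizeof(int)) = 0;
    }

    // After strength reduction: the multiply is gone, replaced by a pointer
    // bumped by the stride each iteration, so the instructions the mapping
    // referred to no longer exist.
    void store_zero_after(int *a, int n) {
      int *p = a;
      for (int i = 0; i < n; ++i, ++p)
        *p = 0;
    }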
Pertti Kellomäki
2008-Apr-03 21:20 UTC
[LLVMdev] Alias analysis and instruction level parallelism
Dan Gohman wrote:
> I think this is trickier than it sounds; the reason GEPs are lowered is
> to allow strength reduction and other passes to do transformations on
> them. It would require those passes to know how to update the mapping.

Yes, I appreciate the amount of work involved, and I am very open to other
suggestions.

What the back end really needs to know is which loads and stores are
independent of each other. When looking at a scheduling region (a basic
block, for now), we already know at the LLVM IR level which loads and
stores could potentially be scheduled in the same cycle, so essentially we
can anticipate which alias queries the back end would make.

An alternative plan would then be to identify the loads and stores of
interest, do the alias queries at the LLVM IR level, and store the
independence information somewhere. The back end would then trace the
target memory references back to the LLVM IR loads and stores, and consult
the stored independence information while scheduling.

It seems to me that this should be safe, at least if one is careful about
which passes are run between the alias analysis and the back end. I cannot
think of anything offhand that could make two independent memory references
become dependent, which is what would have to happen for things to go
wrong. There is a potential combinatorial explosion built into this, but I
don't think it would show up in practice.
--
Pertti
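[A minimal sketch of the kind of data structure such a scheme might use,
assuming some hypothetical stable identifier can be attached to each IR
load and store and survive into the back end; none of this is LLVM API.]

    #include <cstdint>
    #include <set>
    #include <utility>

    // Hypothetical stable id attached to each IR-level load/store.
    using MemOpId = std::uint64_t;

    class IndependenceInfo {
      // Each pair proven independent is stored once, in canonical
      // (smaller, larger) order.
      std::set<std::pair<MemOpId, MemOpId>> Independent;

      static std::pair<MemOpId, MemOpId> canon(MemOpId A, MemOpId B) {
        return A < B ? std::make_pair(A, B) : std::make_pair(B, A);
      }

    public:
      // Called at the IR level, once per anticipated query, with the
      // result of a normal alias analysis query.
      void recordIndependent(MemOpId A, MemOpId B) {
        Independent.insert(canon(A, B));
      }

      // Called by the back-end scheduler. Any pair with no record must be
      // treated as possibly dependent.
      bool provablyIndependent(MemOpId A, MemOpId B) const {
        return Independent.count(canon(A, B)) != 0;
      }
    };

[The safety of the scheme rests on the conservative default: a pair that
was never recorded is treated as possibly dependent, so missing entries can
only cost scheduling freedom, not correctness.]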