Hello LLVM developers,

Question from a newcomer. I have been looking at the code in LLVM's register allocator (lib/CodeGen/RegAllocGreedy.cpp) and spill slot coloring (lib/CodeGen/StackSlotColoring.cpp).

In other graph coloring register allocators that I have worked on, live ranges are pushed onto the coloring stack in order of increasing degree, then colored as they are popped off (meaning that colors are assigned to the most highly constrained live ranges first, then to less-constrained live ranges later). There is a discussion of why this makes sense in Preston Briggs's thesis, chapter 3.

The LLVM allocator does something similar in that live range "size" is taken into account; however, size is not exactly the same as interference-graph degree in a Chaitin-style allocator. In particular, it is easy to imagine cases where you have two live ranges X and Y where X's size (from LiveInterval::getSize()) is greater than Y's size, but Y is in fact more constrained than X (or would be if you were building a Chaitin-style interference graph).

I am wondering what the history/thinking is behind this design decision, and whether folks have experimented with enhancing the allocators with a more precise notion of "constrained-ness" to improve coloring.

Thanks,
Than McIntosh
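[Editor's note: for readers unfamiliar with the Chaitin/Briggs ordering referred to above, here is a minimal, self-contained sketch of that simplify-and-select order. It is not LLVM code and not from Briggs's thesis; all names and data structures are illustrative. Low-degree nodes are pushed first, so the most constrained nodes are colored first when the stack is popped, which is the ordering the question contrasts with LLVM's size-based ordering.]

```cpp
// Sketch of Chaitin/Briggs-style simplify-and-select coloring (illustrative only).
#include <optional>
#include <vector>

using Node = unsigned;

struct InterferenceGraph {
  // adj[n] lists the nodes whose live ranges interfere with n.
  std::vector<std::vector<Node>> adj;
  unsigned size() const { return adj.size(); }
};

// Simplify: repeatedly remove a node of minimal current degree and push it.
// Least-constrained nodes end up deepest in the stack.
static std::vector<Node> buildColoringStack(const InterferenceGraph &G) {
  std::vector<unsigned> degree(G.size());
  std::vector<bool> removed(G.size(), false);
  for (Node n = 0; n < G.size(); ++n)
    degree[n] = G.adj[n].size();

  std::vector<Node> stack;
  for (unsigned step = 0; step < G.size(); ++step) {
    Node best = 0;
    unsigned bestDeg = ~0u;
    for (Node n = 0; n < G.size(); ++n)
      if (!removed[n] && degree[n] < bestDeg) {
        best = n;
        bestDeg = degree[n];
      }
    removed[best] = true;
    stack.push_back(best);
    for (Node m : G.adj[best]) // removing a node lowers its neighbors' degrees
      if (!removed[m])
        --degree[m];
  }
  return stack;
}

// Select: pop nodes (most constrained first) and assign each the lowest color
// not already used by a colored neighbor. Returns nullopt if a node cannot be
// colored (where a real allocator would spill).
static std::optional<std::vector<unsigned>> color(const InterferenceGraph &G,
                                                  unsigned numColors) {
  std::vector<Node> stack = buildColoringStack(G);
  std::vector<int> colorOf(G.size(), -1);
  while (!stack.empty()) {
    Node n = stack.back();
    stack.pop_back();
    std::vector<bool> used(numColors, false);
    for (Node m : G.adj[n])
      if (colorOf[m] >= 0)
        used[colorOf[m]] = true;
    unsigned c = 0;
    while (c < numColors && used[c])
      ++c;
    if (c == numColors)
      return std::nullopt;
    colorOf[n] = c;
  }
  return std::vector<unsigned>(colorOf.begin(), colorOf.end());
}
```

The constraint measure here is interference-graph degree, i.e. how many overlapping live ranges compete for the same registers, which is the notion the question distinguishes from the raw live-range size reported by LiveInterval::getSize().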