> On Mon, Dec 15, 2014 at 9:57 AM, Chad Rosier <mcrosier at codeaurora.org> wrote:
>> All,
>> About two months ago I posted a patch that hoisted the hottest case
>> statement from a switch statement during ISelLowering.
>>
>> See: http://reviews.llvm.org/D5786
>>
>> Sean was rather adamant about using a Huffman tree (and I agree this is a
>> more complete solution), so I'd like to put a patch together.
>
> I think this sounds really cool!
>
>> That being said, I have a few questions.
>>
>> The current switch lowering code sorts based on case values and is
>> lowered with a combination of binary trees, lookup tables, bit tests and
>> magic. If we lower the switch based on branch probabilities, I think the
>> most reasonable approach would be to disable the lookup tables and stick
>> with a purely tree structure. Does anyone object or have opinions on
>> this matter?
>
> Spontaneously I would have thought the Huffman tree would just replace
> the binary trees, i.e. we'd use the branch probabilities to build the
> tree differently.

The current implementation selects the pivot in such a way that it
optimizes for lookup tables. Lowering the binary tree based on branch
probabilities doesn't fit well with this approach. However, I think we
could start by building a Huffman tree, and once we've handled the most
heavily weighted nodes, we could fall back to the current logic. Make
sense?

> For how many hot cases is a Huffman tree faster than a jump table? I
> suspect we'll still want to build jump tables when we can.

I take back my previous comment; I agree we should have a hybrid
approach, per my comments above.

> Thanks,
> Hans
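To make the proposed construction concrete, here is a minimal,
self-contained C++ sketch of a Huffman-style tree built over switch
cases weighted by branch probability. The CaseNode type, the weights,
and buildHuffmanTree are hypothetical illustrations for this discussion,
not part of any LLVM API.

#include <cstdint>
#include <memory>
#include <queue>
#include <vector>

struct CaseNode {
  uint64_t Weight;                       // profile-derived branch weight (hypothetical)
  int64_t CaseValue = 0;                 // meaningful for leaves only
  std::shared_ptr<CaseNode> Left, Right; // both null for leaf nodes
  bool isLeaf() const { return !Left && !Right; }
};

using NodePtr = std::shared_ptr<CaseNode>;

// Classic Huffman construction: repeatedly merge the two lightest
// subtrees. Each internal node becomes a compare-and-branch decision,
// so the heaviest cases end up nearest the root and are reached with
// the fewest branches.
NodePtr buildHuffmanTree(std::vector<NodePtr> Cases) {
  auto Lighter = [](const NodePtr &A, const NodePtr &B) {
    return A->Weight > B->Weight; // min-heap ordered by weight
  };
  std::priority_queue<NodePtr, std::vector<NodePtr>, decltype(Lighter)>
      Queue(Lighter, std::move(Cases));
  while (Queue.size() > 1) {
    NodePtr A = Queue.top(); Queue.pop();
    NodePtr B = Queue.top(); Queue.pop();
    auto Parent = std::make_shared<CaseNode>();
    Parent->Weight = A->Weight + B->Weight;
    Parent->Left = std::move(A);
    Parent->Right = std::move(B);
    Queue.push(std::move(Parent));
  }
  return Queue.empty() ? nullptr : Queue.top();
}

In the hybrid scheme sketched above, the merge loop could presumably
stop once the few heaviest cases sit near the root, handing the
remaining cold, value-sorted cases back to the existing pivot-based
lowering.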
On Mon, Dec 15, 2014 at 2:26 PM, Chad Rosier <mcrosier at codeaurora.org> wrote:
>> Spontaneously I would have thought the Huffman tree would just replace
>> the binary trees, i.e. we'd use the branch probabilities to build the
>> tree differently.
>
> The current implementation selects the pivot in such a way that it
> optimizes for lookup tables. Lowering the binary tree based on branch
> probabilities doesn't fit well with this approach. However, I think we
> could start by building a Huffman tree, and once we've handled the most
> heavily weighted nodes, we could fall back to the current logic. Make
> sense?

I think it would be interesting to find out where the best cut-off
between jump table and Huffman tree is. If there's a single hot case,
then checking for that first and then doing a jump table is probably
beneficial. But how many branches does it take before just going through
the jump table is faster? Like Sean says, this probably varies a lot
between architectures.

I suspect that if the whole switch can be lowered to a jump table,
that's the best lowering. Using a Huffman tree sounds like a great
solution when a single jump table doesn't work.

 - Hans
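To illustrate the cut-off question, here is a hypothetical C++ rendering
(not LLVM code) of the hybrid lowering described above: one peeled test
for the hottest case, then a dense jump table for the cold remainder.
The case values, weights, and handler functions are all invented for the
example.

#include <cstdint>

void handleHot();     // assume profiling says case 7 dominates
void handleCase0();
void handleCase1();
void handleCase2();
void handleDefault();

void loweredSwitch(int64_t X) {
  // One well-predicted conditional branch for the hot case...
  if (X == 7) {
    handleHot();
    return;
  }
  // ...then a range check plus an indirect branch through a table for
  // the remaining dense cases {0, 1, 2}.
  static void (*const Table[3])() = {handleCase0, handleCase1,
                                     handleCase2};
  if (X >= 0 && X < 3)
    Table[X]();
  else
    handleDefault();
}

The open question above is how many such peeled compares pay for
themselves before plain table dispatch wins, which presumably has to be
measured per target.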
On Mon, Dec 15, 2014 at 2:55 PM, Hans Wennborg <hans at chromium.org> wrote:
> I suspect that if the whole switch can be lowered to a jump table,
> that's the best lowering.

A well-predicted branch can be better than even a well-predicted
indirect branch in lots of cases. At the IR level, it allows
predicate-based simplification that may otherwise be harder. At the
machine level, there are different levels of prediction accuracy, and I
think the direct branch is essentially always likely to do better in
terms of icache.
On Mon, Dec 15, 2014 at 2:26 PM, Chad Rosier <mcrosier at codeaurora.org> wrote:
> The current implementation selects the pivot in such a way that it
> optimizes for lookup tables.

This seems like a potential point of friction: for lookup tables (and
other techniques) you want to inherently keep the cases sorted by their
values, but a Huffman tree does not care about the actual values; it
only cares about their relative probabilities.

-- Sean Silva
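A toy C++ example of that ordering friction, with invented case values
and weights: the same case list wants two different orders depending on
which lowering is consuming it.

#include <algorithm>
#include <cstdint>
#include <vector>

struct Case {
  int64_t Value;
  uint64_t Weight;
};

int main() {
  std::vector<Case> Cases = {{10, 5}, {11, 90}, {12, 1}, {40, 4}};

  // Huffman/peeling view: order by weight; the values are irrelevant.
  std::sort(Cases.begin(), Cases.end(),
            [](const Case &A, const Case &B) { return A.Weight > B.Weight; });
  Case Hot = Cases.front(); // {11, 90} would get a dedicated branch

  // Table/pivot view: the cold remainder must go back to value order so
  // dense ranges can still be found for lookup tables and pivots.
  std::vector<Case> Cold(Cases.begin() + 1, Cases.end());
  std::sort(Cold.begin(), Cold.end(),
            [](const Case &A, const Case &B) { return A.Value < B.Value; });

  (void)Hot;
  return 0;
}

Any hybrid therefore has to re-sort (or separately track) whatever
remains after the hot cases are pulled out, which may be part of why
none of the options below "sit well".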
> On Mon, Dec 15, 2014 at 2:26 PM, Chad Rosier <mcrosier at codeaurora.org> wrote:
>> The current implementation selects the pivot in such a way that it
>> optimizes for lookup tables.
>
> This seems like a potential point of friction: for lookup tables (and
> other techniques) you want to inherently keep the cases sorted by their
> values, but a Huffman tree does not care about the actual values; it
> only cares about their relative probabilities.

Exactly! I can think of lots of ways to deal with this "friction", but
none of them sit well. As you mentioned, building a Huffman tree and
then dealing with the corner cases would be one approach. Another, which
was Owen's suggestion, would be to peel the hot cases and then fall back
to the current logic.