Finkel, Hal J. via llvm-dev
2016-Dec-18 21:00 UTC
[llvm-dev] llvm (the middle-end) is getting slower, December edition
Sent from my Verizon Wireless 4G LTE DROID

On Dec 18, 2016 2:56 PM, via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>
> > On Dec 17, 2016, at 1:35 PM, Davide Italiano via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> >
> > First of all, sorry for the long mail.
> > Inspired by the excellent analysis Rui did for lld, I decided to do
> > the same for llvm.
> > I'm personally very interested in build-time for LTO configuration,
> > with particular attention to the time spent in the optimizer.
>
> From our own offline regression testing, one of the biggest culprits in our experience is InstCombine's known-bits calculation. A number of new known-bits checks have been added in the past few years (e.g. to infer nuw, nsw, etc. on various instructions), and the cost adds up quite a lot, because *the cost is paid even if InstCombine does nothing*, since it's a significant cost on visiting every relevant instruction.

FWIW, I've started working on a patch to add a cache for InstCombine's (ValueTracking's) known-bits calculation. I hope to have it ready for posting soon.

-Hal

> This IME is one of the greatest ways performance gets lost: a tiny bit at a time, whenever a new combine/transformation is added that is *expensive to test for*. The test has to be done every time no matter what (and InstCombine gets called a lot!), so the cost adds up.
>
> —escha
> _______________________________________________
> LLVM Developers mailing list
> llvm-dev at lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
Matthias Braun via llvm-dev
2016-Dec-19 22:32 UTC
[llvm-dev] llvm (the middle-end) is getting slower, December edition
> On Dec 18, 2016, at 1:00 PM, Finkel, Hal J. via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>
> FWIW, I've started working on a patch to add a cache for InstCombine's (ValueTracking's) known-bits calculation. I hope to have it ready for posting soon.

That sounds great! Last time I looked into compile time, ~10 months ago, I also saw computeKnownBits as the biggest performance problem. Little things like load/store optimization calling computeKnownBits in an attempt to improve the alignment predictions on loads/stores lead to many nodes getting queried over and over again. Feel free to add me as a reviewer!

- Matthias