Folks,

I'm trying to rationalize about optimization levels, and maybe we should come up with a document like this one:

http://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html

Though I remember a discussion a few months ago where some people recommended we use names rather than numbers, to dissociate the idea that 3 is better than 2. Regardless, it would be good to have some guidelines on what goes where, so we don't end up in yet another long discussion about where to put the optimization <insert-name-here>.

As far as I can tell, from our side it is:

-O3 : throw everything at it and hope it sticks
-O2 : optimized build, but should not explode in code size nor consume all resources while compiling
-O1 : optimized debug binaries; don't change the execution order, but remove dead code and the like
-O0 : don't touch it
-Os : optimize, but don't run passes that could blow up code size. Try to be a bit more aggressive when removing code. When in doubt, prefer small code over fast code.
-Oz : only perform optimizations that reduce code size. Don't even try to run things that could potentially increase code size.

I've been thinking about this, and regarding those criteria it would make sense to use a try/compare/rollback approach for some passes, at least the most dramatic ones. For instance, the vectorizer keeps the old loops hanging around, and under -Os/-Oz it should be possible to roll back the pass if the end result is bigger. Of course, IR size has little to do with final code size, but that's why we have (and rely so much on) heuristics.

AFAIK, for that to work on the passes as they are, we'd have to implement a transactional model in IRBuilder, which is not trivial, but could be done.

Does anyone have a strong opinion about this?

cheers,
--renato
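To make the try/compare/rollback idea concrete, here is a toy sketch. It is not real LLVM pass code: the "IR" is just a list of strings, the snapshot is a plain copy, and the size heuristic is a raw instruction count; everything named here (FunctionIR, estimateSize, applyIfNotBigger) is hypothetical. In a real pass the snapshot would be a cloned llvm::Function and the estimate would come from a target-aware cost model.

    #include <cstddef>
    #include <functional>
    #include <iostream>
    #include <string>
    #include <utility>
    #include <vector>

    // Stand-in for a function's IR: just a list of "instructions".
    using FunctionIR = std::vector<std::string>;

    // Crude size heuristic: count IR instructions. A real pass would ask a
    // target-aware cost model instead.
    std::size_t estimateSize(const FunctionIR &F) { return F.size(); }

    // Run `transform` on F and keep the result only if the size estimate does
    // not grow; otherwise roll back to the snapshot taken before the transform.
    bool applyIfNotBigger(FunctionIR &F,
                          const std::function<void(FunctionIR &)> &transform) {
      FunctionIR snapshot = F;                // "transaction begin"
      std::size_t before = estimateSize(F);

      transform(F);                           // e.g. the loop vectorizer

      if (estimateSize(F) > before) {         // grew: reject under -Os/-Oz
        F = std::move(snapshot);              // "rollback"
        return false;
      }
      return true;                            // "commit"
    }

    int main() {
      FunctionIR F = {"load", "add", "store"};
      bool kept = applyIfNotBigger(F, [](FunctionIR &IR) {
        IR.push_back("vector.add");           // pretend vectorization added code
        IR.push_back("scalar.tail");
      });
      std::cout << (kept ? "kept" : "rolled back")
                << ", size = " << F.size() << "\n";
    }

The only part that matters is the commit/rollback shape of applyIfNotBigger; whether the snapshot lives in IRBuilder, in the pass, or in a cloned function is exactly the open question above.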
Hi,

Interesting idea. I'll just note one thing at this point in the discussion: whether it's done by trying to "project what a transformation will do" or by applying transforms with the capability to roll back, this depends on having a good idea of how a given piece of code (at some level) will actually perform on a piece of real hardware. Without that, I suspect other aspects of how to do optimizations won't work effectively anyway.

Cheers,
Dave
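A toy illustration of the kind of per-target knowledge that decision needs: a table of per-instruction size/latency estimates used to compare the original and the transformed sequence. The numbers and opcode names below are made up, and real LLVM passes query target-specific cost hooks (the TargetTransformInfo interface) rather than a hard-coded table; this only sketches the shape of the problem.

    #include <cstddef>
    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    // Per-opcode cost estimates for one hypothetical target: {bytes, cycles}.
    struct Cost { std::size_t bytes; std::size_t cycles; };

    Cost estimate(const std::vector<std::string> &code,
                  const std::map<std::string, Cost> &model) {
      Cost total{0, 0};
      for (const auto &op : code) {
        auto it = model.find(op);
        Cost c = (it != model.end()) ? it->second : Cost{4, 1}; // default guess
        total.bytes += c.bytes;
        total.cycles += c.cycles;
      }
      return total;
    }

    int main() {
      // Hypothetical target where vector ops are fast but encode large.
      std::map<std::string, Cost> target = {
          {"add",        {2, 1}},
          {"vector.add", {12, 1}},
          {"load",       {4, 3}},
      };

      std::vector<std::string> scalar     = {"load", "add", "add", "add", "add"};
      std::vector<std::string> vectorized = {"load", "vector.add"};

      Cost s = estimate(scalar, target), v = estimate(vectorized, target);
      std::cout << "scalar:     " << s.bytes << " bytes, " << s.cycles << " cycles\n"
                << "vectorized: " << v.bytes << " bytes, " << v.cycles << " cycles\n";
      // Under -Oz the rollback would compare bytes; under -O2/-O3, cycles.
    }

On this made-up target the vectorized loop body is faster but larger, which is exactly the case where -Os/-Oz and -O2/-O3 should disagree about keeping the transform.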
Dallman, John
2013-Jun-07 12:53 UTC
[LLVMdev] [cfe-dev] Meaning of LLVM optimization levels
I'm not an LLVM or Clang developer, but I do spend a lot of time teasing software into working at the highest possible optimisation levels at which it still works correctly. These guidelines are pretty good, but there are a few details worth considering.

It needs to be possible to debug code at any optimisation level. It's acceptable for that to be harder at high optimisation levels, but it should be possible. I find myself doing this when I hit optimizer bugs and want to make coherent bug reports. The reports are much better if I can work out what's wrong in the generated code. I haven't had to report many problems with Clang ... but I haven't turned up the optimisation all the way either.

Related to optimisation levels, it's quite helpful to have a way of controlling optimisation on a function-by-function level. This is very useful when you're trying to work out where in a file with many functions an optimiser problem is happening; it isn't foolproof, but it helps a lot.

--
John Dallman
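For context, this is the kind of per-function control being discussed. At the time of this thread it was still being worked on, but current Clang spells it with the optnone function attribute; the example below is only an illustration of the idiom, and the attribute is Clang-specific.

    // Per-function optimization control: mark one suspect function so the
    // optimizer leaves it alone, while the rest of the file is still built
    // at -O2/-O3. Useful for bisecting which function an optimizer bug is in.
    #include <cstdio>

    __attribute__((optnone))           // Clang: skip optimizations for this function
    int suspect(int x) {
      int sum = 0;
      for (int i = 0; i < x; ++i)      // stays as written, easy to step through
        sum += i;
      return sum;
    }

    int hot(int x) {                   // still optimized normally
      return x * (x - 1) / 2;
    }

    int main() {
      std::printf("%d %d\n", suspect(10), hot(10));
      return 0;
    }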
Renato Golin
2013-Jun-07 16:38 UTC
[LLVMdev] [cfe-dev] Meaning of LLVM optimization levels
On 7 June 2013 13:53, Dallman, John <john.dallman at siemens.com> wrote:

> It needs to be possible to debug code at any optimisation level.

Yes, I agree. But after O1, sequential execution is a big impediment to optimizations, and keeping the debug information valid after so many transformations might impose a big penalty on the passes (time & memory). That was the whole idea of metadata being a second-class citizen.

> Related to optimisation levels, it's quite helpful to have a way of
> controlling optimisation on a function-by-function level. This is very
> useful when you're trying to work out where in a file with many functions
> an optimiser problem is happening; it isn't foolproof, but it helps a lot.

There are already people working on that, and there have been discussions on the list about this very topic. I agree that it would be extremely helpful for debugging large programs.

cheers,
--renato
On Jun 6, 2013, at 1:40 PM, Renato Golin <renato.golin at linaro.org> wrote:

> As far as I can tell, from our side it is:
>
> -O3 : throw everything at it and hope it sticks
> -O2 : optimized build, but should not explode in code size nor consume all resources while compiling
> -O1 : optimized debug binaries; don't change the execution order, but remove dead code and the like
> -O0 : don't touch it
> -Os : optimize, but don't run passes that could blow up code size. Try to be a bit more aggressive when removing code. When in doubt, prefer small code over fast code.
> -Oz : only perform optimizations that reduce code size. Don't even try to run things that could potentially increase code size.

I think that this is a pretty good codification of how things work, but we should separate out the mechanics (e.g. running passes) from the goals (don't blow up code size).

Something like this definitely should be in the Clang user docs. The LLVM docs should have something similar, but less "GCC command line option" centric.

-Chris