John Reagan via llvm-dev
2019-Mar-29 14:10 UTC
[llvm-dev] Proposal for O1/Og Optimization and Code Generation Pipeline
When I worked on the HPE NonStop compilers for x86 (we used Open64, not LLVM), we adjusted our -O1 to make sure the source display didn't "bounce around", based on feedback from users. We disabled any optimization that would move things across statement boundaries. We also disabled/de-tuned dead-store elimination, since our DWARF location list support was pretty basic and, with the store removed, you'd get the "wrong" answer when you examined the variable. We weren't able at the time (they might have improved since then) to always trim the location lists to create the "dead zones".

We didn't create an -Og since the NonStop users were already used to having -O1 be different on each prior platform (Itanium, MIPS, etc.). Personally, I would have liked an -Og since I think the name "feels" better.

For our OpenVMS compilers, we also settled on -O1 (/OPT=LEVEL=1 in DCL speak) for "do whatever you think won't mess up debugging". And our -O0 still does some basic optimizations (e.g., folding 1+1, removing "if (false)" blocks, etc.).

We didn't get much push back on performance between -O1 and the next higher setting.

I'll be sure to look for Greg's round table.

John
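To make the dead-store point concrete, here is a minimal, hypothetical C sketch (the helper function and the comments about what -O1 might do are illustrative assumptions, not taken from John's mail):

    #include <stdio.h>

    long compute(void) { return 42; }

    int main(void) {
        long x = 1;   /* dead store: the value 1 is never read, so an
                         optimizer is free to delete this assignment  */

        /* Break on the next line and examine x: the source says x == 1,
           but with the store eliminated the debugger reads whatever
           happens to be in the location recorded for x.  Unless the
           location list is trimmed so that this range has no location
           (a "dead zone"), the displayed answer is simply wrong.      */
        x = compute();
        printf("%ld\n", x);
        return 0;
    }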
Ron Brender via llvm-dev
2019-Mar-29 16:23 UTC
[llvm-dev] Proposal for O1/Og Optimization and Code Generation Pipeline
When I worked on debugging-optimized-code technology at DEC (Digital Equipment Corporation, for those old enough to recall) back in the late 90's, it became clear that simply picking and choosing optimizations as a way to get sorta decent code and sorta decent debuggability is a losing game that by itself cannot satisfy either goal particularly well. What is needed is a more fundamental analysis of the optimized code as part of generating the debugging information.

We dealt well, I think, with three difficult optimization challenges:

1. Split-lifetime variables (plus value propagation)
2. Breakpoints and stepping based on semantic events in the program
3. Function inlining

A key premise was that this technology had to work without any limitation on optimization. And it did!

A thorough overview of that work was published in "Debugging Optimized Code: Concepts and Implementation on DIGITAL Alpha Systems", Digital Technical Journal, Vol. 10, No. 1, pp. 81-99. That journal was probably obscure even at the time, but it is readily available at

http://www.dtjcd.vmsresource.org.uk/pdfs/dtj_v10-01_1998.pdf

While the original work was performed for OpenVMS on Alpha, most of it was later adapted to DEC's UNIX systems via the ladebug debugger of the time, and even later much of it was also ported to OpenVMS on Itanium (I64).

The bottom line is this: a combination of decent code and decent debuggability (-O1, -Og, or even -O2 -g) is definitely achievable, but it takes more than just tinkering with optimization levels or selective optimization.

Ron
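As a rough illustration of the first challenge in that list (a hypothetical sketch; the ranges and DWARF operators in the comment are schematic, not taken from Ron's paper): a split-lifetime variable is one whose value lives in different places, or nowhere at all, over different parts of its lifetime, so its debug info needs a range-based location list rather than a single location.

    /* s below typically starts in a stack slot or register, is kept in
       a register inside the loop, and may have no location at all
       after its last use.  A range-based location list describes this,
       schematically:

           [func_start .. loop_start) : DW_OP_breg6 -8   ; in a stack slot
           [loop_start .. last_use)   : DW_OP_reg3       ; in a register
           [last_use   .. func_end)   : <no entry>       ; value unavailable
    */
    long sum(const long *a, long n) {
        long s = 0;
        for (long i = 0; i < n; i++)
            s += a[i];
        return s;
    }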
Eric Christopher via llvm-dev
2019-Apr-02 02:05 UTC
[llvm-dev] Proposal for O1/Og Optimization and Code Generation Pipeline
Hi Ron,

On Fri, Mar 29, 2019 at 9:23 AM Ron Brender via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> The bottom line is this: a combination of decent code and decent
> debuggability (-O1, -Og, or even -O2 -g) is definitely achievable, but
> it takes more than just tinkering with optimization levels or selective
> optimization.

Thanks for your feedback here. I think my direction may have been misunderstood: it is nothing more than a first pass at revamping our optimization in the area around debugging. I very much do appreciate the commentary and look forward to your work alongside us as well!

-eric