Brian Gesiak via llvm-dev
2017-Jun-19 23:13 UTC
[llvm-dev] Next steps for optimization remarks?
Hello all,

In https://www.youtube.com/watch?v=qq0q1hfzidg, Adam Nemet (cc'ed) describes optimization remarks and some future plans for the project. I had a few follow-up questions:

1. As an example of future work to be done, the talk mentions expanding the set of optimization passes that emit remarks. However, the Clang User Manual mentions that "optimization remarks do not really make sense outside of the major transformations (e.g.: inlining, vectorization, loop optimizations)." [1] I am wondering: which passes exist today that are most in need of supporting optimization remarks? Should all passes emit optimization remarks, or are there indeed passes for which optimization remarks "do not make sense"?

2. I tried running llvm/utils/opt-viewer/opt-viewer.py to produce an HTML dashboard for the optimization remark YAML generated from a large C++ program. Unfortunately, the Python script does not finish, even after over an hour of processing. It appears performance has been brought up before by Bob Haarman (cc'ed), and some optimizations have been made since. [2] I wonder if I'm passing in bad input (6,000+ YAML files -- too many?), or if there's still some room to speed up the opt-viewer.py script? I tried the C++ implementation as well, but that never completed either. [3]

Overall I'm excited to make greater use of optimization remarks, and to contribute in any way I can. Please let me know if you have any thoughts on my questions above!

[1] https://clang.llvm.org/docs/UsersManual.html#options-to-emit-optimization-reports
[2] http://lists.llvm.org/pipermail/llvm-dev/2016-November/107039.html
[3] https://reviews.llvm.org/D26723

- Brian Gesiak
Hal Finkel via llvm-dev
2017-Jun-19 23:28 UTC
[llvm-dev] Next steps for optimization remarks?
On 06/19/2017 06:13 PM, Brian Gesiak via llvm-dev wrote:
> 1. As an example of future work to be done, the talk mentions expanding the set of optimization passes that emit remarks. However, the Clang User Manual mentions that "optimization remarks do not really make sense outside of the major transformations (e.g.: inlining, vectorization, loop optimizations)." [1] I am wondering: which passes exist today that are most in need of supporting optimization remarks? Should all passes emit optimization remarks, or are there indeed passes for which optimization remarks "do not make sense"?

Obviously there is a continuous spectrum of transformation effects between "major" and "minor", and moreover, we have different consumers of the remarks. Remarks that would be too noisy if directly viewed by a human (because Clang prints them all, for example) might make perfect sense if interpreted by some tool. llvm-opt-report, for example, demonstrates how a tool can collect many remarks and aggregate them into a more succinct form.

If you're looking for an area to contribute, I'd recommend looking at how to better output (and display) the "why not" of transformations that didn't fire. Memory dependencies that block vectorization and loop-invariant code motion, for example, would be really useful if mapped back to source-level constructs for presentation to the user.

 -Hal

--
Hal Finkel
Lead, Compiler Technology and Programming Languages
Leadership Computing Facility
Argonne National Laboratory
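The aggregation that llvm-opt-report does can be approximated with a small script over the emitted .opt.yaml files. This is only a sketch: it assumes PyYAML is available, the `count_remarks` helper and file names are hypothetical, and the `Pass`/`Name` fields follow the remark format that -fsave-optimization-record emits.

```python
# Sketch of an opt-stats-style aggregator: count remarks per
# (type, pass, remark name) across a set of .opt.yaml files.
import glob
from collections import Counter

import yaml  # PyYAML


def remark_constructor(loader, tag_suffix, node):
    # Remark documents are tagged !Passed, !Missed, or !Analysis;
    # fold the tag into the mapping so we can group by it.
    d = loader.construct_mapping(node, deep=True)
    d["Type"] = tag_suffix
    return d


yaml.add_multi_constructor("!", remark_constructor, Loader=yaml.SafeLoader)


def count_remarks(paths):
    counts = Counter()
    for path in paths:
        with open(path) as f:
            for remark in yaml.load_all(f, Loader=yaml.SafeLoader):
                counts[(remark["Type"], remark["Pass"], remark["Name"])] += 1
    return counts


if __name__ == "__main__":
    for key, n in count_remarks(glob.glob("*.opt.yaml")).most_common(20):
        print(n, *key)
```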
Brian Gesiak via llvm-dev
2017-Jun-20 00:54 UTC
[llvm-dev] Next steps for optimization remarks?
Hal, thank you! I had forgotten about llvm-opt-report, even though it was mentioned during Adam's talk. I think a tool that parses the YAML, like llvm-opt-report, might be a way to sidestep my HTML generation performance problem as well.

I'm most interested in, as you mentioned, transformations that were not applied, and surfacing those to users. Thanks for suggesting vectorization and LICM blockers, I'll look into those.

- Brian Gesiak
Adam Nemet via llvm-dev
2017-Jun-20 08:50 UTC
[llvm-dev] Next steps for optimization remarks?
> On Jun 20, 2017, at 1:13 AM, Brian Gesiak <modocache at gmail.com> wrote:
> 1. [...] which passes exist today that are most in need of supporting optimization remarks? Should all passes emit optimization remarks, or are there indeed passes for which optimization remarks "do not make sense"?

I think that we want to report most optimizations. Where I think we need to be a bit more careful is missed optimizations. For those, we should try to report cases where there is a good chance that the user may be able to take some action to enable the transformation (e.g. pragma, restrict, source modification or cost model overrides).

> 2. I tried running llvm/utils/opt-viewer/opt-viewer.py to produce an HTML dashboard for the optimization remark YAML generated from a large C++ program. Unfortunately, the Python script does not finish, even after over an hour of processing. [...]

Do you have libYAML installed, so that the C parser is used for YAML? The pure-Python parser is terribly slow; opt-viewer issues a warning if it needs to fall back on it.

We desperately need a progress bar in opt-viewer. Let me know if you want to add it, otherwise I will. I filed llvm.org/PR33522 for this.

In terms of improving the performance, I am pretty sure the bottleneck is still YAML parsing, so:

- If PGO is used, we can have a threshold to not even emit remarks on cold code; this should dramatically improve performance (llvm.org/PR33523).
- I expect that some sort of binary encoding of YAML would speed up parsing, but I haven't researched this topic yet.
- There is a simple tool called opt-stats.py next to opt-viewer which provides stats on the different types of remarks. We can see which ones are overly noisy and try to reduce the false-positive rate. For example, last time I checked, the inlining remark that reports a missing definition of the callee was the top missed remark. We should not report this for system headers, where there is not much the user can do.

Adam
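Adam's libYAML point can be checked directly: PyYAML only exposes the C-accelerated `CSafeLoader` when it was built against libyaml, and otherwise parsing silently uses the much slower pure-Python loader. A minimal probe, assuming PyYAML is installed:

```python
# Detect whether PyYAML can use the libyaml-backed C parser.
import yaml

try:
    from yaml import CSafeLoader as FastLoader  # only present with libyaml
    print("libyaml available: using CSafeLoader")
except ImportError:
    from yaml import SafeLoader as FastLoader
    print("warning: libyaml not found; falling back to the slow Python parser")

# Either loader parses the same documents; only speed differs.
doc = yaml.load("Pass: inline\nName: NoDefinition\n", Loader=FastLoader)
print(doc["Pass"])
```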
Brian Gesiak via llvm-dev
2017-Jun-27 18:48 UTC
[llvm-dev] Next steps for optimization remarks?
Adam, thanks for all the suggestions!

One nice aspect of the `-Rpass` family of options is that I can filter based on what I want. If I only want to see which inlines I missed, I could use `clang -Rpass-missed="inline"`, for example. On the other hand, optimization remark YAML always includes remarks from all passes (as far as I can tell), which increases the amount of time it takes opt-viewer.py and other tools to parse.

Would you be open to including options to, for example, only emit optimization remarks related to loop vectorization, or to not emit any analysis remarks? Or is it important that the YAML always include all remarks? Let me know what you think!

In the meantime, I'll try to add the progress bar you mention in llvm.org/PR33522. Thanks!

- Brian Gesiak
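Until emission-time filtering exists, remarks can be filtered after the fact, at parse time. A sketch of that workaround, with the caveat that `filter_remarks` is a hypothetical helper, PyYAML is assumed, and the field names follow the emitted remark format:

```python
# Keep only remarks from one pass, optionally dropping !Analysis remarks,
# roughly mirroring what -Rpass-missed=<pass> shows on the command line.
import yaml


def remark_constructor(loader, tag_suffix, node):
    d = loader.construct_mapping(node, deep=True)
    d["Type"] = tag_suffix  # "Passed", "Missed", or "Analysis"
    return d


yaml.add_multi_constructor("!", remark_constructor, Loader=yaml.SafeLoader)


def filter_remarks(text, pass_name, drop_analysis=True):
    for remark in yaml.load_all(text, Loader=yaml.SafeLoader):
        if remark["Pass"] != pass_name:
            continue
        if drop_analysis and remark["Type"] == "Analysis":
            continue
        yield remark
```

Doing this after parsing does not help opt-viewer's parse time, of course, which is why an emission-time option would still be valuable.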
Davide Italiano via llvm-dev
2017-Jul-14 15:21 UTC
[llvm-dev] Next steps for optimization remarks?
On Mon, Jun 19, 2017 at 4:13 PM, Brian Gesiak via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> [...]

Hi,
I've been asked at $WORK to take a look at `-opt-remarks`, so here are a couple of thoughts.

1) When LTO is on, the output isn't particularly easy to read. I guess this can be mitigated with some filtering approach; Simon and I discussed it offline.

2) Yes, indeed `opt-viewer` takes forever to process large testcases. I think that could lead to exploring a better representation than YAML, which is, indeed, a little slow to parse. To be honest, I'm torn about this. YAML is definitely really convenient, as we already use it somewhere in tree, and it has an easy textual representation. OTOH, it doesn't seem to scale that nicely.

3) There are lots of optimizations which are still missing from the output, in particular PGO remarks (including, e.g., branch probabilities, which still use the old API as far as I can tell [PGOInstrumentation.cpp]).

4) `opt-remarks` heavily relies on the fidelity of the DebugLoc attached to instructions. Things get a little hairy at -O3 (or with -flto) because there are optimization bugs where transformations don't preserve debug info. This is not entirely orthogonal, but it is something that can be worked on in parallel (bonus point: this would also help the SamplePGO & debug info experience). With `-flto` the problem gets amplified more, as expected.

5) I found a couple of issues when trying the support, but I'm actively working on them.
https://bugs.llvm.org/show_bug.cgi?id=33773
https://bugs.llvm.org/show_bug.cgi?id=33776

That said, I think optimization remarks support is coming along nicely.

--
Davide
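One low-effort way to probe the binary-encoding idea without touching the emitters is to parse each YAML file once and cache the result in a binary form. The sketch below uses pickle purely as an illustration; the `.pickle` cache suffix, the `load_remarks` helper, and the mtime-based invalidation are all assumptions, not an agreed format.

```python
# Parse a remark YAML file once, then serve later loads from a pickle cache.
import os
import pickle

import yaml


def remark_constructor(loader, tag_suffix, node):
    d = loader.construct_mapping(node, deep=True)
    d["Type"] = tag_suffix
    return d


yaml.add_multi_constructor("!", remark_constructor, Loader=yaml.SafeLoader)


def load_remarks(path):
    cache = path + ".pickle"
    # Reuse the cache only if it is at least as new as the YAML file.
    if os.path.exists(cache) and os.path.getmtime(cache) >= os.path.getmtime(path):
        with open(cache, "rb") as f:
            return pickle.load(f)
    with open(path) as f:
        remarks = list(yaml.load_all(f, Loader=yaml.SafeLoader))
    with open(cache, "wb") as f:
        pickle.dump(remarks, f)
    return remarks
```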
Simon Whittaker via llvm-dev
2017-Jul-14 16:50 UTC
[llvm-dev] Next steps for optimization remarks?
> 2) Yes, indeed `opt-viewer` takes forever for large testcases to process. I think that it could lead to exploring a better representation than YAML which is, indeed, a little slow to parse.

As a datapoint, the codebase of a recent PlayStation 4 game produces over 10GiB of YAML files; of course I'm tending to run opt-viewer on just a subset of these to get a reasonable workflow. Just looking at one of the graphics libraries, which is a reasonable granularity to examine, we have ~10MiB of object file for x86-64, including debug info. This produces ~70MiB of YAML, which takes 48s to parse (optrecord.gather_results) and 25s to produce a total of ~70MiB of HTML (generate_report) on a decent i7 with SSD. Not terrible, but probably too slow for our end-users.

Brian, did you get time to try out some alternative representations? Although we've not done any finer-grained profiling of the above, we also suspect a binary representation might improve things. If you've not looked at this yet, we might be able to investigate over the next couple of weeks. If you already have, then I'd be happy to test against the codebase above and see what the difference is like.

To echo Davide, we don't want to sound too negative - the remarks work is definitely a good direction to be going in and is already useful.

Thanks,

Simon

On Fri, Jul 14, 2017 at 8:21 AM, Davide Italiano via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> [...]
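Measurements like Simon's can be reproduced on synthetic input with a small harness. This is a sketch only: the remark fields mirror the format emitted by -fsave-optimization-record, the `time_parse` helper is hypothetical, and absolute numbers will vary by machine and by whether the libyaml-backed loader is available.

```python
# Time how long parsing N synthetic remark documents takes.
import time

import yaml

try:
    from yaml import CSafeLoader as Loader  # libyaml-backed C parser
except ImportError:
    from yaml import SafeLoader as Loader   # slow pure-Python fallback


def remark_constructor(loader, tag_suffix, node):
    d = loader.construct_mapping(node, deep=True)
    d["Type"] = tag_suffix
    return d


yaml.add_multi_constructor("!", remark_constructor, Loader=Loader)


def synthetic_remarks(n):
    doc = ("--- !Missed\nPass: inline\nName: NoDefinition\n"
           "DebugLoc: { File: a.c, Line: 3, Column: 10 }\nFunction: main\n")
    return doc * n


def time_parse(n=2000):
    text = synthetic_remarks(n)
    start = time.perf_counter()
    remarks = list(yaml.load_all(text, Loader=Loader))
    elapsed = time.perf_counter() - start
    return len(remarks), elapsed


if __name__ == "__main__":
    count, secs = time_parse()
    print(f"parsed {count} remarks in {secs:.2f}s")
```

Running the same harness against a pickled or otherwise binary-encoded copy of the remarks would give a concrete number for how much a representation change buys.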
Adam Nemet via llvm-dev
2017-Jul-14 17:10 UTC
[llvm-dev] Next steps for optimization remarks?
> On Jul 14, 2017, at 8:21 AM, Davide Italiano via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> 1) When LTO is on, the output isn't particularly easy to read. I guess this can be mitigated with some filtering approach; Simon and I discussed it offline.

Can you please elaborate?

> 2) Yes, indeed `opt-viewer` takes forever for large testcases to process. I think that it could lead to exploring a better representation than YAML [...]

Agreed. We now have a mitigation strategy with -pass-remarks-hotness-threshold, but this is something that we may have to solve in the long run.

> 3) There are lots of optimizations which are still missing from the output, in particular PGO remarks (including, e.g. branch info probabilities which still use the old API as far as I can tell [PGOInstrumentation.cpp])

Yes, how about we file bugs for each pass that still uses the old API (I am looking at ICP today)? Then we can split up the work and finally remove the old API.

Also, on exposing PGO info, I have a patch that adds a pass I call HotnessDecorator. The pass emits a remark for each basic block. Then opt-viewer is made aware of these, and the remarks are special-cased to show hotness for a line unless there is already a remark on the line. The idea is that, since we only show hotness as part of a remark, we currently don't see the hotness of a block that contains no remark. (Example screenshot attached.)

> That said, I think optimization remarks support is coming along nicely.

Yes, I've been really happy with the progress. Thanks for all the help from everybody!

Adam
Adam Nemet via llvm-dev
2017-Jul-14 17:20 UTC
[llvm-dev] Next steps for optimization remarks?
[Resending with smaller image to stay within the size threshold of llvm-dev]