Similar to: Filter optimization remarks by the hotness of the code region

Displaying 20 results from an estimated 7000 matches similar to: "Filter optimization remarks by the hotness of the code region"

2016 May 11 (2 replies): Filter optimization remarks by the hotness of the code region
Hi Hal, > On May 10, 2016, at 5:39 PM, Hal Finkel <hfinkel at anl.gov> wrote: > > Hi Adam, > > I think this would be a really useful feature to have. I don't think that the backend should be responsible for filtering, but should pass the relative hotness information to the frontend. Given that these diagnostics are not just going to be used for -Rpass and friends, but also
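A hedged sketch of the user-facing side of this, using the -fdiagnostics-show-hotness and -fdiagnostics-hotness-threshold flags that clang later gained (hotness is only attached when profile data is available, e.g. an instrumentation profile):

    clang -O2 -fprofile-instr-use=code.profdata -fdiagnostics-show-hotness \
        -fdiagnostics-hotness-threshold=100 -Rpass=inline -c foo.c

Remarks for code regions whose profile count falls below the threshold are then suppressed.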
2016 May 11 (4 replies): Filter optimization remarks by the hotness of the code region
> On May 11, 2016, at 3:37 AM, Hal Finkel <hfinkel at anl.gov> wrote: > > ----- Original Message ----- >> From: "Adam Nemet" <anemet at apple.com> >> To: "Hal Finkel" <hfinkel at anl.gov> >> Cc: "llvm-dev (llvm-dev at lists.llvm.org)" <llvm-dev at lists.llvm.org> >> Sent: Wednesday, May 11, 2016 1:15:42 AM
2016 Jun 27 (0 replies): Filter optimization remarks by the hotness of the code region
> On May 11, 2016, at 10:45 AM, Adam Nemet via llvm-dev <llvm-dev at lists.llvm.org> wrote: > >> >> On May 11, 2016, at 3:37 AM, Hal Finkel <hfinkel at anl.gov <mailto:hfinkel at anl.gov>> wrote: >> >> ----- Original Message ----- >>> From: "Adam Nemet" <anemet at apple.com <mailto:anemet at apple.com>> >>>
2017 Jun 09 (3 replies): Showing hotness in LLVM optimization remarks using AutoFDO sampling profile data?
Hello! (+cc Adam Nemet, since he presented on optimization remarks at LLVM Dev Mtg 2016) I have a large C++ program, which I am compiling using a sampling profile generated via perf and AutoFDO. I'd like to use this profile in order to show the hotness of each code path that is displayed in the new optimization remarks viewer tool ( https://www.youtube.com/watch?v=qq0q1hfzidg). It seems,
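A hedged sketch of one workflow that should get there, assuming the AutoFDO create_llvm_prof converter and current clang flag spellings (paths and option names may need adjusting for your setup):

    perf record -b -- ./my_program
    create_llvm_prof --binary=./my_program --out=code.prof
    clang++ -O2 -gline-tables-only -fprofile-sample-use=code.prof \
        -fsave-optimization-record -fdiagnostics-show-hotness -c foo.cpp

With -fsave-optimization-record, the remarks (including a Hotness field when the profile provides one) land in foo.opt.yaml, which is what the opt-viewer tool from the talk consumes.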
2016 May 18 (1 reply): Optimization remarks for non-temporal stores
Hi, There was a recent discussion on generating non-temporal stores automatically[1]. This is a hard problem to get right. What seems like a much easier but still useful problem to solve is to provide diagnostics to the user where NT stores *may* help. Then the user can add the corresponding builtins and see if they are beneficial. My hope is that with the work to add profile-driven
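To make the "user adds the corresponding builtins" step concrete, here is a minimal, hedged C/C++ illustration using clang's __builtin_nontemporal_store (the loop is purely an example, not from the discussed work):

    #include <stddef.h>

    // Large streaming write whose results will not be re-read soon; a
    // non-temporal store may avoid polluting the cache.
    void fill(float *dst, float v, size_t n) {
      for (size_t i = 0; i < n; ++i)
        __builtin_nontemporal_store(v, &dst[i]);   // instead of dst[i] = v;
    }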
2017 Jun 19 (8 replies): Next steps for optimization remarks?
Hello all, In https://www.youtube.com/watch?v=qq0q1hfzidg, Adam Nemet (cc'ed) describes optimization remarks and some future plans for the project. I had a few follow-up questions: 1. As an example of future work to be done, the talk mentions expanding the set of optimization passes that emit remarks. However, the Clang User Manual mentions that "optimization remarks do not really make
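On point 1, the per-pass work is mostly mechanical. A hedged sketch of what a pass gains when it starts emitting remarks through OptimizationRemarkEmitter (the pass name, helper name and message are illustrative, not taken from an existing pass):

    #include "llvm/Analysis/OptimizationRemarkEmitter.h"
    #include "llvm/IR/DiagnosticInfo.h"
    #include "llvm/IR/Instruction.h"
    using namespace llvm;

    #define DEBUG_TYPE "my-pass"

    // Report a transformation performed on instruction I.
    static void reportWidened(OptimizationRemarkEmitter &ORE, Instruction *I,
                              unsigned Width) {
      ORE.emit(OptimizationRemark(DEBUG_TYPE, "Widened", I)
               << "widened operation to " << ore::NV("Width", Width)
               << " lanes in " << ore::NV("Function", I->getFunction()->getName()));
    }

The remark then shows up both for -Rpass=my-pass and in the serialized output, keyed by the pass and remark names.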
2017 Jun 27 (2 replies): Next steps for optimization remarks?
Adam, thanks for all the suggestions! One nice aspect of the `-Rpass` family of options is that I can filter based on what I want. If I only want to see which inlines I missed, I could use `clang -Rpass-missed="inline"`, for example. On the other hand, optimization remark YAML always includes remarks from all passes (as far as I can tell), which increases the amount of time it takes
2018 Aug 14 (2 replies): optimization remarks
Hi, I am trying to compare the loop vectorizer's effectiveness for different targets relative to each other. That way, I am hoping to find loops that are not vectorized - but could be - on my target by finding other targets doing this successfully. With some luck, there might be something in the Target files that could be fixed with improved vectorization as a result... I would like to do
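One hedged way to do that comparison is to build the same sources for each triple and diff the remark output, e.g. (self-contained sources assumed; a proper cross sysroot may otherwise be needed):

    clang --target=aarch64-linux-gnu -O3 -Rpass=loop-vectorize \
        -Rpass-missed=loop-vectorize -Rpass-analysis=loop-vectorize -c foo.c

Adding -fsave-optimization-record gives per-target YAML files that are easier to diff mechanically than the console remarks.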
2017 Jul 14 (2 replies): Next steps for optimization remarks?
> On Jul 14, 2017, at 10:22 AM, Davide Italiano <davide at freebsd.org> wrote: > > On Fri, Jul 14, 2017 at 10:10 AM, Adam Nemet <anemet at apple.com <mailto:anemet at apple.com>> wrote: >> >> >> On Jul 14, 2017, at 8:21 AM, Davide Italiano via llvm-dev <llvm-dev at lists.llvm.org> wrote: >> >> On Mon, Jun 19, 2017 at 4:13 PM, Brian
2017 Jul 14 (3 replies): Next steps for optimization remarks?
> On Jul 14, 2017, at 8:21 AM, Davide Italiano via llvm-dev <llvm-dev at lists.llvm.org> wrote: > > On Mon, Jun 19, 2017 at 4:13 PM, Brian Gesiak via llvm-dev > <llvm-dev at lists.llvm.org <mailto:llvm-dev at lists.llvm.org>> wrote: >> Hello all, >> >> In https://www.youtube.com/watch?v=qq0q1hfzidg, Adam Nemet (cc'ed) describes >>
2018 Jun 05 (2 replies): How to get optimization remarks while testing with lnt in llvm
Hi, I'm new to llvm and am trying to run benchmarks from the test-suite using lnt to check loop-vectorization for various benchmarks. Tests are compiling and executing fine, but I am not getting optimization remarks while using flags like -Rpass-missed=loop-vectorize and -Rpass-analysis=loop-vectorize. I've tried running it like this: lnt runtest test-suite --sandbox SANDBOX --cc
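A hedged guess at the missing piece: the remark flags need to be forwarded to the compiler invocations the harness makes, for example via the test-suite runner's --cflags option (verify the option name against lnt runtest test-suite --help for your LNT checkout):

    lnt runtest test-suite --sandbox SANDBOX --cc <path-to-clang> \
        --cflags '-O3 -Rpass-missed=loop-vectorize -Rpass-analysis=loop-vectorize'

The remarks should then appear in the compile-step output captured under the sandbox build directory rather than in the LNT report itself.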
2017 Sep 17 (2 replies): RFC: Use closures to delay construction of optimization remarks
> On Sep 16, 2017, at 4:49 PM, Sean Silva <chisophugis at gmail.com> wrote: > > Actually maybe something like: > > if (auto &E = ORE.emitMissed(DEBUG_TYPE)) { > E.emit(...) << ...; > } Well, the point of this interface was exactly to avoid writing a conditional. If you’re willing to use a conditional you can already write this: if
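For reference, the shape of the interface that was eventually committed (see the Sep 19 message below) keeps the callable, so the remark and its arguments are only constructed when some remark consumer is enabled. A hedged usage sketch, assuming the usual OptimizationRemarkEmitter.h include and an ORE reference as in any remark-emitting pass; DEBUG_TYPE, L and Reason are placeholders from a hypothetical vectorizer-style pass:

    // The lambda runs only if remarks are enabled for this pass, so the
    // cost of building the message is not paid in normal compiles.
    ORE.emit([&]() {
      return OptimizationRemarkMissed(DEBUG_TYPE, "NotVectorized",
                                      L->getStartLoc(), L->getHeader())
             << "loop not vectorized: " << ore::NV("Reason", Reason);
    });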
2017 Sep 19 (0 replies): RFC: Use closures to delay construction of optimization remarks
Sean, hopefully you’re OK with that reasoning. I went ahead and committed this in r313691. > On Sep 16, 2017, at 10:43 PM, Adam Nemet <anemet at apple.com> wrote: > > >> On Sep 16, 2017, at 4:49 PM, Sean Silva <chisophugis at gmail.com <mailto:chisophugis at gmail.com>> wrote: >> >> Actually maybe something like: >> >> if (auto &E
2017 Sep 16 (3 replies): RFC: Use closures to delay construction of optimization remarks
Another alternative could be: ORE.emitMissed(DEBUG_TYPE, ...) << ... Then the first line of emitMissed does a check if it is enabled and if not then returns a dummy stream that does nothing for operator<< (and short-circuits all the stream operations) On Sep 15, 2017 2:21 PM, "Adam Nemet via llvm-dev" <llvm-dev at lists.llvm.org> wrote: For better readability we
2020 Jan 06 (2 replies): Question about opt-report strings
Hi all, I tried to poke my head into opt-report a while ago and didn't get very far. Now I'm looking at it again. I'm not sure I understand everything that's in place so my question here may be misguided. I'm trying to understand the way strings are handled. When a remark is emitted, it seems that the string is constructed on the fly based on streaming inputs. For example,
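In case it helps orient the investigation: the message really is assembled from a sequence of string fragments and ore::NV key/value arguments streamed into the remark, and the serialized (YAML) form keeps those pieces as separate arguments rather than one flat string. A hedged illustration, assuming an ORE reference inside a remark-emitting pass; the names and message are made up:

    ORE.emit([&]() {
      return OptimizationRemarkAnalysis(DEBUG_TYPE, "UnrollInfo", &I)
             << "will unroll loop by a factor of " << ore::NV("UnrollCount", Count)
             << " with run-time trip count";
    });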
2017 Jun 28 (3 replies): Next steps for optimization remarks?
> On Wed, Jun 28, 2017 at 8:13 AM, Hal Finkel <hfinkel at anl.gov> wrote: > > I don't object to adding some kind of filtering option, but in general it won't help. An important goal here is to provide analysis (and other) tools to users that present this information at a higher level. The users won't, and shouldn't, know exactly what kinds of messages the tools use.
2016 Jun 07 (2 replies): [LLVMdev] LLVM loop vectorizer
Hi Alex, This has been very recently fixed by Hal. See http://reviews.llvm.org/rL270771 Adam > On Jun 4, 2016, at 3:13 AM, Alex Susu via llvm-dev <llvm-dev at lists.llvm.org> wrote: > > Hello. > Mikhail, I come back to this older thread. > I need to do a few changes to LoopVectorize.cpp. > > One of them is related to figuring out the exact C source line
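On the "exact C source line" part: a hedged sketch of reading it off the debug location attached to an instruction or to the loop (assumes llvm/IR/DebugLoc.h and llvm/Support/raw_ostream.h are available in the pass, and that the input was compiled with -g or -gline-tables-only):

    if (DebugLoc DL = I->getDebugLoc())
      errs() << "at line " << DL.getLine() << ", col " << DL.getCol() << "\n";

    // For the loop as a whole, Loop::getStartLoc() gives a representative location.
    if (DebugLoc DL = TheLoop->getStartLoc())
      errs() << "loop starts at line " << DL.getLine() << "\n";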
2016 Oct 10 (2 replies): On Loop Distribution pass
> On Oct 10, 2016, at 2:50 PM, Hal Finkel <hfinkel at anl.gov> wrote: > > > From: "Dangeti Tharun kumar via llvm-dev" <llvm-dev at lists.llvm.org <mailto:llvm-dev at lists.llvm.org>> > To: llvm-dev at lists.llvm.org <mailto:llvm-dev at lists.llvm.org> > Cc: "Santanu Das" <cs15mtech11018 at iith.ac.in <mailto:cs15mtech11018 at
2019 Jan 13 (2 replies): Problem using BlockFrequencyInfo's getBlockProfileCount
Hey, I am trying to use the BlockFrequencyInfoWrapperPass to obtain hotness information in my LLVM pass. Attached is a minimal example of code which creates a SIGSEGV. The pass calls AU.addRequired<BlockFrequencyInfoWrapperPass>(); in getAnalysisUsage(..). The problem exists with changed and unchanged IR. The binary is instrumented like this: clang input.bc -fprofile-generate -o
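A hedged sketch of the intended usage under the legacy pass manager, in case comparing against it helps localize the crash (the pass name is made up; two things worth checking are that getAnalysis is only called from inside runOnFunction, and that profile counts are only present when compiling with -fprofile-instr-use of a merged profile, not during the -fprofile-generate instrumentation build itself):

    #include "llvm/Analysis/BlockFrequencyInfo.h"
    #include "llvm/IR/Function.h"
    #include "llvm/Pass.h"
    #include "llvm/Support/raw_ostream.h"
    using namespace llvm;

    namespace {
    struct HotnessPrinter : public FunctionPass {
      static char ID;
      HotnessPrinter() : FunctionPass(ID) {}

      void getAnalysisUsage(AnalysisUsage &AU) const override {
        AU.addRequired<BlockFrequencyInfoWrapperPass>();
        AU.setPreservesAll();
      }

      bool runOnFunction(Function &F) override {
        // Valid only here, after the required analysis has run for F.
        BlockFrequencyInfo &BFI =
            getAnalysis<BlockFrequencyInfoWrapperPass>().getBFI();
        for (BasicBlock &BB : F)
          if (auto Count = BFI.getBlockProfileCount(&BB))
            errs() << BB.getName() << ": " << *Count << "\n";
        return false;
      }
    };
    } // namespace
    char HotnessPrinter::ID = 0;
    static RegisterPass<HotnessPrinter> X("print-hotness", "Print block profile counts");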
2017 Oct 18 (2 replies): How to emit opt report when using LTO
Hi, I'm using the clang frontend. I'm interested in a particular hot loop in my code and want to emit a report from the vectorizer optimization passes. I receive nice output when passing -Rpass* flags, as long as I'm building without LTO. But with -flto it just prints nothing. Is there a way to emit opt reports when using LTO? For now I can only approximate whether my loop will be
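In case it is useful to later readers: with -flto the actual codegen happens inside the linker, so the -Rpass* flags that the clang frontend consumes never see those passes; the remark options have to reach LLVM through the linker instead. Hedged examples only, since the exact spelling depends on the linker and toolchain version:

    # ld64 (Apple), forwarding an LLVM option through the linker:
    clang -flto -O2 foo.o bar.o -Wl,-mllvm,-pass-remarks=loop-vectorize
    # gold or lld via the LLVM plugin options:
    clang -flto -O2 foo.o bar.o -Wl,-plugin-opt=-pass-remarks=loop-vectorize

-pass-remarks takes the pass name (a regex), mirroring what -Rpass= accepts.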