similar to: RFC: Extending optimization reporting

Displaying 20 results from an estimated 20000 matches similar to: "RFC: Extending optimization reporting"

2019 May 08
2
RFC: Extending optimization reporting
Hi Adam, Thanks for your input. If I understand correctly, you’re saying that we can handle the loop versioning issue by explicitly identifying new loops as they are created. So, the unswitching optimization, for example, would report that it unswitched loop-0 at source location X, creating loop-1 and loop-2, and then later the vectorizer would report that it was unable to vectorize loop-1 at
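(For illustration only, not part of the RFC itself: a minimal C++ sketch of what "reporting the loops a transform creates" could look like with LLVM's OptimizationRemarkEmitter. The pass name "loop-unswitch", the remark name "Unswitched", and the loop-ID strings supplied by the caller are assumptions made up for this sketch.)

    #include "llvm/Analysis/LoopInfo.h"
    #include "llvm/Analysis/OptimizationRemarkEmitter.h"
    #include "llvm/IR/DiagnosticInfo.h"

    using namespace llvm;

    // Report that OrigLoop was unswitched into two new loops, carrying the IDs
    // as structured arguments so a later pass (e.g. the vectorizer) could refer
    // back to them by name.
    static void reportUnswitch(OptimizationRemarkEmitter &ORE,
                               const Loop &OrigLoop, StringRef OrigID,
                               StringRef NewID1, StringRef NewID2) {
      ORE.emit([&]() {
        return OptimizationRemark("loop-unswitch", "Unswitched",
                                  OrigLoop.getStartLoc(), OrigLoop.getHeader())
               << "unswitched " << ore::NV("OriginalLoop", OrigID)
               << ", creating " << ore::NV("NewLoop1", NewID1) << " and "
               << ore::NV("NewLoop2", NewID2);
      });
    }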
2017 Jun 19
8
Next steps for optimization remarks?
Hello all, In https://www.youtube.com/watch?v=qq0q1hfzidg, Adam Nemet (cc'ed) describes optimization remarks and some future plans for the project. I had a few follow-up questions: 1. As an example of future work to be done, the talk mentions expanding the set of optimization passes that emit remarks. However, the Clang User Manual mentions that "optimization remarks do not really make
2017 Jul 14
2
Next steps for optimization remarks?
> On Jul 14, 2017, at 10:22 AM, Davide Italiano <davide at freebsd.org> wrote: > > On Fri, Jul 14, 2017 at 10:10 AM, Adam Nemet <anemet at apple.com> wrote: >> >> >> On Jul 14, 2017, at 8:21 AM, Davide Italiano via llvm-dev <llvm-dev at lists.llvm.org> wrote: >> >> On Mon, Jun 19, 2017 at 4:13 PM, Brian
2017 Jul 14
3
Next steps for optimization remarks?
> On Jul 14, 2017, at 8:21 AM, Davide Italiano via llvm-dev <llvm-dev at lists.llvm.org> wrote: > > On Mon, Jun 19, 2017 at 4:13 PM, Brian Gesiak via llvm-dev > <llvm-dev at lists.llvm.org> wrote: >> Hello all, >> >> In https://www.youtube.com/watch?v=qq0q1hfzidg, Adam Nemet (cc'ed) describes >>
2020 Jan 06
2
Question about opt-report strings
Hi all, I tried to poke my head into opt-report a while ago and didn't get very far. Now I'm looking at it again. I'm not sure I understand everything that's in place so my question here may be misguided. I'm trying to understand the way strings are handled. When a remark is emitted, it seems that the string is constructed on the fly based on streaming inputs. For example,
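(As an illustration of the streaming construction being asked about, a hedged C++ sketch: the remark text is assembled from streamed pieces, and each ore::NV("Key", Value) piece is kept as a separate key/value argument rather than as one flat stored string. The pass name "inline", the remark name "TooCostly", and the cost/threshold parameters are invented for this example.)

    #include "llvm/Analysis/OptimizationRemarkEmitter.h"
    #include "llvm/IR/Instructions.h"

    using namespace llvm;

    // The message is built by streaming pieces into the remark object; plain
    // string pieces and ore::NV("Key", Value) arguments are stored as a list of
    // (key, value) pairs, and the human-readable string is only assembled when
    // the remark is printed or serialized to YAML.
    static void remarkMissedInline(OptimizationRemarkEmitter &ORE, CallInst *CI,
                                   unsigned Cost, unsigned Threshold) {
      ORE.emit([&]() {
        return OptimizationRemarkMissed("inline", "TooCostly", CI)
               << "not inlined: cost " << ore::NV("Cost", Cost)
               << " exceeds threshold " << ore::NV("Threshold", Threshold);
      });
    }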
2017 Jun 27
2
Next steps for optimization remarks?
Adam, thanks for all the suggestions! One nice aspect of the `-Rpass` family of options is that I can filter based on what I want. If I only want to see which inlines I missed, I could use `clang -Rpass-missed="inline"`, for example. On the other hand, the optimization remark YAML always includes remarks from all passes (as far as I can tell), which increases the amount of time it takes
2018 Jun 05
2
How to get optimization remarks while testing with lnt in llvm
Hi, I'm new to llvm and am trying to run benchmarks from the test-suite using lnt to check loop vectorization across various benchmarks. Tests are compiling and executing fine, but I am not getting optimization remarks while using flags like -Rpass-missed=loop-vectorize and -Rpass-analysis=loop-vectorize. I've tried running it like this: lnt runtest test-suite --sandbox SANDBOX --cc
2017 Jun 09
3
Showing hotness in LLVM optimization remarks using AutoFDO sampling profile data?
Hello! (+cc Adam Nemet, since he presented on optimization remarks at LLVM Dev Mtg 2016) I have a large C++ program, which I am compiling using a sampling profile generated via perf and AutoFDO. I'd like to use this profile in order to show the hotness of each code path that is displayed in the new optimization remarks viewer tool ( https://www.youtube.com/watch?v=qq0q1hfzidg). It seems,
2016 May 11
4
Filter optimization remarks by the hotness of the code region
> On May 11, 2016, at 3:37 AM, Hal Finkel <hfinkel at anl.gov> wrote: > > ----- Original Message ----- >> From: "Adam Nemet" <anemet at apple.com> >> To: "Hal Finkel" <hfinkel at anl.gov> >> Cc: "llvm-dev (llvm-dev at lists.llvm.org)" <llvm-dev at lists.llvm.org> >> Sent: Wednesday, May 11, 2016 1:15:42 AM
2017 Jun 28
3
Next steps for optimization remarks?
> On Wed, Jun 28, 2017 at 8:13 AM, Hal Finkel <hfinkel at anl.gov> wrote: > > I don't object to adding some kind of filtering option, but in general it won't help. An important goal here is to provide analysis (and other) tools to users that present this information at a higher level. The users won't, and shouldn't, know exactly what kinds of messages the tools use.
2016 May 11
2
Filter optimization remarks by the hotness of the code region
Hi Hal, > On May 10, 2016, at 5:39 PM, Hal Finkel <hfinkel at anl.gov> wrote: > > Hi Adam, > > I think this would be a really useful feature to have. I don't think that the backend should be responsible for filtering, but should pass the relative hotness information to the frontend. Given that these diagnostics are not just going to be used for -Rpass and friends, but also
2017 Aug 28
5
[5.0.0 Release] Please write release notes
I'm sorry, but I don't think LLDB has any release notes. On Sat, Aug 26, 2017 at 9:49 PM, Kamil Rytarowski <n54 at gmx.com> wrote: > LLDB: > > Switched the NetBSD platform to new remote tracing capable framework. > > Preliminary support for tracing NetBSD(/amd64) processes and core files > with a single thread. > > On 25.08.2017 02:44, Hans Wennborg via
2017 Aug 25
3
[5.0.0 Release] Please write release notes
Thanks! r311738. On Thu, Aug 24, 2017 at 4:51 PM, Adam Nemet <anemet at apple.com> wrote: > Hi Hans, > > Opt-viewer is now installed rather than being an internal-only tool so here it goes: > > A new tool opt-viewer.py has been added to visualize optimization remarks in HTML. The tool processes the YAML files produced by clang with the -fsave-optimization-record option. >
2017 Aug 18
2
[5.0.0 Release] Please write release notes
Dear everyone, We're a couple of release candidates into the process, and the release notes are not in very good shape: http://prereleases.llvm.org/5.0.0/#rc2 If you committed anything noteworthy in the last six months, or saw someone else do it, please consider adding it to the release notes. People do read them. If you're responsible for a specific CPU target, please help give those
2017 May 05
2
Idea for Open Project : Smarter way of dumping LLVM IR with -emit-after-all
> On May 5, 2017, at 8:49 AM, Hal Finkel <hfinkel at anl.gov> wrote: > > > > On 05/05/2017 10:44 AM, vivek pandya via llvm-dev wrote: >> Hello LLVM Devs, >> >> I have an idea to improve effectiveness of IR dump with -emit-after-all based on Adam Nemet's 2016 LLVM Dev presentation. >> I think we can track changes in each function, basic block and
2016 Nov 17
2
Rewriting opt-viewer in C++
Adam, The test case was the Python-3.6.0b3 release, 234 input YAML files. The large majority of the time is spent processing the input files; rendering the output came next. Moving the files to a tmpfs partition didn’t change the time significantly (but I would expect that experiment would yield different results with libYAML). original, single-threaded: processed input files
2017 May 05
2
Idea for Open Project : Smarter way of dumping LLVM IR with -emit-after-all
Hello LLVM Devs, I have an idea to improve the effectiveness of IR dumps with -emit-after-all, based on Adam Nemet's 2016 LLVM Dev presentation. I think we can track changes in each function, basic block and instruction by dumping them to YAML files (initially) and then track the changes made by each pass incrementally, as is done in the optimization remark emitter. Once we have the required information in YAML
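(A rough sketch of the "only dump what changed" idea, assuming llvm::StructuralHash from llvm/IR/StructuralHash.h is available; the ChangeTracker class and its method names are invented for illustration.)

    #include "llvm/IR/Function.h"
    #include "llvm/IR/Module.h"
    #include "llvm/IR/StructuralHash.h"
    #include "llvm/Support/raw_ostream.h"
    #include <cstdint>
    #include <map>
    #include <string>

    using namespace llvm;

    // Remember a structural hash per function; after a pass runs, print only
    // the functions whose hash no longer matches the snapshot.
    struct ChangeTracker {
      std::map<std::string, uint64_t> Snapshot;

      void takeSnapshot(Module &M) {
        Snapshot.clear();
        for (Function &F : M)
          if (!F.isDeclaration())
            Snapshot[std::string(F.getName())] = StructuralHash(F);
      }

      void printChanged(Module &M, StringRef PassName) {
        for (Function &F : M) {
          if (F.isDeclaration())
            continue;
          auto It = Snapshot.find(std::string(F.getName()));
          if (It == Snapshot.end() || It->second != StructuralHash(F)) {
            errs() << "*** IR changed by " << PassName << " in " << F.getName()
                   << " ***\n";
            F.print(errs());
          }
        }
      }
    };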
2018 Feb 05
2
Dumping the static stack reservation sizes for functions
I would like to be able to emit a list of functions by name together with their fixed stack reservation sizes, so that a programmer can gauge how much stack they are likely to need in tightly constrained embedded systems. Despite the large number of options, the only one I can find that is even reasonably close is: -warn-stack-size=<uint> Is there some existing way of getting this
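(Not an existing LLVM option, just a hedged sketch of where the number lives: a MachineFunctionPass scheduled after prologue/epilogue insertion can read the fixed frame reservation from MachineFrameInfo and print it per function.)

    #include "llvm/CodeGen/MachineFrameInfo.h"
    #include "llvm/CodeGen/MachineFunction.h"
    #include "llvm/Support/raw_ostream.h"

    using namespace llvm;

    // Print the function name and its fixed stack reservation.  The value from
    // getStackSize() is only final once prologue/epilogue insertion has run, so
    // a reporting pass like this would have to run after that point.
    static void printStackReservation(const MachineFunction &MF) {
      const MachineFrameInfo &MFI = MF.getFrameInfo();
      outs() << MF.getName() << ": " << MFI.getStackSize() << " bytes\n";
    }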
2018 Feb 12
1
Pattern not recognized as reduction
Reduction not captured by LLVM, CODE_1:

    #include <stdio.h>

    int main() {
      int sum[1000] = {1, 2, 3, 4};
      // The accumulator is the memory location sum[0], which the loaded range
      // sum[i-1] overlaps on the first iteration, so no plain scalar reduction.
      for (int i = 1; i < 1000; i++) {
        sum[0] += sum[i - 1];
      }
    }
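(For comparison, a hedged sketch of the same computation rewritten with a scalar accumulator, which is the shape loop passes conventionally recognize as a reduction; whether a given LLVM version actually vectorizes it still depends on its cost model.)

    #include <stdio.h>

    int main() {
      int sum[1000] = {1, 2, 3, 4};
      // Keep the running total in a scalar; the loop-carried value now lives in
      // a register instead of the memory location sum[0].
      int acc = sum[0];
      for (int i = 1; i < 1000; i++)
        acc += sum[i - 1];
      sum[0] = acc;
      // Using the result keeps the loop from being deleted as dead code.
      printf("%d\n", sum[0]);
      return 0;
    }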
2019 Jul 27
2
Help on Optimization Remarks
Dear llvm-dev community, I am trying to analyze the optimization remarks generated through clang using -fsave-optimization-record with -O3.

--- !Analysis
Pass:     loop-vectorize
Name:     CFGNotUnderstood
DebugLoc: { File: c-ray-mt.c, Line: 177, Column: 2 }
Function: main
Args:
  - String: 'loop not vectorized: '
  - String: loop control flow is not understood by vectorizer

I tried to look for
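(Illustrative only, since the snippet does not show the source at c-ray-mt.c line 177: one common way to get "loop control flow is not understood by vectorizer" is a loop with more than one exit, e.g. an early return, as in the made-up function below.)

    // A loop with a second exit (the early return below) has more than one
    // exiting block, which is one of the CFG shapes the classic loop vectorizer
    // rejects with the CFGNotUnderstood remark.
    int firstNegative(const int *a, int n) {
      for (int i = 0; i < n; i++)
        if (a[i] < 0)
          return i;  // early exit out of the loop
      return -1;
    }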