
Displaying 20 results from an estimated 40000 matches similar to: "[LLVMdev] Proposal: Improvements to Performance Tracking Infrastructure."

2013 Nov 13 · 1 reply
[LLVMdev] Proposal: Improvements to Performance Tracking Infrastructure.
Great summary, Kristof! I do not know how frequently new benchmarks are added, but each addition would disrupt the compile time measurement. On the other hand, we just want to see a (hopefully negative) slope and ignore steps due to new benchmarks being added. Cheers, -- Arnaud On Wed, Nov 13, 2013 at 2:14 PM, Kristof Beyls <kristof.beyls at arm.com> wrote: > Hi, > > > >
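Arnaud's suggestion (watch the slope, ignore the steps caused by newly added benchmarks) can be approximated by aggregating compile time only over the benchmarks common to all runs being compared. A minimal sketch in Python; the data layout and function names are hypothetical, not LNT's actual schema:

```python
# Estimate a compile-time trend while ignoring steps caused by new
# benchmarks joining the suite: aggregate only over the benchmarks
# present in every run being compared.

def common_total(runs):
    """runs: list of {benchmark: compile_seconds} dicts, oldest first.
    Returns the total compile time per run over the common benchmark set."""
    common = set(runs[0])
    for r in runs[1:]:
        common &= set(r)
    return [sum(r[b] for b in common) for r in runs]

def slope(ys):
    """Least-squares slope of ys against run index 0..n-1."""
    n = len(ys)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

runs = [
    {"a": 10.0, "b": 5.0},
    {"a": 9.5, "b": 5.0, "c": 7.0},   # "c" added: a step in the raw total
    {"a": 9.0, "b": 5.0, "c": 6.9},
]
totals = common_total(runs)   # only "a" and "b" are counted
print(totals)                 # [15.0, 14.5, 14.0]
print(slope(totals))          # -0.5: compile time is trending down
```

The raw per-run totals would jump from 15.0 to 21.5 when "c" lands; restricting to the common subset keeps the downward slope visible.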
2013 Oct 29 · 0 replies
[LLVMdev] [RFC] Performance tracking and benchmarking infrastructure BoF
Hi, Next week at the developers meeting, I'm chairing a BoF session on improving our performance tracking and benchmarking infrastructure. I'd like to make the most of the 45-minute slot. Therefore, I'd like to start the discussion a bit earlier here, giving everyone who can't come to the BoF a chance to put in their 2 cents. At the same time, I hope this will also give me a
2013 Nov 13 · 0 replies
[LLVMdev] Proposal: Improvements to Performance Tracking Infrastructure.
On 13 November 2013 13:14, Kristof Beyls <kristof.beyls at arm.com> wrote: > b) evaluate if the main running time of the benchmark is caused by > running > > code compiled or by something else, e.g. file IO. Programs > dominated by > > file IO shouldn't be used to track performance changes over time. > > The proposal to resolve this is
2014 Apr 29 · 4 replies
[LLVMdev] RFC:LNT Improvements
Dear all, Following the Benchmarking BOF from the 2013 US dev meeting, I’d like to propose some improvements to the LNT performance tracking software. The most significant issue with the current implementation is that the report is filled with extremely noisy values. Hence it is hard to notice performance improvements or regressions. After investigation of LNT and the LLVM test suite, I propose
2014 Apr 30 · 2 replies
[LLVMdev] RFC:LNT Improvements
On 30 April 2014 07:50, Tobias Grosser <tobias at grosser.es> wrote: > In general, I see such changes as a second step. First, we want to have a > system in place that allows us to reliably detect if a benchmark is noisy or > not, second we want to increase the number of benchmarks that are not noisy > and where we can use the results. I personally use the test-suite for
2015 May 18 · 2 replies
[LLVMdev] Proposal: change LNT’s regression detection algorithm and how it is used to reduce false positives
Hi Chris and others! I totally support any work in this direction. In its current state, LNT’s regression detection system is too noisy, which makes it almost impossible to use in some cases. If after each run a developer gets a dozen ‘regressions’, none of which happens to be real, he/she won’t care about such reports after a while. We clearly need to filter out as much noise as we can - and
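One simple way to filter such noise (a sketch of the general idea, not the algorithm LNT ultimately adopted) is to compare the minimum of several samples and require the delta to exceed a relative threshold. Using the minimum discards most scheduler and cache noise, which only ever adds time:

```python
# Flag a regression only if the best (minimum) of several samples got
# worse by more than a relative threshold. Sample values are made up.

def is_regression(before, after, threshold=0.05):
    """before/after: lists of execution-time samples for one benchmark."""
    base, new = min(before), min(after)
    return (new - base) / base > threshold

noisy_but_ok = is_regression([10.0, 10.9, 10.2], [10.1, 11.5, 10.3])
real_slowdown = is_regression([10.0, 10.9, 10.2], [11.0, 11.8, 11.2])
print(noisy_but_ok, real_slowdown)   # False True
```

The first comparison jitters by up to 9% run to run, but the best samples differ by only 1%, so no report is raised; the second shifts even the best sample by 10%.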
2014 Aug 02 · 2 replies
[LLVMdev] Dev Meeting BOF: Performance Tracking
On 2 August 2014 00:40, Renato Golin <renato.golin at linaro.org> wrote: > If memory serves me well (it doesn't), these are the list of things we > agreed on making, and their progress: > > 1. Performance-specific test-suite: a group of specific benchmarks > that should be tracked with the LNT infrastructure. Hal proposed to > look at this, but other people helped
2018 Feb 26 · 0 replies
New LLD performance builder
Hello Rafael, > It seems the produced lld binary is not being statically linked. Hm. It should be. But it seems a couple of config params were missing. Fixed. Thanks for catching this! > Is lld-speed-test in a tmpfs? Correct. All the benchmarking tips from https://www.llvm.org/docs/Benchmarking.html have been applied to that bot. > Is lld-benchmark.py a copy of lld/utils/benchmark.py?
2018 Feb 22 · 2 replies
New LLD performance builder
Thanks a lot for setting this up! By using the "mean as aggregation" option one can see the noise in the results better: http://lnt.llvm.org/db_default/v4/link/graph?switch_min_mean=yes&moving_window_size=10&plot.9=1.9.7&submit=Update There are a few benchmarking tips in https://www.llvm.org/docs/Benchmarking.html. For example, from looking at
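The "mean as aggregation" over a moving window mentioned in that graph URL boils down to a trailing moving average; a minimal illustration with made-up samples (not LNT internals):

```python
# Smooth a noisy per-run sample series with a trailing moving-window
# mean, the same idea as the moving_window_size option in LNT graphs.

def moving_mean(samples, window=10):
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)       # trailing window, clipped at start
        chunk = samples[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

print(moving_mean([4.0, 6.0, 5.0, 9.0], window=2))  # [4.0, 5.0, 5.5, 7.0]
```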
2014 Apr 30 · 2 replies
[LLVMdev] RFC:LNT Improvements
On 30 April 2014 10:21, Tobias Grosser <tobias at grosser.es> wrote: > To my understanding, the first patches should just improve LNT to report how > reliable the results are it reports. So there is no way that this can effect > the test suite runs, which means I do not see why we would want to delay > such changes. > > In fact, if we have a good idea which kernels are
2014 Oct 16 · 4 replies
[LLVMdev] Performance regression on ARM
Folks, First win of the benchmark buildbot! http://llvm.org/perf/db_default/v4/nts/graph?plot.0=49.128.2&highlight_run=31861 It seems mandel-2 had a huge regression a few commits ago, and based on a quick look, it may have to do with the inst combine changes. I haven't investigated yet, but this is the first time I spot regressions on test-suite, so I'd like to first congratulate
2013 Jun 27 · 0 replies
[LLVMdev] [LNT] Question about results reliability in LNT infrustructure
Hi Chris, Amazing that someone is finally looking at that with a proper background. You're much better equipped than I am to deal with that, so I'll trust you on your judgements, as I haven't paid much attention to benchmarks, more to correctness. Some comments inline. On 27 June 2013 19:14, Chris Matthews <chris.matthews at apple.com> wrote: > 1) Some benchmarks are bi-modal
2013 Sep 17 · 4 replies
[LLVMdev] [Polly] Compile-time and Execution-time analysis for the SCEV canonicalization
Now, we come to more evaluations on http://188.40.87.11:8000/db_default/v4/nts/recent_activity I mainly care about the compile-time and execution time impact for the following cases: pBasic (run 45): clang -O3 -load LLVMPolly.so pNoGenSCEV (run 44): clang -O3 -load LLVMPolly.so -polly-codegen-scev -polly -polly-optimizer=none -polly-code-generator=none pNoGenSCEV_nocan (run 47): same option
2014 Aug 01 · 11 replies
[LLVMdev] Dev Meeting BOF: Performance Tracking
All, I'm curious to know if anyone is interested in tracking performance (compile-time and/or execution-time) from a community perspective? This is a much loftier goal than just supporting build bots. If so, I'd be happy to propose a BOF at the upcoming Dev Meeting. Chad
2018 Feb 16 · 0 replies
New LLD performance builder
Hello George, Sorry, somehow I hit the send button too soon. Please ignore the previous e-mail. The bot does 10 runs for each of the benchmarks (those dots in the logs are meaningful). We can increase the number of runs if it is proven that this would significantly increase the accuracy. While staging the bot, I didn't see an increase in accuracy that would justify the extra time and larger
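Whether 10 runs are enough can be judged from the confidence interval of the mean; a sketch using a normal approximation (illustrative only, with made-up samples, not what the bot actually computes):

```python
# Decide whether N runs suffice: if the ~95% confidence half-width of
# the mean is within, say, 1% of the mean, extra runs buy little.
import statistics

def runs_sufficient(samples, rel_tol=0.01, z=1.96):
    mean = statistics.mean(samples)
    sem = statistics.stdev(samples) / len(samples) ** 0.5  # std error of mean
    return z * sem <= rel_tol * mean

tight = [10.00, 10.01, 9.99, 10.02, 10.00, 9.98, 10.01, 10.00, 9.99, 10.02]
noisy = [10.0, 11.2, 9.5, 12.0, 10.8, 9.9, 11.5, 10.2, 12.3, 9.7]
print(runs_sufficient(tight), runs_sufficient(noisy))  # True False
```

On the tight series 10 runs already pin the mean down; on the noisy one, more runs (or a less noisy machine) would be needed before small regressions become visible.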
2014 Apr 30 · 4 replies
[LLVMdev] RFC:LNT Improvements
On 30/04/2014 16:20, Yi Kong wrote: > Hi Tobias, Renato, > > Thanks for your attention to my RFC. > On 30 April 2014 07:50, Tobias Grosser <tobias at grosser.es> wrote: > >> - Show and graph total compile time > >> There is no obvious way to scale up the compile time of > >> individual benchmarks, so total time is the best thing we can do to >
2017 Feb 28 · 2 replies
Noisy benchmark results?
> On Feb 27, 2017, at 1:36 AM, Kristof Beyls via llvm-dev <llvm-dev at lists.llvm.org> wrote: > > Hi Mikael, > > Some noisiness in benchmark results is expected, but the numbers you see seem to be higher than I'd expect. > A number of tricks people use to get lower noise results are (with the lnt runtest nt command line options to enable it between brackets): > *
2011 Nov 16 · 0 replies
[LLVMdev] [cfe-dev] Performance Tracking
On 16 Nov 2011, at 19:45, Matthieu Monrocq wrote: > Many thanks David, it had been a while (6 months I guess) since the last benchmark I saw and I was wondering how the new Clang/LLVM compared to GCC! > > One comment though, the graphs are great; however, the alternation of "less is better"/"more is better" makes for a difficult read: it's not obvious at a glance
2011 Nov 14 · 0 replies
[LLVMdev] Performance Tracking
On Nov 14, 2011, at 10:46 AM, David Chisnall wrote: > Hello Everyone, > > I've been looking at benchmarks of LLVM recently, and overall they look pretty good. Aside from things that use OpenMP or benefit from autovectorisation, Clang/LLVM and GCC seem to come fairly close, with no overall winner. Nice. Thanks. > > But: there do seem to have been a number of performance
2015 May 27 · 0 replies
[LLVMdev] Proposal: change LNT’s regression detection algorithm and how it is used to reduce false positives
Let's try this on the whole test suite? > On May 26, 2015, at 7:05 PM, Sean Silva <chisophugis at gmail.com> wrote: > > Update: in that same block of 10,000 LLVM/Clang revisions, this is the number of distinct SHA1 hashes for the binaries of the following benchmarks: > > 7 MultiSource/Applications/aha/aha > 2 MultiSource/Benchmarks/BitBench/drop3/drop3 > 10
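The distinct-hash counts Sean reports can be reproduced by hashing each revision's build of a benchmark binary; a small illustration using in-memory stand-ins for the binaries (real scripts would read the files from disk):

```python
# Count how many distinct binaries a benchmark compiled to across a
# range of revisions: if the SHA1 rarely changes, codegen for that
# benchmark was mostly untouched, and rerunning it only measures noise.
import hashlib

def distinct_hashes(binaries):
    """binaries: list of bytes objects, one per revision's build."""
    return len({hashlib.sha1(b).hexdigest() for b in binaries})

builds = [b"\x7fELF-v1", b"\x7fELF-v1", b"\x7fELF-v2", b"\x7fELF-v1"]
print(distinct_hashes(builds))  # 2
```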