similar to: [LLVMdev] how to create LNT server for LLVM test-results

Displaying 20 results from an estimated 7000 matches similar to: "[LLVMdev] how to create LNT server for LLVM test-results"

2013 Mar 06
0
[LLVMdev] how to create LNT server for LLVM test-results
Hi All, I know that I can generate a report.json file with LNT, but I am not sure how to get a full analysis like this one: http://llvm.org/perf/db_default/v4/nts/9093 . Have you done anything to analyze LLVM test results, or can such an analysis be generated by LNT itself? If you know, can you give me some idea? For example, give me an instance to create LNT
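The analysis pages at http://llvm.org/perf come from importing such report.json files into an LNT server instance (lnt create / lnt runserver / lnt import). As a minimal local sketch, assuming the v4 report layout with a top-level "Tests" list whose entries carry a "Name" and a "Data" list of samples (those key names are an assumption here, not a guaranteed schema), a report could be summarized like this:

    import json

    # Hypothetical path; point this at the report.json produced by "lnt runtest".
    REPORT_PATH = "report.json"

    def summarize(report_path):
        """Print one line per test with its min and mean sample, assuming an
        LNT v4-style layout: {"Tests": [{"Name": ..., "Data": [samples...]}]}."""
        with open(report_path) as f:
            report = json.load(f)
        for test in report.get("Tests", []):
            samples = test.get("Data", [])
            if not samples:
                continue
            name = test.get("Name", "<unnamed>")
            print("%s: min=%.4f mean=%.4f"
                  % (name, min(samples), sum(samples) / len(samples)))

    if __name__ == "__main__":
        summarize(REPORT_PATH)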
2019 Nov 20
3
LNT debuginfo-statistics not running?
Hi llvm-dev@ LNT produces statistics and graphs (such as [0]) of debuginfo metrics, such as the number of source variables with locations. It looks like these haven't run [1] since the move from svn to git -- are there any plans to get these running again? I find it highly useful for identifying which commits have affected variable locations and how significant the effect is. [0]
2019 Nov 20
4
LNT debuginfo-statistics not running?
The debug info statistics bot is triggered by this job: http://green.lab.llvm.org/green/job/clang-stage2-Rthinlto/ which unfortunately hasn't been green in a very long time (>1mo). Alex/Azhar, do you know what's blocking that job? -- adrian > On Nov 20, 2019, at 9:46 AM, David Blaikie <dblaikie at gmail.com> wrote: > > +usual debug info folks (but I think in this case
2017 Jul 31
1
[LNT] new server instance http://lnt.llvm.org seems unstable
The run page problems were triggered by one of my commits (sorry) and should be mitigated now, see the thread at http://lists.llvm.org/pipermail/llvm-dev/2017-July/115971.html I don't know about the submission problems -- could they be just an occasional network problem, or are they a common phenomenon? Chris did some
2010 Dec 06
0
[LLVMdev] LNT somewhere hosted and used?
On Dec 6, 2010, at 10:40 AM, Tobias Grosser wrote: > Hi, > > I have been following the development of the /zorg/trunk/lnt project for > a while and am wondering if there is some regular LLVM performance > testing using LNT that can be accessed online? Are there any plans to > create an officially used web service for this like e.g the llvm buildbots? I have a nightly tester
2010 Dec 07
1
[LLVMdev] LNT somewhere hosted and used?
On 12/06/2010 03:33 PM, Bob Wilson wrote: > > On Dec 6, 2010, at 10:40 AM, Tobias Grosser wrote: > >> Hi, >> >> I have been following the development of the /zorg/trunk/lnt project for >> a while and am wondering if there is some regular LLVM performance >> testing using LNT that can be accessed online? Are there any plans to >> create an officially
2015 May 18
2
[LLVMdev] Proposal: change LNT’s regression detection algorithm and how it is used to reduce false positives
Hi Chris and others! I totally support any work in this direction. In its current state LNT’s regression detection system is too noisy, which makes it almost impossible to use in some cases. If after each run a developer gets a dozen ‘regressions’, none of which happens to be real, he/she won’t care about such reports after a while. We clearly need to filter out as much noise as we can - and
2017 Jul 31
2
[LNT] new server instance http://lnt.llvm.org seems unstable
Hi, The new LNT server instance http://lnt.llvm.org seems to fail in many cases. Opening any 'Run page' (e.g. http://lnt.llvm.org/db_default/v4/nts/62475), and lately also many perf-bot result submissions (e.g. http://lab.llvm.org:8014/builders/clang-native-arm-lnt-perf/builds/2262/steps/test-suite/logs/stdio ), fails with: "500 Internal Server Error". Any ideas? Thanks,
2014 Jan 17
2
[LLVMdev] Why is the default LNT aggregation function min instead of mean
Hi, I am currently investigating how to ensure that LNT only shows relevant performance regressions for the -O3 performance tests I am running. One question that came up here is why the default aggregate function for LNT is 'min' instead of 'mean'. This looks a little surprising from a statistical point of view, but also from looking at my test results picking 'min' seems
2013 Feb 19
0
[LLVMdev] ARM LNT test-suite Buildbot
On Tue, Feb 19, 2013 at 7:36 AM, Renato Golin <renato.golin at linaro.org> wrote: > On 19 February 2013 15:16, Arnold Schwaighofer <aschwaighofer at apple.com> wrote: > >> Do you have a base run with vectorization turned off? So we could see >> where we are degrading things? >> > > I wanted to, but after a few failed attempts, I couldn't pass the option
2014 Jan 17
2
[LLVMdev] Why is the default LNT aggregation function min instead of mean
Is it the case that you converge on the min faster than the mean? Right now there is no way to set a per-tester aggregation function. I had spent a little time trying to detect regressions using k-means clustering. It looked promising. That was outside LNT though. On Jan 16, 2014, at 11:28 PM, Tobias Grosser <tobias at grosser.es> wrote: > On 01/17/2014 03:09 AM, David Blaikie
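Not how LNT actually detects regressions -- just a rough sketch of the k-means idea mentioned here: cluster execution-time samples into two groups and flag the newer runs if they all land in the slower cluster (all timings below are made up):

    import random

    random.seed(0)  # make the toy example reproducible

    def kmeans_1d(samples, k=2, iterations=50):
        """Plain 1-D k-means on a list of floats: returns (centers, labels)."""
        centers = sorted(random.sample(samples, k))
        labels = [0] * len(samples)
        for _ in range(iterations):
            # Assign every sample to its nearest center.
            labels = [min(range(k), key=lambda c: abs(s - centers[c])) for s in samples]
            # Move every center to the mean of its members.
            for c in range(k):
                members = [s for s, lbl in zip(samples, labels) if lbl == c]
                if members:
                    centers[c] = sum(members) / len(members)
        return centers, labels

    # Made-up timings: ten baseline runs plus five newer, slower runs.
    baseline = [1.02, 1.01, 1.03, 1.00, 1.02, 1.04, 1.01, 1.02, 1.03, 1.01]
    recent = [1.12, 1.11, 1.13, 1.12, 1.14]
    centers, labels = kmeans_1d(baseline + recent)
    slow = centers.index(max(centers))
    if all(lbl == slow for lbl in labels[len(baseline):]):
        print("possible regression: recent runs cluster near %.3fs, baseline near %.3fs"
              % (max(centers), min(centers)))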
2013 Feb 19
3
[LLVMdev] ARM LNT test-suite Buildbot
On 19 February 2013 15:16, Arnold Schwaighofer <aschwaighofer at apple.com> wrote: > Do you have a base run with vectorization turned off? So we could see > where we are degrading things? > I wanted to, but after a few failed attempts, I couldn't pass the option to clang to disable vectorization. I don't want to make Galina reconfig the master every time, so I set up a
2013 Feb 21
0
[LLVMdev] LNT Database Access
Hi, I want to get access to the results at http://llvm.org/perf/db_default/v4/nts/machine/10 so that I can do some analysis on them. Is there a way to dump the data or to query it in any way? Also, can I change the "base" system to point to a new build that I know is better/more appropriate? My problem is that LNT has some nice ways of identifying performance regressions and errors
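One hedged workaround for a self-hosted instance, assuming the default SQLite backend with the instance database at data/lnt.db (that path is an assumption about the instance layout, and this does not give access to the hosted llvm.org/perf data), is to query the database directly, starting by listing its tables:

    import sqlite3

    # Assumed location of the instance database; adjust to your LNT install.
    DB_PATH = "/path/to/lnt-instance/data/lnt.db"

    conn = sqlite3.connect(DB_PATH)
    cur = conn.cursor()

    # List the tables first, so you can see what the schema actually looks like
    # before writing any analysis queries against it.
    cur.execute("SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")
    for (table,) in cur.fetchall():
        print(table)

    conn.close()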
2014 Jan 07
3
[LLVMdev] New -O3 Performance tester - Use hardware to get reliable numbers
Hi, I would like to announce a new set of LNT -O3 performance testers. In a discussion titled "Question about results reliability in LNT infrustructure" Anton suggested that one way to get statistically reliable test results from the LNT infrastructure is to use a larger sample size (5-10) as well as a more robust statistical test (Wilcoxon/Mann-Whitney). Another requirement to
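To make the statistical idea concrete, here is a small sketch (with fabricated timings, not the testers' actual implementation) that compares a baseline and a candidate sample set with SciPy's Mann-Whitney U test instead of comparing single aggregated values:

    from scipy.stats import mannwhitneyu

    # Made-up execution times for one benchmark (seconds), 8 samples per side.
    baseline = [2.31, 2.29, 2.33, 2.30, 2.32, 2.31, 2.30, 2.34]
    candidate = [2.41, 2.39, 2.44, 2.40, 2.42, 2.43, 2.40, 2.41]

    stat, p_value = mannwhitneyu(baseline, candidate, alternative="two-sided")
    print("U=%.1f, p=%.4f" % (stat, p_value))
    if p_value < 0.05:
        print("difference is statistically significant at the 5% level")
    else:
        print("no significant difference detected")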
2016 Sep 07
2
Benchmarks for LLVM-generated Binaries
Hi Eric, Yeah, I know about Externals and SPEC specifically. But as far as I understand, you have to have some kind of description of the tests in test-suite even if you don’t provide the source code - that’s what I would like to avoid. I.e. you have to have CMakeLists.txt and other files in place all the time, open to everyone. Now, imagine I have a small testsuite, which probably is not very
2013 Feb 19
1
[LLVMdev] ARM LNT test-suite Buildbot
Hi Renato, I'm playing with A15 bots too (running Ubuntu). This is probably what you want for predictable performance: Disable auto-resetting of the CPU scaling governor to ondemand: sudo update-rc.d -f ondemand remove Then add this to /etc/rc.local: # Disable power management. for cpu in `find /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor`; do echo performance > $cpu
2014 Jan 17
2
[LLVMdev] Why is the default LNT aggregation function min instead of mean
Right - you usually won't see a normal distribution in the noise of test results. You'll see results clustered around the lower bound with a long tail of slower and slower results. Depending on how many samples you take, it might be appropriate to take the mean of the best 3, for example - but the general approach of taking the fastest N does have some basis in any case. Not necessarily the
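A tiny illustration of that trade-off, using fabricated samples clustered near a lower bound with a long slow tail, comparing 'min', 'mean', and 'mean of the best 3' as aggregates:

    # Fabricated samples for one test: most runs sit near the true cost (~1.00s),
    # with a long tail of slower runs caused by machine noise.
    samples = [1.01, 1.02, 1.00, 1.03, 1.01, 1.02, 1.15, 1.34, 1.02, 1.60]

    aggregates = {
        "min": min(samples),
        "mean": sum(samples) / len(samples),
        "mean of best 3": sum(sorted(samples)[:3]) / 3,
    }
    for name, value in aggregates.items():
        print("%15s: %.3f" % (name, value))
    # 'mean' is pulled up by the slow tail; 'min' and 'mean of best 3'
    # stay close to the underlying cost the compiler change actually affects.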
2017 Jan 24
3
[InstCombine] rL292492 affected LoopVectorizer and caused 17.30%/11.37% perf regressions on Cortex-A53/Cortex-A15 LNT machines
> On Jan 24, 2017, at 7:18 AM, Sanjay Patel <spatel at rotateright.com> wrote: > > > > On Mon, Jan 23, 2017 at 10:53 PM, Mehdi Amini <mehdi.amini at apple.com> wrote: > >> On Jan 23, 2017, at 3:48 PM, Sanjay Patel via llvm-dev <llvm-dev at lists.llvm.org> wrote: >>
2013 Jun 30
3
[LLVMdev] [LNT] Question about results reliability in LNT infrustructure
On 06/28/2013 01:19 PM, Renato Golin wrote: > On 28 June 2013 19:45, Chris Matthews <chris.matthews at apple.com> > wrote: > >> Given this tradeoff I think we want to tend towards false positives >> (over false negatives) strictly as a matter of compiler quality. >> > > False hits are not binary, but (at least) two-dimensional. You can't > say it's
2016 Apr 22
2
RFC: LNT/Test-suite support for custom metrics and test parameterization
On 21 Apr 2016, at 17:44, Sergey Yakoushkin <sergey.yakoushkin at gmail.com> wrote: Hi Kristof, The way we use LNT, we would run different configurations (e.g. -O3 vs -Os) as different "machines" in LNT's model. O2/O3 is indeed a bad example. We're also using different machines for Os/O3 - such parameters apply to all