search for: compile_tim

Displaying 20 results from an estimated 20 matches for "compile_tim".

2013 Aug 12
1
[LLVMdev] [FastPolly]: Update of Polly's performance on LLVM test-suite
...in fact a good baseline - especially as we did not spend too much time optimising this. Yes, we should look into the compile-execution performance trade-off. I have summarized some benchmarks (compile-time overhead is more than 200%) as follows: SingleSource/Benchmarks/Shootout/nestedloop, compile_time(+6355.56%), execution_time(-99.21%) SingleSource/Benchmarks/Polybench/stencils/seidel-2d/seidel-2d, compile_time(+1275.00%), execution_time(0%) SingleSource/Benchmarks/Shootout-C++/nestedloop, compile_time(+1155.56%), execution_time(-99.23%) MultiSource/Benchmarks/ASC_Sequoia/AMGmk/AMGmk, comp...
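For context on those extreme numbers: Shootout/nestedloop is essentially a deeply nested counting loop, exactly the kind of kernel a polyhedral optimizer can reduce to a closed form, which would explain an execution_time drop near -99% alongside a large compile_time increase. A minimal sketch of that kind of kernel (an illustration only, not the actual test-suite source):

    /* Sketch of a nestedloop-style kernel: if the optimizer proves the
     * nest has no side effects beyond the counter, it can replace the
     * whole thing with a closed-form product. */
    long count_iterations(int n)
    {
        long x = 0;
        for (int a = 0; a < n; a++)
            for (int b = 0; b < n; b++)
                for (int c = 0; c < n; c++)
                    x++;
        return x; /* equals (long)n * n * n */
    }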
2013 Aug 11
0
[LLVMdev] [FastPolly]: Update of Polly's performance on LLVM test-suite
On 08/10/2013 06:59 PM, Star Tan wrote: > Hi all, > > I have evaluated Polly's performance on LLVM test-suite with latest LLVM (r188054) and Polly (r187981). Results can be viewed on: http://188.40.87.11:8000. Hi Star Tan, thanks for the update. > There are mainly five new tests and each test is run with 10 samples: > clang (run id = 27): clang -O3 > pollyBasic (run id =
2013 Aug 11
2
[LLVMdev] [FastPolly]: Update of Polly's performance on LLVM test-suite
Hi all, I have evaluated Polly's performance on LLVM test-suite with latest LLVM (r188054) and Polly (r187981).  Results can be viewed on: http://188.40.87.11:8000. There are mainly five new tests and each test is run with 10 samples: clang (run id = 27):  clang -O3 pollyBasic (run id = 28):  clang -O3 -load LLVMPolly.so pollyNoGen (run id = 29):  pollycc -O3 -mllvm -polly-optimizer=none
2012 Nov 05
2
[LLVMdev] New benchmark in test-suite
Hi Daniel, I'm trying to add the LivermoreLoops test to the benchmark suite (tarball attached), but I'm getting the errors below: --- Tested: 2 tests -- FAIL: SingleSource/Benchmarks/LivermoreLoops/lloops.compile_time (1 of 2) FAIL: SingleSource/Benchmarks/LivermoreLoops/lloops.execution_time (2 of 2) When I use the option to only run this test: --only-test SingleSource/Benchmarks/LivermoreLoops I can see the binary, and if I execute it, it runs (still missing bits and bobs, but ignore that for now). I've se...
2012 Nov 06
0
[LLVMdev] New benchmark in test-suite
Hey Renato, You are right, the failure on compile_time indicates that the test isn't even building. As provided, the tests don't actually define the cpuida() or calculateMHz() functions so that seems expected to me. The compile failures end up getting buried in the logs, but they will either be in the test.log file in the top-level sandbox di...
2017 Feb 27
8
Noisy benchmark results?
...se this? As a bonus question, if I instead run the benchmarks with an added -m32: lnt runtest nt --sandbox SANDBOX --cflag=-m32 --cc <path-to-my-clang> --test-suite /data/repo/test-suite -j 8 I get three failures: --- Tested: 2465 tests -- FAIL: MultiSource/Applications/ClamAV/clamscan.compile_time (1 of 2465) FAIL: MultiSource/Applications/ClamAV/clamscan.execution_time (494 of 2465) FAIL: MultiSource/Benchmarks/DOE-ProxyApps-C/XSBench/XSBench.execution_time (495 of 2465) Is this known/expected or do I do something stupid? Thanks, Mikael
2012 Nov 07
1
[LLVMdev] New benchmark in test-suite
On 6 November 2012 22:34, Daniel Dunbar <daniel at zuster.org> wrote: > You are right, the failure on compile_time indicates that the test isn't > even building. As provided, the tests don't actually define the cpuida() or > calculateMHz() functions so that seems expected to me. I defined both functions as NOPs. I got what it was. The original makefile had a "-o lloops" on the compile...
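The fix Renato describes ("I defined both functions as NOPs") would look roughly like the sketch below. The thread never shows the real signatures of cpuida() and calculateMHz(), so the ones here are assumptions:

    /* Hypothetical no-op stubs so lloops compiles and links; the real
     * signatures are not shown in the thread, so these are guesses. */
    void cpuida(void) { /* deliberately empty */ }
    double calculateMHz(void) { return 0.0; /* placeholder, not a real measurement */ }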
2017 Feb 28
2
Noisy benchmark results?
...e benchmarks with an added -m32: >> lnt runtest nt --sandbox SANDBOX --cflag=-m32 --cc <path-to-my-clang> --test-suite /data/repo/test-suite -j 8 >> >> I get three failures: >> >> --- Tested: 2465 tests -- >> FAIL: MultiSource/Applications/ClamAV/clamscan.compile_time (1 of 2465) >> FAIL: MultiSource/Applications/ClamAV/clamscan.execution_time (494 of 2465) >> FAIL: MultiSource/Benchmarks/DOE-ProxyApps-C/XSBench/XSBench.execution_time (495 of 2465) >> >> Is this known/expected or do I do something stupid? >> >> Thanks, >...
2017 Feb 27
3
Noisy benchmark results?
...rks with an added -m32: > > lnt runtest nt --sandbox SANDBOX --cflag=-m32 --cc <path-to-my-clang> --test-suite /data/repo/test-suite -j 8 > > > > I get three failures: > > > > --- Tested: 2465 tests -- > > FAIL: MultiSource/Applications/ClamAV/clamscan.compile_time (1 of 2465) > > FAIL: MultiSource/Applications/ClamAV/clamscan.execution_time (494 of 2465) > > FAIL: MultiSource/Benchmarks/DOE-ProxyApps-C/XSBench/XSBench.execution_time (495 of 2465) > > > > Is this known/expected or do I do something stupid? > > > >...
2017 Jul 05
2
Performance metrics with LLVM
...be their schemas. This will still require server access, but will be less scary than editing the DB directly. A lot of this boils down to naming, and how the data is later presented. For instance, in some places we have elected to store the new link time metric as a differently named test in the compile_time metric (foo.c vs foo.c.link). When you do this, those are presented side by side in the data listing views, which is handy. Each metric is given a section in the run reports; you can imagine what that might look like with 50 metrics. We might need to do some UI redesign to make the run reports s...
2016 Apr 27
3
RFC: LNT/Test-suite support for custom metrics and test parameterization
...xible. > > There is no opportunity to run another test-suite except the simple one. > > 2. Performance is quite bad when the database has a lot of records. > > For example, rendering a graph is too slow. On green-dragon-07-x86_64-O3-flto:42, SingleSource/Benchmarks/Shootout/objinst compile_time needs 191.8 seconds to render. > > 3. It's difficult to add new features which need queries to the sample table in the database (if we use a BLOB field for custom metrics). > > Queries will be needed for more complex analysis. For example, if we would like to add some additional check...
2016 Apr 26
2
RFC: LNT/Test-suite support for custom metrics and test parameterization
...make it scalable: * What "attribute" of the test is this metric measuring? For example, both "exec_time" and "score" measure the same attribute: performance of the generated code. It's superfluous to have them displayed in separate tables. However, mem_size and compile_time measure completely different aspects of the test. * Is this metric useful to display at the top level, or should it only be exposed when more data about a test result is requested? * An example of this is the pass statistics. I don't want my daily report view cluttered by the time s...
2015 May 15
6
[LLVMdev] Proposal: change LNT's regression detection algorithm and how it is used to reduce false positives
tl;dr in low data situations we don't look at past information, and that increases the false positive regression rate. We should look at the possibly incorrect recent past runs to fix that. Motivation: LNT's current regression detection system has a false positive rate that is too high to make it useful. With test suites as large as the llvm "test-suite", a single report will show hundreds of
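The direction the proposal sketches (consulting several recent past runs instead of a single previous sample) can be illustrated with a toy comparison like the one below. This is a minimal sketch under assumed names, windows, and thresholds, not LNT's actual detection algorithm:

    #include <stddef.h>
    #include <stdio.h>

    /* Illustrative sketch only. Flag a regression when the new sample is
     * more than `threshold` times the mean of the last `window` past runs,
     * rather than comparing against a single (possibly noisy) previous run. */
    static int is_regression(const double *past, size_t n_past,
                             double new_sample, size_t window,
                             double threshold)
    {
        if (n_past == 0)
            return 0; /* no history: refuse to flag, avoiding a false positive */
        if (window > n_past)
            window = n_past;
        double sum = 0.0;
        for (size_t i = n_past - window; i < n_past; ++i)
            sum += past[i];
        return new_sample > (sum / (double)window) * threshold;
    }

    int main(void)
    {
        double history[] = { 1.00, 1.02, 0.99, 1.01 };
        /* 1.10 is ~9% above the mean of the last 4 runs, so with a 5%
         * threshold this prints 1 (regression). */
        printf("%d\n", is_regression(history, 4, 1.10, 4, 1.05));
        return 0;
    }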
2016 Apr 26
3
RFC: LNT/Test-suite support for custom metrics and test parameterization
...> > There is no opportunity to run another test-suite except the simple one. > > 2. Performance is quite bad when the database has a lot of records. > > For example, rendering a graph is too slow. On green-dragon-07-x86_64-O3-flto:42, SingleSource/Benchmarks/Shootout/objinst compile_time needs 191.8 seconds to render. > > 3. It's difficult to add new features which need queries to the sample table in the database (if we use a BLOB field for custom metrics). > > Queries will be needed for more complex analysis. For example, if we would like to add some addi...
2015 May 18
2
[LLVMdev] Proposal: change LNT's regression detection algorithm and how it is used to reduce false positives
...I'm running the Cortex-A53 performance tracker on is a big.LITTLE system with 2 Cortex-A57s and 4 Cortex-A53s. To build the benchmark binaries, I'm using all cores, to make the turn-around time of the bot as fast as possible. However, this leads to huge noise levels on the "compile_time" metric, as sometimes a binary gets compiled on a Cortex-A53 and sometimes on a Cortex-A57. For this board specifically, it just shouldn't be reporting compile_time at all, since the numbers are meaningless for a performance-tracking use case. > > Another thou...
2016 Apr 22
2
RFC: LNT/Test-suite support for custom metrics and test parameterization
On 22 Apr 2016, at 11:14, Mehdi Amini <mehdi.amini at apple.com> wrote: On Apr 22, 2016, at 12:45 AM, Kristof Beyls via llvm-dev <llvm-dev at lists.llvm.org> wrote: On 21 Apr 2016, at 17:44, Sergey Yakoushkin <sergey.yakoushkin at gmail.com> wrote: Hi
2016 Apr 25
4
FW: RFC: LNT/Test-suite support for custom metrics and test parameterization
...make it scalable: * What "attribute" of the test is this metric measuring? For example, both "exec_time" and "score" measure the same attribute: performance of the generated code. It's superfluous to have them displayed in separate tables. However, mem_size and compile_time measure completely different aspects of the test. * Is this metric useful to display at the top level, or should it only be exposed when more data about a test result is requested? * An example of this is the pass statistics. I don't want my daily report view cluttered by the time s...
2016 May 13
4
RFC: LNT/Test-suite support for custom metrics and test parameterization
...flexible. > > There is no opportunity to run another test-suite except the simple one. > > 2. Performance is quite bad when the database has a lot of records. > > For example, rendering a graph is too slow. On green-dragon-07-x86_64-O3-flto:42, SingleSource/Benchmarks/Shootout/objinst compile_time needs 191.8 seconds to render. > > 3. It's difficult to add new features which need queries to the sample table in the database (if we use a BLOB field for custom metrics). > > Queries will be needed for more complex analysis. For example, if we would like to add some additional check fo...
2017 Jul 05
2
Performance metrics with LLVM
> On Jul 4, 2017, at 2:02 AM, Tobias Grosser <tobias.grosser at inf.ethz.ch> wrote: > >> On Tue, Jul 4, 2017, at 09:48 AM, Kristof Beyls wrote: >> Hi Tobias, >> >> The metrics that you can collect in LNT are fixed per "test suite". >> There are 2 such "test suite"s defined in LNT at the moment: nts and >> compile. >> For
2014 May 04
12
[LLVMdev] [RFC] Benchmarking subset of the test suite
At the LLVM Developers' Meeting in November, I promised to work on isolating a subset of the current test suite that is useful for benchmarking. Having looked at this in more detail, most of the applications and benchmarks in the test suite are useful for benchmarking, and so I think that a better way of phrasing it is that we should construct a list of programs in the test suite that are not