search for: exec_time

Displaying 12 results from an estimated 12 matches for "exec_time".

2012 May 08
4
Axes value format
Hi all, I have some graphs where the values on the X and Y axes are by default shown in exponential form, like 2e+05 or 1.0e+07. Is it possible to display them in a more readable form, like 10M for 1.0e+07 or 200K for 2e+05? Thanks and Regards, - vihan
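
The thread does not say which plotting tool produced these graphs, so purely as a hedged illustration, here is a minimal Python sketch (assuming matplotlib, which is an assumption rather than the poster's stated setup) that renders large tick values as 200K / 10M instead of 2e+05 / 1.0e+07:

    # Sketch under the assumption that matplotlib is the plotting library.
    import matplotlib.pyplot as plt
    from matplotlib.ticker import FuncFormatter

    def human_readable(value, pos):
        # 1.0e+07 -> "10M", 2e+05 -> "200K"; smaller values are left as-is.
        if value >= 1e6:
            return f"{value / 1e6:g}M"
        if value >= 1e3:
            return f"{value / 1e3:g}K"
        return f"{value:g}"

    fig, ax = plt.subplots()
    ax.plot([0, 1e7], [0, 2e5])
    ax.xaxis.set_major_formatter(FuncFormatter(human_readable))
    ax.yaxis.set_major_formatter(FuncFormatter(human_readable))
    fig.savefig("axes_format.png")

matplotlib also ships matplotlib.ticker.EngFormatter, which applies SI prefixes (k, M, G, ...) automatically, so a hand-written formatter like the one above is only needed for custom suffix rules.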
2020 Aug 18
3
[RFC] Switching to MemorySSA-backed Dead Store Elimination (aka cross-bb DSE)
> On Aug 18, 2020, at 16:59, Michael Kruse <llvmdev at meinersbur.de> wrote: > > Thanks for all the work. The reductions in stores look promising. Do you also have performance numbers on how much this improves execution time? Did you observe any regressions where MSSA resulted in fewer removed stores? I did not gather numbers for execution time yet, but I’ll try to share some
2016 Apr 22
2
RFC: LNT/Test-suite support for custom metrics and test parameterization
On 22 Apr 2016, at 11:14, Mehdi Amini <mehdi.amini at apple.com> wrote: On Apr 22, 2016, at 12:45 AM, Kristof Beyls via llvm-dev <llvm-dev at lists.llvm.org> wrote: On 21 Apr 2016, at 17:44, Sergey Yakoushkin <sergey.yakoushkin at gmail.com> wrote: Hi
2020 Aug 19
2
[RFC] Switching to MemorySSA-backed Dead Store Elimination (aka cross-bb DSE)
...here are some execution time results for ARM64 with -O3 -flto with the MemorySSA-DSE compared against the current DSE implementation for CINT2006 (a negative % means a reduction in execution time with MemorySSA-DSE). This excludes small changes within the noise (<= 0.5%).

                                                  Exec_time   Number of stores removed
    test-suite...T2006/456.hmmer/456.hmmer.test     -1.6%        +70.8%
    test-suite.../CINT2006/403.gcc/403.gcc.test     -1.4%        +35.7%
    test-suite...0.perlbench/400.perlbench.test     -1.2%        +33.2%
    test-suite...3.xalancbmk/483.xalancbmk.test...
2016 Apr 25
4
FW: RFC: LNT/Test-suite support for custom metrics and test parameterization
...new data. Currently every new metric gets its own separate table in the report/run views, and this does not scale well at all. I think we need some more concepts in the metric system to make it scalable: * What "attribute" of the test is this metric measuring? For example, both "exec_time" and "score" measure the same attribute: performance of the generated code. It's superfluous to have them displayed in separate tables. However, mem_size and compile_time measure completely different aspects of the test. * Is this metric useful to display at the top level?...
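
To make the "attribute" idea quoted above concrete, here is a hypothetical sketch (the names MetricInfo, attribute, and top_level are invented for illustration; this is not LNT's actual schema) of per-metric metadata that would let exec_time and score share one "performance" table while mem_size and compile_time stay separate:

    # Hypothetical metric-metadata sketch; field names are invented, not LNT's real schema.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class MetricInfo:
        name: str              # metric key reported by the test suite, e.g. 'exec_time'
        attribute: str         # which aspect of the test this metric measures
        bigger_is_better: bool
        top_level: bool        # worth showing in the top-level report views?

    METRICS = [
        MetricInfo("exec_time",    "performance", bigger_is_better=False, top_level=True),
        MetricInfo("score",        "performance", bigger_is_better=True,  top_level=True),
        MetricInfo("mem_size",     "code size",   bigger_is_better=False, top_level=False),
        MetricInfo("compile_time", "compilation", bigger_is_better=False, top_level=True),
    ]

    def by_attribute(metrics):
        # Metrics that share an attribute can then be rendered in a single table.
        groups = {}
        for m in metrics:
            groups.setdefault(m.attribute, []).append(m)
        return groups

Grouping by attribute is one possible way to address the "separate table per metric" scaling concern raised in this thread.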
2016 Mar 24
2
[test-suite] r261857 - [cmake] Add support for arbitrary metrics
...s.append(runtime)

    @@ -128,6 +140,8 @@ class TestSuiteTest(FileBasedTest):
             result = lit.Test.Result(Test.PASS, output)
             if len(runtimes) > 0:
                 result.addMetric('exec_time', lit.Test.toMetricValue(runtimes[0]))
    +        for metric, values in metrics.items():
    +            result.addMetric(metric, lit.Test.toMetricValue(values[0]))
    ...
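
The hunk quoted in these r261857 review messages extends lit result reporting from the single hard-coded exec_time metric to an arbitrary metrics dictionary. As a hedged, simplified sketch of that pattern (not the actual TestSuiteTest implementation; the build_result helper is invented for illustration):

    # Simplified sketch of the pattern the patch introduces, using lit's
    # Result/addMetric/toMetricValue API; not the real TestSuiteTest code.
    import lit.Test

    def build_result(output, runtimes, metrics):
        # runtimes: list of measured execution times; metrics: name -> list of samples
        result = lit.Test.Result(lit.Test.PASS, output)
        if len(runtimes) > 0:
            result.addMetric('exec_time', lit.Test.toMetricValue(runtimes[0]))
        for metric, values in metrics.items():
            result.addMetric(metric, lit.Test.toMetricValue(values[0]))
        return result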
2016 Mar 24
0
[test-suite] r261857 - [cmake] Add support for arbitrary metrics
...)

    @@ -128,6 +140,8 @@ class TestSuiteTest(FileBasedTest):
             result = lit.Test.Result(Test.PASS, output)
             if len(runtimes) > 0:
                 result.addMetric('exec_time', lit.Test.toMetricValue(runtimes[0]))
    +        for metric, values in metrics.items():
    +            result.addMetric(metric, lit.Test.toMetricValue(values...
2016 Mar 24
1
[test-suite] r261857 - [cmake] Add support for arbitrary metrics
... @@ -128,6 +140,8 @@ class TestSuiteTest(FileBasedTest):
             result = lit.Test.Result(Test.PASS, output)
             if len(runtimes) > 0:
                 result.addMetric('exec_time', lit.Test.toMetricValue(runtimes[0]))
    +        for metric, values in metrics.items():
    +            result.addMetric(metric, lit.Test.toMe...
2016 Apr 26
2
RFC: LNT/Test-suite support for custom metrics and test parameterization
...new data. Currently every new metric gets its own separate table in the report/run views, and this does not scale well at all. I think we need some more concepts in the metric system to make it scalable: * What "attribute" of the test is this metric measuring? For example, both "exec_time" and "score" measure the same attribute: performance of the generated code. It's superfluous to have them displayed in separate tables. However, mem_size and compile_time measure completely different aspects of the test. * Is this metric useful to display at the top level?...
2016 Apr 27
3
RFC: LNT/Test-suite support for custom metrics and test parameterization
...own separate table in the report/run views, and this does not scale well at all. > > > > I think we need some more concepts in the metric system to make it scalable: > > > > * What "attribute" of the test is this metric measuring? For example, both "exec_time" and "score" measure the same attribute: performance of the generated code. It's superfluous to have them displayed in separate tables. However, mem_size and compile_time measure completely different aspects of the test. > > * Is this metric useful to display at the...
2016 Apr 26
3
RFC: LNT/Test-suite support for custom metrics and test parameterization
...arate table in the report/run views, and this does not scale well at all. > > > > I think we need some more concepts in the metric system to make it scalable: > > > > * What "attribute" of the test is this metric measuring? For example, both "exec_time" and "score" measure the same attribute: performance of the generated code. It's superfluous to have them displayed in separate tables. However, mem_size and compile_time measure completely different aspects of the test. > > * Is this metric useful to d...
2016 May 13
4
RFC: LNT/Test-suite support for custom metrics and test parameterization
...ets its own separate table in the report/run views, and this does not scale well at all. > > > > I think we need some more concepts in the metric system to make it scalable: > > > > * What "attribute" of the test is this metric measuring? For example, both "exec_time" and "score" measure the same attribute: performance of the generated code. It's superfluous to have them displayed in separate tables. However, mem_size and compile_time measure completely different aspects of the test. > > * Is this metric useful to display at the t...