2018 May 25
0
Using Google Benchmark Library
2018-05-25 13:49 GMT-05:00 Pankaj Kukreja via llvm-dev
<llvm-dev at lists.llvm.org>:
> Hi,
> I am adding some benchmarks to the test-suite as a part of my GSoC project.
> I am planning to use the google benchmark library on some benchmarks. I
> would like to know your opinion/suggestion on how I should proceed with this
> library and how the design should be (like limiting the
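For readers who have not used the library, here is a minimal sketch of the kind of micro-benchmark being proposed, written against the google/benchmark API that the test-suite already bundles. The kernel, its name, and the argument sizes are made up purely for illustration.

  #include <vector>
  #include <benchmark/benchmark.h>

  // Hypothetical kernel standing in for a test-suite benchmark body.
  static void vector_sum(const std::vector<int> &v, int &out) {
    out = 0;
    for (int x : v)
      out += x;
  }

  static void BM_VectorSum(benchmark::State &state) {
    std::vector<int> v(state.range(0), 1);
    int result = 0;
    for (auto _ : state) {
      vector_sum(v, result);
      benchmark::DoNotOptimize(result); // keep the result live
    }
  }
  // Register with a few problem sizes; the library chooses the iteration count.
  BENCHMARK(BM_VectorSum)->Arg(1 << 10)->Arg(1 << 20);

  BENCHMARK_MAIN();

The library picking the iteration count itself is what the runtime concern in the next message is about.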
2018 May 29
2
Using Google Benchmark Library
Not going into all the detail, but from my side the big question is whether the benchmark's inner loop is small/fine-grained enough that stabilization with google benchmark doesn't lead to benchmark runtimes of dozens of seconds. Given that you typically see thousands or millions of invocations for small functions...
> On May 29, 2018, at 2:06 PM, Michael Kruse via llvm-dev <llvm-dev at
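Context for that concern, as a hedged sketch: google/benchmark keeps increasing the iteration count of the timed loop until it has accumulated a minimum measurement time (0.5 s per benchmark by default), so a nanosecond-scale kernel ends up with millions of invocations. The per-benchmark minimum time can be lowered explicitly; the benchmark below is hypothetical.

  #include <benchmark/benchmark.h>

  static void BM_TinyKernel(benchmark::State &state) {
    int x = 0;
    for (auto _ : state) {
      x += 1;                      // deliberately tiny, nanosecond-scale work
      benchmark::DoNotOptimize(x); // keep the compiler from deleting it
    }
  }
  // Lower the minimum measurement time from the 0.5 s default so that
  // stabilizing this tiny kernel doesn't dominate the overall run time.
  BENCHMARK(BM_TinyKernel)->MinTime(0.1);

  BENCHMARK_MAIN();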
2018 May 30
0
Using Google Benchmark Library
On Wed, May 30, 2018 at 4:07 AM, mbraun via llvm-dev <
llvm-dev at lists.llvm.org> wrote:
> Not going into all the detail, but from my side the big question is
> whether the benchmark's inner loop is small/fine-grained enough that
> stabilization with google benchmark doesn't lead to dozens of seconds
> benchmark runtimes. Given that you typically see thousands or millions
2018 May 27
2
Using Google Benchmark Library
> On 26 May 2018, at 06:09, Michael Kruse via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>
> 2018-05-25 13:49 GMT-05:00 Pankaj Kukreja via llvm-dev
> <llvm-dev at lists.llvm.org>:
>> Hi,
>> I am adding some benchmarks to the test-suite as a part of my GSoC project.
>> I am planning to use the google benchmark library on some benchmarks. I
>> would
2018 May 29
0
Using Google Benchmark Library
Thanks for your remarks.
2018-05-27 5:19 GMT-05:00 Dean Michael Berris <dean.berris at gmail.com>:
> I think you might run into artificial overhead here if you’re not careful. In particular you might run into:
>
> - Missed in-lining opportunity in the benchmark. If you expect the kernels to be potentially inlined, this might be a problem.
For the kind of benchmarks we have in
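A sketch of the inlining concern, under the assumption that the kernel is small enough to normally be inlined into its caller (the saxpy-like kernel below is hypothetical): benchmarking it behind a forced call boundary measures call overhead the real use would not pay, while benchmarking the inlinable version risks the compiler folding the work away unless the result is kept live with benchmark::DoNotOptimize.

  #include <benchmark/benchmark.h>

  // Hypothetical kernel. In the original program this is small enough
  // to be inlined into its caller.
  static int saxpy_like(int a, int x, int y) { return a * x + y; }

  // Forcing a call boundary (as a separately built kernel would) adds call
  // overhead the real, inlined use would not have.
  static __attribute__((noinline)) int saxpy_outlined(int a, int x, int y) {
    return a * x + y;
  }

  static void BM_Inlined(benchmark::State &state) {
    int acc = 0;
    for (auto _ : state) {
      acc = saxpy_like(3, acc, 1);
      benchmark::DoNotOptimize(acc); // keep the inlinable work from being folded away
    }
  }
  BENCHMARK(BM_Inlined);

  static void BM_Outlined(benchmark::State &state) {
    int acc = 0;
    for (auto _ : state) {
      acc = saxpy_outlined(3, acc, 1);
      benchmark::DoNotOptimize(acc);
    }
  }
  BENCHMARK(BM_Outlined);

  BENCHMARK_MAIN();

DoNotOptimize/ClobberMemory address the dead-code problem but not the missing-inlining one, which is inherent to pulling a kernel out of its original call site.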
2015 May 15
6
[LLVMdev] Proposal: change LNT’s regression detection algorithm and how it is used to reduce false positives
tl;dr: in low-data situations we don't look at past information, and that increases the false-positive regression rate. We should look at the possibly-incorrect recent past runs to fix that.
Motivation: LNT's current regression detection system has a false-positive rate that is too high to make it useful. With test suites as large as the LLVM "test-suite", a single report will show hundreds of
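One way to read the tl;dr concretely (my own hedged sketch of the idea, not LNT's actual code): instead of comparing the newest sample only against the immediately previous run, compare it against the best of the last few runs, accepting that those runs may themselves be noisy.

  #include <algorithm>
  #include <cstdio>
  #include <vector>

  // Hedged sketch: flag a regression only if the new sample is more than
  // `threshold` slower than the best (minimum) of the last `window` runs.
  bool isRegression(const std::vector<double> &pastRuns, double newSample,
                    size_t window = 5, double threshold = 0.05) {
    if (pastRuns.empty())
      return false; // nothing to compare against yet
    size_t n = std::min(window, pastRuns.size());
    double best = *std::min_element(pastRuns.end() - n, pastRuns.end());
    return newSample > best * (1.0 + threshold);
  }

  int main() {
    std::vector<double> past = {10.2, 10.1, 10.4, 10.0, 10.3};
    std::printf("%d\n", isRegression(past, 10.8)); // 10.8 > 10.0 * 1.05 -> 1
  }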
2017 Feb 28
2
Noisy benchmark results?
> On Feb 27, 2017, at 1:36 AM, Kristof Beyls via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>
> Hi Mikael,
>
> Some noisiness in benchmark results is expected, but the numbers you see seem to be higher than I'd expect.
> A number of tricks people use to get lower-noise results are (with the lnt runtest nt command-line options that enable them given in brackets):
> *
2017 Feb 27
8
Noisy benchmark results?
Hi,
I'm trying to run the benchmark suite:
http://llvm.org/docs/TestingGuide.html#test-suite-quickstart
I'm doing it the lnt way, as described at:
http://llvm.org/docs/lnt/quickstart.html
I don't know what to expect, but the results seem to be quite noisy and
unstable. E.g. I've done two runs on two different commits that only
differ by a space in CODE_OWNERS.txt on my 12
2017 Feb 27
3
Noisy benchmark results?
Two other things:
1) I get massively more stable execution times on Ubuntu 16.04 than on 14.04 on
both x86 and ARM because 16.04 does far fewer gratuitous moves from one
core to another, even without explicit pinning.
2) turn off ASLR: "echo 0 > /proc/sys/kernel/randomize_va_space". As well
as getting stable addresses for debugging repeatability, it also stabilizes
execution time
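Pinning the benchmark to a single core removes the cross-core migrations mentioned in 1). Besides doing it from the command line (e.g. with taskset -c 0 ./benchmark), it can be done inside the binary itself; a Linux-specific sketch using sched_setaffinity, with CPU 0 chosen arbitrarily:

  // Linux-specific sketch: pin the current process to one core so the
  // scheduler cannot migrate it between cores mid-measurement.
  #include <cstdio>
  #include <sched.h>

  int main() {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(0, &set); // CPU 0 chosen arbitrarily
    if (sched_setaffinity(0 /* this process */, sizeof(set), &set) != 0) {
      std::perror("sched_setaffinity");
      return 1;
    }
    // ... run the timed workload here ...
    std::puts("pinned to CPU 0");
    return 0;
  }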
2013 Jun 27
0
[LLVMdev] [LNT] Question about results reliability in LNT infrastructure
On Jun 27, 2013, at 9:27 AM, Renato Golin <renato.golin at linaro.org> wrote:
> On 27 June 2013 17:05, Tobias Grosser <tobias at grosser.es> wrote:
> We are looking for a good way/value to show the reliability of individual results in the UI. Do you have some experience, what a good measure of the reliability of test results is?
>
> Hi Tobi,
>
> I had a look at
2013 Jun 27
7
[LLVMdev] [LNT] Question about results reliability in LNT infrastructure
There are a few things we have looked at with LNT runs, so I will share the insights we have had so far. A lot of the problems we have are artificially created by our test protocols instead of the compiler changes themselves. I have been doing a lot of large sample runs of single benchmarks to characterize them better. Some key points:
1) Some benchmarks are bi-modal or multi-modal, single
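A common way to cope with bi-modal or multi-modal timings (a hedged sketch of the general idea, not the specific protocol used in this thread) is to take several samples of the same binary and report a robust summary such as the minimum or the median rather than a single measurement:

  #include <algorithm>
  #include <cstdio>
  #include <vector>

  // Robust summaries of repeated runs of the same binary: the minimum is a
  // common choice for "best achievable" time, the median discards outliers
  // on both sides. Neither is swayed by a single slow, noisy sample.
  double minOf(std::vector<double> samples) {
    return *std::min_element(samples.begin(), samples.end());
  }

  double medianOf(std::vector<double> samples) {
    std::sort(samples.begin(), samples.end());
    size_t n = samples.size();
    return n % 2 ? samples[n / 2] : 0.5 * (samples[n / 2 - 1] + samples[n / 2]);
  }

  int main() {
    // Hypothetical bi-modal timings (two stable "modes" around 10 and 12).
    std::vector<double> t = {10.1, 12.0, 10.2, 11.9, 10.1};
    std::printf("min=%.2f median=%.2f\n", minOf(t), medianOf(t));
  }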
2013 Jun 27
2
[LLVMdev] [LNT] Question about results reliability in LNT infrastructure
On 27 June 2013 17:05, Tobias Grosser <tobias at grosser.es> wrote:
> We are looking for a good way/value to show the reliability of individual
> results in the UI. Do you have any experience with what a good measure of
> the reliability of test results is?
>
Hi Tobi,
I had a look at this a while ago, but never got around to actually working on
it. My idea was to never use
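As one concrete candidate for such a reliability value (my own sketch, not what LNT implements): the coefficient of variation of a test's samples, where a large relative spread marks the result as too noisy to compare run-to-run.

  #include <cmath>
  #include <cstdio>
  #include <vector>

  // Coefficient of variation (stddev / mean) of the samples for one test.
  // A large value suggests the result is too noisy to compare run-to-run.
  double coefficientOfVariation(const std::vector<double> &samples) {
    if (samples.size() < 2)
      return 0.0;
    double mean = 0.0;
    for (double s : samples)
      mean += s;
    mean /= samples.size();
    double var = 0.0;
    for (double s : samples)
      var += (s - mean) * (s - mean);
    var /= samples.size() - 1; // unbiased sample variance
    return std::sqrt(var) / mean;
  }

  int main() {
    std::printf("cv=%.3f\n", coefficientOfVariation({10.1, 10.2, 10.0, 13.5}));
  }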
2013 Jun 27
0
[LLVMdev] [LNT] Question about results reliability in LNT infrastructure
Just forwarding this to the list, my original reply was bounced.
On Jun 27, 2013, at 11:14 AM, Chris Matthews <chris.matthews at apple.com> wrote:
> There are a few things we have looked at with LNT runs, so I will share the insights we have had so far. A lot of the problems we have are artificially created by our test protocols instead of the compiler changes themselves. I have been
2018 Jul 27
2
Proposal: pull benchmark library to the LLVM main repository
As part of the upcoming new Clangd symbol index implementation, we would like
to start supporting benchmarks of different Clangd pieces, such as index
queries and code completion.
There are already two projects in the LLVM tree using the google/benchmark
library while keeping its source code in-tree: libcxx
(libcxx/utils/google-benchmark) and test-suite
(test-suite/MicroBenchmarks/libs/benchmark-1.3.0).
2018 Jul 28
2
[cfe-dev] Proposal: pull benchmark library to the LLVM main repository
I'm happy to have this in the main LLVM repository.
The version in the test suite should likely stay there because the test
suite should be buildable w/o LLVM itself -- it is largely a distinct
thing. We re-use lit, but not much else from LLVM, and we wouldn't want to
install the benchmark library the way we do lit.
One interesting point: we should have some way of running the in-tree
2018 Aug 02
2
[cfe-dev] Proposal: pull benchmark library to the LLVM main repository
Thank you very much for the feedback!
What Chandler said about the test-suite totally makes sense to me since it's
also excluded from the LLVM git monorepo. I will try to land the benchmark
library in the LLVM core repo and update it to the latest version.
I have not done much CMake/project-structure work before, but I'll start
looking into that next week. I'll reach out to Dominic if anything goes
2016 Sep 01
3
Benchmarks for LLVM-generated Binaries
Hi,
I've lately been wondering where benchmarks for LLVM-generated binaries are hosted, and whether they're tracked over time. I'm asking because I'm thinking of where to put some benchmarks I've written using the open source Google benchmarking library [0] to test certain costs of XRay-instrumented binaries, the XRay runtime, and other related measurements (effect of
2017 Jul 20
8
[RFC] Add IR level interprocedural outliner for code size.
I’m River and I’m a compiler engineer at PlayStation. Recently, I’ve been
working on an interprocedural outlining (code folding) pass for code size
improvement at the IR level. We hit a couple of use cases that the current
code size solutions didn’t handle well enough. Outlining is one of the
avenues that seemed potentially beneficial.
-- Algorithmic Approach --
The general implementation can be
2017 Aug 28
2
Buildbot can't submit results to LNT server
Hi,
I have recently moved the clang-native-arm-lnt-perf bot from the nt
producer to the test-suite producer. It seems to be working fine but
it doesn't manage to submit the results to
http://lnt.llvm.org/submitRun.
If you scroll down to the bottom of [1], you can see this error message:
2017-08-28 07:06:32: submitting result to 'http://lnt.llvm.org/submitRun'
error: lnt server:
2018 Feb 26
2
Compiling a benchmark to IR (either from test-suite, or other benchmarks)
Hello all.
I'm in need of a benchmark that can be compiled to IR or bitcode. I found
the test-suite project (https://llvm.org/docs/TestSuiteMakefileGuide.html)
and thought a benchmark in that project might work. However, I'm having
trouble figuring out how to actually compile any of the benchmarks to IR or
bitcode. Using CMake and make I can compile them to binaries, but at no
point do