Dean Michael Berris via llvm-dev
2016-Sep-01 06:45 UTC
[llvm-dev] Benchmarks for LLVM-generated Binaries
Hi,

I've lately been wondering where benchmarks for LLVM-generated binaries are hosted, and whether they're tracked over time. I'm asking because I'm thinking of where to put some benchmarks I've written using the open-source Google benchmarking library [0] to measure certain costs of XRay-instrumented binaries, the XRay runtime, and other related quantities (the effect of patching/unpatching functions of various sizes, etc.).

While I can certainly publish the numbers I get from these benchmarks, that's not as good as having the benchmarks available somewhere that others can run and verify for themselves (and scrutinise to improve accuracy). I asked on IRC (#llvm) and Chandler suggested that I ask on the list too.

Questions:

- Is the test-suite repository the right place to put these generated-code benchmarks?
- Are there any objections to using a later version of the Google benchmarking library [0] in the test-suite?
- Are the docs on the Testing Infrastructure Guide [1] still relevant and up-to-date, and is that a good starting point for exploration here?

Cheers

[0] https://github.com/google/benchmark
[1] http://llvm.org/docs/TestingGuide.html

-- Dean
Renato Golin via llvm-dev
2016-Sep-01 15:14 UTC
[llvm-dev] Benchmarks for LLVM-generated Binaries
On 1 September 2016 at 07:45, Dean Michael Berris via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> I've lately been wondering where benchmarks for LLVM-generated binaries are hosted, and whether they're tracked over time.

Hi Dean,

Do you mean Perf?

http://llvm.org/perf/

For example, ARM and AArch64 performance is tracked at:

http://llvm.org/perf/db_default/v4/nts/machine/41
http://llvm.org/perf/db_default/v4/nts/machine/46

> - Is the test-suite repository the right place to put these generated-code benchmarks?

I believe that would be the best place, yes.

> - Are there any objections to using a later version of the Google benchmarking library [0] in the test-suite?

While this looks like a very nice tool set, I wonder how we're going to integrate it.

Checking it out into the test-suite wouldn't be the best option (version rot), but neither would requiring people to install it before running the test-suite, especially if the installation process isn't as easy as "apt-get install", like all the other dependencies.

> - Are the docs on the Testing Infrastructure Guide still relevant and up-to-date, and is that a good starting point for exploration here?

Unfortunately, those are mostly for the "make check" tests, not for the test-suite. The test-suite execution is covered by LNT's docs (http://llvm.org/docs/lnt), but those are mostly about LNT internals and not the test-suite itself.

However, it's not that hard to understand the test-suite structure. To add new tests, you just need to find a suitable place { (SingleSource / MultiSource) / Benchmarks / YourBench }, copy the boilerplate ( CMakeLists.txt, Makefile, lit.local.cfg ), change it to your needs, and it should be done.

cheers,
--renato
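For what it's worth, the boilerplate in question is small. As a rough sketch (the directory name is hypothetical, and the `LEVEL` depth depends on where the directory sits in the tree), a Makefile for a new SingleSource benchmark directory typically just delegates to the shared rules:

```make
# SingleSource/Benchmarks/YourBench/Makefile -- sketch only.
# LEVEL points back at the test-suite root from this directory.
LEVEL = ../../..
include $(LEVEL)/Makefile.config
include $(LEVEL)/Makefile.rules
```

The accompanying CMakeLists.txt in SingleSource directories is often similarly tiny, delegating to a test-suite-provided macro (e.g. a `llvm_singlesource()` call), though the exact macro names can vary by test-suite revision.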
Dean Michael Berris via llvm-dev
2016-Sep-02 01:13 UTC
[llvm-dev] Benchmarks for LLVM-generated Binaries
> On 2 Sep 2016, at 01:14, Renato Golin <renato.golin at linaro.org> wrote:
>
> On 1 September 2016 at 07:45, Dean Michael Berris via llvm-dev
> <llvm-dev at lists.llvm.org> wrote:
>> I've lately been wondering where benchmarks for LLVM-generated binaries are hosted, and whether they're tracked over time.
>
> Hi Dean,
>
> Do you mean Perf?
>
> http://llvm.org/perf/
>
> Example, ARM and AArch64 tracking performance at:
>
> http://llvm.org/perf/db_default/v4/nts/machine/41
> http://llvm.org/perf/db_default/v4/nts/machine/46

Awesome stuff, thanks Renato!

>> - Is the test-suite repository the right place to put these generated-code benchmarks?
>
> I believe that would be the best place, yes.

>> - Are there any objections to using a later version of the Google benchmarking library [0] in the test-suite?
>
> While this looks like a very nice tool set, I wonder how we're going
> to integrate it.
>
> Checking it out in the test-suite wouldn't be the best option (version
> rot), but neither would requiring people to install it before
> running the test-suite, especially if the installation process isn't
> as easy as "apt-get install", like all the other dependencies.

I think it should be possible to have a snapshot of it included. I don't know what the licensing implications are (I'm not a lawyer, but I know someone who is -- paging Danny Berlin). I'm not too concerned about falling behind on versions, though, mostly because it should be trivial to update if we need to. But like you, I agree this isn't the best way of doing it. :)

>> - Are the docs on the Testing Infrastructure Guide still relevant and up-to-date, and is that a good starting point for exploration here?
>
> Unfortunately, that's mostly for the "make check" tests, not for the
> test-suite. The test-suite execution is covered by LNT's doc
> (http://llvm.org/docs/lnt), but it's mostly about LNT internals and
> not the test-suite itself.
>
> However, it's not that hard to understand the test-suite structure. To
> add new tests, you just need to find a suitable place { (SingleSource
> / MultiSource) / Benchmarks / YourBench } and copy ( CMakeLists.txt,
> Makefile, lit.local.cfg ), change to your needs, and it should be
> done.

Thanks -- this doesn't tell me how to run the tests, though. I could certainly do it by hand (i.e. build the executables and run them), but I suspect I'm not alone in wanting to be able to do this easily through the CMake+Ninja (or other generator) workflow. Do you know if someone is working on that aspect?

Cheers

-- Dean
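For reference, the LNT-driven workflow that exists today looks roughly like the following (per the LNT quickstart docs; all paths below are placeholders for your own checkouts and build directories):

```shell
# Set up LNT in a virtualenv, then drive a test-suite run through it.
virtualenv ~/mysandbox
~/mysandbox/bin/python ~/lnt/setup.py develop
~/mysandbox/bin/lnt runtest nt \
    --sandbox ~/lnt-sandbox \
    --cc ~/llvm-build/bin/clang \
    --test-suite ~/llvm-test-suite
```

This builds and runs the whole suite and collects timings, but it goes through LNT's Makefile-based "nt" runner rather than a plain CMake+Ninja invocation of the test-suite tree itself.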
Michael Zolotukhin via llvm-dev
2016-Sep-06 23:07 UTC
[llvm-dev] Benchmarks for LLVM-generated Binaries
> On Sep 1, 2016, at 8:14 AM, Renato Golin via llvm-dev <llvm-dev at lists.llvm.org> wrote:
> [snip]
> However, it's not that hard to understand the test-suite structure. To
> add new tests, you just need to find a suitable place { (SingleSource
> / MultiSource) / Benchmarks / YourBench } and copy ( CMakeLists.txt,
> Makefile, lit.local.cfg ), change to your needs, and it should be
> done.

Hi Renato and others,

Is it possible -- and how hard would it be -- to make the test-suite extendable? That is, could I copy in a folder of tests that comply with some rules (e.g. have a CMakeLists.txt, lit.local.cfg, etc.) and run them via the standard infrastructure without changing anything in the test-suite files?

The motivation for this question is the following: we (and probably many other companies too) have internal tests that we can’t share, but still want to track. Currently, the process of adding them to the existing test-suite is not clear to me (or at least not very well documented), and while I can figure it out, it would be great if we could streamline this process. Ideally, I’d like the process to look like this:

1) Add the following files to your benchmark suite:
   1.a) A CMakeLists.txt having this and that target, doing this and that.
   1.b) A lit.local.cfg script having this and that.
   …
2) Make sure the tests report results in the following format, or provide a wrapper script to convert results to the specified form. /* TODO: Results format is specified here */
3) Run your tests using the standard LNT command, like "lnt runtest … --only-test=External/MyTestSuite/TestA”

If that’s already implemented, then I’ll be glad to help with the documentation, and if not, I can try implementing it.

What do you think?

Thanks,
Michael

> cheers,
> --renato
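To make the proposal concrete, the drop-in layout being described might look something like this. Everything here is hypothetical -- a sketch of the proposed convention, not of anything implemented:

```
External/MyTestSuite/          # out-of-tree directory, copied in or symlinked
  CMakeLists.txt               # defines build/run targets per the convention
  lit.local.cfg                # tells lit how to discover and score the tests
  TestA/
    main.c                     # the benchmark itself
```

The open questions in the proposal are exactly the contents of those two boilerplate files and the results format the tests must emit, so that `lnt runtest … --only-test=External/MyTestSuite/TestA` can pick them up unchanged.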