Dean Michael Berris via llvm-dev
2016-Sep-02 01:13 UTC
[llvm-dev] Benchmarks for LLVM-generated Binaries
> On 2 Sep 2016, at 01:14, Renato Golin <renato.golin at linaro.org> wrote:
>
> On 1 September 2016 at 07:45, Dean Michael Berris via llvm-dev
> <llvm-dev at lists.llvm.org> wrote:
>> I've lately been wondering where benchmarks for LLVM-generated binaries are hosted, and whether they're tracked over time.
>
> Hi Dean,
>
> Do you mean Perf?
>
> http://llvm.org/perf/
>
> For example, ARM and AArch64 track performance at:
>
> http://llvm.org/perf/db_default/v4/nts/machine/41
> http://llvm.org/perf/db_default/v4/nts/machine/46

Awesome stuff, thanks Renato!

>> - Is the test-suite repository the right place to put these generated-code benchmarks?
>
> I believe that would be the best place, yes.
>
>> - Are there any objections to using a later version of the Google benchmarking library [0] in the test-suite?
>
> While this looks like a very nice tool set, I wonder how we're going
> to integrate it.
>
> Checking it out in the test-suite wouldn't be the best option (version
> rot), but neither would requiring people to install it before running
> the test-suite, especially if the installation process isn't as easy
> as "apt-get install", like all the other dependencies.

I think it should be possible to have a snapshot of it included. I don't know what the licensing implications are (I'm not a lawyer, but I know someone who is -- paging Danny Berlin). I'm not as concerned about falling behind on versions there, though, mostly because it should be trivial to update it if we need to. Though, like you, I agree this isn't the best way of doing it. :)

>> - Are the docs on the Testing Infrastructure Guide still relevant and up-to-date, and is that a good starting point for exploration here?
>
> Unfortunately, that's mostly for the "make check" tests, not for the
> test-suite. The test-suite execution is covered by LNT's docs
> (http://llvm.org/docs/lnt), but those are mostly about LNT internals
> and not the test-suite itself.
>
> However, it's not that hard to understand the test-suite structure. To
> add new tests, you just need to find a suitable place { (SingleSource
> / MultiSource) / Benchmarks / YourBench }, copy ( CMakeLists.txt,
> Makefile, lit.local.cfg ), change them to your needs, and it should be
> done.

Thanks -- this doesn't tell me how to run the tests, though... I could certainly do it by hand (i.e. build the executables and run them) and I suspect I'm not alone in wanting to be able to do this easily through the CMake+Ninja (or other generator) workflow.

Do you know if someone is working on that aspect?

Cheers

-- Dean
Renato Golin via llvm-dev
2016-Sep-02 01:19 UTC
[llvm-dev] Benchmarks for LLVM-generated Binaries
On 2 September 2016 at 02:13, Dean Michael Berris <dean.berris at gmail.com> wrote:
> I think it should be possible to have a snapshot of it included. I don't know what the licensing implications are (I'm not a lawyer, but I know someone who is -- paging Danny Berlin).

The test-suite has a very large number of licenses (compared to LLVM),
so licensing should be less of a problem there. Though Dan can help
more than I can. :)

> I'm not as concerned about falling behind on versions there, though, mostly because it should be trivial to update it if we need to. Though, like you, I agree this isn't the best way of doing it. :)

If we start using it more (and maybe we should, at least for the
benchmarks -- I've long wanted to do something decent there), then
we'd need to add a proper update procedure.

I'm fine with checking out a stable release, not trunk, as that would
make things a lot easier to update later (patch releases, new
releases, etc.).

> Thanks -- this doesn't tell me how to run the tests, though... I could certainly do it by hand (i.e. build the executables and run them) and I suspect I'm not alone in wanting to be able to do this easily through the CMake+Ninja (or other generator) workflow.

Ah, no, that helps you add your test. :)

> Do you know if someone is working on that aspect?

http://llvm.org/docs/lnt/quickstart.html

This is *exactly* what Perf (the monitoring website) does, so you're
sure to get the same results on both sides if you run it locally like
that. I do.

You can choose to run down to a specific test/benchmark, so it's quick
and easy to use while developing, too.

cheers,
--renato
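Following the quickstart linked above, a local LNT run against the test-suite looks roughly like the commands below. All paths are placeholders, and the exact flags may differ between LNT versions -- treat this as a sketch of the workflow, not an authoritative recipe:

```shell
# Set up LNT in a virtualenv (paths here are placeholders).
virtualenv ~/mysandbox
~/mysandbox/bin/python ~/lnt/setup.py develop

# Build and run the whole test-suite, the same way the llvm.org/perf
# bots do, so local numbers are comparable to the website's.
lnt runtest nt \
  --sandbox ~/SANDBOX \
  --cc ~/llvm-install/bin/clang \
  --test-suite ~/llvm-test-suite

# Restrict the run to one directory of benchmarks while developing.
lnt runtest nt \
  --sandbox ~/SANDBOX \
  --cc ~/llvm-install/bin/clang \
  --test-suite ~/llvm-test-suite \
  --only-test SingleSource/Benchmarks
```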
Dean Michael Berris via llvm-dev
2016-Sep-02 01:27 UTC
[llvm-dev] Benchmarks for LLVM-generated Binaries
> On 2 Sep 2016, at 11:19, Renato Golin <renato.golin at linaro.org> wrote:
>
> On 2 September 2016 at 02:13, Dean Michael Berris <dean.berris at gmail.com> wrote:
>> I think it should be possible to have a snapshot of it included. I don't know what the licensing implications are (I'm not a lawyer, but I know someone who is -- paging Danny Berlin).
>
> The test-suite has a very large number of licenses (compared to LLVM),
> so licensing should be less of a problem there. Though Dan can help
> more than I can. :)

Cool, let's wait and see what Danny thinks of the patch I'll be preparing. :)

>> I'm not as concerned about falling behind on versions there, though, mostly because it should be trivial to update it if we need to. Though, like you, I agree this isn't the best way of doing it. :)
>
> If we start using it more (and maybe we should, at least for the
> benchmarks -- I've long wanted to do something decent there), then
> we'd need to add a proper update procedure.
>
> I'm fine with checking out a stable release, not trunk, as that would
> make things a lot easier to update later (patch releases, new
> releases, etc.).

SGTM.

>> Thanks -- this doesn't tell me how to run the tests, though... I could certainly do it by hand (i.e. build the executables and run them) and I suspect I'm not alone in wanting to be able to do this easily through the CMake+Ninja (or other generator) workflow.
>
> Ah, no, that helps you add your test. :)
>
>> Do you know if someone is working on that aspect?
>
> http://llvm.org/docs/lnt/quickstart.html
>
> This is *exactly* what Perf (the monitoring website) does, so you're
> sure to get the same results on both sides if you run it locally like
> that. I do.

Ah, cool. That works for me. :)

> You can choose to run down to a specific test/benchmark, so it's quick
> and easy to use while developing, too.

Awesome stuff, thanks Renato!

-- Dean