Matthias Braun via llvm-dev
2016-Sep-14 17:08 UTC
[llvm-dev] Benchmarks for LLVM-generated Binaries
Have you seen the prototype for googlebenchmark integration I did in the past: https://reviews.llvm.org/D18428 (though probably out of date for today's test-suite)?

+1 for copying googlebenchmark into the test-suite. However, I do not think this should simply go into MultiSource: we currently have a number of additional plugins in the lit test runner, such as measuring the runtime of the benchmark executable and determining code size; we still plan to add a mode to run benchmarks multiple times; and we run the benchmark under perf (or iOS-specific tools) to collect performance counters. Many of those are questionable measurements for a googlebenchmark executable, which has varying runtime because it runs the test more or less often. We really should introduce a new benchmarking mode for this.

- Matthias

> On Sep 14, 2016, at 8:29 AM, Eric Christopher via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>
> On Wed, Sep 14, 2016 at 8:23 AM Mehdi Amini via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>
>> On Sep 14, 2016, at 12:50 AM, Dean Michael Berris via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>>
>> I'm working on this now, and I had a few more questions below for Renato and the list in general. Please see inline below.
>>
>>> On 2 Sep 2016, at 11:27, Dean Michael Berris <dean.berris at gmail.com> wrote:
>>>
>>>> On 2 Sep 2016, at 11:19, Renato Golin <renato.golin at linaro.org> wrote:
>>>>
>>>> On 2 September 2016 at 02:13, Dean Michael Berris <dean.berris at gmail.com> wrote:
>>>>> I think it should be possible to have a snapshot of it included. I don't know what the licensing implications are (I'm not a lawyer, but I know someone who is -- paging Danny Berlin).
>>>>
>>>> The test-suite has a very large number of licenses (compared to LLVM), so licensing should be less of a problem there. Though Dan can help more than I can. :)
>>>
>>> Cool, let's wait for what Danny thinks on the patch I'll be preparing. :)
>>>
>>>>> I'm not as concerned about falling behind on versions there, though, mostly because it should be trivial to update it if we need to. Though like you, I agree this isn't the best way of doing it. :)
>>>>
>>>> If we start using it more (maybe we should, at least for the benchmarks; I've long wanted to do something decent there), then we'd need to add a proper update procedure.
>>>>
>>>> I'm fine with some checkout if it's a stable release, not trunk, as it would make things a lot easier to update later (patch releases, new releases, etc.).
>>>
>>> SGTM.
>>
>> Is there a preference on where to place the library? I had a look at {SingleSource,MultiSource}/Benchmarks/ and I didn't find a common location for libraries used. I'm tempted to create a top-level "libs" directory that will host common libraries, but I'm also fine with just having the benchmark library live alongside the XRay benchmarks.
>>
>> So, two options here:
>>
>> 1) libs/googlebenchmark/
>> 2) MultiSource/Benchmarks/XRay/googlebench/
>
> This is something that may be used (or is intended to be used) by others in the future; the first option makes it easier (or at least encourages it).
>
> +1 to this.
>
> Looks like there is reasonably active development going on right now (primarily by EricWF, who is also contributing to LLVM), so we'll probably want to coordinate how and how often we sync with top of tree. (Probably more often than google unittests :)
>
> -eric
>
> _______________________________________________
> LLVM Developers mailing list
> llvm-dev at lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
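Matthias's concern about "varying runtime" can be illustrated with a small sketch. This is a deliberate simplification in Python, not Google Benchmark's actual algorithm: an adaptive harness grows the iteration count until a timed batch is long enough, so the total process time depends on the scaling schedule and noise, while the per-iteration time is the number that stays comparable across runs.

```python
import time

def adaptive_benchmark(fn, min_time=0.05):
    """Simplified sketch of an adaptive harness: grow the iteration
    count until one timed batch runs for at least `min_time` seconds,
    then report the per-iteration time from that batch."""
    iters = 1
    while True:
        start = time.perf_counter()
        for _ in range(iters):
            fn()
        elapsed = time.perf_counter() - start
        if elapsed >= min_time:
            # The per-iteration time is the meaningful metric; the
            # total process runtime depends on how many batches were
            # needed, which is exactly why timing the whole executable
            # externally is misleading.
            return elapsed / iters
        iters *= 10

per_iter = adaptive_benchmark(lambda: sum(range(100)))
print("per-iteration time (s):", per_iter)  # value varies by machine
```

This is why a fixed `for (int i = 0; i < LARGE_NUMBER; ++i) myfunc();` loop, for all its crudeness, at least makes the whole-process wall time proportional to the work measured.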
Dean Michael Berris via llvm-dev
2016-Sep-15 01:57 UTC
[llvm-dev] Benchmarks for LLVM-generated Binaries
Thanks everyone, I'll go with "libs/" as a top-level directory in the test-suite.

> On 15 Sep 2016, at 03:08, Matthias Braun <matze at braunis.de> wrote:
>
> Have you seen the prototype for googlebenchmark integration I did in the past:
>
> https://reviews.llvm.org/D18428 (though probably out of date for today's test-suite)

Not yet, but thanks for the pointer Matthias!

> +1 for copying googlebenchmark into the test-suite.
>
> However I do not think this should simply go into MultiSource: [...] Many of those are questionable measurements for a googlebenchmark executable which has varying runtime because it runs the test more/less often.
> We really should introduce a new benchmarking mode for this.

Sounds good to me, but probably something for later down the road.

>> Looks like there is reasonably active development going on right now (primarily by EricWF, who is also contributing to LLVM), so we'll probably want to coordinate how and how often we sync with top of tree. (Probably more often than google unittests :)

This sounds good to me too. Happy to get involved in ongoing efforts there too.

Cheers

-- Dean
Matthias Braun via llvm-dev
2016-Sep-15 02:44 UTC
[llvm-dev] Benchmarks for LLVM-generated Binaries
> On Sep 14, 2016, at 6:57 PM, Dean Michael Berris via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>
> Thanks everyone, I'll go with "libs/" as a top-level directory in the test-suite.
>
>> However I do not think this should simply go into MultiSource: [...]
>> We really should introduce a new benchmarking mode for this.
>
> Sounds good to me, but probably something for later down the road.

Well, if you just put googlebenchmark executables into the MultiSource directory, then the lit runner will just measure the runtime of the executable, which is worse than "for (int i = 0; i < LARGE_NUMBER; ++i) myfunc();" because googlebenchmark will use a varying number of runs depending on noise levels/confidence.

When running googlebenchmarks, we should disable the external time measurements and have a lit plugin in place which parses the googlebenchmark output (that old patch has that). I believe this can only really work when we create a new top-level directory to which we apply different benchmarking rules.

- Matthias
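The kind of lit plugin Matthias describes would skip external timing entirely and read the harness's own numbers. A minimal sketch (not the D18428 code): it assumes Google Benchmark is run with `--benchmark_format=json`, whose `benchmarks` entries carry `name`, `iterations`, `real_time`, `cpu_time`, and `time_unit` fields.

```python
import json

def parse_google_benchmark(json_text):
    """Extract per-benchmark timings from Google Benchmark JSON output,
    returning {benchmark name: real time in nanoseconds}."""
    report = json.loads(json_text)
    scale = {"ns": 1.0, "us": 1e3, "ms": 1e6, "s": 1e9}
    results = {}
    for bench in report.get("benchmarks", []):
        unit = bench.get("time_unit", "ns")
        results[bench["name"]] = bench["real_time"] * scale[unit]
    return results

# Example of the output shape (timing values invented for illustration):
sample = """{
  "benchmarks": [
    {"name": "BM_Foo", "iterations": 1000000,
     "real_time": 42.0, "cpu_time": 41.5, "time_unit": "ns"},
    {"name": "BM_Bar", "iterations": 2000,
     "real_time": 1.5, "cpu_time": 1.4, "time_unit": "ms"}
  ]
}"""
print(parse_google_benchmark(sample))
# → {'BM_Foo': 42.0, 'BM_Bar': 1500000.0}
```

A plugin along these lines would report these per-iteration numbers to LNT instead of the process wall time.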
Mehdi Amini via llvm-dev
2016-Sep-15 03:04 UTC
[llvm-dev] Benchmarks for LLVM-generated Binaries
> On Sep 14, 2016, at 6:57 PM, Dean Michael Berris <dean.berris at gmail.com> wrote:
>
> Thanks everyone, I'll go with "libs/" as a top-level directory in the test-suite.
>
>> However I do not think this should simply go into MultiSource: [...]
>> We really should introduce a new benchmarking mode for this.
>
> Sounds good to me, but probably something for later down the road.

I think these benchmarks should not run by default as long as there is no proper integration to report the "correct" timing in lit. Otherwise it'll pollute the reports / database.

— Mehdi
Sebastian Pop via llvm-dev
2016-Sep-15 13:39 UTC
[llvm-dev] Benchmarks for LLVM-generated Binaries
On Wed, Sep 14, 2016 at 1:08 PM, Matthias Braun via llvm-dev <llvm-dev at lists.llvm.org> wrote:

> However I do not think this should simply go into MultiSource: We currently
> have a number of additional plugins in the lit test runner such as measuring
> the runtime of the benchmark executable, determining code size, we still
> plan to add a mode to run benchmarks multiple times, we run the benchmark
> under perf (or iOS specific tools) to collect performance counters… Many of
> those are questionable measurements for a googlebenchmark executable which
> has varying runtime because it runs the test more/less often.
> We really should introduce a new benchmarking mode for this.

Some of the metrics reported by "-mllvm -stats" may be good indicators of runtime performance.
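Those statistics are printed as lines of the form `<value> <pass-name> - <description>`, so they could be harvested into per-test metrics alongside timing. A rough sketch of such a collector, with invented sample numbers:

```python
import re

# Lines in `-mllvm -stats` output look like
# "  <value> <pass-name> - <description>".
STAT_LINE = re.compile(r"^\s*(\d+)\s+(\S+)\s+-\s+(.*)$")

def parse_llvm_stats(text):
    """Collect {(pass, description): value} from -stats output,
    ignoring the banner lines."""
    stats = {}
    for line in text.splitlines():
        m = STAT_LINE.match(line)
        if m:
            stats[(m.group(2), m.group(3))] = int(m.group(1))
    return stats

# Sample output shape; the numbers are made up for illustration.
sample = """\
===-------------------------------------------------------------------------===
                          ... Statistics Collected ...
===-------------------------------------------------------------------------===

 120 instcombine - Number of insts combined
  17 licm        - Number of instructions hoisted out of loop
"""
print(parse_llvm_stats(sample)[("instcombine", "Number of insts combined")])
# → 120
```

Tracking counters like these across compiler revisions would give a noise-free proxy signal, even though they are only indirect indicators of actual runtime.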