Matthias Braun via llvm-dev
2017-Jul-05 15:48 UTC
[llvm-dev] Performance metrics with LLVM
> On Jul 4, 2017, at 2:02 AM, Tobias Grosser <tobias.grosser at inf.ethz.ch> wrote:
>
>> On Tue, Jul 4, 2017, at 09:48 AM, Kristof Beyls wrote:
>> Hi Tobias,
>>
>> The metrics that you can collect in LNT are fixed per "test suite".
>> There are two such "test suite"s defined in LNT at the moment: nts and
>> compile. For more details, see
>> http://llvm.org/docs/lnt/concepts.html#test-suites.
>>
>> AFAIK, if you need to collect different metrics, you'll need to define a
>> new "test suite". I'm afraid I don't really know what is needed for that.
>> I'm guessing you may need to write some LNT code to do so, but I'm not
>> sure. Hopefully Matthias or Chris will be able to explain how to do that.
>>
>> We should probably investigate how to make it easier to define new
>> "test suite"s, or at least how to record different sets of metrics
>> without having to change the LNT code or a running LNT server instance.
>> The question of recording a different set of metrics has come up on this
>> list before, so it seems to be an issue people run into from time to time.
>
> Hi Kristof,
>
> Thanks for your fast reply. This is a very helpful summary that confirms
> my current understanding in parts. I never run the "compile" test suite,
> so I am not sure how much of the statistics interface it uses (if at
> all). I somehow had the feeling something else might exist, as the cmake
> test-suite runner dumps some of the statistics to stdout. I would be
> interested to read whether Chris or Matthias have more insights.

I often run the test-suite without LNT: lit -o dumps the results to a JSON file. If your goal is just some A/B testing (rather than tracking continuously with CI systems), then something simple like test-suite/utils/compare.py is enough to view and compare lit result files.
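As a rough illustration of what comparing two lit result files amounts to: this is not the actual test-suite/utils/compare.py implementation, just a minimal sketch assuming the lit JSON layout with a top-level "tests" list of entries carrying "name" and "metrics" fields.

```python
import json

def load_metrics(path, metric="exec_time"):
    """Map test name -> metric value from a lit JSON result file."""
    with open(path) as f:
        data = json.load(f)
    return {t["name"]: t["metrics"][metric]
            for t in data.get("tests", []) if metric in t.get("metrics", {})}

def compare(base_path, test_path, metric="exec_time"):
    """Print per-test deltas between two lit result files (A/B run)."""
    base = load_metrics(base_path, metric)
    test = load_metrics(test_path, metric)
    for name in sorted(base.keys() & test.keys()):
        old, new = base[name], test[name]
        print(f"{name}: {old:.3f} -> {new:.3f} ({100 * (new - old) / old:+.1f}%)")
```

The real compare.py handles multiple samples per test, geomean rows, and significance filtering; the sketch only shows the core join-and-diff step.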
For future LNT plans: you also asked at an interesting moment. I am polishing a commit to LNT right now that makes it easier to define custom schemas or create new ones. That is only part of the solution, though, as even with the new schema the runner needs to be adapted to actually collect and transform all the values. I also don't think we will start collecting all the LLVM stats by default in the current system; with a few thousand runs in the database it is already slightly sluggish, and adding 10x more metrics won't help there. Of course, once it is easier to modify schemas, you could set up special instances with extended schemas that track fewer instances/runs.

- Matthias

> Best,
> Tobias
>
>> Thanks,
>>
>> Kristof
>>
>> On 4 Jul 2017, at 08:27, Tobias Grosser <tobias.grosser at inf.ethz.ch> wrote:
>>
>> Dear all,
>>
>> I wanted to gather LLVM statistics with LNT and found a nice flag,
>>
>>   --cmake-define=TEST_SUITE_COLLECT_STATS=ON
>>
>> which allows me to gather all the LLVM "-stats" output, but I am unsure
>> how such statistics can be made available in the LNT web interface. On
>> top of this, I see that the LNT cmake test-suite also dumps code-size
>> statistics when running, which look as follows:
>>
>>   size: 10848
>>   size..bss: 48
>>   size..comment: 218
>>   size..ctors: 16
>>   size..data: 4
>>   size..dtors: 16
>>   size..dynamic: 416
>>   size..dynsym: 168
>>   size..eh_frame: 172
>>   size..eh_frame_hdr: 44
>>
>> I can find all these statistics in a file called
>>
>>   /scratch/leone/grosser/base/sandbox/test-2017-07-04_06-14-43/outputTd2xPU.json
>>
>> but they do not appear in
>>
>>   /scratch/leone/grosser/base/sandbox/test-2017-07-04_06-14-43/report.json
>>
>> and in fact do not seem to be submitted to the LNT server.
>>
>> Matthias added support for TEST_SUITE_COLLECT_STATS a while ago, but I
>> am unsure how it is expected to be used. A google search did not find
>> any relevant documentation.
>> Is anybody using this feature today?
>>
>> Best,
>> Tobias
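The size statistics Tobias quotes do end up as per-test metrics in the lit output file (the outputXXX.json), just not in report.json. A sketch of pulling them out, assuming the lit result layout with a "tests" list whose entries carry a "metrics" dictionary (the exact key names are an assumption based on the listing above):

```python
import json

def extract_size_metrics(lit_output_path):
    """Collect the code-size metrics ('size' and 'size..SECTION')
    per test from a lit JSON result file."""
    with open(lit_output_path) as f:
        data = json.load(f)
    sizes = {}
    for test in data.get("tests", []):
        metrics = test.get("metrics", {})
        # Keep only the size metrics, dropping exec_time etc.
        picked = {k: v for k, v in metrics.items()
                  if k == "size" or k.startswith("size.")}
        if picked:
            sizes[test["name"]] = picked
    return sizes
```

From such a dictionary it is straightforward to diff total or per-section sizes between two runs, independently of whether LNT ever sees the data.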
Tobias Grosser via llvm-dev
2017-Jul-05 16:20 UTC
[llvm-dev] Performance metrics with LLVM
On Wed, Jul 5, 2017, at 05:48 PM, Matthias Braun via llvm-dev wrote:
> [snip]
>
> I often run the test-suite without LNT: lit -o dumps the results to a JSON
> file.
> If your goal is just some A/B testing (rather than tracking
> continuously with CI systems), then something simple like
> test-suite/utils/compare.py is enough to view and compare lit result
> files.

Right, that's what I have been seeing.

> For future LNT plans:
>
> You also asked at an interesting moment: I am polishing a commit to LNT
> right now that makes it easier to define custom schemas or create new
> ones. Though that is only part of the solution, as even with the new
> schema the runner needs to be adapted to actually collect/transform all
> values.

Great, I would be glad to follow the patch review.

> I think we also will not start collecting all the llvm stats by default
> in the current system; with a few thousand runs in the database it is
> slightly sluggish already, and I don't think adding 10x more metrics to
> the database helps there. Of course, once it is easier to modify schemas
> you could set up special instances with extended schemas that track
> fewer instances/runs.

I wonder whether the sluggishness comes from displaying all of these metrics or from storing them. If they were kept in a separate table that is only accessed when needed, storing them might not have such a large cost.

Best,
Tobias
Chris Matthews via llvm-dev
2017-Jul-05 21:09 UTC
[llvm-dev] Performance metrics with LLVM
The test-suite schema is defined in the DB. It is not hard to extend if you have server access. The docs detail the steps to add a new metric: http://llvm.org/docs/lnt/importing_data.html#custom-test-suites

I have set up several custom suites; it works. Matthias is working on something to make that even easier by having the test suites self-describe their schemas. This will still require server access, but will be less scary than editing the DB directly.

A lot of this boils down to naming and how the data is later presented. For instance, in some places we have elected to store the new link-time metric as a differently named test under the compile_time metric (foo.c vs foo.c.link). When you do this, those are presented side by side in the data listing views, which is handy. Each metric is given a section in the run reports; you can imagine what that might look like with 50 metrics. We might need to do some UI redesign to keep the run reports sane.

I think LNT will have a hard time collecting all the stats right now. There are 2722 source files in the LLVM test-suite, and many hundreds of stats. Especially pages like the run reports, which still do inline calculations, are going to be slow. We do now cache all the data needed to render the pages quickly, but those pages have not been updated to use it.
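The naming trick Chris describes, recording link time as a separately named test under the existing compile_time metric, can be sketched as follows. The sample layout below is a simplified stand-in for LNT's actual report format, and the helper name is made up; only the foo.c / foo.c.link naming convention comes from the discussion above.

```python
def record_link_time_as_test(samples, source_file, compile_s, link_s):
    """Record compile and link times as two tests under one metric.

    Both samples use the existing compile_time metric; the link time is
    distinguished purely by the test name ("foo.c" vs "foo.c.link"), so
    the two appear side by side in metric listing views.
    """
    samples.append({"test": source_file,
                    "metric": "compile_time", "value": compile_s})
    samples.append({"test": source_file + ".link",
                    "metric": "compile_time", "value": link_s})

samples = []
record_link_time_as_test(samples, "foo.c", 1.8, 0.3)
```

The design trade-off is that no schema change is needed (the server already knows compile_time), at the cost of encoding the distinction in the test name rather than in a first-class metric.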