> Hello everyone,
>
> I have added a new public LLD performance builder at
> http://lab.llvm.org:8011/builders/lld-perf-testsuite.
> It builds LLVM and LLD with the latest released Clang and runs a set of
> performance tests.
>
> The builder is reliable. Please pay attention to the failures.
>
> The performance statistics are here:
> http://lnt.llvm.org/db_default/v4/link/recent_activity
>
> Thanks
>
> Galina

Great news, thanks!

Looking at the results, I am not sure how to explain them, though.

For example, r325313 fixes a "use after free" and should not give any performance slowdowns or boosts. Yet, if I read the results right, they show a 23.65% slowdown in the time to link the Linux kernel (http://lnt.llvm.org/db_default/v4/link/104).

I guess such variation can happen, for example, if the bot does only a single link iteration per test, so that the final time is mostly noise.

task-clock results are available for "linux-kernel" and "llvm-as-fsds" only, and all other tests have a blank field. Does that mean there was no noticeable difference in the results?

Also, the "Graph" and "Matrix" buttons, whatever they are supposed to do, show errors at the moment ("Nothing to graph." and "Not Found: Request requires some data arguments.").

Best regards,
George | Developer | Access Softek, Inc
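To illustrate the point about single-run noise: a minimal sketch (with placeholder paths, not the bot's actual harness) of timing the same link repeatedly and reporting the spread, which is what averaging several runs buys over a single iteration.

# Minimal sketch of measuring link-time noise; the linker path and the
# response file are placeholders, not the bot's configuration.
import statistics
import subprocess
import time

LLD = "/path/to/ld.lld"          # placeholder: the lld binary under test
RESPONSE_FILE = "@response.txt"  # placeholder: response file with the link line

def time_links(runs=10):
    samples = []
    for _ in range(runs):
        start = time.monotonic()
        subprocess.run([LLD, RESPONSE_FILE], check=True)
        samples.append(time.monotonic() - start)
    return statistics.mean(samples), statistics.stdev(samples)

mean, stdev = time_links()
print("link time: %.3fs +/- %.3fs over 10 runs" % (mean, stdev))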
Galina Kistanova via llvm-dev
2018-Feb-16 20:27 UTC
[llvm-dev] New LLD performance builder
Hello George,

The bot does 10 runs for each of the benchmarks (those dots in the logs are meaningful).

The statistics seem quite stable if you look over a number of revisions. For example, if one takes a look at the linux-kernel branches - http://lnt.llvm.org/db_default/v4/link/graph?plot.0=1.12.2&highlight_run=104 - it becomes obvious that the number of branches increased significantly as a result of r325313. The metric is very stable around the impacted commit. As the number of branches has increased, the related metrics regress as well, like branch-misses.

I'm sure you have checked it already, but, just in case, here is the link to the LNT doc.

Besides reporting to lnt.llvm.org, each build contains in its log all the reported data, so you can process it however you find helpful.

Thanks

Galina
Galina Kistanova via llvm-dev
2018-Feb-16 21:30 UTC
[llvm-dev] New LLD performance builder
Hello George,

Sorry, somehow I hit the send button too soon. Please ignore the previous e-mail.

The bot does 10 runs for each of the benchmarks (those dots in the logs are meaningful). We can increase the number of runs if it is proven that this would significantly improve the accuracy. While staging the bot I didn't see an increase in accuracy that would justify the extra time and the larger gaps between tested revisions. 10 runs seems to give a good balance, but I'm open to suggestions.

The statistics seem quite stable if you look over a number of revisions. And in this particular case the picture seems quite clear.

At http://lnt.llvm.org/db_default/v4/link/104, the list of Performance Regressions suggests that linux-kernel was hit the most. The regressed metrics are branches, branch-misses, instructions, cycles, seconds-elapsed, and task-clock. Some other benchmarks show regressions in branches and branch-misses, some show improvements.

The metrics are consistent before and after the commit, so I do not think this one is an outlier. For example, if one takes a look at the linux-kernel branches - http://lnt.llvm.org/db_default/v4/link/graph?plot.0=1.12.2&highlight_run=104 - it becomes obvious that the number of branches increased significantly as a result of r325313. The metric is very stable around the impacted commit and does not go down after it. The branch-misses metric is more volatile, but still consistently shows the regression as a result of this commit.

Now someone should look into why this particular commit has resulted in a significant increase in branching when linking the Linux kernel.

As for how to use the LNT web UI, I'm sure you have checked it already, but, just in case, here is the link to the LNT doc - http://llvm.org/docs/lnt/contents.html.

> task-clock results are available for "linux-kernel" and "llvm-as-fsds" only, and all other
> tests have a blank field. Does that mean there was no noticeable difference in the results?

If you go to http://lnt.llvm.org/db_default/v4/link/104#task-clock (or go to http://lnt.llvm.org/db_default/v4/link/104 and select task-clock on the left, which is the same), you will see the list of actual values in the "Current" column. All of them are populated; none is blank. The "%" column contains the difference from the previous run as a percentage, or a dash for no measured difference.

> Also, the "Graph" and "Matrix" buttons, whatever they are supposed to do, show errors at the moment.

I guess you didn't select what to graph or what to show as a matrix, did you?

Besides reporting to lnt.llvm.org, each build contains in its log all the reported data, so you can process it however you find helpful.

Hope this helps.

Thanks

Galina
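The metrics listed above (task-clock, cycles, instructions, branches, branch-misses) are standard Linux perf events, so one plausible way to reproduce such numbers locally is a repeated "perf stat" run. This is only a sketch with placeholder paths; the bot's actual invocation may differ.

# Repeat the link ten times under perf stat; perf prints the mean and
# variance for each requested event, matching the metrics on lnt.llvm.org.
# The linker path and response file are placeholders.
import subprocess

LLD = "/path/to/ld.lld"          # placeholder
RESPONSE_FILE = "@response.txt"  # placeholder: e.g. the linux-kernel link line
EVENTS = "task-clock,cycles,instructions,branches,branch-misses"

subprocess.run(
    ["perf", "stat", "-r", "10", "-e", EVENTS, LLD, RESPONSE_FILE],
    check=True,
)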
Thanks for the information, Galina! It was really helpful for me.

>> task-clock results are available for "linux-kernel" and "llvm-as-fsds" only, and all other
>> tests have a blank field. Does that mean there was no noticeable difference in the results?
>
> If you go to http://lnt.llvm.org/db_default/v4/link/104#task-clock (or go to http://lnt.llvm.org/db_default/v4/link/104 and select
> task-clock on the left, which is the same), you will see the list of actual values in the "Current" column. All of them are populated; none is blank.
> The "%" column contains the difference from the previous run as a percentage, or a dash for no measured difference.

Yes, I meant exactly that. I see dashes in the "%" columns for most of the tests. Sorry for the wording inaccuracy that caused this confusion :(

>> Also, the "Graph" and "Matrix" buttons, whatever they are supposed to do, show errors at the moment.
>
> I guess you didn't select what to graph or what to show as a matrix, did you?

Right, I didn't know I should. Now I see how it works.

So, it is great to see that such a new tool is already able to reveal interesting results. Thanks!

George.
Rafael Avila de Espindola via llvm-dev
2018-Feb-22 21:56 UTC
[llvm-dev] New LLD performance builder
Thanks a lot for setting this up!

By using the "mean as aggregation" option one can see the noise in the results better:
http://lnt.llvm.org/db_default/v4/link/graph?switch_min_mean=yes&moving_window_size=10&plot.9=1.9.7&submit=Update

There are a few benchmarking tips in https://www.llvm.org/docs/Benchmarking.html.

For example, from looking at
http://lab.llvm.org:8011/builders/lld-perf-testsuite/builds/285/steps/cmake-configure/logs/stdio
it seems the produced lld binary is not being statically linked.

A tip to make the bot a bit faster: it could run "ninja bin/lld" instead of just "ninja":
http://lab.llvm.org:8011/builders/lld-perf-testsuite/builds/285/steps/build-unified-tree/logs/stdio

Is lld-speed-test in a tmpfs?

Is lld-benchmark.py a copy of lld/utils/benchmark.py?

Thanks,
Rafael
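The suggestions above could be applied roughly as follows. This is only a sketch: the LLVM_BUILD_STATIC option and the paths are assumptions, not the bot's actual configuration.

# Configure a Release build that links lld statically, then build only the
# lld binary rather than the whole tree.
import subprocess

SRC = "/path/to/llvm"     # placeholder: LLVM checkout with lld in tools/lld
BUILD = "/path/to/build"  # placeholder: build directory, ideally on a tmpfs

subprocess.run(
    ["cmake", "-G", "Ninja", SRC,
     "-DCMAKE_BUILD_TYPE=Release",
     "-DLLVM_ENABLE_ASSERTIONS=OFF",
     "-DLLVM_BUILD_STATIC=ON"],  # assumption: one way to get a statically linked lld
    cwd=BUILD, check=True,
)

# "ninja bin/lld" builds just the lld target, as suggested above.
subprocess.run(["ninja", "bin/lld"], cwd=BUILD, check=True)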