Two other things:

1) I get massively more stable execution times on 16.04 than on 14.04, on
both x86 and ARM, because 16.04 does far fewer gratuitous moves from one
core to another, even without explicit pinning.

2) Turn off ASLR: "echo 0 > /proc/sys/kernel/randomize_va_space". As well
as giving stable addresses for debugging repeatability, it also reduces
execution-time variability caused by "random" conflicts in caches, hash
collisions in branch prediction or the BTB, and maybe even the uop cache.

(A combined low-noise invocation, pulling these together with Kristof's
suggestions quoted below, is sketched at the end of this message.)

On Mon, Feb 27, 2017 at 12:36 PM, Kristof Beyls via llvm-dev
<llvm-dev at lists.llvm.org> wrote:

> Hi Mikael,
>
> Some noisiness in benchmark results is expected, but the numbers you see
> seem higher than I'd expect.
> A number of tricks people use to get lower-noise results are (with the
> lnt runtest nt command-line options that enable them in brackets):
> * Only build the benchmarks in parallel, but run the actual benchmark
> code at most one at a time (--threads 1 --build-threads 6).
> * Make lnt use Linux perf to get more accurate timing for short-running
> benchmarks (--use-perf=1).
> * Pin the running benchmark to a specific core, so the OS doesn't move
> the benchmark process from core to core
> (--make-param="RUNUNDER=taskset -c 1").
> * Only run the programs that are marked as benchmarks; some of the tests
> in the test-suite are not intended to be used as benchmarks
> (--benchmarking-only).
> * Make sure each program gets run multiple times, so that LNT has a
> higher chance of recognizing which programs are inherently noisy
> (--multisample=3).
>
> I hope this is the kind of answer you were looking for?
> Do the above measures reduce the noisiness to acceptable levels for your
> setup?
>
> Thanks,
>
> Kristof
>
> > On 27 Feb 2017, at 09:46, Mikael Holmén via llvm-dev
> > <llvm-dev at lists.llvm.org> wrote:
> >
> > Hi,
> >
> > I'm trying to run the benchmark suite:
> > http://llvm.org/docs/TestingGuide.html#test-suite-quickstart
> >
> > I'm doing it the lnt way, as described at:
> > http://llvm.org/docs/lnt/quickstart.html
> >
> > I don't know what to expect, but the results seem to be quite noisy
> > and unstable. E.g. I've done two runs on two different commits that
> > only differ by a space in CODE_OWNERS.txt, on my 12-core Ubuntu 14.04
> > machine, with:
> >
> > lnt runtest nt --sandbox SANDBOX --cc <path-to-my-clang> --test-suite /data/repo/test-suite -j 8
> >
> > And then I get the following top execution-time regressions:
> > http://i.imgur.com/sv1xzlK.png
> >
> > The numbers bounce around a lot if I do more runs.
> >
> > Given the amount of noise I see here, I don't know how to sort out
> > significant regressions when I actually make a real change in the
> > compiler.
> >
> > Are the above results expected?
> >
> > How should I use this?
> >
> > As a bonus question, if I instead run the benchmarks with an added
> > -m32:
> >
> > lnt runtest nt --sandbox SANDBOX --cflag=-m32 --cc <path-to-my-clang> --test-suite /data/repo/test-suite -j 8
> >
> > I get three failures:
> >
> > --- Tested: 2465 tests --
> > FAIL: MultiSource/Applications/ClamAV/clamscan.compile_time (1 of 2465)
> > FAIL: MultiSource/Applications/ClamAV/clamscan.execution_time (494 of 2465)
> > FAIL: MultiSource/Benchmarks/DOE-ProxyApps-C/XSBench/XSBench.execution_time (495 of 2465)
> >
> > Is this known/expected, or am I doing something stupid?
> >
> > Thanks,
> > Mikael
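
To make the above concrete, combining Kristof's flags with the ASLR toggle,
a low-noise run could look roughly like the sketch below. This is only a
sketch: the core number, thread counts, and paths are the placeholders used
in this thread, and the sudo/tee spelling of the sysctl write (plus the
assumption that 2 is the default value to restore) is mine rather than from
the thread.

    # Disable ASLR system-wide (needs root); on most Linux systems the
    # default value to restore afterwards is 2.
    echo 0 | sudo tee /proc/sys/kernel/randomize_va_space

    # Build in parallel (6 build threads), but run benchmarks one at a
    # time, pinned to core 1, timed with perf, benchmarks only, and
    # sampled three times each.
    lnt runtest nt \
        --sandbox SANDBOX \
        --cc <path-to-my-clang> \
        --test-suite /data/repo/test-suite \
        --threads 1 --build-threads 6 \
        --use-perf=1 \
        --make-param="RUNUNDER=taskset -c 1" \
        --benchmarking-only \
        --multisample=3

    # Restore ASLR when done.
    echo 2 | sudo tee /proc/sys/kernel/randomize_va_space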
On 27 Feb 2017, at 11:32, Bruce Hoult <bruce at hoult.org> wrote:

> [snip]
>
> 2) Turn off ASLR: "echo 0 > /proc/sys/kernel/randomize_va_space". As well
> as giving stable addresses for debugging repeatability, it also reduces
> execution-time variability caused by "random" conflicts in caches, hash
> collisions in branch prediction or the BTB, and maybe even the uop cache.

FWIW, I personally think it's better to keep ASLR turned on. It's better
to get the performance fluctuations in your experiments from the slight
code-layout changes ASLR causes, as that gives some indication of how
sensitive the specific program, core, and environment are to layout
changes. If you disable ASLR and get a big speed difference when
evaluating a compiler patch, you still won't know whether it's down to
some code-layout change in a hot piece of code that your patch otherwise
didn't change at all. Keeping ASLR turned on is far from perfect, though:
if you really want to evaluate this properly, you might need to introduce
more code-layout randomization in your experiments. I talked about this in
a bit more detail at EuroLLVM last year; see
https://www.youtube.com/watch?v=COmfRpnujF8.

Being able to determine more quickly whether a performance change is due
to the intent of the compiler patch you've written or due to a
micro-architectural non-linearity (such as a big speed difference from a
small code-layout change) was one of the main motivations for adding
profile-annotated disassembly views to LNT, as demonstrated at
http://blog.llvm.org/2016/06/using-lnt-to-track-performance.html and
https://fosdem.org/2017/schedule/event/lnt/. Beware that to use this
feature, you'll need to use the cmake+lit infrastructure in the test-suite
rather than the older make infrastructure. From lnt runtest, this is done
by using "lnt runtest test-suite" rather than "lnt runtest nt" (a sketch
of such an invocation follows this message).

Thanks,

Kristof

On Mon, Feb 27, 2017 at 12:36 PM, Kristof Beyls via llvm-dev
<llvm-dev at lists.llvm.org> wrote:

> [snip: the noise-reduction suggestions and Mikael's original question,
> quoted in full in the first message above]
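
For reference, a cmake+lit-based run that also collects the profiles the
annotated disassembly view needs might look roughly like the sketch below.
Treat the flag spellings as assumptions, since they can differ between LNT
versions (check `lnt runtest test-suite --help`); <path-to-llvm-lit> is a
placeholder for a built llvm-lit.

    # Sketch only: flag names may differ between LNT versions.
    lnt runtest test-suite \
        --sandbox SANDBOX \
        --cc <path-to-my-clang> \
        --test-suite /data/repo/test-suite \
        --use-lit <path-to-llvm-lit> \
        --threads 1 --build-threads 6 \
        --benchmarking-only \
        --use-perf=profile    # collect perf profiles for the annotated views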
You should try it both ways, certainly. But it's good to isolate the
effects you're studying from unrelated library code, especially if you're
specifically working on things such as whether aligning branch targets is
worthwhile, or choosing different instruction encodings to maximize
dispatch width from 16- or 32-byte blocks. (One way to try it both ways
without flipping the global sysctl is sketched below.)

On Mon, Feb 27, 2017 at 1:53 PM, Kristof Beyls <Kristof.Beyls at arm.com>
wrote:

> FWIW, I personally think it's better to keep ASLR turned on.
> [snip: Kristof's reply and the earlier thread, quoted in full above]
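
One way to compare with and without ASLR, without touching the system-wide
/proc setting, is util-linux's setarch, which can disable address-space
randomization for a single process. A sketch, with ./benchmark as a
placeholder binary:

    # Run one process with ASLR disabled (-R / --addr-no-randomize),
    # leaving the system-wide setting untouched.
    setarch $(uname -m) -R ./benchmark

    # The same binary with ASLR left on, for comparison.
    ./benchmark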
On 27 February 2017 at 10:32, Bruce Hoult via llvm-dev
<llvm-dev at lists.llvm.org> wrote:

> 1) I get massively more stable execution times on 16.04 than on 14.04 on
> both x86 and ARM because 16.04 does far fewer gratuitous moves from one
> core to another, even without explicit pinning.

I think LNT should use taskset for the benchmarks whenever there is more
than one core. We usually taskset the scripts to core zero and the
benchmark to a specific core (an A53 or an A57) if the cores differ, or to
core 1 if they're all the same. Something like the sketch below.

--renato
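
A sketch of that scheme: the LNT harness pinned to core 0, and each
benchmark process pinned to another core via Kristof's RUNUNDER parameter.
The core numbers are placeholders; on a big.LITTLE part, pick the benchmark
core to match the cluster you care about (see lscpu or /proc/cpuinfo for
the topology).

    # Harness (builds, bookkeeping) on core 0; benchmark processes on
    # core 1 (or, e.g., an A57 core number on big.LITTLE).
    taskset -c 0 lnt runtest nt \
        --sandbox SANDBOX \
        --cc <path-to-my-clang> \
        --test-suite /data/repo/test-suite \
        --threads 1 \
        --make-param="RUNUNDER=taskset -c 1"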