Sean Silva via llvm-dev
2019-Oct-11 15:48 UTC
[llvm-dev] [cfe-dev] RFC: End-to-end testing
On Thu, Oct 10, 2019 at 2:21 PM David Greene via cfe-dev
<cfe-dev at lists.llvm.org> wrote:

> Florian Hahn via llvm-dev <llvm-dev at lists.llvm.org> writes:
>
> >> - Performance varies from implementation to implementation. It is
> >> difficult to keep tests up-to-date for all possible targets and
> >> subtargets.
> >
> > Could you expand a bit more on what you mean here? Are you concerned
> > about having to run the performance tests on different kinds of
> > hardware? In what way do the existing benchmarks require keeping
> > up-to-date?
>
> We have to support many different systems and those systems are always
> changing (new processors, new BIOS, new OS, etc.). Performance can vary
> widely day to day from factors completely outside the compiler's
> control. As the performance changes you have to keep updating the tests
> to expect the new performance numbers. Relying on performance
> measurements to ensure something like vectorization is happening just
> isn't reliable in our experience.

Could you compare performance with vectorization turned on and off?

> > With tests checking ASM, wouldn't we end up with lots of checks for
> > various targets/subtargets that we need to keep up to date?
>
> Yes, that's true. But the only thing that changes the asm generated is
> the compiler.
>
> > Just considering AArch64 as an example, people might want to check the
> > ASM for different architecture versions and different vector
> > extensions, and different vendors might want to make sure that the ASM
> > on their specific cores does not regress.
>
> Absolutely. We do a lot of that sort of thing downstream.
>
> >> - Partially as a result, but also for other reasons, performance tests
> >> tend to be complicated, either in code size or in the numerous code
> >> paths tested. This makes such tests hard to debug when there is a
> >> regression.
> >
> > I am not sure they have to be. Have you considered adding the small
> > test functions/loops as micro-benchmarks using the existing google
> > benchmark infrastructure in test-suite?
>
> We have tried nightly performance runs using LNT/test-suite and have
> found it to be very unreliable, especially the microbenchmarks.
>
> > I think that might be able to address the points here relatively
> > adequately. The separate micro-benchmarks would be relatively small
> > and we should be able to track down regressions in a similar fashion
> > as if it were a stand-alone file we compile and then analyze the
> > ASM. Plus, we can easily run it and verify the performance on actual
> > hardware.
>
> A few of my colleagues really struggled to get consistent results out of
> LNT. They asked for help and discussed with a few upstream folks, but
> in the end were not able to get something reliable working. I've talked
> to a couple of other people off-list and they've had similar
> experiences. It would be great if we had a reliable performance suite.
> Please tell us how to get it working! :)
>
> But even then, I still maintain there is a place for the kind of
> end-to-end testing I describe. Performance testing would complement it.
> Neither is a replacement for the other.
>
> >> - Performance tests don't focus on the why/how of vectorization. They
> >> just check, "did it run fast enough?" Maybe the test ran fast enough
> >> for some other reason but we still lost desired vectorization and
> >> could have run even faster.
> >
> > If you would add a new micro-benchmark, you could check that it
> > produces the desired result when adding it. The runtime tracking
> > should cover cases where we lost optimizations. I guess if the
> > benchmarks are too big, additional optimizations in one part could
> > hide lost optimizations somewhere else. But I would assume this to be
> > relatively unlikely, as long as the benchmarks are isolated.
>
> Even then, I have seen small performance tests vary widely in performance
> due to system issues (see above). Again, there is a place for them, but
> they are not sufficient.
>
> > Also, checking the assembly for vector code does not guarantee that
> > the vector code will actually be executed. For example, by just
> > checking the assembly for certain vector instructions, we might miss
> > that we regressed performance because we messed up the runtime checks
> > guarding the vector loop.
>
> Oh absolutely. Presumably such checks would be included in the test or
> would be checked by a different test. As always, tests have to be
> constructed intelligently. :)
>
> -David
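For concreteness, here is a minimal sketch of the kind of end-to-end ASM
check being discussed, written in the lit/FileCheck style used elsewhere in
LLVM. It is only an illustration under assumed conditions (clang at -O2
targeting x86-64 with the default SSE2 baseline); it is not a test from the
actual proposal, and the instruction checked for would vary by target.

    // e2e-vectorize.c -- hypothetical example, not an existing test.
    // Drive the C source all the way to assembly and verify that the
    // loop was vectorized by looking for a packed floating-point add.
    //
    // RUN: clang -O2 --target=x86_64-unknown-linux-gnu -S %s -o - | FileCheck %s

    void add(float *restrict a, const float *restrict b,
             const float *restrict c, int n) {
      for (int i = 0; i < n; ++i)
        a[i] = b[i] + c[i];
    }

    // CHECK-LABEL: add:
    // CHECK: addps

As the thread notes, the output of such a test changes only when the
compiler (or the tools it drives) changes, unlike a timing-based check, but
it says nothing about whether the vector loop is actually executed at run
time.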
David Greene via llvm-dev
2019-Oct-11 17:02 UTC
[llvm-dev] [cfe-dev] RFC: End-to-end testing
Sean Silva via cfe-dev <cfe-dev at lists.llvm.org> writes:

>> We have to support many different systems and those systems are always
>> changing (new processors, new BIOS, new OS, etc.). Performance can vary
>> widely day to day from factors completely outside the compiler's
>> control. As the performance changes you have to keep updating the tests
>> to expect the new performance numbers. Relying on performance
>> measurements to ensure something like vectorization is happening just
>> isn't reliable in our experience.
>
> Could you compare performance with vectorization turned on and off?

That might catch more things, but now you're running tests twice and it
still won't catch some cases.

                -David
Renato Golin via llvm-dev
2019-Oct-14 12:44 UTC
[llvm-dev] [cfe-dev] RFC: End-to-end testing
On Fri, 11 Oct 2019 at 18:02, David Greene via llvm-dev
<llvm-dev at lists.llvm.org> wrote:

> >> We have to support many different systems and those systems are always
> >> changing (new processors, new BIOS, new OS, etc.). Performance can vary
> >> widely day to day from factors completely outside the compiler's
> >> control. As the performance changes you have to keep updating the tests
> >> to expect the new performance numbers. Relying on performance
> >> measurements to ensure something like vectorization is happening just
> >> isn't reliable in our experience.
> >
> > Could you compare performance with vectorization turned on and off?
>
> That might catch more things but now you're running tests twice and it
> still won't catch some cases.

Precisely. In my experience, benchmark numbers need to be reset on most (if
not all) system changes, which is why we keep our benchmark machines *very*
stable (i.e. outdated). Testing multiple configurations needs multiple
baselines, combinatorial explosion and all that.

For clarity, I didn't mean "make e2e tests *only* run tests and check for
performance"; that would be a *very* poor substitute for the tests you
proposed.

The idea of having extra checks in the test-suite circulated many years ago
when a similar proposal was put forward, but IIRC the piece-wise LIT tests
we already have were deemed good enough for the cases we wanted to cover.

But in the test-suite we have more than just the compiler. We have
libraries (run-time, language), tools (linkers, assemblers) and the
environment. Those can affect the quality of the code (as you mention
earlier). We need to test that, but we can't do a good job of it on the LIT
side (how do you control libraries and other tools? it can get crazy ugly).

The sanitizer tests are a good example of how weird it gets: executing the
code, grepping for output and relying on runtime system libraries to "get
it right". So, in a way, we could just stop the test-suite discussion and
do what the sanitizers do. If people are ok with this, don't let me stop
you. :)

But if we have some consensus on doing a clean job, then I would actually
like to have that kind of intermediary check (diagnostics, warnings, etc.)
on most test-suite tests, which would cover at least the main vectorisation
issues. Later, we could add more analysis tools if we want.

It would be as simple as adding CHECK lines on the execution of the
compilation process (in CMake? Make? a wrapper?) and keeping the check
files with the tests, one per file.

I think we're on the same page regarding almost everything, but perhaps I
haven't been clear enough on the main point, which I think is pretty
simple. :)

--renato
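For concreteness, a sketch of the kind of intermediary check described
here: CHECK lines applied to the compiler's own output during the
test-suite build, using optimization remarks. -Rpass=loop-vectorize is an
existing Clang option, but the file, the exact remark wording and the way
the check would be wired into the build (wrapper, CMake or Make) are
assumptions for illustration only.

    // saxpy.c -- hypothetical test-suite source with a per-file check on
    // the diagnostics emitted while compiling it. A compile wrapper (or a
    // lit-style RUN line, as written here) pipes the compiler's stderr to
    // FileCheck:
    //
    // RUN: clang -O3 -Rpass=loop-vectorize -c %s -o /dev/null 2>&1 | FileCheck %s

    void saxpy(float *x, float *y, float a, int n) {
      for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
    }

    // x and y may alias, so the vectorizer emits runtime checks; the
    // remark is still produced when the loop is vectorized. The exact
    // wording may differ between Clang versions.
    // CHECK: remark: vectorized loop

Keeping the check file next to the source, as suggested above, would make a
lost vectorization show up as a test failure rather than as a performance
regression to be triaged later.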