Alexandros Lamprineas via llvm-dev
2018-May-04 09:25 UTC
[llvm-dev] RFC: Are auto-generated assertions a good practice?
Hello llvm-dev,

On a recent code review I was asked to auto-generate the assertion checks for my unit test. I wasn't aware that this was even possible. I am referring to the python `update` scripts under the `utils` directory. My first reaction was: wow! I found them very practical and useful; they save a significant amount of time when writing a regression test. So I gave them a try. The generated checks were satisfying enough, almost exactly what I wanted.

Then I got a bit sceptical about them. I am worried that auto-generating tests based on the compiler output can be quite dangerous. The tests will always pass regardless of whether the compiler emits right or wrong code, therefore you have to be certain that they impose the desired compiler behaviour. I guess the question here is how often we should be using those scripts.

Regards,
Alexandros
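For reference, the workflow under discussion looks roughly like this (the target, file name, and function are illustrative, and the exact script flags may differ; see the script's --help). Starting from a test that has only a RUN line:

    ; RUN: llc -mtriple=riscv32 < %s | FileCheck %s

    define i32 @add(i32 %a, i32 %b) {
      %1 = add i32 %a, %b
      ret i32 %1
    }

one would invoke the script along these lines:

    $ python utils/update_llc_test_checks.py --llc-binary build/bin/llc \
          test/CodeGen/RISCV/add.ll

and it rewrites the test in place with generated assertions, roughly:

    ; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
    ; RUN: llc -mtriple=riscv32 < %s | FileCheck %s

    define i32 @add(i32 %a, i32 %b) {
    ; CHECK-LABEL: add:
    ; CHECK:       # %bb.0:
    ; CHECK-NEXT:    add a0, a0, a1
    ; CHECK-NEXT:    ret
      %1 = add i32 %a, %b
      ret i32 %1
    }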
Alex Bradbury via llvm-dev
2018-May-04 09:55 UTC
[llvm-dev] RFC: Are auto-generated assertions a good practice?
On 4 May 2018 at 10:25, Alexandros Lamprineas via llvm-dev
<llvm-dev at lists.llvm.org> wrote:
> [...] The tests will always pass regardless of whether the compiler
> emits right or wrong code, therefore you have to be certain that they
> impose the desired compiler behaviour. I guess the question here is
> how often we should be using those scripts.

Like many test-related issues, it comes down to personal judgement. It is of course easy to create test/CodeGen/*/* tests that pass even when the compiler emits broken code, regardless of whether the test CHECK lines are generated by update_llc_test_checks.py or written by hand.

I find it very helpful to have auto-generated CHECK lines that pick up any codegen change, but this can of course be problematic for very large test cases that are likely to see churn due to scheduling or regalloc changes. Being able to regenerate the CHECK lines and view the diff is also incredibly helpful when rebasing or moving a patch between different branches.

My policy for test/CodeGen/RISCV is to use update_llc_test_checks.py wherever possible, except in cases where there are so many CHECK lines in the output that they obscure the property being tested, indicating that a more limited hand-crafted pattern would be superior.

Best,

Alex
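As a sketch of the regenerate-and-diff workflow mentioned above (paths are illustrative):

    $ python utils/update_llc_test_checks.py --llc-binary build/bin/llc \
          test/CodeGen/RISCV/*.ll
    $ git diff test/CodeGen/RISCV/

Any codegen change introduced by a patch (or picked up during a rebase) then shows up as an ordinary textual diff of the CHECK lines, which can be reviewed like any other change.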
David Blaikie via llvm-dev
2018-May-04 16:31 UTC
[llvm-dev] RFC: Are auto-generated assertions a good practice?
Yep - all about balance. The main risks are tests that overfit (golden files being the worst case - checking that the entire output matches /exactly/ - this is what FileCheck is intended to help avoid) and maintainability.

In the case of the autogenerated FileCheck lines I've seen so far, they seem to walk a fairly good line of checking exactly what's intended. Though I sometimes wonder if they're checking full combinatorial expansions that may not be required/desirable - always a tradeoff of just how black/white box the tests are.

On Fri, May 4, 2018 at 2:56 AM Alex Bradbury via llvm-dev
<llvm-dev at lists.llvm.org> wrote:
> [...] My policy for test/CodeGen/RISCV is to use update_llc_test_checks.py
> wherever possible, except in cases where there are so many CHECK lines in
> the output that they obscure the property being tested, indicating that a
> more limited hand-crafted pattern would be superior.
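To illustrate the tradeoff being described: where the autogenerated output pins down every instruction of a function, a hand-crafted test can check only the property of interest and leave the rest of the output free to change. A minimal sketch (function name, target, and expected assembly are assumptions, not taken from the thread):

    ; RUN: llc -mtriple=riscv32 < %s | FileCheck %s

    ; Only the property under test is pinned down: the multiply by a
    ; power of two should become a shift. Register allocation and
    ; scheduling details are free to change without breaking the test.
    ; CHECK-LABEL: times8:
    ; CHECK: slli a0, a0, 3
    ; CHECK-NOT: mul
    define i32 @times8(i32 %a) {
      %1 = mul i32 %a, 8
      ret i32 %1
    }

The autogenerated equivalent would instead emit CHECK-NEXT lines for the whole function body, which is stronger but more likely to churn.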