Hi all,

There were some discussions about target-specific tests breaking on other targets, so I ran a simple experiment: I compiled LLVM with a single target enabled and ran all the tests. Here are some results for random targets:

ARM:     Unexpected Failures: 334
PowerPC: Unexpected Failures: 340
Mips:    Unexpected Failures: 334
X86:     Unexpected Failures: 0

Most of them are in LLVM-Unit, ExecutionEngine, CodeGen, DebugInfo and Clang, and most are exactly the same for all targets. Most of them are just bad tests, i.e. target-specific tests in generic directories, or target-independent tests expecting target-specific behaviour. It's safe to assume that each non-x86 target will have a similar failure rate when compiled alone.

While it's important to test LLVM on the specific targets (compile and test on them), it's also important (and sometimes a lot easier and faster) to test cross-compilation. This would also guarantee that cross-compilation is not just possible, but thoroughly tested across all targets.

Call for action: it would be good to have some buildbots doing cross-compilation for each of our targets, but those interested in their tests passing (Clang, debug info, MCJIT folks, etc.) should also probably run them locally on their machines and try to move their tests to whichever category makes the most sense.

cheers,
--renato
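The single-target experiment above can be reproduced with something like the following. This is a dry-run sketch (the leading "echo"s print the commands instead of running them; remove them to execute), and the source path, generator options and parallelism are illustrative assumptions, not from the original mail:

```shell
# Configure LLVM with only the ARM backend enabled, then run the full
# test suite against it. Repeat with PowerPC, Mips, X86, etc. to compare
# failure counts. Dry run: remove the "echo"s to actually build.
echo cmake -DLLVM_TARGETS_TO_BUILD=ARM -DCMAKE_BUILD_TYPE=Release /path/to/llvm
echo make -j4 check-all
```

Any test that only passes when its target happens to be compiled in will show up as an unexpected failure in such a build.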
Hi Renato,

> Would be good to have some buildbots doing cross-compilation for each of our
> targets,

That would certainly be good.

> but also those interested on their tests passing (Clang, Debug
> info, MCJIT folks, etc) should also probably run them locally on their
> machines and try to move their tests to the best category they think it
> makes sense.

I'm assuming you mean run any test you're creating on a selection of triples to make sure it's OK here, rather than rebuilding umpteen times with just one target enabled. It's a little ad hoc, and edging towards onerous, but it would improve matters.

I wonder if some option to llvm-lit that runs tests multiple times over different targets could be useful? It could give a better sense of safety over existing tests as well as new ones during development.

Cheers.

Tim.
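The manual version of what a multi-triple llvm-lit mode would automate might look like the sketch below, again as a dry run (remove the "echo" to execute); the triple list and test file name are examples only, not part of the thread:

```shell
# Feed one IR test through llc for several target triples by hand,
# approximating a "run this test over N targets" lit option.
# Dry run: remove "echo" to actually invoke llc.
for triple in arm-linux-gnueabi powerpc64-linux-gnu mips-linux-gnu x86_64-linux-gnu; do
  echo llc -mtriple="$triple" some-test.ll -o /dev/null
done
```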
On 25 January 2013 11:18, Tim Northover <t.p.northover at gmail.com> wrote:

> I'm assuming you mean run any test you're creating on a selection of
> triples to make sure it's OK here, rather than rebuilding umpteen
> times with just one target enabled.

No, I mean a one-off sprint of cleaning up the silly bugs, so that cross buildbots would actually be useful. I don't expect anyone to do any more testing than on "at least one major target", as specified by the developer policy.

From then on, buildbots will do what they're best at: spotting the failures for you. ;)

cheers,
--renato