Michael Kruse via llvm-dev
2020-Jun-24 16:50 UTC
[llvm-dev] [RFC] Compiled regression tests.
On Wed, Jun 24, 2020 at 11:19 AM Mehdi AMINI <joker.eph at gmail.com> wrote:
>
> Hi,
>
> On Tue, Jun 23, 2020 at 6:34 PM Michael Kruse via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>>
>> Hello LLVM community,
>>
>> For testing IR passes, LLVM currently has two kinds of tests:
>> 1. regression tests (in llvm/test): .ll files invoking opt and
>> matching its text output using FileCheck.
>> 2. unittests (in llvm/unittests): Google tests containing the IR as a
>> string, constructing a pass pipeline, and inspecting the output using
>> code.
>>
>> I propose to add an additional kind of test, which I call "compiled
>> regression test", combining the advantages of the two.
>
> You expand below on the mechanism you'd like to implement, but I am a bit puzzled about the motivation right now?

See https://reviews.llvm.org/D82426 and
http://lists.llvm.org/pipermail/llvm-dev/2020-June/142706.html for more motivation.

> I'm failing to see what kind of IR-level test (unittests are relevant for data-structures and non-IR internals IMO) we would implement this way that we can't just implement with lit/FileCheck?

My argument is not that those tests cannot be written using FileCheck, but that many of them (not all) check more than is relevant, may contain truisms, and are difficult to reverse-engineer and to update when something changes.

Michael
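[For readers unfamiliar with the first kind of test mentioned above, a minimal FileCheck-based regression test looks roughly like the following. This is an illustrative sketch, not a test from the LLVM tree; the function and IR are made up, and the exact output lines instcombine produces may differ.]

```llvm
; RUN: opt -passes=instcombine -S < %s | FileCheck %s

; Hypothetical example: instcombine should fold the double negation
; -(-a) back to a. The CHECK lines match opt's textual output.
define i32 @double_neg(i32 %a) {
; CHECK-LABEL: @double_neg(
; CHECK-NEXT:    ret i32 %a
  %neg1 = sub i32 0, %a
  %neg2 = sub i32 0, %neg1
  ret i32 %neg2
}
```

The test is driven by lit: the `RUN:` line invokes opt on the file itself and pipes the printed IR into FileCheck, which verifies it against the `CHECK:` directives.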
Chris Lattner via llvm-dev
2020-Jun-30 20:58 UTC
[llvm-dev] [RFC] Compiled regression tests.
On Jun 24, 2020, at 9:50 AM, Michael Kruse via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>>>
>>> I propose to add an additional kind of test, which I call "compiled
>>> regression test", combining the advantages of the two.
>>
>> You expand below on the mechanism you'd like to implement, but I am a bit puzzled about the motivation right now?
>
> See https://reviews.llvm.org/D82426 and
> http://lists.llvm.org/pipermail/llvm-dev/2020-June/142706.html for
> more motivation.

Hi Michael,

I’m sorry I’m late to this thread, but I would really rather not go this direction. Unit tests in general (and this sort of extension to the idea) come at a very high cost to testing flows, particularly with large-scale builds.

One of the major and important pieces of the LLVM design is how its testing infrastructure works. The choice to use a small number of tools (llc, opt, etc.) is important for multiple reasons:

1) Link time of executables is a significant problem, particularly in large-scale builds.
2) It encourages investing in testing tools (see, e.g., the recent improvements to FileCheck).
3) It reduces/factors the number of users of the API surface area, making it easier to do large-scale refactoring etc.
4) It encourages the development of textual interfaces to libraries, which aids understandability and helps reinforce stronger interfaces (llvm-mc is one example of this; LLVM IR text syntax is another).
5) Depending on the details, this can make the build dependence graph more serialized.

Unit tests are very widely used across the industry, and it is certainly true that they are fully general and more flexible. This makes them attractive, but it is a trap. I’d really rather we don’t go down this route, and maintain the approach of only using unit tests for very low-level things like APIs in ADT etc.

-Chris
Michael Kruse via llvm-dev
2020-Jul-01 05:42 UTC
[llvm-dev] [RFC] Compiled regression tests.
On Tue, Jun 30, 2020 at 3:58 PM Chris Lattner <clattner at nondot.org> wrote:
> One of the major and important pieces of the LLVM design is how its testing infrastructure works. The choice to use a small number of tools (llc, opt, etc) is important for multiple reasons:
>
> 1) Link time of executables is a significant problem, particularly in large scale builds.

You can use dynamic linking. Unfortunately, an LLVM dynamic library is not (yet?) supported on Windows, so we need a static-linking fallback. If this is the only issue, I'd work on a solution (at least with gtest) that works on Windows as well.

> 2) This encourages investing in testing tools (see, e.g. the recent improvements to FileCheck etc)

Google Test is also a testing tool worth investing in, for example with more expressive ASSERT macros as in the RFC. I think it even lowers the bar for new tooling: a helper can start out being used within a single test, when it does not seem worth adding a new executable or FileCheck option.

> 3) It reduces/factors the number of users of API surface area, making it easier to do large scale refactoring etc.

FileCheck makes string output (including llvm::dbgs()) part of the interface, which becomes not only harder to change but also harder to extend (new output lines/tags must not accidentally match existing CHECK lines). In contrast, refactoring compiled tests with tools such as clang-refactor/clang-format is no different from refactoring the source of LLVM itself.

In the interest of downstream users, having a well-checked API surface should be more important.

> 4) It encourages the development of textual interfaces to libraries, which aids with understandability and helps reinforce stronger interfaces (llvm-mc is one example of this, LLVM IR text syntax is another).

While I don't disagree with good textual presentation, I think it should be designed for human understandability and consistency, not for machine processing.

> 5) Depending on the details, this can make the build dependence graph more serialized.

I don't see why this would be the case.

> Unit tests are very widely used across the industry, and it is certainly true that they are fully general and more flexible. This makes them attractive, but it is a trap. I’d really rather we don’t go down this route, and maintain the approach of only using unit tests for very low level things like apis in ADT etc.

Note that we already have unittests for non-low-level APIs such as passes (VPlan, LICM, Unrolling, ...).

Can you elaborate on what the trap is?

Michael
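[For context, the unittest style under discussion — embedding the IR as a string and inspecting the result with code rather than textual CHECK lines — looks roughly like the sketch below. This is an illustrative example modeled on existing pass unittests in llvm/unittests, not code from the RFC; the test and function names are made up. It requires linking against LLVM and Google Test.]

```cpp
// Hypothetical sketch of a unittest in the llvm/unittests style.
#include "gtest/gtest.h"
#include "llvm/AsmParser/Parser.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/SourceMgr.h"

using namespace llvm;

TEST(CompiledRegressionSketch, ParseAndInspect) {
  LLVMContext Ctx;
  SMDiagnostic Err;
  // The IR under test is embedded as a string, as in existing
  // pass unittests (VPlan, LICM, Unrolling, ...).
  std::unique_ptr<Module> M = parseAssemblyString(
      "define i32 @f(i32 %a) {\n"
      "  ret i32 %a\n"
      "}\n",
      Err, Ctx);
  ASSERT_TRUE(M);

  // Properties are checked in code via the C++ API, instead of
  // matching opt's printed output with CHECK lines.
  Function *F = M->getFunction("f");
  ASSERT_NE(F, nullptr);
  EXPECT_EQ(F->arg_size(), 1u);
}
```

A pass pipeline would then be constructed and run on `M` before the assertions; the point of contention in this thread is whether such code-level checks are clearer or more maintainable than the equivalent CHECK lines.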