Jeremy Morse via llvm-dev
2019-Jun-19 15:27 UTC
[llvm-dev] Running debuginfo-tests with Dexter
Hi llvm-dev@,

There's been some renewed interest in integration tests for debuginfo, checking that debuggers interpret LLVM's output in the way we expect. Paul suggested a while back [0] that our (Sony's) Dexter [1] tool could be a useful driver for running debuginfo tests -- I'd like to ask whether people think this would be a desirable course to take, and what the requirements for using Dexter in these integration tests would be.

Background: the plan with Dexter was to try to quantify the quality of the debugging experience a developer receives when debugging their program. That allows integration testing between LLVM and debuggers and, coupled with a test-suite, a measurement of "how good" a particular compiler is at preserving debug info. A full summary is best found in Greg's 5-minute lightning talk [2,3] on the topic. Dexter's significant parts are its abstraction of debugger APIs, a language for describing expected debug behaviour, and scoring of "how bad" divergences from the expected debug behaviour are.

Some examples of Dexter tests can be found at [4], where we wrote various tests to measure how much debuginfo was destroyed by different LLVM passes.

As far as I understand it, the existing debuginfo-tests [5] contain debugger commands that are fed into a debugger, and the debugger output is FileCheck'd. This works directly for gdb, and there's a thin layer (llgdb.py) for driving lldb, but the Windows-based cdb has a very different input language and has its own set of tests. An obvious win would be unifying these, which is something Dexter could be adapted to do. I'm sure most will agree that it is better to declare the expected behaviour in some language and have other scripting compare it with the real behaviour than to put highly debugger-coupled interrogation commands and output examination in the tests.

We can easily specialise Dexter to consider any divergence from expected behaviour to be an error, giving us a pass/fail test tool.
Some existing tests examine types, which Dexter doesn't currently do (but we're working on it). What other objectives would there be for a debugger integration tool? There was mention in [0] of tests for Microsoft-specific extensions (I assume extended debugging facilities); knowing the scope of the extra information involved would help us design around it.

Note that the current Dexter codebase is going to be significantly remangled: we're trying to decouple the expected-behaviour language from the debugger abstraction's summary of how the program behaved.

[0] https://reviews.llvm.org/D54187#1290282
[1] https://github.com/SNSystems/dexter
[2] https://www.youtube.com/watch?v=XRT_GmpGjXE
[3] https://llvm.org/devmtg/2018-04/slides/Bedwell-Measuring_the_User_Debugging_Experience.pdf
[4] https://github.com/jmorse/dexter/tree/f46f13f778484ed5c6f7bf33b8fc2d4837ff7265/tests/nostdlib/llvm_passes
[5] https://github.com/llvm/llvm-project/tree/master/debuginfo-tests

--
Thanks,
Jeremy
Reid Kleckner via llvm-dev
2019-Jun-19 18:05 UTC
[llvm-dev] Running debuginfo-tests with Dexter
I'd be in favor of using Dexter to write better debug info quality integration tests. As far as goals and requirements go, you've already identified the ability to drive various debuggers (gdb, lldb, VS, cdb). Regarding testing Microsoft extensions, we've always had ways to have platform-specific tests, and I think we can easily extend them.

My initial thought is to set this up as a dexter/ subdirectory of debuginfo-tests, and add a lit.local.cfg that enables the tests if Dexter is available. Inside there, we can have generic tests, and then Windows, Mac, Linux, Posix, etc., similar to what we do for asan tests today.

You might want to set things up in CMake to support running the tests in multiple configurations, perhaps one per debugger, so you could run the tests with gdb and lldb if you have both installed. The asan test suite does this for static+dynamic linking.
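Reid's lit.local.cfg suggestion could be sketched as below. This is a sketch under assumptions: the `dexter` feature name and the way CMake would publish it via `config.available_features` are hypothetical, not an existing part of the debuginfo-tests build:

```python
# debuginfo-tests/dexter/lit.local.cfg -- sketch only.
# Skip every test in this directory unless a Dexter executable was found
# at configure time. 'dexter' is a hypothetical lit feature that the
# CMake configuration would append to config.available_features.
if 'dexter' not in config.available_features:
    config.unsupported = True
```

Per-debugger configurations could then be modelled the same way the asan suite handles static vs. dynamic linking: one lit configuration per debugger, each gated on its own feature (e.g. a hypothetical `dexter-gdb` or `dexter-lldb`).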
Adrian Prantl via llvm-dev
2019-Jun-24 16:31 UTC
[llvm-dev] Running debuginfo-tests with Dexter
I think generally this is a good idea, with one caveat. Currently Dexter is a separate MIT-licensed tool. Would it be possible for Sony to contribute Dexter itself to the LLVM project? Perhaps even inside the debuginfo-tests repository? For many users, this would remove the friction of using it and contributing bugfixes back.

-- adrian
Jeremy Morse via llvm-dev
2019-Jun-25 17:07 UTC
[llvm-dev] Running debuginfo-tests with Dexter
Hi Adrian,

On Mon, Jun 24, 2019 at 5:31 PM Adrian Prantl <aprantl at apple.com> wrote:
> Would it be possible for Sony to contribute Dexter itself to the LLVM project? Perhaps even inside the debuginfo-tests repository? For many users, this would remove friction for using it and contributing bugfixes back.

Contributing Dexter would be our preferred solution -- making it easy to write and run tests for debugging is good for everyone. I'll get the process rolling on our side; we'll continue code-polishing in the meantime, hopefully with a (debuginfo-tests-specific) proof-of-concept sometime soon.

--
Thanks,
Jeremy