Stefan Gränitz via llvm-dev
2017-Jul-27 15:54 UTC
[llvm-dev] Test Error Paths for Expected & ErrorOr
Yes, definitely: applied to a small piece of code like the GlobPattern::create() example, it would mostly indicate missing unit tests or insufficient test data.

In contrast to unit tests, however, it can also verify correct handling of errors passed up through function call hierarchies in more complex scenarios. For this I should point to the other example in the code, where it's applied to llvm::object::createBinary():
https://github.com/weliveindetail/ForceAllErrors-in-LLVM/blob/master/test/TestLLVMObject.h#L13

Here it detects and runs 44 different control paths, which can hardly all be covered by unit tests, because they don't depend on the input to createBinary() but rather on the environment the test runs in.

On 27.07.17 at 16:46, David Blaikie wrote:
> I /kind/ of like the idea - but it almost feels like this would be a tool
> for finding out that test coverage is insufficient, then adding tests that
> actually exercise the bad input, etc. (This should be equally discoverable
> by code coverage, probably? Maybe not if multiple error paths all collapse
> together.)
>
> For instance, with your example, especially once there's an identified bug
> that helps motivate, would it not be better to add a test that passes a
> fileName input that fails GlobPattern::create?
>
> On Thu, Jul 27, 2017 at 5:10 AM Stefan Gränitz via llvm-dev
> <llvm-dev at lists.llvm.org> wrote:
>
>> Hello, this is a call for feedback: opinions, improvements, testers.
>>
>> I have been using the support classes Expected<T> and ErrorOr<T> quite
>> often recently and I like the concept a lot! Thanks Lang, btw!
>> However, from time to time I found issues in the execution paths of my
>> error cases and got annoyed by their naturally low test coverage.
>>
>> So I started sketching a test that runs all error paths for a given
>> piece of code to detect these issues. I just pushed it to GitHub and
>> added a little readme:
>> https://github.com/weliveindetail/ForceAllErrors-in-LLVM
>>
>> Are there people on the list facing the same issue?
>> How do you test your error paths?
>> Could this be of use for you if it was in a reusable state?
>> Is there something similar already around?
>> Anyone seeing bugs or improvements?
>> Could it maybe even increase coverage in the LLVM test suite some day?
>>
>> Thanks for all kinds of feedback!
>> Cheers, Stefan

--
https://weliveindetail.github.io/blog/
https://cryptup.org/pub/stefan.graenitz at gmail.com
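To make the kind of direct unit test David suggests concrete, a gtest-style sketch of exercising both paths of GlobPattern::create() might look roughly like the code below. GlobPattern::create() returning Expected<GlobPattern> matches the llvm/Support/GlobPattern.h interface; the malformed pattern "[broken" and the expectation that it fails are assumptions for illustration only, not taken from the thread.

    #include "llvm/Support/Error.h"
    #include "llvm/Support/GlobPattern.h"
    #include "gtest/gtest.h"
    #include <string>

    using namespace llvm;

    // Success path: the kind of coverage a typical unit test already has.
    TEST(GlobPatternErrorTest, MatchesValidPattern) {
      Expected<GlobPattern> Pat = GlobPattern::create("*.cpp");
      ASSERT_TRUE(static_cast<bool>(Pat)) << toString(Pat.takeError());
      EXPECT_TRUE(Pat->match("foo.cpp"));
    }

    // Failure path: pass an input that makes create() fail and check that
    // the resulting Error is produced and consumed. "[broken" is assumed to
    // be a malformed glob here; any pattern that create() rejects would do.
    TEST(GlobPatternErrorTest, ReportsInvalidPattern) {
      Expected<GlobPattern> Pat = GlobPattern::create("[broken");
      ASSERT_FALSE(static_cast<bool>(Pat));
      std::string Msg = toString(Pat.takeError()); // consume the Error
      EXPECT_FALSE(Msg.empty());
    }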
David Blaikie via llvm-dev
2017-Jul-27 15:56 UTC
[llvm-dev] Test Error Paths for Expected & ErrorOr
On Thu, Jul 27, 2017 at 8:54 AM Stefan Gränitz <stefan.graenitz at gmail.com> wrote:

> Here it detects and runs 44 different control paths, which can hardly all
> be covered by unit tests, because they don't depend on the input to
> createBinary() but rather on the environment the test runs in.

Yep, testing OS-level environmental failures would be great for this - I wonder if there's a good way to distinguish between them (so that this only hits those cases, but doesn't unduly 'cover' other cases that should be targeted by tests, etc.). Essentially something more opt-in, or some other handshake. (Perhaps a certain kind of Error that represents "this failure is due to the environment, not the caller's arguments"? Not sure.)

Hopefully Lang (author of Error/Expected) chimes in - I'd be curious to hear his thoughts on this stuff too.

Thanks again for developing it/bringing it up here! :)
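One way such an opt-in handshake could look is sketched below. EnvironmentalError is a hypothetical class invented for illustration (it is not an existing LLVM type); it is built on the real ErrorInfo<T> CRTP mechanism from llvm/Support/Error.h, so a force-all-errors harness could filter on its type while leaving argument-validation errors to ordinary unit tests.

    #include "llvm/Support/Error.h"
    #include "llvm/Support/raw_ostream.h"
    #include <string>
    #include <system_error>
    #include <utility>

    using namespace llvm;

    // Hypothetical marker type (not an existing LLVM class): "this failure
    // is due to the environment, not the caller's arguments".
    class EnvironmentalError : public ErrorInfo<EnvironmentalError> {
    public:
      static char ID;

      EnvironmentalError(std::string Desc) : Desc(std::move(Desc)) {}

      void log(raw_ostream &OS) const override { OS << Desc; }

      std::error_code convertToErrorCode() const override {
        return inconvertibleErrorCode();
      }

    private:
      std::string Desc;
    };

    char EnvironmentalError::ID = 0;

    // A producer would tag environment-caused failures explicitly.
    Error mapInputFile(const char *Path) {
      // Real mmap/open elided; pretend the OS call failed:
      return make_error<EnvironmentalError>(std::string("mmap failed for ") + Path);
    }

    // A harness (or test) can then select only these paths.
    void classifyFailure(Error Err) {
      handleAllErrors(
          std::move(Err),
          [](const EnvironmentalError &) {
            // Environment-induced: a candidate for forced-error replay.
          },
          [](const ErrorInfoBase &) {
            // Input/argument-induced: better covered by a normal unit test.
          });
    }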
Lang Hames via llvm-dev
2017-Jul-28 21:36 UTC
[llvm-dev] Test Error Paths for Expected & ErrorOr
Hi Stefan, David,

This is very interesting stuff - it adds a dimension of error security that Error/Expected can't provide on their own. I think it would be interesting to try to build a tool around this.

Did you identify many cases where "real work" (in your example, the nullptr dereference) was being done in an error branch? My suspicion is that that should be rare, but your tool would be great for exposing logic errors and resource leaks if run with the sanitizers turned on.

In an ideal world we'd go even further and build a clang/LLDB-based tool that can identify what kinds of errors a function can produce, then inject instances of those: that would allow us to test the actual error-handling logic too, not just the generic surrounding logic.

Cheers,
Lang.

On Thu, Jul 27, 2017 at 8:56 AM, David Blaikie <dblaikie at gmail.com> wrote:

> Yep, testing OS-level environmental failures would be great for this - I
> wonder if there's a good way to distinguish between them (so that this
> only hits those cases, but doesn't unduly 'cover' other cases that should
> be targeted by tests, etc.). Essentially something more opt-in, or some
> other handshake. (Perhaps a certain kind of Error that represents "this
> failure is due to the environment, not the caller's arguments"? Not sure.)
>
> Hopefully Lang (author of Error/Expected) chimes in - I'd be curious to
> hear his thoughts on this stuff too.
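A very reduced sketch of the injection idea follows. The hook and names (InjectOpenFailure, openInput, processInput) are invented here purely for illustration; Lang's point is that a clang/LLDB-based tool could enumerate and inject the error kinds automatically rather than relying on a hand-written hook like this.

    #include "llvm/Support/Error.h"
    #include <functional>
    #include <utility>

    using namespace llvm;

    // Hypothetical injection hook: when set, openInput() fails with the
    // injected error instead of doing real work.
    static std::function<Error()> InjectOpenFailure;

    static Expected<int> openInput(const char *Path) {
      if (InjectOpenFailure)
        return InjectOpenFailure();
      // Real implementation elided; pretend we opened Path successfully.
      (void)Path;
      return 42;
    }

    // Caller under test: the error branch is the part we want covered.
    static Error processInput(const char *Path) {
      Expected<int> FD = openInput(Path);
      if (!FD)
        return FD.takeError(); // forward the Error; no "real work" here
      // ... use *FD, release any resources on all paths ...
      return Error::success();
    }

    // A test could inject each error kind the callee may produce and check
    // that processInput() neither leaks nor does work in the error branch:
    //
    //   InjectOpenFailure = [] {
    //     return make_error<StringError>("injected failure",
    //                                    inconvertibleErrorCode());
    //   };
    //   Error Err = processInput("dummy");
    //   // expect the injected failure to propagate, then consume it
    //   consumeError(std::move(Err));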