On Thu, 14 May 1998, Paul Gilbert wrote:
> One thing I've found missing in S/R is the ability to test error
> trapping. I have written a fairly extensive set of tests for my time
> series library, but I can only test that things work for cases where
> they are supposed to work. In several places my code is supposed to
> recognize error conditions, like invalid inputs, and make calls to
> stop() or warning(), but there is no automatic way to check these in
> my test suite.
>
> I really would like to be able to write a code-testing program which
> would let me specify that I am about to generate a "stop" condition
> with a given message, then do it, then continue, accepting that that
> was the right result.
>
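To make the idea concrete, something along the following lines might
do, assuming a version of S/R where try() is available (expect.stop()
and bad.input() below are invented names, used only for illustration):

    # Hypothetical helper: evaluate an expression and check that it
    # signals an error whose message matches `msg'.
    expect.stop <- function(expr, msg) {
        result <- try(expr)              # expr is only evaluated here
        if (!inherits(result, "try-error"))
            stop("expected an error, but none was signalled")
        if (length(grep(msg, as.character(result))) == 0)
            stop(paste("unexpected error message:", as.character(result)))
        invisible(TRUE)
    }

    # Example: a function that should reject invalid input.
    bad.input <- function(x) {
        if (x < 0) stop("x must be non-negative")
        sqrt(x)
    }

    expect.stop(bad.input(-1), "non-negative")  # succeeds quietly
    expect.stop(bad.input(4),  "non-negative")  # fails: no error raised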
I have recently had another need for this. If you are doing simulations
comparing your (wonderful) new method to someone else's (hopelessly
inferior) old method, and their S-PLUS code crashes about 5% of the
time, then it gets very tedious to run several hundred samples
semi-interactively. Trapping the error would be a lot more convenient
than trying to rewrite someone else's code so that it fails gracefully.
Of course in your own code you wouldn't dream of doing something like that
;-).
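For the simulation case, the same mechanism would let the loop continue
past the crashes, roughly like this (old.method() and make.sample() are
made-up stand-ins for the real code, and again try() is assumed to be
available):

    # Made-up stand-ins: a data generator and a method that fails
    # about 5% of the time.
    make.sample <- function() rnorm(100)
    old.method  <- function(x) if (runif(1) < 0.05) stop("crash") else mean(x)

    nrep    <- 500
    results <- rep(NA, nrep)
    failed  <- logical(nrep)
    for (i in 1:nrep) {
        fit <- try(old.method(make.sample()))
        if (inherits(fit, "try-error"))
            failed[i] <- TRUE          # record the crash and move on
        else
            results[i] <- fit
    }
    sum(failed)                        # number of runs that crashed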
-thomas