Kirill Müller
2014-Mar-26 08:58 UTC
[Rd] NOTE when detecting mismatch in output, and codes for NOTEs, WARNINGs and ERRORs
Dear list,

It is possible to store expected output for tests and examples. From the manual: "If tests has a subdirectory Examples containing a file pkg-Ex.Rout.save, this is compared to the output file for running the examples when the latter are checked." And, earlier (written in the context of test output, but apparently it applies here as well): "..., these two are compared, with differences being reported but not causing an error."

I think a NOTE would be appropriate here, so that the mismatch can be detected by looking only at the summary. Is there a reason for not flagging differences here?

The following is slightly related: some compilers and static code analysis tools assign a numeric code to each type of error or warning they check for, and print it. Would that be possible for the anomalies detected by R CMD check? The most significant digit could denote the "severity" of the NOTE, WARNING or ERROR. This would further simplify (semi-)automated analysis of the output of R CMD check, e.g. in the context of automated tests.

Best regards

Kirill
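[Editor's note: for concreteness, the layout the manual describes would look roughly like this. This is a sketch only; the package name mypkg and the test file names are illustrative, not from the thread.]

    mypkg/
      R/
      man/
      tests/
        mytest.R
        mytest.Rout.save        # expected output for tests/mytest.R
        Examples/
          mypkg-Ex.Rout.save    # expected output from running the examples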
Paul Gilbert
2014-Mar-26 17:46 UTC
[Rd] NOTE when detecting mismatch in output, and codes for NOTEs, WARNINGs and ERRORs
On 03/26/2014 04:58 AM, Kirill Müller wrote:
> Dear list
>
> It is possible to store expected output for tests and examples. From the
> manual: "If tests has a subdirectory Examples containing a file
> pkg-Ex.Rout.save, this is compared to the output file for running the
> examples when the latter are checked." And, earlier (written in the
> context of test output, but apparently applies here as well): "...,
> these two are compared, with differences being reported but not causing
> an error."
>
> I think a NOTE would be appropriate here, in order to be able to detect
> this by only looking at the summary. Is there a reason for not flagging
> differences here?

The problem is that differences occur too often, because this is a comparison of the characters in the output files (a diff). Any output that is affected by locale, node name, Internet downloads, time, host, or OS is likely to cause a difference. Also, if you print results to high precision, you will get differences on different systems, depending on the OS, 32- vs. 64-bit builds, numerical libraries, etc.

A better test strategy when it is numerical results that you want to compare is to do a numerical comparison and throw an error if the result is not good, something like:

    r <- myFunction()           # result from your function (placeholder name)
    rGood <- 1.234567890123456  # known good value (placeholder value)
    fuzz <- 1e-12               # tolerance
    if (fuzz < max(abs(r - rGood))) stop("Test xxx failed.")

It is more work to set up, but the maintenance will be less, especially when you consider that your tests need to run on different OSes on CRAN. You can also use try() and catch error codes if you want to check those.

Paul

> The following is slightly related: Some compilers and static code
> analysis tools assign a numeric code to each type of error or warning
> they check for, and print it. Would that be possible to do for the
> anomalies detected by R CMD check? The most significant digit could
> denote the "severity" of the NOTE, WARNING or ERROR. This would further
> simplify (semi-)automated analysis of the output of R CMD check, e.g. in
> the context of automated tests.
>
> Best regards
>
> Kirill
>
> ______________________________________________
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
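[Editor's note: a minimal sketch of the try() suggestion above; myFunction() and badInput are hypothetical placeholders, not from the thread.]

    ## Check that a call fails as expected; with silent = TRUE, try()
    ## returns an object of class "try-error" when the expression errors.
    res <- try(myFunction(badInput), silent = TRUE)
    if (!inherits(res, "try-error")) stop("Test yyy failed: expected an error.")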