On Apr 21, 2017 12:01 PM, "JRG" <loesljrg at accucom.net> wrote:

> A good part of the problem in the specific case you initially presented
> is that some non-integer numbers have an exact representation in the
> binary floating point arithmetic being used. Basically, if the
> fractional part is of the form 1/2^k for some integer k > 0, there is
> an exact representation in the binary floating point scheme.
>
> > options(digits=20)
> > (100*23)/40
> [1] 57.5
> > 100*(23/40)
> [1] 57.499999999999992895
>
> So the two operations give a slightly different result because the
> fractional part of the division of 100*23 by 40 is 0.5. The first
> operation gives, exactly, 57.5, while the second does not, because
> 23/40 has no exact representation.

Thanks for answering. This case seemed fun because it was not a contrived
example. We found this one by comparing masses of report tables from 2
separate programs. It happened 1 time in about 10,000 calculations.

Guidelines for R coders, though, would be welcome. So far, all I am sure
of is:

1. Don't use == for floating point numbers.

Your 1/2^k point helps me understand why == does seem to work correctly
sometimes. I wonder if we should be suspicious of >= as well. Imagine the
horror if a = w/x > b = y/z in exact fractions, but digitally a < b.
Blech. Can that happen?

But change the example's divisor from 40 to 30 [the fractional part from
1/2 to 2/3]:

> (100*23)/30
[1] 76.666666666666671404
> 100*(23/30)
[1] 76.666666666666671404

Now the two operations give the same answer to the full precision
available. So it isn't generally true in R that "(100*x)/y is more
accurate than 100*(x/y), if x > y." The good news here is that round()
gives the same answer in both cases. :)

I am looking for a case where the first method is less accurate than the
second. I expect that multiplying integers before dividing is never less
accurate; sometimes it is more accurate. Following your 1/2^k insight,
you can see why multiplying first is helpful in some cases. The question
is whether the situation can get worse.

But Bert is right. I have to read more books. I studied Golub and van
Loan and came away with a healthy fear of matrix inversion. But when you
look at user-contributed regression packages, what do you find? Matrix
inversion and lots of X'X.

Paul Johnson
University of Kansas

The key (in your example) is a property of the way that floating point
arithmetic is implemented.

---JRG

On 04/21/2017 08:19 AM, Paul Johnson wrote:
> We all agree it is a problem with digital computing, not unique to R. I
> don't think that is the right place to stop.
>
> What to do? The round example arose in a real funded project where 2 R
> programs differed in results, and the cause was that one person got 57
> and another got 58. The explanation was found, but it's less clear how
> to prevent similar problems in future. Guidelines, anyone?
>
> So far, these are my guidelines.
>
> 1. Insert L on numbers to signal that you really mean INTEGER. In R,
> forgetting the L in a single number will usually promote the whole
> calculation to floats.
> 2. S3 variables are called 'numeric' if they are integer or double
> storage. So avoid "is.numeric" and prefer "is.double".
> 3. == is a total fail on floats.
> 4. Run print with digits=20 so we can see the less rounded number.
> Perhaps start sessions with "options(digits=20)".
> 5. all.equal does what it promises, but one must be cautious.
>
> Are there math habits we should follow?
>
> For example, is it generally true in R that (100*x)/y is more accurate
> than 100*(x/y), if x > y? (If that is generally true, couldn't the R
> interpreter do it for the user?)
>
> I've seen this problem before. In later editions of the game theory
> program Gambit, extraordinary effort was taken to keep values
> symbolically as integers as long as possible. Avoid division until the
> last steps. Same in Swarm simulations. Gary Polhill wrote an essay
> about the Ghost in the Machine along those lines, showing accidents
> from trusting floats.
>
> I wonder now if all uses of > or < with numeric variables are suspect.
>
> Oh well. If everybody posts their advice, I will write a summary.
>
> Paul Johnson
> University of Kansas
>
> On Apr 21, 2017 12:02 AM, "PIKAL Petr" <petr.pikal at precheza.cz> wrote:
>
>> Hi
>>
>> The problem is that people using Excel or probably other such
>> spreadsheets do not encounter this behaviour, as Excel silently rounds
>> all your calculations and makes approximate comparisons without
>> telling you it does so. Therefore most people usually do not have any
>> knowledge of floating point number representation.
>>
>> Cheers
>> Petr
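Paul's question about > and >= ("Can that happen?") is easy to probe with
a near-tie. A minimal sketch, not part of the original messages: in exact
arithmetic 1/10 + 2/10 equals 3/10, yet the computed comparison disagrees,
and a tolerance-based check such as all.equal (guideline 5 above) is the
usual guard.

    ## A minimal sketch (not from the thread): why ">" is also suspect
    ## after floating point arithmetic. In exact arithmetic 1/10 + 2/10
    ## equals 3/10, but the computed sum is strictly larger.
    0.1 + 0.2 > 0.3                      # TRUE, though the exact answer is FALSE
    print(0.1 + 0.2 - 0.3, digits = 3)   # 5.55e-17, the rounding residue
    ## all.equal() compares with a tolerance (about 1.5e-8 by default):
    isTRUE(all.equal(0.1 + 0.2, 0.3))    # TRUE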
J C Nash
2017-Apr-23 12:49 UTC
[R] Interesting quirk with fractions and rounding / using == for floating point
For over 4 decades I've had to put up with people changing my codes
because I use equalities of floating point numbers in tests for
convergence. (Note that tests of convergence are a subset of tests for
termination -- I'll be happy to explain that if requested.) Then I get
"your program isn't working" and find that it is NOT my program, but a
crippled version thereof.

But I don't use equality tests (i.e., ==) blindly. My tests are of the form

    if ( (x_new + offset) == (x_old + offset) ) { # we cannot get more progress

Now it is possible to imagine some weird cases where this can fail, so an
additional test is needed on, say, maximum iterations to avoid trouble.
But I've not seen such cases (or perhaps never noticed one, though I run
with very large maximum counters).

The test works by having offset as some modest value. For single
precision I use 10 or 16, for double around 100. When x_new and x_old are
near zero, the bit pattern is dominated by offset and we get convergence.
When x_new is big, then it dominates.

Why do I do this? The reason is that in the early 1970s there were many,
many different floating point arithmetics with all sorts of choices of
radix and length of mantissa. (Are "radix" and "mantissa" taught to
computer science students any more?) If my programs were ported between
machines, there was a good chance any tolerances would be wrong. And
users -- for some reason at the time engineers in particular -- would say
"I only need 2 digits, I'll use 1e-3 as my tolerance". And "my" program
would then not work. Sigh. For some reason, the nicely scaled offset did
not attract the attention of the compulsive fiddlers.

So equality in floating point is not always "wrong", though it should be
used with some attention to what is going on.

Apologies to those (e.g., Peter D.) who have heard this all before. I
suspect there are many to whom it is new.

John Nash

On 2017-04-23 12:52 AM, Paul Johnson wrote:
>
> 1. Don't use == for floating point numbers.
>
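To make the mechanism concrete, here is a minimal sketch, not Nash's
actual code: Newton's method for a square root stands in for the real
iteration, offset = 100 follows his double-precision suggestion, and the
iteration cap is the extra guard he mentions.

    ## A sketch only (not Nash's code): Newton's method for sqrt(a) with
    ## his offset-equality convergence test. offset = 100 is his suggested
    ## value for double precision; maxit guards the weird failure cases.
    offset_sqrt <- function(a, offset = 100, maxit = 1000L) {
      x_old <- a
      for (i in seq_len(maxit)) {
        x_new <- 0.5 * (x_old + a / x_old)   # one Newton step
        ## Stop when adding the offset makes the iterates bit-identical:
        ## no representable progress is possible at this scale.
        if ((x_new + offset) == (x_old + offset))
          return(list(root = x_new, iterations = i))
        x_old <- x_new
      }
      warning("maxit reached without the test firing")
      list(root = x_new, iterations = maxit)
    }
    offset_sqrt(2)       # root 1.4142136..., a handful of iterations
    offset_sqrt(1e-30)   # near zero the offset dominates the bit pattern,
                         # so the test accepts early (absolute, not
                         # relative, accuracy)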
peter dalgaard
2017-Apr-23 13:37 UTC
[R] Interesting quirk with fractions and rounding / using == for floating point
> On 23 Apr 2017, at 14:49 , J C Nash <profjcnash at gmail.com> wrote:
>
> So equality in floating point is not always "wrong", though it should
> be used with some attention to what is going on.
>
> Apologies to those (e.g., Peter D.) who have heard this all before. I
> suspect there are many to whom it is new.

Peter D. still insists on never trusting exact equality, though. There
was at least one case in the R sources where age-old code got itself into
a condition where a residual term that provably should decrease on every
iteration oscillated between two values of 1-2 ulp in magnitude without
ever reaching 0.

The main thing is that you cannot trust optimising compilers these days.
There is, e.g., no guarantee that a compiler will not transform

    (x_new + offset) == (x_old + offset)

to

    (x_new + offset) - (x_old + offset) == 0

to

    (x_new - x_old) + (offset - offset) == 0

to.... well, you get the point.

-pd

--
Peter Dalgaard, Professor,
Center for Statistics, Copenhagen Business School
Solbjerg Plads 3, 2000 Frederiksberg, Denmark
Phone: (+45)38153501
Office: A 4.23
Email: pd.mes at cbs.dk  Priv: PDalgd at gmail.com
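Peter D.'s point can be seen without a compiler. A sketch, not from the
thread, evaluates Nash's test and its "algebraically equivalent" rewrite
side by side: the offset deliberately absorbs differences far below its
own magnitude, while subtracting first preserves them, so the transformed
expression answers a different question.

    ## Not from the thread: Nash's test versus its algebraic rewrite.
    ## Near zero, adding the offset absorbs the difference (the intended
    ## behaviour); subtracting first keeps it, so the two forms disagree.
    x_old <- 1e-20; x_new <- 2e-20; offset <- 100
    (x_new + offset) == (x_old + offset)       # TRUE: both sums round to 100
    (x_new - x_old) + (offset - offset) == 0   # FALSE: the 1e-20 survives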