"Values of type 2^(-n) (and its binary complement) are exactly represented
as floating point numbers and do not generate the error. However, values
away from such special x-values will generate errors:"
That was exactly my point: The size of errors depends on the accuracy of
binary representation of floating point numbers and their arithmetic.
But you previously said:
"The ugly thing is that the error only gets worse as x decreases. The
value neither drops to 0, nor does it blow up to infinity; but it gets
worse in a continuous manner."
That is wrong and disagrees with what you say above.
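
A quick check (my sketch; the function f below is just your expression
wrapped up) shows the error is not monotone in x: it vanishes at exactly
representable points such as 2^(-n) and reappears in between:

f <- function(x) { y <- 1 - x^2/2; 1/(1 - y) - 2/x^2 }
## the 2^(-n) points give 0; the perturbed neighbours generally do not:
sapply(c(2^(-10), 2^(-10) * 1.1, 2^(-20), 2^(-20) * 1.1), f)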
-- Bert
On Fri, Aug 18, 2023 at 4:34 PM Leonard Mada <leo.mada at syonic.eu>
wrote:
> Dear Bert,
>
>
> Values of type 2^(-n) (and its binary complement) are exactly represented
> as floating point numbers and do not generate the error. However, values
> away from such special x-values will generate errors:
>
>
> # exactly represented:
> x = 9.53674316406250e-07
> y <- 1 - x^2/2;
> 1/(1 - y) - 2/x^2
>
> # almost exact:
> x = 9.536743164062502e-07
> y <- 1 - x^2/2;
> 1/(1 - y) - 2/x^2
>
> x = 9.536743164062498e-07
> y <- 1 - x^2/2;
> 1/(1 - y) - 2/x^2
>
> # the result behaves far better around values
> # which can be represented exactly,
> # but fails drastically for other values!
> x = 2^(-20) * 1.1
> y <- 1 - x^2/2;
> 1/(1 - y) - 2/x^2
> # 58672303 instead of 0!
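>
> # One way to see which values are exact (a sketch; "%a" asks sprintf
> # for the hexadecimal binary representation of a double):
> sprintf("%a", 2^(-20))        # "0x1p-20": a clean power of two
> sprintf("%a", 2^(-20) * 1.1)  # the factor 1.1 has no exact binary form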
>
>
> Sincerely,
>
>
> Leonard
>
>
> On 8/19/2023 2:06 AM, Bert Gunter wrote:
>
> "The ugly thing is that the error only gets worse as x decreases. The
> value neither drops to 0, nor does it blow up to infinity; but it gets
> worse in a continuous manner."
>
> If I understand you correctly, this is wrong:
>
> > x <- 2^(-20) ## considerably less than 1e-4 !!
> > y <- 1 - x^2/2;
> > 1/(1 - y) - 2/x^2
> [1] 0
>
> It's all about the accuracy of the binary approximation of floating
> point numbers (and their arithmetic).
>
> Cheers,
> Bert
>
>
> On Fri, Aug 18, 2023 at 3:25 PM Leonard Mada via R-help <
> r-help at r-project.org> wrote:
>
>> I have added some clarifications below.
>>
>> On 8/18/2023 10:20 PM, Leonard Mada wrote:
>> > [...]
>> > After more careful thinking, I believe that it is a limitation due
>> > to floating points:
>> > [...]
>> >
>> > The problem really stems from the representation of 1 - x^2/2 as
>> > shown below:
>> > x = 1E-4
>> > print(1 - x^2/2, digits=20)
>> > print(0.999999995, digits=20) # fails
>> > # 0.99999999500000003039
>>
>> The floating point representation of 1 - x^2/2 is the real culprit:
>> # 0.99999999500000003039
>>
>> The 3039 at the end is really an error due to the floating point
>> representation. However, this error blows up when inverting the value:
>> x = 1E-4;
>> y = 1 - x^2/2;
>> 1/(1 - y) - 2/x^2
>> # 1.215494
>> # should be 1/(x^2/2) - 2/x^2 = 0
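>>
>> # A sketch of where 1.215494 comes from: exactly (in real arithmetic)
>> #   1/(1-y) - 2/x^2 = (x^2/2 - (1-y)) / ((1-y) * x^2/2),
>> # i.e. the ~3e-17 representation error is divided by
>> # (1-y) * x^2/2 ~ 2.5e-17:
>> (x^2/2 - (1 - y)) / ((1 - y) * x^2/2)
>> # reproduces essentially the same value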
>>
>>
>> The ugly thing is that the error only gets worse as x decreases. The
>> value neither drops to 0, nor does it blow up to infinity; but it gets
>> worse in a continuous manner. At least the reason has become now clear.
>>
>>
>> >
>> > Maybe some functions of type cos1p and cos1n would be handy for
>> > such computations (to replace the manual series expansion):
>> > cos1p(x) = 1 + cos(x)
>> > cos1n(x) = 1 - cos(x)
>> > Though, I do not have yet the big picture.
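>> >
>> > A stable implementation is possible (a sketch) via the half-angle
>> > identity 1 - cos(x) = 2*sin(x/2)^2, which avoids the cancellation:
>> > cos1n <- function(x) 2 * sin(x/2)^2   # accurate 1 - cos(x)
>> > cos1p <- function(x) 2 * cos(x/2)^2   # accurate 1 + cos(x)
>> > x <- 1e-6
>> > 1/cos1n(x) - 2/x^2   # should now stay close to the true limit 1/6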
>> >
>>
>> Sincerely,
>>
>>
>> Leonard
>>
>> >
>> >
>> > On 8/17/2023 1:57 PM, Martin Maechler wrote:
>> >>>>>>> Leonard Mada
>> >>>>>>> on Wed, 16 Aug 2023 20:50:52 +0300 writes:
>> >> > Dear Iris,
>> >> > Dear Martin,
>> >>
>> >> > Thank you very much for your replies. I add a few comments.
>> >>
>> >> > 1.) Correct formula
>> >> > The formula in the Subject Title was correct. A small glitch
>> >> > crept into the last formula:
>> >> > - 1/(cos(x) - 1) - 2/x^2
>> >> > or
>> >> > 1/(1 - cos(x)) - 2/x^2 # as in the subject title;
>> >>
>> >> > 2.) log1p
>> >> > Actually, the log-part behaves much better. And when it fails,
>> >> > it fails completely (which is easy to spot!).
>> >>
>> >> > x = 1E-6
>> >> > log(x) -log(1 - cos(x))/2
>> >> > # 0.3465291
>> >>
>> >> > x = 1E-8
>> >> > log(x) -log(1 - cos(x))/2
>> >> > # Inf
>> >> > log(x) - log1p(- cos(x))/2
>> >> > # Inf => fails as well!
>> >> > # although using only log1p(cos(x)) seems to do the trick;
>> >> > log1p(cos(x)); log(2)/2;
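>> >> > # a stable rewrite (a sketch), again via 1 - cos(x) = 2*sin(x/2)^2,
>> >> > # so that log(1 - cos(x)) = log(2) + 2*log(sin(x/2)):
>> >> > log(x) - (log(2) + 2*log(sin(x/2)))/2
>> >> > # -> log(2)/2 = 0.3465736..., even for x = 1E-8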
>> >>
>> >> > 3.) 1/(1 - cos(x)) - 2/x^2
>> >> > It is possible to convert the formula to one which is
>> >> > numerically more stable. It is also possible to compute it
>> >> > manually, but it involves much more work and is also error prone:
>> >>
>> >> > (x^2 - 2 + 2*cos(x)) / (x^2 * (1 - cos(x)))
>> >> > And applying L'Hospital:
>> >> > (2*x - 2*sin(x)) / (2*x * (1 - cos(x)) + x^2*sin(x))
>> >> > # and a 2nd & 3rd & 4th time
>> >> > 1/6
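>> >> > # the same limit follows from the series (a sketch):
>> >> > #   x^2 - 2 + 2*cos(x) = x^4/12 - x^6/360 + ...
>> >> > #   x^2 * (1 - cos(x)) = x^4/2  - x^6/24  + ...
>> >> > # so the ratio -> (1/12) / (1/2) = 1/6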
>> >>
>> >> > The big problem was that I did not expect it to fail for
>> >> > x = 1E-4. I thought it was more robust and would work maybe
>> >> > down to 1E-5.
>> >> > x = 1E-5
>> >> > 2/x^2 - 2E+10
>> >> > # -3.814697e-06
>> >>
>> >> > This is the reason why I believe that there is room for
>> >> > improvement.
>> >>
>> >> > Sincerely,
>> >> > Leonard
>> >>
>> >> Thank you, Leonard.
>> >> Yes, I agree that it is amazing how much your formula suffers
>> >> from (a generalization of) "cancellation" --- leading you to think
>> >> there was a problem with cos() or log() or .. in R.
>> >> But really R uses the system builtin libmath library, and the
>> >> problem is really the inherent instability of your formula.
>> >>
>> >> Indeed, your first approximation was not really much more stable:
>> >>
>> >> ## 3.) 1/(1 - cos(x)) - 2/x^2
>> >> ## It is possible to convert the formula to one which is
>> >> ## numerically more stable. It is also possible to compute it
>> >> ## manually, but it involves much more work and is also error prone:
>> >> ## (x^2 - 2 + 2*cos(x)) / (x^2 * (1 - cos(x)))
>> >> ## MM: but actually, that approximation does not seem better
>> >> ## (close to the breakdown region):
>> >> f1 <- \(x) 1/(1 - cos(x)) - 2/x^2
>> >> f2 <- \(x) (x^2 - 2 + 2*cos(x)) / (x^2 * (1 - cos(x)))
>> >> curve(f1, 1e-8, 1e-1, log="xy", n=2^10)
>> >> curve(f2, add = TRUE, col=2, n=2^10)
>> >> ## Zoom in:
>> >> curve(f1, 1e-4, 1e-1, log="xy",n=2^9)
>> >> curve(f2, add = TRUE, col=2, n=2^9)
>> >> ## Zoom in much more in y-direction:
>> >> yl <- 1/6 + c(-5, 20)/100000
>> >> curve(f1, 1e-4, 1e-1, log="x", ylim=yl, n=2^9)
>> >> abline(h = 1/6, lty=3, col="gray")
>> >> curve(f2, add = TRUE, n=2^9, col=adjustcolor(2, 1/2))
>> >>
>> >> Now, you can use the Rmpfr package (interface to the GNU MPFR
>> >> multiple-precision C library) to find out more :
>> >>
>> >> if(!requireNamespace("Rmpfr")) install.packages("Rmpfr")
>> >> M <- function(x, precBits=128) Rmpfr::mpfr(x, precBits)
>> >>
>> >> (xM <- M(1e-8))# yes, only ~ 16 dig accurate
>> >> ## 1.000000000000000020922560830128472675327e-8
>> >> M(10, 128)^-8 # would of course be more accurate,
>> >> ## but we want the calculation for the double precision number 1e-8
>> >>
>> >> ## Now you can draw "the truth" into the above plots:
>> >> curve(f1, 1e-4, 1e-1, log="xy",n=2^9)
>> >> curve(f2, add = TRUE, col=2, n=2^9)
>> >> ## correct:
>> >> curve(f1(M(x, 256)), add = TRUE, col=4, lwd=2, n=2^9)
>> >> abline(h = 1/6, lty=3, col="gray")
>> >>
>> >> But, indeed, we take note how much of it is the formula's
>> >> instability: even MPFR needs a lot of extra bits of precision
>> >> before it gets to the correct numbers:
>> >>
>> >> xM <- c(M(1e-8, 80), M(1e-8, 96), M(1e-8, 112),
>> >> M(1e-8, 128), M(1e-8, 180), M(1e-8, 256))
>> >> ## and round back to 70 bits for display:
>> >> R <- \(x) Rmpfr::roundMpfr(x, 70)
>> >> R(f1(xM))
>> >> R(f2(xM))
>> >> ## [1] 0                         0                         0.15407439555097885670915
>> >> ## [4] 0.16666746653133802175779 0.16666666666666666749979 0.16666666666666666750001
>> >>
>> >> ## 1. f1() is even worse than f2() {here at x=1e-8}
>> >> ## 2. Indeed, even 96 bits precision is *not* sufficient at all, ...
>> >> ## which is amazing to me as well !!
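>> >>
>> >> ## A rough bit count (a sketch) of why ~96 bits cannot suffice at
>> >> ## x = 1e-8: 1 - cos(x) cancels about -log2(x^2/2) leading bits, and
>> >> ## the final subtraction of 2/x^2 cancels about log2((2/x^2)/(1/6)):
>> >> x <- 1e-8
>> >> -log2(x^2/2) + log2((2/x^2) / (1/6))
>> >> ## ~ 111 bits are lost before the first correct bit of 1/6 appears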
>> >>
>> >> Best regards,
>> >> Martin
>>
>