Dear Martin,
Thank you very much for your analysis.
I add only a small comment:
- the purpose of the modified formula was to apply l'Hospital;
- there are other ways to transform the formula, although applying
l'Hospital once is probably more robust than simple transformations
(but the computations are also more tedious and error prone); a short
sketch follows below.
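For illustration, this is the formula after one application of
l'Hospital (a sketch; the numeric claim is my own estimate, not from
the original computations):

f3 <- function(x) (2*x - 2*sin(x)) / (2*x*(1 - cos(x)) + x^2*sin(x))
f3(1e-4)
# should stay close to 1/6 ~ 0.1666667, where the plain
# 1/(1 - cos(x)) - 2/x^2 already breaks down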
After more careful thought, I believe that it is a limitation due to
floating-point representation:
x = 1E-4
1/(-x^2/2 + x^4/24) + 2/x^2
# -1/6, correct: the limit of 1/(cos(x) - 1) + 2/x^2 is -1/6
y = 1 - x^2/2 + x^4/24;
1/(cos(x) - 1) + 2/x^2
1/(y - 1) + 2/x^2
# -1.215494
# correct: -1/6
We need the 3rd term of the series for the correct computation of
cos(x) in this problem: but this is x^4 / 24, which for x = 1E-4
requires precision at least up to 1E-16 / 24, or ~ 1E-18. I had not
thought about that initially. The trigonometric series skip every
other power, and are therefore much uglier than the one for log. The
problem really stems from the representation of 1 - x^2/2, as shown
below:
x = 1E-4
print(1 - x^2/2, digits=20)
print(0.999999995, digits=20) # fails too: the literal itself is not exactly representable
# 0.99999999500000003039
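For illustration (this check is mine, not part of the original
computation): the spacing of double-precision numbers near 1 is
.Machine$double.eps = 2^-52 ~ 2.2E-16, so the x^4/24 ~ 4.2E-19
correction is simply lost when added to 1 - x^2/2:

x = 1E-4
.Machine$double.eps
# 2.220446e-16
(1 - x^2/2 + x^4/24) == (1 - x^2/2)
# TRUE: the 3rd term vanishes in the sum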
Maybe some functions of type cos1p and cos1n would be handy for such
computations (to replace the manual series expansion):
cos1p(x) = 1 + cos(x)
cos1n(x) = 1 - cos(x)
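A stable implementation of cos1n() could use the half-angle identity
1 - cos(x) = 2*sin(x/2)^2, which avoids the cancellation entirely (a
sketch of my own; cos1n is hypothetical, not an existing R function):

cos1n <- function(x) 2 * sin(x/2)^2

x = 1E-4
1/cos1n(x) - 2/x^2
# ~ 0.1666667, close to 1/6 where 1 - cos(x) already fails; note the
# outer subtraction still cancels for much smaller x (e.g. 1E-8)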
Though, I do not yet have the big picture.
Sincerely,
Leonard
On 8/17/2023 1:57 PM, Martin Maechler wrote:
>>>>>> Leonard Mada
>>>>>> on Wed, 16 Aug 2023 20:50:52 +0300 writes:
> > Dear Iris,
> > Dear Martin,
>
> > Thank you very much for your replies. I add a few comments.
>
> > 1.) Correct formula
> > The formula in the Subject Title was correct. A small glitch
> > slipped into the last formula:
> > - 1/(cos(x) - 1) - 2/x^2
> > or
> > 1/(1 - cos(x)) - 2/x^2 # as in the subject title;
>
> > 2.) log1p
> > Actually, the log-part behaves much better. And when it fails, it
> > fails completely (which is easy to spot!).
>
> > x = 1E-6
> > log(x) - log(1 - cos(x))/2
> > # 0.3465291
>
> > x = 1E-8
> > log(x) - log(1 - cos(x))/2
> > # Inf
> > log(x) - log1p(- cos(x))/2
> > # Inf => fails as well!
> > # although using only log1p(cos(x)) seems to do the trick;
> > log1p(cos(x)); log(2)/2;
>
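[The same half-angle identity 1 - cos(x) = 2*sin(x/2)^2 also yields a
stable log form, since sin(x/2) involves no cancellation; a sketch of
my own, not from the original messages:

x = 1E-8
log(x) - (log(2) + 2*log(sin(x/2)))/2
# 0.3465736 ~ log(2)/2, where the direct log(1 - cos(x)) gives Inf]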
> > 3.) 1/(1 - cos(x)) - 2/x^2
> > It is possible to convert the formula to one which is numerically
> > more stable. It is also possible to compute it manually, but it
> > involves much more work and is also error prone:
>
> > (x^2 - 2 + 2*cos(x)) / (x^2 * (1 - cos(x)))
> > And applying L'Hospital:
> > (2*x - 2*sin(x)) / (2*x * (1 - cos(x)) + x^2*sin(x))
> > # and a 2nd & 3rd & 4th time
> > 1/6
>
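[A direct series expansion confirms this limit (a sketch, for
reference): x^2 - 2 + 2*cos(x) = x^4/12 - x^6/360 + ..., while
x^2*(1 - cos(x)) = x^4/2 - x^6/24 + ..., so the ratio tends to
(1/12)/(1/2) = 1/6.]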
> > The big problem was that I did not expect it to fail for x = 1E-4.
> > I thought it would be more robust and work maybe down to 1E-5.
> > x = 1E-5
> > 2/x^2 - 2E+10
> > # -3.814697e-06
>
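[Note, for reference: -3.814697e-06 is exactly -2^-18, i.e. one unit
in the last place of doubles near 2E+10, so 2/x^2 itself is already
off by about one ulp before any cancellation takes place.]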
> > This is the reason why I believe that there is room for improvement.
>
> > Sincerely,
> > Leonard
>
> Thank you, Leonard.
> Yes, I agree that it is amazing how much your formula suffers from
> (a generalization of) "cancellation" --- leading you to think
> there was a problem with cos() or log() or ... in R.
> But really, R uses the system's built-in libm (math) library, and the
> problem is really the inherent instability of your formula.
>
> Indeed your first approximation was not really much more stable:
>
> ## 3.) 1/(1 - cos(x)) - 2/x^2
> ## It is possible to convert the formula to one which is numerically more
> ## stable. It is also possible to compute it manually, but it involves much
> ## more work and is also error prone:
> ## (x^2 - 2 + 2*cos(x)) / (x^2 * (1 - cos(x)))
> ## MM: but actually, that approximation does not seem better
> ## (close to the breakdown region):
> f1 <- \(x) 1/(1 - cos(x)) - 2/x^2
> f2 <- \(x) (x^2 - 2 + 2*cos(x)) / (x^2 * (1 - cos(x)))
> curve(f1, 1e-8, 1e-1, log="xy", n=2^10)
> curve(f2, add = TRUE, col=2, n=2^10)
> ## Zoom in:
> curve(f1, 1e-4, 1e-1, log="xy",n=2^9)
> curve(f2, add = TRUE, col=2, n=2^9)
> ## Zoom in much more in y-direction:
> yl <- 1/6 + c(-5, 20)/100000
> curve(f1, 1e-4, 1e-1, log="x", ylim=yl, n=2^9)
> abline(h = 1/6, lty=3, col="gray")
> curve(f2, add = TRUE, n=2^9, col=adjustcolor(2, 1/2))
>
> Now, you can use the Rmpfr package (interface to the GNU MPFR
> multiple-precision C library) to find out more :
>
> if(!requireNamespace("Rmpfr")) install.packages("Rmpfr")
> M <- function(x, precBits=128) Rmpfr::mpfr(x, precBits)
>
> (xM <- M(1e-8)) # yes, only ~ 16 digits accurate
> ## 1.000000000000000020922560830128472675327e-8
> M(10, 128)^-8 # would of course be more accurate,
> ## but we want the calculation for the double precision number 1e-8
>
> ## Now you can draw "the truth" into the above plots:
> curve(f1, 1e-4, 1e-1, log="xy",n=2^9)
> curve(f2, add = TRUE, col=2, n=2^9)
> ## correct:
> curve(f1(M(x, 256)), add = TRUE, col=4, lwd=2, n=2^9)
> abline(h = 1/6, lty=3, col="gray")
>
> But, indeed we take note how much it is the formula instability:
> Also MPFR needs a lot of extra bits precision before it gets to
> the correct numbers:
>
> xM <- c(M(1e-8, 80), M(1e-8, 96), M(1e-8, 112),
> M(1e-8, 128), M(1e-8, 180), M(1e-8, 256))
> ## and round back to 70 bits for display:
> R <- \(x) Rmpfr::roundMpfr(x, 70)
> R(f1(xM))
> R(f2(xM))
> ## [1] 0                         0                         0.15407439555097885670915
> ## [4] 0.16666746653133802175779 0.16666666666666666749979 0.16666666666666666750001
>
> ## 1. f1() is even worse than f2() {here at x=1e-8}
> ## 2. Indeed, even 96 bits precision is *not* sufficient at all, ...
> ## which is amazing to me as well !!
>
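[A rough accounting of why ~96 bits cannot suffice; my own estimate,
for reference: at x = 1e-8, representing cos(x) = 1 - 5e-17 distinctly
from 1 costs ~ log2(2e16) ~ 54 bits, and recovering 1/6 from the
cancellation in 1/(1 - cos(x)) - 2/x^2 ~ 2e16 costs another
~ log2(2e16 * 6) ~ 57 bits; so roughly 111 bits are consumed before
any accurate digits of 1/6 appear, consistent with 112 bits giving
0.154... and 128 bits giving only ~ 5 correct digits above.]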
> Best regards,
> Martin