>>>>> peter dalgaard <pdalgd at gmail.com>
>>>>>     on Fri, 16 Sep 2016 13:33:11 +0200 writes:

    > On 16 Sep 2016, at 12:41 , Alexis Sarda <alexis.sarda at gmail.com> wrote:
    >
    >> Hello,
    >>
    >> while testing the crossprod() function under Linux, I noticed the following:
    >>
    >> set.seed(883)
    >> x <- rnorm(100)
    >> x %*% x - sum(x^2)  # equal to 1.421085e-14
    >>
    >> Is this difference normal? It seems to be rather large for double precision.

    > It's less than .Machine$double.eps, relative (!) to x %*% x ~= 100.

indeed!

Still, it gives exactly 0 on my platform(s), where I'm using R's
own version of BLAS / LAPACK.

Are you perhaps using an "optimized" BLAS / LAPACK, i.e., one that
is fast but slightly less accurate?

Martin Maechler,
ETH Zurich

    > -pd

    >> Regards,
    >> Alexis.

    > --
    > Peter Dalgaard, Professor,
    > Center for Statistics, Copenhagen Business School
    > Solbjerg Plads 3, 2000 Frederiksberg, Denmark
    > Phone: (+45)38153501
    > Office: A 4.23
    > Email: pd.mes at cbs.dk  Priv: PDalgd at gmail.com
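A minimal sketch illustrating the point above (not from the original thread):
it measures the discrepancy *relative* to the magnitude of x %*% x and compares
it against .Machine$double.eps. Whether the raw difference is about 1.4e-14 or
exactly 0 depends on the RNG settings and on the BLAS that R is linked against.

## Sketch: express the discrepancy as a relative error and compare it
## against .Machine$double.eps (results depend on the BLAS in use).
set.seed(883)
x <- rnorm(100)

abs_diff <- abs(as.numeric(x %*% x) - sum(x^2))
rel_diff <- abs_diff / as.numeric(x %*% x)

abs_diff                         # ~1.4e-14 on some setups, exactly 0 on others
rel_diff                         # at most ~1.4e-16 in the cases reported here
rel_diff < .Machine$double.eps   # TRUE: within one unit of relative rounding error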
I just realized that I was actually using a different random number
generator; could that be a valid reason for the discrepancy?

The code should be:

RNGkind("L'Ecuyer")
set.seed(883)
x <- rnorm(100)
x %*% x - sum(x^2)  # equal to 1.421085e-14

Regards,
Alexis Sarda.

On Tue, Sep 20, 2016 at 5:27 PM, Martin Maechler
<maechler at stat.math.ethz.ch> wrote:

> Still, it gives exactly 0 on my platform(s), where I'm using R's
> own version of BLAS / LAPACK.
>
> Are you perhaps using an "optimized" BLAS / LAPACK, i.e., one that
> is fast but slightly less accurate?
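A brief aside, as a sketch: switching the generator changes the values in x,
and hence whether the two summation orders happen to round to the same double.
RNGkind() shows which generator the abbreviated kind name selects.

## Which generator does the abbreviated kind name select?
RNGkind("L'Ecuyer")
RNGkind()[1]   # "L'Ecuyer-CMRG"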
>>>>> Alexis Sarda <alexis.sarda at gmail.com>
>>>>>     on Tue, 20 Sep 2016 17:33:49 +0200 writes:

    > I just realized that I was actually using a different random number
    > generator; could that be a valid reason for the discrepancy?

    > The code should be:

    > RNGkind("L'Ecuyer")
    > set.seed(883)
    > x <- rnorm(100)
    > x %*% x - sum(x^2)  # equal to 1.421085e-14

Yes, now I get the same result, so my story about "BLAS / LAPACK" is
not relevant here.

But do note the main point from Peter Dalgaard: this is well within
machine-epsilon precision.

More precisely, here it is really a one-bit difference in the least
significant bit:

> print(rbind(x %*% x, crossprod(x), sum(x^2)), digits = 19)
                     [,1]
[1,] 103.5096830356289814
[2,] 103.5096830356289814
[3,] 103.5096830356289672

> cbind(sprintf("%a", c(x %*% x, crossprod(x), sum(x^2))))
     [,1]
[1,] "0x1.9e09ea598568fp+6"
[2,] "0x1.9e09ea598568fp+6"
[3,] "0x1.9e09ea598568ep+6"
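As a follow-up sketch (assuming the L'Ecuyer RNG settings from the corrected
example above), one can check that the observed difference of 1.421085e-14 is
exactly one unit in the last place (ulp) of a double near 103.5, i.e. the
single-bit difference visible in the hex output:

## One ulp of a double in [64, 128) is 2^6 * .Machine$double.eps.
RNGkind("L'Ecuyer")   # as in the corrected example above
set.seed(883)
x <- rnorm(100)

d   <- as.numeric(x %*% x) - sum(x^2)
s   <- sum(x^2)                                # ~103.51
ulp <- 2^floor(log2(s)) * .Machine$double.eps
ulp                                            # 1.421085e-14
d / ulp                                        # 1 here (0 where the results agree)

## For comparing such results, use a tolerance rather than exact equality:
all.equal(as.numeric(x %*% x), sum(x^2))       # TRUE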