Florian Gerber
2019-Nov-19 14:09 UTC
[Rd] Why is matrix product slower when matrix has very small values?
Hi,

I experience surprisingly large timing differences for the multiplication of matrices of the same dimension. An example is given below. How can this be explained? I posted the question on Stack Overflow: https://stackoverflow.com/questions/58886111/r-why-is-matrix-product-slower-when-matrix-has-very-small-values
Somebody could reproduce the behavior, but I did not get a useful explanation yet.

Many thanks for hints!
Florian

## use a single thread for BLAS and OpenMP
library(RhpcBLASctl); blas_set_num_threads(1); omp_set_num_threads(1)

A <- exp(-as.matrix(dist(expand.grid(1:60, 1:60))))
summary(c(A))
#     Min.  1st Qu.   Median     Mean  3rd Qu.     Max.
# 0.000000 0.000000 0.000000 0.001738 0.000000 1.000000

B <- exp(-as.matrix(dist(expand.grid(1:60, 1:60)))*10)
summary(c(B))
#      Min.   1st Qu.    Median      Mean   3rd Qu.      Max.
# 0.0000000 0.0000000 0.0000000 0.0002778 0.0000000 1.0000000

identical(dim(A), dim(B))
## [1] TRUE

system.time(A %*% A)
#    user  system elapsed
#   2.387   0.001   2.389
system.time(B %*% B)
#    user  system elapsed
#  21.285   0.020  21.310

sessionInfo()
# R version 3.6.1 (2019-07-05)
# Platform: x86_64-pc-linux-gnu (64-bit)
# Running under: Linux Mint 19.2

# Matrix products: default
# BLAS:   /usr/lib/x86_64-linux-gnu/openblas/libblas.so.3
# LAPACK: /usr/lib/x86_64-linux-gnu/libopenblasp-r0.2.20.so
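A minimal sketch for comparing the smallest nonzero magnitudes in the two matrices; the values in the comments are approximations assuming IEEE-754 double precision, not output from the thread:

## Smallest nonzero entries of A and B, compared with the smallest
## *normalized* positive double, .Machine$double.xmin (~2.2e-308).
min(A[A > 0])         # roughly exp(-83) ~ 1e-36, comfortably in the normal range
min(B[B > 0])         # likely far smaller, near the very bottom of the double range
.Machine$double.xmin  # ~2.225074e-308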
Hilmar Berger
2019-Nov-20 08:56 UTC
[Rd] Why is matrix product slower when matrix has very small values?
Hi Florian,

just a guess, but couldn't it be that the multiplication of very small values leads to FP underflow exceptions which have to be handled by BLAS in a less efficient way than "normal" multiplications handled by SIMD instructions?

Best regards,
Hilmar

On 19/11/2019 15:09, Florian Gerber wrote:
> [original message quoted above]

--
Dr. Hilmar Berger, MD
Max Planck Institute for Infection Biology
Charitéplatz 1
D-10117 Berlin
GERMANY

Phone:  + 49 30 28460 430
Fax:    + 49 30 28460 401
E-Mail: berger at mpiib-berlin.mpg.de
Web:    www.mpiib-berlin.mpg.de
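One way to probe this guess is to count how many entries are subnormal, i.e. nonzero but smaller in magnitude than the smallest normalized double. A minimal sketch; the expected counts in the comments are assumptions, not output from the thread:

## A value is subnormal (denormal) if it is nonzero but smaller in
## magnitude than .Machine$double.xmin, the smallest normalized double.
is_subnormal <- function(x) x != 0 & abs(x) < .Machine$double.xmin
sum(is_subnormal(A))  # expected: 0
sum(is_subnormal(B))  # expected: > 0, i.e. B already contains denormals
## Products of small normal entries of B can also land in the subnormal
## range during the multiplication, forcing the CPU off its fast SIMD path.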
Serguei Sokol
2019-Nov-20 09:14 UTC
[Rd] Why is matrix product slower when matrix has very small values?
On 20/11/2019 at 09:56, Hilmar Berger wrote:
> Hi Florian,
>
> just a guess, but couldn't it be that the multiplication of very small
> values leads to FP underflow exceptions which have to be handled by
> BLAS in a less efficient way than "normal" multiplications handled by
> SIMD instructions?

Another guess is that you are caught by what is called "denormal numbers": https://en.wikipedia.org/wiki/Denormal_number
Arithmetic operations on them are handled differently and are slower than those on "normal" numbers.

Best,
Serguei.
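If denormals are the cause, flushing tiny entries to exact zero before the product should restore the fast timing. A minimal sketch: the cutoff sqrt(.Machine$double.xmin) is an assumption chosen so that no product of two kept entries can be subnormal, and the timing comment is an expectation rather than a measured result:

## Entries >= sqrt(.Machine$double.xmin) multiply to >= .Machine$double.xmin,
## so after flushing smaller entries to zero no pairwise product is subnormal.
cutoff <- sqrt(.Machine$double.xmin)   # ~1.5e-154
Bz <- B
Bz[abs(Bz) < cutoff] <- 0
system.time(Bz %*% Bz)        # expected to be close to the timing of A %*% A
max(abs(Bz %*% Bz - B %*% B)) # absolute difference should be negligibly small
                              # (note: recomputes the slow original product)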