Similar to: Fourteen patches to speed up R

Displaying 20 results from an estimated 7000 matches similar to: "Fourteen patches to speed up R"

2010 Sep 03
0
Pointer to fourteen patches to speed up R
I've continued to work on speeding up R, and now have a collection of fourteen patches, some of which speed up particular functions, and some of which reduce general interpretive overhead. The total speed improvement from these patches is substantial. It varies a lot from one R program to the next, of course, and probably from one machine to the next, but speedups of 25% can be expected in
2015 Jun 01
2
sum(..., na.rm=FALSE): Summing over NA_real_ values much more expensive than non-NAs for na.rm=FALSE? Hmm...
I'm observing that base::sum(x, na.rm=FALSE) for typeof(x) == "double" is much more time consuming when there are missing values versus when there are not. I'm observing this on both Windows and Linux, and it's quite surprising to me. Currently, my main suspect is the settings used when R was built. The second suspect is my brain. I hope that someone can clarify the below
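A minimal sketch of the comparison described above; the vector length, NA count, and repetition count are illustrative assumptions, not figures from the post:

    x_clean <- rnorm(1e7)                          # no missing values
    x_na <- x_clean
    x_na[sample(length(x_na), 1e4)] <- NA_real_    # sprinkle in some NA_real_
    system.time(for (i in 1:50) s1 <- sum(x_clean, na.rm = FALSE))
    system.time(for (i in 1:50) s2 <- sum(x_na, na.rm = FALSE))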
2010 Aug 23
1
Speeding up sum and prod
Looking for more ways to speed up R, I've found that large improvements are possible in the speed of "sum" and "prod" for long real vectors. Here is a little test with R version 2.11.1 on an Intel Linux system > a <- seq(0,1,length=1000) > system.time({for (i in 1:1000000) b <- sum(a)}) user system elapsed 4.800 0.010 4.817 > system.time({for (i
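A hedged reconstruction of the timing loop quoted above; the prod() call at the end is an assumption about how the truncated excerpt continues:

    a <- seq(0, 1, length = 1000)
    system.time({for (i in 1:1000000) b <- sum(a)})
    system.time({for (i in 1:1000000) b <- prod(a)})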
2017 Jan 07
2
accelerating matrix multiply
I am using R to multiply some large (30k x 30k double) matrices on a 64 core machine (Xeon Phi). I added some timers to src/main/array.c to see where the time is going. All of the time is being spent in the matprod function; most of that time is spent in dgemm. 15 seconds is in matprod in some code that is checking if there are NaNs. > system.time (C <- B %*% A) nancheck: wall time
2010 Aug 23
1
Internal state indicating if a data object has NAs/no NAs/not sure (Was: Re: Speeding up matrix multiplies)
Hi, I'm just following your messages about the overhead that the code for dealing with possible NA/NaN values brings. When I was setting up part of the matrixStats package, I also thought about this. I was thinking of having an additional logical argument 'hasNA'/'has.na' where you as a user can specify whether there are NA/NaN values or not, allowing the code to skip the checks or not.
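A sketch of the 'hasNA'/'has.na' idea described above; the function name and the fast path are hypothetical illustrations, not matrixStats code:

    col_sums2 <- function(x, has.na = TRUE) {
      if (has.na) {
        colSums(x, na.rm = TRUE)        # general path that handles NA/NaN
      } else {
        drop(rep(1, nrow(x)) %*% x)     # caller promises no NA/NaN: plain matrix multiply
      }
    }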
2017 Jan 11
2
accelerating matrix multiply
> Do you have R code (including set.seed(.) if relevant) to show on how to generate > the large square matrices you've mentioned in the beginning? So we get to some > reproducible benchmarks? Hi Martin, Here is the program I used. I only generate 2 random numbers and reuse them to make the benchmark run faster. Let me know if there is something I can do to help--alternate
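A hedged sketch of the benchmark set-up described in this thread: only two random numbers are drawn and recycled so that matrix generation stays cheap (the seed value is an assumption; the 30k dimension is from the thread):

    set.seed(1)
    n <- 30000                      # ~7.2 GB per matrix; reduce n if memory is tight
    A <- matrix(rnorm(2), n, n)     # two random values recycled across the matrix
    B <- matrix(rnorm(2), n, n)
    system.time(C <- B %*% A)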
2010 Aug 23
1
Speeding up matrix multiplies
I've looked at the code for matrix multiplies in R, and found that it can be speeded up quite a bit, for multiplies that are really vector dot products, and for other multiplies in which the result matrix is small. Here's my test program: u <- seq(0,1,length=1000) v <- seq(0,2,length=1000) A2 <- matrix(2.1,2,1000) A5 <- matrix(2.1,5,1000) B3 <- matrix(3.2,1000,3) A4 <-
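A hedged reconstruction of the truncated test program above; the timing loops at the end are assumptions about how the excerpt continues:

    u  <- seq(0, 1, length = 1000)
    v  <- seq(0, 2, length = 1000)
    A2 <- matrix(2.1, 2, 1000)
    A5 <- matrix(2.1, 5, 1000)
    B3 <- matrix(3.2, 1000, 3)
    system.time(for (i in 1:100000) r <- u %*% v)    # vector dot-product case
    system.time(for (i in 1:100000) r <- A2 %*% B3)  # small result matrix case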
2005 Oct 12
1
Using matprod from array.c
Hi, I was wondering if I could use the matprod function from array.c in my own C routine. I tried the following as a test /* my_matprod.c */ # include <Rinternals.h> /* for REAL, SEXP etc */ # include <R_ext/Applic.h> /* array.c says need for dgemm */ /* following copied from array.c */ static void matprod(double *x, int nrx, int ncx, double *y, int nry, int ncy, double *z)
2017 Jan 16
1
accelerating matrix multiply
Hi Tomas, Can you share the full code for your benchmark, compiler options, and performance results so that I can try to reproduce them? There are a lot of variables that can affect the results. Private email is fine if it is too much for the mailing list. I am measuring on Knights Landing (KNL), which was released in November. KNL is not a co-processor, so no offload is necessary. R executes
2003 Dec 30
1
Accuracy: Correct sums in rowSums(), colSums() (PR#6196)
Full_Name: Nick Efthymiou Version: R1.5.0 and above OS: Red Hat Linux Submission from: (NULL) (162.93.14.73) With the introduction of the functions rowSums(), colSums(), rowMeans() and colMeans() in R1.5.0, function "SEXP do_colsum(SEXP call, SEXP op, SEXP args, SEXP rho)" was added to perform the fast summations. We have an excellent opportunity to improve the accuracy by
2015 Jun 01
0
sum(..., na.rm=FALSE): Summing over NA_real_ values much more expensive than non-NAs for na.rm=FALSE? Hmm...
This is a great example of how you cannot figure it out after spending two hours troubleshooting, but a few minutes after you post to R-devel, it just jumps out at you (is there a word for this other than "impatient"?). Let me answer my own question. The discrepancy between my sum2() code and the internal code for base::sum() is that the latter uses LDOUBLE = long double (on some system
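An illustrative sketch (not the poster's sum2()) of why the accumulator type matters: an R-level double-precision accumulator can drift slightly from base::sum(), which accumulates in long double where available:

    x <- rep(1/3, 1e6)
    acc <- 0
    for (v in x) acc <- acc + v     # plain double-precision accumulation
    acc - sum(x)                    # typically a tiny non-zero difference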
2020 Feb 17
1
NA in doc for options(matprod="default")
On 17/02/2020 at 17:50, Tomas Kalibera wrote: > On 2/17/20 5:36 PM, Serguei Sokol wrote: >> Hi, >> >> A colleague of mine pointed out to me a passage of the doc ?options >> talking about the Inf and NaN check in the 'matprod=default' section: >> https://stat.ethz.ch/R-manual/R-devel/library/base/html/options.html >> >> I am wondering if NA should be
2020 Feb 17
2
NA in doc for options(matprod="default")
Hi, A colleague of mine pointed out to me a passage of the doc ?options talking about the Inf and NaN check in the 'matprod=default' section: https://stat.ethz.ch/R-manual/R-devel/library/base/html/options.html I am wondering if NA should be mentioned too, as the check seems to include this "value" as well. NA being different from Inf and NaN, it is worth mentioning, isn't it? Best,
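A small illustration of the point above: at the C level NA_real_ is stored as a NaN payload, so a check that traps NaN in the default matprod also covers NA:

    is.nan(NA_real_)                         # FALSE at the R level, but...
    m <- matrix(c(1, NA_real_, 2, 3), 2, 2)
    options(matprod = "default")
    m %*% m                                  # NA propagates through the NaN-aware path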
2017 Jan 16
0
accelerating matrix multiply
Hi Robert, thanks for the report and your suggestions on how to make the NaN checks faster. Based on my experiments it seems that the "break" in the loop can actually have a positive impact on performance even in the common case when we don't have NaNs. With gcc on linux (corei7), where isnan is inlined, the "break" version uses a conditional jump while the
2017 Jan 10
0
accelerating matrix multiply
>>>>> Cohn, Robert S <robert.s.cohn at intel.com> >>>>> on Sat, 7 Jan 2017 16:41:42 +0000 writes: > I am using R to multiply some large (30k x 30k double) > matrices on a 64 core machine (xeon phi). I added some timers > to src/main/array.c to see where the time is going. All of the > time is being spent in the matprod function, most of that
2008 Apr 05
2
Adding a Matrix Exponentiation Operator
Hi all, I recently started to write a matrix exponentiation operator for R (by adding a new operator definition to names.c, and adding the following code to array.c). It is not finished yet, but I would like to solicit some comments, as there are a few areas of R's internals that I am still feeling my way around. Firstly: 1) Would there be interest in adding a new operator %^% that performs
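An R-level sketch of what such a %^% operator could compute (exponentiation by squaring for a square matrix and a non-negative integer power); the actual proposal adds a C implementation in names.c and array.c, so this is only an illustration of the semantics:

    `%^%` <- function(m, k) {
      stopifnot(nrow(m) == ncol(m), k >= 0, k == round(k))
      result <- diag(nrow(m))                 # identity matrix of matching size
      while (k > 0) {
        if (k %% 2 == 1) result <- result %*% m
        m <- m %*% m
        k <- k %/% 2
      }
      result
    }
    matrix(1:4, 2, 2) %^% 3                   # same as m %*% m %*% m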
2011 Aug 29
3
How to safely using OpenMP pragma inside a .C() function?
I am trying to parallelize part of a C function that is called from R (via .C) using OpenMP's "parallel for" pragma. I get mixed results: some runs finish with no problem, but some lead to R crashing after issuing a long error message involving memory violations. I found this post, which describes how a .Call() function can be made to avoid crashing R by raising the stack limit:
2010 Sep 08
0
Correction to vec-subset speed patch
I found a bug in one of the fourteen speed patches I posted, namely in patch-vec-subset. I've fixed this (I now see one does need to duplicate index vectors sometimes, though one can avoid it most of the time). I also split this patch in two, since it really has two different and independent parts. The patch-vec-subset patch now has only some straightforward (locally-checkable) speedups for
2001 Dec 14
2
colSums in C
Hi, all! My project today is to write a speedy colSums(), which is a function available in S-Plus to add the columns of a matrix. Here are 4 ways to do it, with the time it took (elapsed, best of 3 trials) in both R and S-Plus: m <- matrix(1, 400, 40000) x1 <- apply(m, 2, sum) ## R=16.55 S=52.39 x2 <- as.vector(rep(1,nrow(m)) %*% m) ## R= 2.39 S= 8.52 x3 <-
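A sketch of the timing comparison above with the truncated variants left out; colSums() itself (added in R 1.5.0, after this post) is included only as a modern point of reference:

    m <- matrix(1, 400, 40000)
    system.time(x1 <- apply(m, 2, sum))                   # first variant from the post
    system.time(x2 <- as.vector(rep(1, nrow(m)) %*% m))   # second variant from the post
    system.time(xc <- colSums(m))                         # built-in added in R 1.5.0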
2017 Jan 09
0
accelerating matrix multiply
> From: "Cohn, Robert S" <robert.s.cohn at intel.com> > > I am using R to multiply some large (30k x 30k double) matrices on a > 64 core machine (xeon phi). I added some timers to > src/main/array.c to see where the time is going. All of the time is > being spent in the matprod function, most of that time is spent in > dgemm. 15 seconds is in matprod in