Hi,

Profiling shows that 65-70% of the time of my program is spent inside a single function -- this is not surprising, as it is inside an optimize call inside a loop (this is a dynamic programming problem). I would like to speed this up.

The function does very little: it takes a single argument, evaluates a spline at that argument, and does some simple arithmetic with the result (adding constants, multiplication). With R being a functional programming language, I implemented this by calling several functions inside the function:

## RHS of the Bellman equation
f <- function(knext, k, ei) {
  util(consf(knext, k)) + quickeval(knext, gridsecpp, Vkbarpp)
}

where quickeval evaluates a spline at knext (on the grid gridsecpp, pp-form Vkbarpp), and util and consf are functions in the environment:

## consumption
consf <- function(knext, k) {
  rp*k + W + knext*A
}

A, W, and rp are constants in the environment. Then I call

optimize(f, lower = ..., upper = ..., k = ...)

to find the maximum.

Questions:

1. Does calling a function carry significant overhead in R? If so, I would rewrite the function into a single one. I tried to test this:

> f <- function(x) 1+x
> g <- function(x) f(x)
> x <- rnorm(1e6)
> system.time(sapply(x, f))
[1] 11.315  0.157 11.735  0.000  0.000
> system.time(sapply(x, g))
[1]  8.850  0.140  9.283  0.000  0.000
> system.time(for (i in seq_along(x)) f(x[i]))
[1]  2.466  0.036  2.884  0.000  0.000
> system.time(for (i in seq_along(x)) g(x[i]))
[1]  3.548  0.045  4.165  0.000  0.000

but I find the results hard to interpret -- the overhead looks significant in the first case, but something strange (at least to my limited knowledge) is happening with sapply.

2. Do calls to .C or .Fortran carry a large overhead? If they don't, I would recode f in one of them.

Thanks,

Tamas
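A minimal sketch of the single-function rewrite considered in question 1 (assuming util and quickeval keep the interfaces described above; the name f1 is hypothetical, and only consf is folded in):

## sketch: the "single function" version, with consf's body substituted
f1 <- function(knext, k, ei) {
  cons <- rp*k + W + knext*A                  # formerly consf(knext, k)
  util(cons) + quickeval(knext, gridsecpp, Vkbarpp)
}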
I don't know if this would have an appreciable effect or not, but you could also check whether passing the free variables explicitly speeds things up, so that they do not have to be looked up in the enclosing environment each time.
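One way that suggestion might look (a sketch only, reusing the names from the original post): the free variables become formal arguments of f and are supplied once through optimize()'s ... argument.

## sketch: pass the free variables explicitly rather than relying on
## lookup in the enclosing environment on every evaluation
f2 <- function(knext, k, ei, A, W, rp, gridsecpp, Vkbarpp) {
  util(rp*k + W + knext*A) + quickeval(knext, gridsecpp, Vkbarpp)
}

## called as, e.g. (bounds elided as in the original post):
## optimize(f2, lower = ..., upper = ..., k = k,
##          A = A, W = W, rp = rp, gridsecpp = gridsecpp, Vkbarpp = Vkbarpp)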
On Sat, 18 Nov 2006, Tamas K Papp wrote:

> 1. Does calling a function carry significant overhead in R?

Not compared with all but the very simplest computations: the overhead is of the order of a few microseconds. If it were otherwise, R would not, to a large extent, be implemented as function wrappers around .Internal calls (rather than .Primitives).

> If so, I would rewrite the function into a single one. I tried to test this:
>
>> f <- function(x) 1+x
>> g <- function(x) f(x)
>> x <- rnorm(1e6)
>> system.time(sapply(x, f))
> [1] 11.315  0.157 11.735  0.000  0.000
>> system.time(sapply(x, g))
> [1]  8.850  0.140  9.283  0.000  0.000
>> system.time(for (i in seq_along(x)) f(x[i]))
> [1]  2.466  0.036  2.884  0.000  0.000
>> system.time(for (i in seq_along(x)) g(x[i]))
> [1]  3.548  0.045  4.165  0.000  0.000
>
> but I find the results hard to interpret -- the overhead looks significant
> in the first case, but something strange (at least to my limited
> knowledge) is happening with sapply.

Note that subsequent calls change quite a lot:

> system.time(sapply(x,f))
[1] 12.989  0.150 13.215  0.000  0.000
> system.time(sapply(x,f))
[1]  8.751  0.105  8.923  0.000  0.000
> system.time(sapply(x,f))
[1]  7.993  0.175  8.191  0.000  0.000
> system.time(sapply(x,f))
[1]  7.016  0.058  7.074  0.000  0.000
> system.time(sapply(x,f))
[1]  7.748  0.181  7.932  0.000  0.000
> system.time(sapply(x,g))
[1]  8.289  0.074  8.371  0.000  0.000

Several things are going on here, but most likely the most important is that the memory manager is being tuned. On my (64-bit) system:

> gc()
          used (Mb) gc trigger (Mb) max used (Mb)
Ncells  224202 12.0     407500 21.8   350000 18.7
Vcells 1115457  8.6    1519644 11.6  1419269 10.9
> for(i in 1:5) sapply(x,f)
> gc()
          used (Mb) gc trigger  (Mb) max used  (Mb)
Ncells  224206 12.0    2156470 115.2  2665684 142.4
Vcells 2115455 16.2    7060137  53.9  8164087  62.3

Note the factor-of-5 differences. It is better to profile here, so if I profile

> for (i in seq_along(x)) g(x[i])

I get

    self.time self.pct total.time total.pct
"f"      2.18     62.6       2.34      67.2
"g"      1.14     32.8       3.48     100.0
"+"      0.16      4.6       0.16       4.6

The extra call to g is taking about 1 microsecond (as I expected). Note that lazy evaluation means (I think) that f is being charged for the evaluation of the argument, not g.

> 2. Do calls to .C or .Fortran carry a large overhead? If they don't, I
> would recode f in one of them.

No (and calls to .Call carry less, as there is no duplication on call and return).
-- 
Brian D. Ripley,                  ripley at stats.ox.ac.uk
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford,             Tel: +44 1865 272861 (self)
1 South Parks Road,                    +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax: +44 1865 272595
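For completeness, a minimal sketch of the Rprof()/summaryRprof() workflow that produces self/total timings like those shown above (the output file name loop.out is arbitrary):

## sketch: profile the toy example from the thread
f <- function(x) 1 + x
g <- function(x) f(x)
x <- rnorm(1e6)

Rprof("loop.out")                     # start the sampling profiler
for (i in seq_along(x)) g(x[i])
Rprof(NULL)                           # stop profiling
summaryRprof("loop.out")$by.self      # self.time / self.pct per function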