search for: summaryrprof

Displaying 20 results from an estimated 82 matches for "summaryrprof".

2008 Sep 16
0
Help with Memory Profiling in R with summaryRprof
Hi All, I am a new user of R currently trying to profile memory usage of some R code with summaryRprof in R version 2.7.2 on Windows. If I use the memory = "both" option in summaryRprof(), I have no problems viewing the profiling of both the time and memory usage. However, if I try to use memory = "stats", I get the following error: Error in tapply(1:4369L, list(index = c("...
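
A minimal sketch of the workflow described above (the profiled code is an arbitrary stand-in): memory.profiling = TRUE makes Rprof() record memory use alongside the timing samples, memory = "both" adds memory columns to the usual tables, and memory = "stats" is the summary that triggered the reported tapply error.

    ## Minimal sketch, assuming any R expression as the workload.
    Rprof("mem.out", memory.profiling = TRUE)      # sample memory as well as time
    x <- replicate(200, sum(sort(rnorm(1e4))))     # arbitrary stand-in workload
    Rprof(NULL)                                    # stop profiling

    summaryRprof("mem.out", memory = "both")       # timing tables plus memory column
    summaryRprof("mem.out", memory = "stats")      # memory-only summary (the failing call)
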
2008 Sep 16
0
Help with Profiling Memory Use in R with summaryRprof
Hi All, I am a new user of R currently trying to profile memory usage of some R code with summaryRprof in R version 2.7.2 on Windows. If I use the memory = "both" option in summaryRprof(), I have no problems viewing the profiling of both the time and memory usage. However, if I try to use memory = "stats", I get the following error: Error in tapply(1:4369L, list(index = c("...
2007 Mar 31
1
Problem with argument "append" in "Rprof"
...le(boot) Rprof(NULL) } The problem is that the file Rprof.out contains the header information more than once: $ grep "sample.interval=" Rprof.out memory profiling: sample.interval=20000 memory profiling: sample.interval=20000 memory profiling: sample.interval=20000 and neither `summaryRprof` nor `R CMD Rprof` deals with it > idx <- grep( "sample", rownames( smp <- summaryRprof()[[1]] ) ); smp[idx, ] self.time self.pct total.time total.pct sample.interval=20000 0 0 0.04 0.1 `sample.interval=20000` is incorre...
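
For readers hitting the same thing, a sketch of the append workflow described above: every call to Rprof() writes its own header line, so appending runs leaves several "sample.interval=..." lines in one file, which summaryRprof() then misreads as function names. File name and workloads below are placeholders.

    ## Sketch of appended profiling runs.
    Rprof("Rprof.out", memory.profiling = TRUE)                  # writes a header
    replicate(50, sum(rnorm(1e5)))
    Rprof(NULL)

    Rprof("Rprof.out", append = TRUE, memory.profiling = TRUE)   # appends another header
    replicate(50, sum(rnorm(1e5)))
    Rprof(NULL)

    summaryRprof("Rprof.out")    # the extra header lines show up as bogus rows
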
2004 Oct 19
0
Question on Rprof(); was: Re: sapply and loop
...", filename) > >just before > > invisible(.Internal(Rprof(filename, append, interval))) > > > >Uwe Ligges > > > >>Rprof("boot.out") >> storm.boot <- boot(rs, storm.bf, R = 4999) # pretty slow >> Rprof(NULL) >> >>summaryRprof() >>Error in summaryRprof() : no events were recorded >> >>summaryRprof("boot.out") >>Error in summaryRprof("boot.out") : no events were recorded >> >> Rprof() >> storm.boot <- boot(rs, storm.bf, R = 4999) # pretty slow >>...
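
Independent of how that thread was resolved, two things are worth ruling out when "no events were recorded" appears: summaryRprof() reads "Rprof.out" unless told otherwise, so the filename given to Rprof() must be repeated, and code that finishes within one sampling interval (0.02 s by default) produces no samples at all. A sketch, with a stand-in for the slow boot() call:

    Rprof("boot.out")                              # samples go to boot.out
    for (i in 1:20) qr(matrix(rnorm(1e4), 100))    # stand-in workload
    Rprof(NULL)

    summaryRprof("boot.out")   # must name the same file; summaryRprof() with no
                               # argument would look for Rprof.out instead
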
2004 Oct 16
7
sapply and loop
Dear all, I am running a simulation 200 times. Each time, I generate a matrix and apply a function to it to get a 6-dimensional vector as my result. As the loop should be slow, I generate the 200 matrices first and save them into a list named ma, then I call zz <- sapply(ma, myfunction). To my surprise, it takes almost the same time to get my results as when I use a loop directly
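
A sketch of the comparison being described, with a stand-in summary function: when the per-matrix work dominates, sapply() over the list and an explicit loop over a preallocated result take roughly the same time, so sapply() mainly buys conciseness here.

    ## Sketch with a stand-in for myfunction(); both forms do the same work.
    set.seed(1)
    ma <- replicate(200, matrix(rnorm(600), 100, 6), simplify = FALSE)
    myfunction <- function(m) colMeans(m)            # placeholder summary

    system.time(zz1 <- sapply(ma, myfunction))       # 6 x 200 result

    system.time({
      zz2 <- matrix(NA_real_, 6, length(ma))         # preallocated loop version
      for (i in seq_along(ma)) zz2[, i] <- myfunction(ma[[i]])
    })

    all.equal(zz1, zz2)
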
2004 Jul 16
3
interpreting profiling output
I have some trouble interpreting the output from profiling. I have read the help pages for Rprof and summaryRprof and consulted the Writing R Extensions manual, but I still have problems understanding the output. Basically the output consists of self.time and total.time. My understanding is that total.time is the time spent in a given function including any subcalls or child functions, or whatever the technical term...
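
The distinction is easiest to see with a toy wrapper: self.time counts only samples taken while a function's own code is on top of the stack, while total.time also counts samples taken inside anything it calls. A sketch (function names are made up):

    ## worker() does all the work; wrapper() only calls it.
    worker  <- function(n) { s <- 0; for (i in 1:n) s <- s + sum(rnorm(1000)); s }
    wrapper <- function(n) worker(n)

    Rprof("selftotal.out")
    wrapper(2000)
    Rprof(NULL)
    summaryRprof("selftotal.out")$by.total
    ## wrapper: large total.time, near-zero self.time; worker: both large
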
2011 Feb 11
1
Help optimizing EMD::extrema()
...: https://gist.github.com/822691 Any suggestions/help would be greatly appreciated. #load the EMD library for the default version of extrema library(EMD) #some data to process values = rnorm(1e4) #profile the default version of extrema Rprof(tmp <- tempfile()) temp = extrema(values) Rprof() summaryRprof(tmp) #1.2s total with most time spent doing rbind unlink(tmp) #load an rbind-free version of extrema source('extrema_c.R') Rprof(tmp <- tempfile()) temp = extrema_c(values) Rprof() summaryRprof(tmp) #much faster! .5s total unlink(tmp) #still, it encounters slowdowns with lots of data va...
2010 Sep 23
0
R CMD Rprof --help suggestion
...UM set NUM as minimum % to print for 'by total' --min%self=NUM set NUM as minimum % to print for 'by self' which I think is more in line with e.g. R --help. PS 1: ?Rprof states that R CMD Rprof is a Perl script but that no longer seems to be the case. The remark in ?summaryRprof about it being slower than R CMD Rprof for large files no longer applies? PS 2: summaryRprof() currently does not appear to support these min % options. I find them quite useful so I would like to request them to be added. By looking at how .Rprof() post-processes summaryRprof(), a quick hack I us...
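
Since summaryRprof() returns ordinary data frames, one quick post-hoc approximation of the requested options is to subset the tables by a minimum percentage; a sketch (the threshold value is arbitrary):

    ## Sketch of filtering summaryRprof() output by a minimum percentage.
    prof <- summaryRprof("Rprof.out")
    min_pct <- 2                                          # arbitrary threshold
    prof$by.self[prof$by.self$self.pct >= min_pct, ]      # like --min%self
    prof$by.total[prof$by.total$total.pct >= min_pct, ]   # like --min%total
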
2017 May 18
1
Interpreting R memory profiling statistics from Rprof() and gc()
Sorry, this might be a really basic question, but I'm trying to interpret the results from memory profiling, and I have a few questions (marked by *Q#*). From the summaryRprof() documentation, it seems that the four columns of statistics that are reported when setting memory.profiling=TRUE are - vector memory in small blocks on the R heap - vector memory in large blocks (from malloc) - memory in nodes on the R heap - number of calls to the internal function duplicate in...
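
For cross-checking those columns, gc() reports the two pools they refer to: Ncells are the fixed-size nodes (cons cells) and Vcells the vector heap holding the small and large vector blocks; the duplicate-call count has no gc() counterpart. A small sketch:

    ## Sketch: compare the profiler's memory columns against gc()'s summary.
    gc(reset = TRUE)        # reset the "max used" high-water marks
    x <- lapply(1:200, function(i) rnorm(1e4))
    gc()                    # Ncells = nodes, Vcells = vector heap, with Mb columns
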
2013 Apr 24
0
help with execution of 'embarrassingly parallel' problem using foreach, doParallel on a windows system
...ind~res, data=wk.grd))$sigma, silent=TRUE) if(ind.out < ind.start) { item.out[,1] <- tmp1 item.out[,2] <- tmp2 item.out[,3] <- ind.out } } return(item.out) } fitting.c<-cmpfun(fitting) #compiled Rprof('myFunction.out', memory.profiling=T) y <- fitting.c() Rprof(NULL) summaryRprof('myFunction.out', memory='both') system.time(fitting.c()) Rprof('myFunction.out', memory.profiling=T) y <- foreach(icount(length(two))) %dopar% fitting.c() Rprof(NULL) summaryRprof('myFunction.out', memory='both') system.time(foreach(icount(length(two)))...
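
The excerpt does not show how the parallel backend was registered; on Windows that step needs a socket cluster, since forking is unavailable. A sketch of the usual doParallel setup, with a stand-in task instead of fitting.c():

    ## Sketch of a doParallel setup on Windows; the task body is a placeholder.
    library(foreach)
    library(doParallel)

    cl <- makeCluster(2)            # PSOCK cluster (Windows cannot fork)
    registerDoParallel(cl)

    res <- foreach(i = 1:8, .combine = rbind) %dopar% {
      c(i, mean(rnorm(1e5)))        # stand-in for the real per-task work
    }

    stopCluster(cl)
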
2009 Nov 10
1
standardGeneric seems slow; any way to get around it?
Hi, I'm running some routines with standard matrix operations like solve() and diag(). When I do a profile, the lead item under total time is standardGeneric(). Furthermore, solve() and diag() have much greater total time than self time. ??? I assume there is some time-consuming decision going on in the usual functions; is there any way to avoid that and go straight to the calculations? Thanks
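
Whether dispatch is really the cost can be checked with a toy S4 class (all names below are invented): calling through a generic goes via standardGeneric() on every call, while an ordinary function with the same body does not.

    ## Toy sketch of S4 dispatch overhead; class, generic and object are made up.
    library(methods)
    setClass("Num", slots = c(x = "numeric"))
    setGeneric("twice", function(obj) standardGeneric("twice"))
    setMethod("twice", "Num", function(obj) obj@x * 2)

    twice_plain <- function(obj) obj@x * 2          # same body, no dispatch
    obj <- new("Num", x = rnorm(10))

    system.time(for (i in 1:1e5) twice(obj))        # goes through standardGeneric()
    system.time(for (i in 1:1e5) twice_plain(obj))  # skips method dispatch
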
2007 Aug 23
2
read big text file into R
...y questions are: 1) Is this a memory issue? 2) How do I get around this? I can't just sit around for 15 mins. Would writing a C function help? Thanks! Here is the profiling I did: > Rprof() > dd = read.delim(file,skip=9,sep="\t",as.is= T,nrows=10000) > Rprof(NULL) > summaryRprof() $by.self self.time self.pct total.time total.pct "scan" 3.56 85.2 3.56 85.2 "type.convert" 0.48 11.5 0.48 11.5 "read.table" 0.08 1.9 4.18 100.0 "make.names" 0.02...
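
Given that scan() and type.convert() dominate the profile, the usual mitigation is to tell read.delim() the column types up front so it skips the type guessing; the path and column layout below are assumptions, not taken from the post.

    ## Sketch; file path and colClasses are assumed, adjust to the real data.
    file <- "bigdata.txt"                                        # hypothetical path
    dd <- read.delim(file, skip = 9, sep = "\t", as.is = TRUE, nrows = 10000,
                     colClasses = c("character", rep("numeric", 11)),  # assumed layout
                     comment.char = "")                          # skip comment scanning
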
2010 Jan 05
1
Naming functions for the purpose of profiling
...own" (and of course, even the text of a function can be constructed dynamically within a program). So I guess that is not a viable approach. Thanks in advance for any help on this, and any pointers on the best references for advanced profiling issues would be appreciated as well (I know of summaryRprof of course, but it can be difficult to get the full picture from the summaryRprof output if the calling structure is complicated). Best, Magnus
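
The workaround usually suggested for this is simply to bind the function to a name before it is called, since the profiler typically labels calls whose function is not a plain name as "<Anonymous>"; a sketch (the body is a placeholder, only the naming matters):

    res1 <- res2 <- numeric(200)

    Rprof("anon.out")
    for (i in 1:200) res1[i] <- (function(k) sum(sort(rnorm(1e4))))(i)
    Rprof(NULL)
    summaryRprof("anon.out")$by.total      # time charged to "<Anonymous>"

    score_one <- function(k) sum(sort(rnorm(1e4)))     # same body, now named
    Rprof("named.out")
    for (i in 1:200) res2[i] <- score_one(i)
    Rprof(NULL)
    summaryRprof("named.out")$by.total     # "score_one" gets its own row
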
2010 Nov 19
1
memory profiling
...g the output. I then rebuild R: % make Then I fire up R and run a script, using Rprof with the memory-profiling switch set to TRUE: Rprof("output", memory.profiling=TRUE); # a bunch of R code Rprof(NULL); When I examine the output, however, using either R CMD Rprof from the shell, or summaryRprof from within R, the output I see is identical to the output I got when I ran R BEFORE I recompiled with memory profiling enabled. Anyone see something that I'm missing? Thanks, Patrick
2009 Mar 03
1
profiler and loops
...m/execution-time-of-.packages-td22304833.html but with a different focus) I am often confused by the result of the profiler, when a loop is involved. Consider these two scripts: script1: Rprof( ) x <- numeric( ) for( i in 1:10000){ x <- c( x, rnorm(10) ) } Rprof( NULL ) print( summaryRprof( ) ) script2: Rprof( ) ffor <- function(){ x <- numeric( ) for( i in 1:10000){ x <- c( x, rnorm(10) ) } } ffor() Rprof( NULL ) print( summaryRprof( ) ) []$ time Rscript --vanilla script1.R $by.self self.time self.pct total.time total.pct "rnorm" 0.2...
2011 Feb 28
0
Fwd: Re: speed up process
...q.yvar[i] plot(mydata1[[k]]~mydata1[[ind.xvar]], type="p", xlab=names(mydata1)[ind.xvar], ylab=names(mydata1)[k]) for (j in seq_along(mydata_list)){ foo_reg(dat=mydata_list[[j]], xvar=ind.xvar, yvar=k, mycol=j, pos=mypos[j], name.dat=names(mydata_list)[j]) } } Rprof(NULL) summaryRprof() $by.self self.time self.pct total.time total.pct pt 0.04 18.18 0.04 18.18 plot 0.02 9.09 0.08 36.36 sc 0.02 9.09 0.08 36.36 mean 0.02 9.09 0.04 18.18 | 0.02 9.09 0.02...
2012 Jun 09
3
More simple implementation is slow.
Hi all. I'm developing a function which must return a square matrix. Here is the code: http://pastebin.com/THzEW9N7 These functions implement an analog of two nested for loops. The first variant creates the resulting matrix by columns, cbind()-ing them one by one. The second variant creates a matrix with two columns whose rows contain all possible combinations of i and j, and calls apply
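
Without the pastebin code, only a general sketch is possible: growing a matrix with cbind() inside a loop copies it on every iteration, so preallocating the square result (or building it in one shot with outer()) is the usual fix. The cell function below is a placeholder.

    ## General sketch, not the pastebin code; f() stands in for the real cell function.
    n <- 1000
    f <- function(i, j) (i - j)^2

    grow <- function() {                        # cbind one column at a time
      res <- NULL
      for (j in 1:n) res <- cbind(res, f(1:n, j))
      res
    }

    prealloc <- function() {                    # fill a preallocated matrix
      res <- matrix(NA_real_, n, n)
      for (j in 1:n) res[, j] <- f(1:n, j)
      res
    }

    system.time(a <- grow())
    system.time(b <- prealloc())
    all.equal(a, b)
    all.equal(b, outer(1:n, 1:n, f))            # vectorised one-liner alternative
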
2013 Apr 05
2
line profiling
...<- function(x) { out <- c() for (i in x) { out <- c(out, 2*i) # line 4 } return(out) } Then this is how I source the file and run a profile: source("test.R", keep.source = TRUE) Rprof("test.profile", line.profiling=TRUE) y <- double(1:20000) Rprof(NULL) summaryRprof("test.profile", lines = "both") The relevant part of the output is: $by.total total.time total.pct self.time self.pct "double" 0.98 100.00 0.00 0.00 "c" 0.92 93.88 0.92 93.88 test.R#4 0.06 6.12...
2009 Oct 19
2
how to get rid of 2 for-loops and optimize runtime
...ives a good idea of what I have and what I want. The code I have written (see below) does what I want, but it is very, very slow: it takes 129s for 400 rows, and the time quadruples each time I double the number of rows. I'm new to programming in R, but I found that you can use Rprof and summaryRprof to analyse your code (output below). However, I don't really understand the output. I guess I need code that runs in linear time and need to get rid of the 2 for loops. Can someone help me, or tell me what else I can do to optimize my runtime? I use R 2.9.2 on Windows XP Service Pack 3. Thank you in a...