Hi Frank,
I don't think it is possible to state a general rule about which will
be faster. For example, this
system.time({
    for (i in 1:10000) {
        x <- matrix(rnorm(10), ncol = 10)
        y <- mean(x)
        #rm(x)
        z <- matrix(runif(100), ncol = 100)
        #rm(z)
    }
})
gets a lot slower if I uncomment the "rm()" lines, but this
system.time({
    for (i in 1:5) {
        x <- matrix(rnorm(10000000), ncol = 10)
        y <- mean(x)
        rm(x)
        z <- matrix(runif(10000000), ncol = 100)
        rm(z)
    }
})
is slightly faster than it would be without the rm() lines. I think
you'll have to run a smaller version of the simulation both ways and
see which is faster.
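If it helps, here is a rough sketch of how I would set up that comparison.
The functions sim_step, run_keep, and run_rm are just placeholder names
standing in for your actual simulation, and the object size n and the number
of repetitions are made up -- you would substitute values that roughly match
your real problem.

## Minimal sketch of timing the loop with and without rm()
sim_step <- function(n) {
    x <- matrix(rnorm(n), ncol = 10)
    mean(x)
}

run_keep <- function(reps, n) {
    for (i in 1:reps) {
        res <- sim_step(n)    # intermediate objects simply go out of scope
    }
}

run_rm <- function(reps, n) {
    for (i in 1:reps) {
        res <- sim_step(n)
        rm(res)               # explicitly remove the intermediate result
    }
}

system.time(run_keep(100, 1e6))
system.time(run_rm(100, 1e6))

Whichever version comes out faster on a scaled-down problem is the one I
would use for the full simulation.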
Best,
Ista
On Tue, Jun 3, 2014 at 10:05 AM, Frank van Berkum
<frankieboytje at hotmail.com> wrote:
> Dear R-users,
>
> I'm working on a project in which many simulations have to be performed
> within functions. The simulations are quite time consuming. I thought that in
> general an empty memory is better for speed performance than a full memory.
>
> If I call a function which performs simulations within the function, then
> the memory will temporarily increase (while the function is executed and
> objects are created within the function), but as soon as the function is
> finished, the temporary objects are flushed. It seems as if it might be
> beneficial for speed performance to clear objects from memory within the
> function if they are no longer needed in the remainder of the function.
> Does anyone know whether this is actually the case?
>
> Thanks in advance!
>
> Frank