On May 24, 2010, at 2:59 PM, Peter Holt wrote:
> Hi All,
>
> I created a .R file with source code that accesses functions from an R
> package (for example, fTrading).
>
> I then ran the application in two different configurations:
>
> 1. I started an R session, ran the application with
> source("my_application.R"), and measured the time the application took.
>
> 2. I started 2 R sessions on the same machine, executed the same
> source("my_application.R") command in each, and measured the time each
> run took.
>
> The times I measured for the applications in #2 were slower than the time
> I measured for the application in #1.
>
> The application was run on a 4-core machine running Linux.
>
> While the application ran, I used "mpstat" to look at the CPU usage. For
> #1, the CPU usage was 25%, and for #2 it was 50%.
>
> No other process was running on the machine.
>
> My question is, why would #2 be slower than #1?
>
Because you're running twice the load? Why shouldn't it be? Even on
multi-core machines there is overhead associated with running processes or
threads in parallel (they contend for the same resources, such as memory
bandwidth, caches and I/O), so #2 is always expected to be slower. However,
you didn't include any relevant details (how you measured the time, how much
slower it was, how you started the sessions, etc.), so the difference could
come from almost anywhere.
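For what it's worth, one way to get numbers you can compare directly is to
time the call itself in each session, e.g. (a rough sketch, using the
my_application.R script you describe):

    t <- system.time(source("my_application.R"))
    t["elapsed"]    # wall-clock seconds for this run
    t["user.self"]  # CPU seconds spent in this R process

If the CPU time stays roughly the same in #2 but the elapsed time grows,
the sessions are spending the extra time waiting on shared resources rather
than doing more computation.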
Cheers,
Simon