Displaying 20 results from an estimated 800 matches similar to: "Question on Rmpi looping"
2008 Jul 10
0
Rmpi unknown input format error
I have just installed Rmpi on a SuSE 9.1 Linux cluster with
openmpi-1.0.1. I am trying the example included below from the tutorial
website. However, I keep getting the following error:
> # Load the R MPI package if it is not already loaded.
> if (!is.loaded("mpi_initialize")) {
+ library("Rmpi")
+ }
>
> # Spawn as many slaves as possible
>
2008 Nov 07
1
Rmpi task-pull
Hi, I'm testing the efficiency of the Rmpi package for parallelization
on a cluster.
I've found and tried the task-pull programming method, but even though it is
described as the best method, it seems to cause a deadlock. Could anyone help
me with using this method?
here is the code I've found and tried:
# Initialize MPI
library("Rmpi")
# Notice we just say "give us
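For reference, a minimal task-pull skeleton along these lines (the tag values, task sizes, and slave count below are illustrative, not taken from the poster's code) might look like:
library(Rmpi)
mpi.spawn.Rslaves(nslaves = 2)

slave.fn <- function() {
  # tags used here: 1 = request/task, 2 = result, 3 = no more work
  repeat {
    mpi.send.Robj(0, dest = 0, tag = 1)                      # ask the master for work
    task <- mpi.recv.Robj(mpi.any.source(), mpi.any.tag())
    tag  <- mpi.get.sourcetag()[2]
    if (tag == 3) break                                      # master says stop
    mpi.send.Robj(sum(task), dest = 0, tag = 2)              # send back a result
  }
}
mpi.bcast.Robj2slave(slave.fn)
mpi.remote.exec(slave.fn(), ret = FALSE)

tasks <- split(1:100, rep(1:10, each = 10)); results <- list(); closed <- 0
while (closed < mpi.comm.size() - 1) {
  msg <- mpi.recv.Robj(mpi.any.source(), mpi.any.tag())
  st  <- mpi.get.sourcetag(); src <- st[1]; tag <- st[2]
  if (tag == 1) {                                            # a slave wants work
    if (length(tasks) > 0) {
      mpi.send.Robj(tasks[[1]], dest = src, tag = 1); tasks[[1]] <- NULL
    } else {
      mpi.send.Robj(NULL, dest = src, tag = 3); closed <- closed + 1
    }
  } else if (tag == 2) results <- c(results, list(msg))      # collect a result
}
mpi.close.Rslaves()
mpi.quit()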
2011 Feb 01
1
Rmpi; sample code not running, the slaves won't execute commands
Hi All,
I'm trying to parallelize some code using Rmpi and I've started with a
sample 'hello world' program that's available at
http://math.acadiau.ca/ACMMaC/Rmpi/sample.html. The code is as
follows:
# Load the R MPI package if it is not already loaded.
if (!is.loaded("mpi_initialize")) {
library("Rmpi")
}
# Spawn as many slaves as possible
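For context, the rest of that sample continues roughly as below (paraphrased from the ACMMaC tutorial, so treat it as a sketch rather than the exact file):
# Spawn as many slaves as possible
mpi.spawn.Rslaves()

# Each slave reports its rank and the size of the communicator
mpi.remote.exec(paste("I am", mpi.comm.rank(), "of", mpi.comm.size()))

# Tell all slaves to close down, then exit the master's MPI environment
mpi.close.Rslaves()
mpi.quit()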
2010 Jul 12
1
How to use mpi.allreduce() in Rmpi?
Hi everybody!
I have the following code, which reduces the variable a across two
slaves using the Rmpi package.
library(Rmpi)
mpi.spawn.Rslaves(nslaves=2)
reduc<-function(){
a<-mpi.comm.rank()+2
mpi.reduce(a,type=2, op="prod")
return(paste("a=",a))
}
mpi.bcast.Robj2slave(reduc)
mpi.remote.exec(reduc())
cat("Product: ")
2010 Oct 04
0
Syntax for Rmpi cf multicore
I'm aiming to compare the workings of Rmpi and multicore on a dual-processor
quad-core machine with 64-bit R 2.11.1 on Kubuntu 10.4.
It's impossible for me to get a small reproducible code segment to
show what I mean, but if I show what works for mclapply, I hope someone
can show me the equivalent way with mpi.apply.
The function lr.gbm has variables trees,
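As a rough sketch of the correspondence (lr.gbm, its arguments, and the vector trees.vec below are placeholders standing in for the poster's objects):
# multicore version (package multicore; parallel::mclapply in newer R)
library(multicore)
res.mc <- mclapply(trees.vec, function(t) lr.gbm(trees = t), mc.cores = 8)

# Rmpi version: spawn slaves, ship the function (and any data it needs)
# to them, then use the load-balanced apply so the list length does not
# have to match the number of slaves
library(Rmpi)
mpi.spawn.Rslaves(nslaves = 8)
mpi.bcast.Robj2slave(lr.gbm)
res.mpi <- mpi.applyLB(trees.vec, function(t) lr.gbm(trees = t))
mpi.close.Rslaves()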
2009 Mar 25
0
Rmpi - send/receive multiple objects to slaves
I've written a function that uses Rmpi to perform a calculation in parallel. It
works fine, but I'm trying to improve efficiency in terms of memory usage and
the amount of data being passed back and forth between master and slaves.
Calculations are performed on a symmetrical matrix in order to zero-out some of
the cells.
In the parallel
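One common way to cut down on the number of messages is to bundle several objects into a single list and send that in one call; a minimal sketch (object names and the tag are illustrative):
library(Rmpi)
mpi.spawn.Rslaves(nslaves = 2)

# have each slave wait for one message from the master and keep it as 'msg'
mpi.bcast.cmd(msg <- mpi.recv.Robj(source = 0, tag = 1))

# master: pack a matrix block and its row indices into one list, so only
# a single send/receive is needed per slave
for (s in 1:2) {
  payload <- list(block = matrix(runif(9), 3), rows = 4:6)
  mpi.send.Robj(payload, dest = s, tag = 1)
}

# each slave can now unpack the pieces by name: msg$block, msg$rows
mpi.remote.exec(dim(msg$block))
mpi.close.Rslaves()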
2008 Apr 08
1
Rmpi 0.5-6 : error spawning process
Hi,
I am using a cluster with LAM 7.1.3/MPI 2 and R 2.6.0.
Rmpi version 0.5-5 is working very well.
Now I have tested "Rmpi 0.5-6". While spawning the Rslaves I get an
error: MPI_Error_string: error spawning process
> sessionInfo()
R version 2.6.0 (2007-10-03)
x86_64-unknown-linux-gnu
locale:
2008 Jul 16
1
Problem with mpi.close.Rslaves()
I am running R 2.7.0 on a SuSE 9.1 Linux cluster with a job scheduler
dispatching jobs and openmpi-1.0.1. I have tried running one of the
examples at http://ace.acadiau.ca/math/ACMMaC/Rmpi/examples.html in Rmpi
and it seems to work, except that mpi.close.Rslaves() hangs. The
slaves are closed, but the master doesn't finish its script. Below is
the example script and the call to R. The job is
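For reference, the usual end-of-script idiom in the Rmpi examples (not a guaranteed fix for the hang, just the sequence they use) is:
# ... parallel work ...
mpi.close.Rslaves()     # shut the slaves down
mpi.quit(save = "no")   # finalize MPI and quit R on the master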
2012 Apr 04
1
npRmpi trouble - mpi.comm.spawn causes segfault
Dear all,
I have a large dataset of randomly generated weighted samples for which I
wish to compute a kernel density estimate.
I have used the "np" package successfully for smaller datasets; however,
the larger ones take too long when
using the cross-validation options for bandwidth selection ("cv.ls" or
"cv.ml"). Of course, they are much quicker with
2008 Oct 22
1
Problem about spawn nodes with Rmpi
Hi all,
now I'm testing R in a "virtual cluster" built with VirtualBox. It has
3 nodes running CentOS 5 and OpenMPI 1.2.8, and the main node
(called "server") exports /home to the other nodes.
I have installed R and OpenMPI in /home, and in fact it seems to work OK. After editing
the openmpi-default-hostfile and running "mpirun -np 3 hostname" I can see the
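Once mpirun sees all three hosts, the R-side check is usually something along these lines (a sketch; the slave count depends on the hostfile):
library(Rmpi)
mpi.universe.size()                          # should reflect the slots in the hostfile
mpi.spawn.Rslaves(nslaves = 3)               # one slave per node in this 3-node setup
mpi.remote.exec(Sys.info()[["nodename"]])    # confirm which node each slave landed on
mpi.close.Rslaves()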
2008 Sep 27
1
Problem with R on dual core under Linux - can not execute mpi.spawn.Rslaves()
Hi
I am trying to utilize my dual-core processor (and later a
high-performance cluster (HPC)) by using the Rmpi, snow, snowfall,
... packages, but I am struggling at the very beginning, i.e. initialising
the "cluster" on my dual-core computer. Whenever I try to initialize
it (via sfInit(parallel=TRUE, cpus=2) or mpi.spawn.Rslaves(nslaves=2)),
I get an error message:
>
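For comparison, a minimal snowfall session looks like this once the underlying layer works (a sketch only; it does not by itself explain the error above):
library(snowfall)
sfInit(parallel = TRUE, cpus = 2)     # add type = "MPI" to force the Rmpi transport
sfLapply(1:4, function(i) i^2)        # quick check that both workers respond
sfStop()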
2008 Mar 20
1
Rmpi and C Code, where to get the communicator
Hello,
I am trying to write parts of my code in C to speed up the for-loops, but
I want to do the basic operations in R (e.g. starting the cluster). My R code looks
something like this:
library(Rmpi)
mpi.spawn.Rslaves()
mpi.remote.exec(....)
dyn.load("test.so")
erg <- .Call("test", ....)
....
mpi.close.Rslaves()
mpi.quit()
And my C function looks something like this:
#include
2011 Aug 17
1
[statEt] Rmpi problem in Eclipse statEt
Hi,
I try use Rmpi package to my compute. In my work I'm using eclipse version
3.6.2 and statEt version 0.10.0 (launch Rterm or RJ). Actually I observed
strange behavior, when I try loading Rmpi directly I don't have any problem
i.e.:
library("Rmpi")
mpi.spawn.Rslaves()
8 slaves are spawned successfully. 0 failed.
master (rank 0, comm 1) of size 9 is running on: marcin-HP
2007 Sep 03
1
Snow on Windows Cluster
Hello,
the package snow is not working on a Windows cluster with MPICH2 and
Rmpi. There is an error in makeCluster:
launch failed: CreateProcess(/usr/bin/env
"RPROG="C:\Programme\R\R-2.5.1\bin\R" "OUT=/dev/null" "R_LIBS="
C:/Programme/R/R-2.5.1/library/snow/RMPInode.sh) on 'cl1' failed, error
3 - The system cannot find the specified path ("Das System kann den
angegebenen Pfad nicht finden").
I
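For reference, the usual snow-over-Rmpi idiom is shown below (a sketch only; it does not address the RMPInode.sh path problem in the error above, which is specific to launching on Windows):
library(Rmpi)
library(snow)
cl <- makeCluster(4, type = "MPI")               # spawns 4 Rmpi slaves
clusterCall(cl, function() Sys.info()[["nodename"]])
stopCluster(cl)
mpi.quit()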
2012 Jul 05
1
trouble installing Rmpi on a debian machine
Dear R People:
I'm having trouble installing Rmpi on a debian machine.
Here is my output:
bccd@node000:~$ /bccd/home/bccd
bccd@node000:~$
bccd@node000:~$ export RMPI_TYPE=OPENMPI
bccd@node000:~$ R CMD INSTALL Rmpi_0.5-9.tar.gz
* installing to library '/bccd/home/bccd/R/x86_64-pc-linux-gnu-library/2.15'
* installing *source* package 'Rmpi' ...
checking for gcc...
2007 Nov 28
0
Rmpi : openmpi and mpi.spawn.Rslaves
Hello,
I'm using R on a 10-blade, dual quad-core Rocks cluster, and I'm trying to use Rmpi and snow. I basically wondered whether at the moment I ought to install Rmpi against another form of MPI (not Open MPI), and whether anyone could pass on any experience.
I'm mainly worried about (a) the R server taking up 100% CPU time (I think this is a known issue with Rmpi and Open MPI) and (b)
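Regarding (a), later Rmpi versions expose nonblock/sleep arguments on mpi.spawn.Rslaves that are meant to reduce the busy-waiting; a sketch, assuming a version of Rmpi that has these arguments:
library(Rmpi)
# nonblock = TRUE makes idle slaves poll with a pause instead of spinning;
# sleep sets the pause length in seconds
mpi.spawn.Rslaves(nslaves = 8, nonblock = TRUE, sleep = 0.1)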
2013 Nov 06
0
MPICH2 Rmpi and doSNOW
Hi
I have managed to install MPICH2 and Rmpi on my Windows 7 machine. I can
also run the following code
> library(Rmpi)
> mpi.spawn.Rslaves()
4 slaves are spawned successfully. 0 failed.
master (rank 0, comm 1) of size 5 is running on: MyMaster
slave1 (rank 1, comm 1) of size 5 is running on: MyMaster
slave2 (rank 2, comm 1) of size 5 is running on: MyMaster
slave3
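Building on that, the usual way to put doSNOW on top of those slaves is roughly the following (a minimal sketch):
library(Rmpi)
library(doSNOW)                       # also attaches snow and foreach
cl <- makeCluster(4, type = "MPI")    # snow cluster over the Rmpi slaves
registerDoSNOW(cl)                    # make it the foreach backend
res <- foreach(i = 1:8, .combine = c) %dopar% i^2
stopCluster(cl)
mpi.quit()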
2007 Dec 20
2
Multicore computation in Windows network: How to set up Rmpi
R-users,
My question is related to earlier posts about the benefits of quad-core over
dual-core computers; I am trying to set up a cluster of Windows XP
computers so that eventually I could make use of 10-20 CPUs, but to
learn how to do this I am playing around with two laptops.
I thought that the package snow would come in handy in this situation, but
to use snow I would probably need to install
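If the goal is just to get snow working between the two laptops first, the socket transport is often the simplest starting point before setting up MPI; a sketch with hypothetical host names:
library(snow)
# "laptop1" and "laptop2" are placeholders for the machines' real names;
# both need R and snow installed and must be reachable over the network
cl <- makeCluster(c("laptop1", "laptop2"), type = "SOCK")
clusterCall(cl, function() Sys.info()[["nodename"]])
stopCluster(cl)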