Displaying 20 results from an estimated 10000 matches similar to: "multi process support in R"
2012 Feb 07
1
using mclapply (multi core apply) to do matrix multiplication
Dear all,
I am trying to multiply three different matrices, each of size 16384 x 16384; the normal %*% multiplication operator has not finished after a full day. As I am running a system with many cores (and it seems that R is using only one of them), I would like to quickly write a brief function that converts the typical for loops of a matrix multiplication to a set of lapply calls (mclapply
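A minimal sketch of the idea (an illustration, not the poster's code): split the left matrix into row blocks, multiply each block on its own core with parallel::mclapply, and rbind the pieces. Whether this actually beats a single %*% call depends on the BLAS in use.
library(parallel)
n <- 512                                 # small stand-in for 16384
A <- matrix(rnorm(n * n), n, n)
B <- matrix(rnorm(n * n), n, n)
blocks <- splitIndices(n, getOption("mc.cores", 2L))   # row indices per worker
parts <- mclapply(blocks, function(idx) A[idx, , drop = FALSE] %*% B)
C <- do.call(rbind, parts)
all.equal(C, A %*% B)                    # should be TRUE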
2011 Aug 22
3
Ignoring loadNamespace errors when loading a file
On a Unix machine I ran caret::rfe using the multicore package, and I
saved the resulting object using save(lm2, file = "lm2.RData").
[Reproducible example below.]
When I try to load("lm2.RData") on my Windows laptop, I get
Error in loadNamespace(name) : there is no package called 'multicore'
I completely understand the error and I would like to ignore it and
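One way to keep a script running past such a failure (a sketch on my part, assuming that skipping the object is acceptable; it does not recover the saved fit if load() aborts midway):
loaded <- tryCatch({
  load("lm2.RData")        # restores the saved objects into the workspace
  TRUE
}, error = function(e) {
  message("Could not load lm2.RData: ", conditionMessage(e))
  FALSE
})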
2011 Oct 10
5
multicore by(), like mclapply?
Dear R experts---Is there a multicore equivalent of by(), just like
mclapply() is the multicore equivalent of lapply()?
if not, is there a fast way to convert a data.table into a list based
on a column that lapply and mclapply can consume?
advice appreciated...as always.
regards,
/iaw
----
Ivo Welch (ivo.welch at gmail.com)
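One common pattern (my guess at what is being asked for, not a quoted answer from the thread): split() the data by the grouping column to get a list, then hand that list to mclapply.
library(parallel)
df <- data.frame(g = rep(letters[1:4], each = 25), x = rnorm(100))
pieces <- split(df, df$g)                       # list with one data.frame per group
res <- mclapply(pieces, function(d) mean(d$x))  # multicore analogue of by()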
2011 Jul 20
4
R on Multicore for Linux
Hi all,
I have R installed on a box with 16 cores running Red Hat Linux. I am handling a
huge dataset (the dataset will be 5 GB in size).
Let's assume that my data is in the form of structured (multiple) logs. I
access the data by using all.files(). Since by default the basic version of R
utilizes a single core, the processing of my analysis code is taking too much
time. I got to
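A hedged sketch of one way to use those cores, assuming the logs can be processed file by file; the path and the process_log() helper here are hypothetical placeholders.
library(parallel)
files <- list.files("/path/to/logs", full.names = TRUE)
process_log <- function(f) nrow(read.table(f))      # placeholder per-file work
results <- mclapply(files, process_log, mc.cores = 16)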
2010 Sep 14
2
Multiple CPU HowTo in Linux?
Hello all,
I upgraded my R workstation, and to my dismay, only one core appears to
be used during intensive computation of a bioconductor function.
What I have now is two dual-core Xeon 5160 CPUs and 10 GB RAM. When I
fully load it, top reports about 25% user, 75% idle and 0.98 short-term
load.
The archives gave nothing helpful besides mention of snow. I thought of
posting to HPC, but this system
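For code that is not already multi-threaded, the snow-style cluster interface (now part of the parallel package) is the usual route; a minimal sketch, not specific to the Bioconductor function in question:
library(parallel)
cl <- makeCluster(4)                         # one worker process per core
res <- parLapply(cl, 1:100, function(i) sum(rnorm(1e5)))
stopCluster(cl)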
2011 Nov 02
3
Error: serialization is too large to store in a raw vector
Dear all,
I have a fairly large piece of code (using lapply and mclapply)
and I am getting the following error.
Error: serialization is too large to store in a raw vector
Is it possible to ask R to extend the error message with more details?
I would like to see where this problem occurs.
B.R
Alex
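The error itself does not say which call produced it; one way to get more context (a general R debugging sketch, not advice taken from this thread) is to inspect or dump the call stack when an error occurs:
options(error = recover)    # interactive sessions: pick a frame to inspect
# for batch jobs:
# options(error = quote({dump.frames("last_error", to.file = TRUE); q(status = 1)}))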
2020 Apr 29
2
mclapply returns NULLs on MacOS when running GAM
Thanks Simon,
I will take note of the sensible default for core usage. I'm trying to achieve small-scale parallelism, where tasks take 1-5 seconds, and make fuller use of consumer hardware. It's not an HPC-worthy computation, but even laptops these days come with 4 cores and I don't see a reason not to make use of them.
The goal for the current piece of code I'm working on is to bootstrap many
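For that kind of laptop-sized parallelism the pattern is usually just mclapply over bootstrap replicates; a self-contained sketch in which the model and data are placeholders, not the poster's GAM:
library(parallel)
B <- 200
boot_coefs <- mclapply(seq_len(B), function(b) {
  idx <- sample(nrow(mtcars), replace = TRUE)
  coef(lm(mpg ~ wt, data = mtcars[idx, ]))
}, mc.cores = 4)
boot_mat <- do.call(rbind, boot_coefs)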
2011 Feb 16
1
error in optim, within polr(): "initial value in 'vmmin' is not finite"
Hi all. I'm just starting to explore ordinal multinomial regression. My dataset is 300,000 rows, with an outcome (ordinal factor from 1 to 9) and five independent variables (all continuous). My first stab at it was this:
pomod <- polr(Npf ~ o_stddev + o_skewness + o_kurtosis + o_acl_1e + dispersal, rlc, Hess=TRUE)
And that worked; I got a good model fit. However, a variety of other
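Two things commonly tried for that optim/'vmmin' failure in polr (suggestions on my part, not taken from this thread) are rescaling the continuous predictors and, if needed, supplying explicit start values:
library(MASS)
# assumption: rlc is the poster's data.frame containing the columns in the formula
rlc_s <- rlc
vars <- c("o_stddev", "o_skewness", "o_kurtosis", "o_acl_1e", "dispersal")
rlc_s[vars] <- scale(rlc_s[vars])           # center and scale the predictors
pomod <- polr(Npf ~ o_stddev + o_skewness + o_kurtosis + o_acl_1e + dispersal,
              data = rlc_s, Hess = TRUE)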
2011 Apr 27
6
Assignments inside lapply
Dear all, I would like to ask you if an assignment can be done inside an lapply statement.
For example,
I would like to convert a doubly nested for loop
for (i in 1:dimx) {
  for (j in 1:dimy) {
    Powermap[i, j] <- Pr(c(i, j), c(PRX, PRY), f)
  }
}
to something like that:
ij <- expand.grid(i = 1:dimx, j = 1:dimy)
unlist(lapply(1:nrow(ij), function(rowId) { return
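One way to finish that thought (a sketch that reuses Pr(), PRX, PRY, f, dimx and dimy from the snippet above): let each lapply/mclapply call return a value instead of assigning, then reshape the result back into the matrix.
ij <- expand.grid(i = 1:dimx, j = 1:dimy)
vals <- unlist(lapply(seq_len(nrow(ij)), function(rowId) {
  Pr(c(ij$i[rowId], ij$j[rowId]), c(PRX, PRY), f)
}))
Powermap <- matrix(vals, nrow = dimx, ncol = dimy)  # expand.grid varies i fastest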
2017 Jan 25
2
parallel::mc*: Is it possible for a child process to know it is a fork?
When using multicore-forking of the parallel package, is it possible
for a child process to know that it is a fork? Something like:
parallel::mclapply(1:10, FUN = function(i) { test_if_running_in_a_fork() })
I'm looking into ways to protect against further parallel processes
(including threads), which are not necessarily created via the
parallel::mc* API, from being spawned off recursively.
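One simple heuristic (an illustration of the idea, not an official API): record the parent's PID before the call; on Unix, inside a fork Sys.getpid() will differ from it.
library(parallel)
parent_pid <- Sys.getpid()
is_fork <- function() Sys.getpid() != parent_pid
unlist(mclapply(1:4, function(i) is_fork(), mc.cores = 2))
# TRUE in forked children; with mc.cores = 1 everything runs in the parent (FALSE)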
2010 Jan 15
1
Using multicore with an open pdf device results in corrupt pdf (PR#14186)
The attached code produces corrupted pdfs (test2.pdf, test4.pdf and
test5.pdf). The resulting pdf depends on how many cores are available on
the machine.
I don't see why there should be any difference between the pdfs (except for
the timestamp). Doing many operations involving mclapply can increase the
size of the resulting pdf by ten times!
Thank you for checking this.
require(multicore)
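The usual workaround (a sketch of the general advice, not the report's attached code) is to keep graphics devices out of the forked section: do the parallel computation first, then open the PDF device and plot in the parent only.
library(parallel)
res <- mclapply(1:4, function(i) rnorm(1000), mc.cores = 2)
pdf("test_ok.pdf")        # opened only after the forks have finished
for (x in res) hist(x)
dev.off()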
2010 Jun 25
2
installing multicore package
Sir,
I want to use the mclapply() function for my analysis, so I have to install
the multicore package. But I cannot install the package:
> install.packages("multicore")
It says that package 'multicore' is not available.
Can you help me?
Regards,
Suman Dhara
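A likely explanation (my reading, not a reply from the thread): multicore was never available as a Windows binary, and in R 2.14.0 and later its functionality ships with the base 'parallel' package, so nothing extra needs to be installed:
library(parallel)                      # part of base R since R 2.14.0
res <- mclapply(1:10, function(i) i^2,
                mc.cores = if (.Platform$OS.type == "windows") 1 else 2)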
2013 Oct 10
1
Rcpp and mclapply
Dear all,
I have an R script that uses Rcpp, and I have been trying to parallelize
it using mclapply (I tried with both the multicore and the parallel libraries).
Sometimes (not always, interestingly), the CPU use for each core drops,
usually so that the total over all cores reaches 100%, i.e., as fast as if
using just one single core fully. I tried my code directly from within
emacs, and also using a
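One mitigation often tried when compiled code and fork-based parallelism compete for cores (a guess at a contributing cause, not a diagnosis from this thread) is to cap the threads each forked worker may start, e.g. via OpenMP's environment variable, before calling mclapply:
Sys.setenv(OMP_NUM_THREADS = "1")   # keep each forked worker single-threaded
library(parallel)
res <- mclapply(1:8, function(i) sum(rnorm(1e5)), mc.cores = 4)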
2012 Feb 05
2
vectors of a matrix as input to lapply
Dear all
I am using lapply (actually mclapply, which shares the same syntax).
I want to call the same function, which takes a vector as input. My initial data structure is a matrix that I want to cut into multiple vectors (one vector for every row of the matrix) and then feed to the function using mclapply.
Could you please help me convert the matrix into nrow separate vectors?
I would
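Two equivalent ways to do that split (a sketch; m stands in for the poster's matrix):
m <- matrix(rnorm(12), nrow = 4)
rows1 <- lapply(seq_len(nrow(m)), function(i) m[i, ])  # explicit row indexing
rows2 <- split(m, row(m))                              # same result, as a list
res <- parallel::mclapply(rows1, sum)                  # then feed to mclapply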
2011 Apr 11
1
Mclapply and print statement
Dear all.
I am using the mclapply function to split my code across the many cores my system has. It seems to be working fine. This is the parallel version of lapply.
The only problem I seem to have is that the print statements cannot print messages.
Ideally, I would like my function to produce output of the form
Shadowlist <- mclapply(1:dimz, function(i) {
print(sprintf('Creating the
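Output printed inside forked workers goes to the workers' own stdout and can be lost in GUI front-ends; one workaround (a sketch, not from this thread) is to write progress to stderr with message(), or to a log file:
library(parallel)
dimz <- 4
Shadowlist <- mclapply(1:dimz, function(i) {
  message(sprintf("Creating the %d map", i))  # stderr is usually still visible
  matrix(rnorm(4), 2, 2)                      # placeholder for the real work
})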
2012 Dec 31
3
weird bug with parallel, RSQLite and tcltk
Hello,
I spent a lot of time on a weird bug, and I just managed to narrow it down.
In parallel code (here with parallel::mclapply, but I got it with
doMC/multicore too), if the tcltk library is loaded, R hangs when
trying to open a DB connection.
I got the same behaviour on two different computers, one dual-core
and one with two quad-core Xeons.
Here's the code:
library(parallel)
library(RSQLite)
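A rough reconstruction of the pattern described (my guess at the shape of the code, not the poster's actual script): each forked worker opens its own SQLite connection, which is where the hang reportedly occurs once tcltk has been loaded.
library(parallel)
library(RSQLite)
# library(tcltk)   # loading this first is what reportedly triggers the hang
res <- mclapply(1:2, function(i) {
  con <- dbConnect(SQLite(), ":memory:")
  on.exit(dbDisconnect(con))
  dbGetQuery(con, "SELECT 1 AS x")$x
})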
2020 Apr 28
2
mclapply returns NULLs on MacOS when running GAM
Thanks Henrik,
That clears things up significantly. I did see the warning but failed to include it in my initial email. It sounds like an RStudio issue, and it seems that it's quite intrinsic to how forks interact with RStudio. Given this code is eventually going to be part of a package, should I expect it to fail mysteriously in RStudio for my users? Is the best solution here to migrate all
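If forking inside RStudio turns out to be the problem, one fork-free alternative for package code (a sketch of a standard option, not a recommendation from this thread) is a PSOCK cluster, which uses separate R processes and works the same on every platform:
library(parallel)
cl <- makeCluster(2)                       # PSOCK workers: no fork() involved
res <- parLapply(cl, 1:4, function(i) i^2)
stopCluster(cl)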
2011 Apr 09
1
For->lapply->parallel apply
Dear all,
I would like to ask for your help in understanding the subsequent steps for making my program faster.
The following code:
Gauslist <- array(data = NA, dim = c(dimx, dimy, dimz))
for (i in 1:dimz) {
  print(sprintf('Creating the %d map', i))
  Gauslist[,,i] <- f <- GaussRF(x = x, y = y, model = model, grid = TRUE,
                                param = c(mean, variance, nugget, scale, Whit.alpha))
}
creates 100 GaussMaps (each
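The loop above translates fairly directly (a sketch reusing GaussRF and its arguments from the snippet, which come from the older RandomFields interface): let mclapply return one map per element and bind them into the 3-D array afterwards.
library(parallel)
maps <- mclapply(1:dimz, function(i) {
  GaussRF(x = x, y = y, model = model, grid = TRUE,
          param = c(mean, variance, nugget, scale, Whit.alpha))
})
Gauslist <- simplify2array(maps)   # dimx x dimy x dimz array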
2011 Oct 04
1
Is there a way to disable / warn about forking?
Dear R developers,
with the inclusion of the package "parallel" in the upcoming release of R,
users and package developers are likely to make increasing use of
parallelization features. In part, these features rely on forking the R
process. As ?mcfork points out, fork()ing in a GUI process is typically a bad
idea. In RKWard, we "only" seem to have problems with signals
2015 Aug 14
2
Why not pthreads on Windows in 'parallel' package?
On Windows there are a few 'pthreads' implementations, e.g.
pthreads-w32 and winpthreads
[https://cran.r-project.org/doc/manuals/r-devel/R-exts.html#Using-pthreads].
We're thinking of giving them a try for the matrixStats package, and
basic tests indicate it works, but since Windows pthreads are not
used by core R (or are they?) I've become a little worried that we will face
overwhelming