Displaying 20 results from an estimated 8000 matches similar to: "parallel error message extraction (in mclapply)?"
2011 Oct 10
5
multicore by(), like mclapply?
dear r experts---Is there a multicore equivalent of by(), just like
mclapply() is the multicore equivalent of lapply()?
if not, is there a fast way to convert a data.table into a list based
on a column that lapply and mclapply can consume?
advice appreciated...as always.
regards,
/iaw
----
Ivo Welch (ivo.welch at gmail.com)
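A minimal sketch of the usual answer to the second question (names are illustrative, and a plain data frame stands in for the data.table): split() the table by the grouping column and hand the resulting list to mclapply(), which also gives a poor man's multicore by().
```
library(parallel)

d <- data.frame(grp = c("a", "a", "b", "b"), x = 1:4)   # stand-in for the data.table
chunks <- split(d, d$grp)                               # one data frame per group
res <- mclapply(chunks, function(ch) sum(ch$x), mc.cores = 2L)
```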
2024 Dec 31
1
mclapply hanging occasionally on macos
On Mon, 30 Dec 2024 19:16:11 -0800,
Ivo Welch <ivo.welch at gmail.com> wrote:
> useless.function <- function( ) {
> y <- rnorm(3); x <- rnorm(3)
> summary( lm( y ~ x )) ## useless
> NULL
> }
>
> run30 <- function(i) {
> message("run30=", i)
> useless.function()
> }
>
> run30( 0 )
> message("many mc")
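The quoted snippet is cut off before the parallel part; a plausible driver, sketched here as an assumption rather than the original debug.R, would call run30 over many indices via mclapply:
```
## assumption: the real debug.R drives run30 with something like this
library(parallel)
res <- mclapply(1:30, run30, mc.cores = 4L)   # mc.cores value is illustrative
```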
2012 Mar 26
1
assigning vector or matrix sparsely (for use with mclapply)
Dear R wizards---
I have a wrapper on mclapply() that makes it a little easier for me to
do multiprocessing. (Posting this may make life easier for other
googlers.) I pass a data frame, a vector that tells me what rows
should be recomputed, and the function; and I get back a vector or
matrix of answers.
d <- data.frame( id=1:6, val=11:16 )
loc <- c(TRUE,TRUE,FALSE,TRUE,FALSE,TRUE)
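The wrapper itself is truncated in this listing; the following is only a sketch of the pattern described, with illustrative names such as mc.recompute: recompute the flagged rows in parallel and assign the answers back sparsely.
```
library(parallel)

mc.recompute <- function(df, loc, FUN, ..., mc.cores = 2L) {
  out  <- rep(NA_real_, nrow(df))                    # untouched rows stay NA
  vals <- mclapply(which(loc), function(i) FUN(df[i, ], ...), mc.cores = mc.cores)
  out[which(loc)] <- unlist(vals)                    # sparse assignment of results
  out
}

d   <- data.frame(id = 1:6, val = 11:16)
loc <- c(TRUE, TRUE, FALSE, TRUE, FALSE, TRUE)
mc.recompute(d, loc, function(row) row$val * 10)     # 110 120 NA 140 NA 160
```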
2012 Dec 24
2
parallelized version of "by" and "ave"
Dear R experts---
Has anyone written parallel versions of "by" (i.e., mcby) and "ave"
(i.e. mcave) ? I did ask a question like this a year ago, and then
the answer was no.
for those who are googling the group for the answer to this question,
in the meantime, the poor man's version of "by" is mclapply( split(
ds, factor ), FUN )
I don't know the poor
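Spelled out, the poor man's mcby from the post, plus one possible mcave; the snippet is truncated before it gets to ave, so the mcave sketch below is a guess at what was meant.
```
library(parallel)

## the poor man's mcby quoted above
mcby <- function(ds, factor, FUN, ...) mclapply(split(ds, factor), FUN, ...)

## one possible mcave: replace each value by FUN of its group (a guess)
mcave <- function(x, factor, FUN = mean, ...) {
  stats <- unlist(mclapply(split(x, factor), FUN, ...))
  unname(stats[as.character(factor)])
}
```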
2013 May 31
1
R 3.0.1 : parallel collection triggers "long memory not supported yet"
Dear R developers:
...
7: lapply(seq_len(cores), inner.do)
8: FUN(1:3[[3]], ...)
9: sendMaster(try(lapply(X = S, FUN = FUN, ...), silent = TRUE))
Selection: .....................
Error in sendMaster(try(lapply(X = S, FUN = FUN, ...), silent = TRUE)) :
  long vectors not supported yet: memory.c:3100
admittedly, my outcome will be a very big list, with 30,000 elements, each
containing data frames
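A hedged workaround sketch, not from the thread: with the default mc.preschedule = TRUE, each child serializes its entire share of the 30,000 results in the single sendMaster() call shown in the traceback, and it is that one payload that trips the long-vector limit; forking per element keeps each transfer small, at the cost of many more forks. heavy.fn below is a placeholder for the real per-element computation.
```
library(parallel)
res <- mclapply(seq_len(30000), heavy.fn,                 # heavy.fn: placeholder
                mc.cores = 4L, mc.preschedule = FALSE)    # one small transfer per element
```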
2019 Apr 05
2
Deep Replicable Bug With AMD Threadripper MultiCore
The following program is whittled down from a much larger program that
always works on Intel, and always works on AMD's threadripper with
lapply but not mclapply. With mclapply on AMD, all processes go into
"suspend" mode and the program then hangs. This bug is replicable on an
AMD Ryzen Threadripper 2950X 16-Core Processor (128GB RAM), running
latest ubuntu 18.04. The R version
2012 Apr 18
1
multi-machine parallel setup?
Dear R experts:
could someone please point me to a page that explains how to set up
more than 1 machine for library parallel (which is quickly becoming my
favorite!)
my dream setup would be a design where I just pass a list of
hostnames:user:password to my parallel master, and then start R
listener processes on each of my slaves by hand. R would start slave
processes automatically on each slave
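For the archives, a minimal multi-machine sketch with the stock parallel API; the hostnames and user are illustrative, and makePSOCKcluster launches the remote workers over ssh itself (key-based logins, with no password field as in the wished-for hostname:user:password design):
```
library(parallel)

## one character-vector entry per worker process; repeat a host for extra workers on it
cl <- makePSOCKcluster(c("node1", "node2", "node1"), user = "iaw")
res <- parLapply(cl, 1:100, function(i) i^2)
stopCluster(cl)
```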
2013 Feb 27
0
Parallelizing Other Apply Functions, e.g. by, the Easy (Wrong?) Way
Dear R Users---this is more curiosity than a real problem. I am wondering
how to add mc* functions for all of R's *apply functions. stackoverflow
3505701 has a nice overview of these functions. roughly,
apply ( function to rows and columns of matrix )
lapply ( function to each element of list, get back list )
sapply ( function to each element of list, get back vector )
vapply ( like
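A sketch of the "easy way" the subject line hints at: sit the mc* variants on top of mclapply the same way sapply sits on lapply (mcsapply below is a made-up name, not part of the parallel package):
```
library(parallel)

## lapply -> mclapply is the only parallel piece; the rest is plain post-processing
mcsapply <- function(X, FUN, ..., mc.cores = 2L)
  simplify2array(mclapply(X, FUN, ..., mc.cores = mc.cores))

mcsapply(1:4, function(i) i^2)   # behaves like sapply, computed in forked children
```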
2011 Oct 11
2
SLOW split() function
dear R experts: apologies for all my speed and memory questions. I
have a bet with my coauthors that I can make R reasonably efficient
through R-appropriate programming techniques. this is not just for
kicks, but for work. for benchmarking, my [3 year old] Mac Pro has
2.8GHz Xeons, 16GB of RAM, and R 2.13.1.
right now, it seems that 'split()' is why I am losing my bet. (split
is an
2019 Apr 05
0
Deep Replicable Bug With AMD Threadripper MultiCore
On 4 April 2019 at 17:28, ivo welch wrote:
| The following program is whittled down from a much larger program that
| always works on Intel, and always works on AMD's threadripper with
| lapply but not mclapply. With mclapply on AMD, all processes go into
| "suspend" mode and the program then hangs. This bug is replicable on an
| AMD Ryzen Threadripper 2950X 16-Core Processor (128GB
2024 Dec 31
1
mclapply hanging occasionally on macos
macOS Sequoia 15.2; R --vanilla, 4.4.2 (2024-10-31). I have the same
basic setup on three macs: a macbook air, a mac pro m1, and a mac mini
m4. The following code is running into a bug on the mac pro m1 and
the mac mini, but works just fine on my macbook air. (of course, it
doesn't do anything useful.) it's replicable!
```
$ R --vanilla
> source("debug.R")
```
and (after
2024 Nov 14
1
[EXT] Mac ARM for lm() ?
Not a direct answer, but you may find lm.fit worth experimenting with.
Also try the high-performance computing task view on CRAN.
Cheers,
Andrew
--
Andrew Robinson
Chief Executive Officer, CEBRA and Professor of Biosecurity,
School/s of BioSciences and Mathematics & Statistics
University of Melbourne, VIC 3010 Australia
Tel: (+61) 0403 138 955
Email: apro at unimelb.edu.au
Website:
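A small usage sketch of the suggestion: lm.fit() takes a ready-made model matrix and skips the formula machinery, which is where much of lm()'s per-call overhead sits.
```
set.seed(1)
x <- cbind(1, matrix(rnorm(1e5 * 5), ncol = 5))   # intercept column plus 5 predictors
y <- rnorm(1e5)

fit_full <- lm(y ~ x - 1)       # full interface
fit_bare <- lm.fit(x, y)        # bare-bones fitter; returns a plain list
all.equal(unname(coef(fit_full)), unname(fit_bare$coefficients))
```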
2024 Nov 15
1
[EXT] Mac ARM for lm() ?
>>>>> Andrew Robinson via R-help
>>>>> on Thu, 14 Nov 2024 12:45:44 +0000 writes:
> Not a direct answer but you may find lm.fit worth
> experimenting with.
Yes, lm.fit() is already faster, and
.lm.fit() {added to base R by me, when a similar question
was asked years ago ...}
is even an order of magnitude faster in some cases.
See
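A usage sketch to go with the reply above: .lm.fit() strips away even the naming and checking that lm.fit() does, which is what makes it, per the reply, an order of magnitude faster in some cases, mostly for many small, repeated fits.
```
set.seed(1)
x <- cbind(1, rnorm(100))        # intercept column plus one predictor
y <- rnorm(100)

system.time(for (i in 1:10000) lm.fit(x, y))    # baseline
system.time(for (i in 1:10000) .lm.fit(x, y))   # bare-metal fitter in base R
.lm.fit(x, y)$coefficients
```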
2024 Nov 16
1
[EXT] Mac ARM for lm() ?
Thanks, and all well taken. But are my beautiful GPUs (with integrated
memory architecture) really nothing more than a cooling area for the chip?
On Fri, Nov 15, 2024 at 6:06 AM Martin Maechler <maechler at stat.math.ethz.ch>
wrote:
> >>>>> Andrew Robinson via R-help
> >>>>> on Thu, 14 Nov 2024 12:45:44 +0000 writes:
>
> > Not a direct
2015 Jul 24
1
Memory limitations for parallel::mclapply
Hello,
I have been having issues using parallel::mclapply in a memory-efficient
way and would like some guidance. I am using a 40 core machine with 96 GB
of RAM. I've tried to run mclapply with 20, 30, and 40 mc.cores and it has
practically brought the machine to a standstill each time to the point
where I do a hard reset.
When running mclapply with 10 mc.cores, I can see that each process
2012 Dec 13
1
possible bug in function 'mclapply' of package parallel
Dear parallel users and developers,
I might have encountered a bug in the function 'mclapply' of package
'parallel'. I construct a matrix using the same input data and code, with a
single difference: once I use mclapply and the other time lapply.
Shockingly, the result is NOT the same.
To evaluate please unpack the attached archive and execute
Rscript mclapply_test.R
I put the
2013 Apr 11
1
parallel::mclapply does not return try-error objects with mc.preschedule=TRUE
Hello,
Consider this:
1)
library(parallel)
res <- mclapply(1:2, stop)
#Warning message:
#In mclapply(1:2, stop) :
# all scheduled cores encountered errors in user code
is(res[[1]], 'try-error')
#[1] FALSE
2)
library(parallel)
res <- mclapply(1:2, stop, mc.preschedule=FALSE)
#Warning message:
#In mclapply(1:2, stop, mc.preschedule = FALSE) :
# 2 function calls resulted in an
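A common workaround sketch, not from the post: wrap the worker function in tryCatch() yourself, so every element carries its own condition object regardless of the mc.preschedule setting; safely() below is a made-up helper name.
```
library(parallel)

safely <- function(FUN) function(...) tryCatch(FUN(...), error = function(e) e)

res <- mclapply(1:2, safely(stop), mc.cores = 2L)
sapply(res, inherits, "error")   # TRUE TRUE: per-element error objects in both modes
```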
2010 Aug 22
2
on abort error, always show call stack?
Dear R Wizards---is it possible to get R to show its current call
stack (sys.calls()) upon an error abort? I don't use ESS for
execution, and it is often not obvious how to locate how I triggered
an error in an R internal function. Seeing the call stack would make
this easier. (right now, I sprinkle "cat" statements everywhere, just
to locate the line where the error appears.) Of
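Two stock answers, sketched with standard R only: recover() for interactive browsing of the frames, or an error handler that prints the live call stack when an uncaught error aborts.
```
## interactive: drop into the frames of the failed call
options(error = utils::recover)

## non-interactive: print the call stack on every uncaught error
options(error = function() {
  cat("Call stack at error:\n")
  print(sys.calls())              # includes this handler as the final entry
})
```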
2013 Nov 11
2
problem using rJava with parallel::mclapply
Dear all,
I ran into an issue trying to parse Excel files in parallel using XLConnect: the
process hangs forever.
Martin Studer, the maintainer of XLConnect, kindly investigated the issue and
identified rJava as a possible cause of the problem:
This does not work (hangs):
library(parallel)
require(rJava)
.jinit()
res <- mclapply(1:2, function(i) {
2019 Apr 11
2
SUGGESTION: Settings to disable forked processing in R, e.g. parallel::mclapply()
ISSUE:
Using *forks* for parallel processing in R is not always safe. The
`parallel::mclapply()` function uses forked processes to parallelize.
One example where it has been confirmed that forked processing causes
problems is when running R via RStudio. It is recommended to use
PSOCK clusters (`parallel::makeCluster()`) rather than *forked*
processes when running R from RStudio (
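A sketch of the PSOCK route recommended above, using only the standard parallel API (worker count and the loaded package are illustrative):
```
library(parallel)

cl <- makeCluster(4L)                       # PSOCK workers: separate R processes, no fork()
clusterEvalQ(cl, { library(stats); NULL })  # load needed packages on every worker
res <- parLapply(cl, 1:100, function(i) sqrt(i))
stopCluster(cl)
```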