similar to: parallel::mclapply() dummy function on Windows?

Displaying 20 results from an estimated 5000 matches similar to: "parallel::mclapply() dummy function on Windows?"

2011 Jul 19
1
hanging spaces prior to linebreak from cat()
(re-sending after confirming list subscription; apologies if this ends up being sent to the list twice) Is the expected behavior from cat(), as used below, a hanging space before \n at the end of the emitted line? firstheader = gsub("\\s+$", "", paste(c("Hybridization REF", s, s), collapse = "\t")) cat(firstheader, "\n", file = filename) When
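A minimal sketch of the behaviour being asked about: cat() separates its arguments with sep (a single space by default), so cat(x, "\n") emits a space before the newline; setting sep = "" or building the string with paste0() avoids it. The firstheader value below is only a stand-in for the one constructed in the post.
    firstheader <- "Hybridization REF\tsample1\tsample2"  # stand-in value
    cat(firstheader, "\n", sep = "")   # no space before the newline
    cat(paste0(firstheader, "\n"))     # equivalent alternative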
2011 Aug 17
1
R cmd check and multicore foreach loop
Hi, in R 2.12.1, R CMD check hangs when building a vignette that uses a foreach loop with the doMC parallel backend. This does not happen in R 2.13.1, nor if I use doSEQ instead of doMC. All versions of multicore, doMC and foreach are the same on both my R installations. Has anybody encountered a similar issue? Thank you. Renaud
2015 Jul 24
1
Memory limitations for parallel::mclapply
Hello, I have been having issues using parallel::mclapply in a memory-efficient way and would like some guidance. I am using a 40 core machine with 96 GB of RAM. I've tried to run mclapply with 20, 30, and 40 mc.cores and it has practically brought the machine to a standstill each time, to the point where I have to do a hard reset. When running mclapply with 10 mc.cores, I can see that each process
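A rough sketch of one commonly suggested mitigation, assuming the per-task memory footprint is roughly known: cap mc.cores so that the combined peak of all forked workers stays well below physical RAM. The per_task_gb figure and the worker body below are placeholders, not values from the post.
    library(parallel)
    per_task_gb  <- 4                     # assumed peak memory per task (GB)
    total_ram_gb <- 96                    # machine RAM mentioned in the post
    n_workers    <- max(1, floor(total_ram_gb * 0.5 / per_task_gb))
    res <- mclapply(seq_len(40), function(i) {
      Sys.sleep(0.1)                      # stand-in for the real computation
      i^2
    }, mc.cores = n_workers)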
2012 Dec 13
1
possible bug in function 'mclapply' of package parallel
Dear parallel users and developers, I might have encountered a bug in the function 'mclapply' of package 'parallel'. I construct a matrix using the same input data and code with a single difference: once I use mclapply and the other time lapply. Shockingly, the result is NOT the same. To evaluate, please unpack the attached archive and execute Rscript mclapply_test.R. I put the
2019 Apr 13
4
SUGGESTION: Settings to disable forked processing in R, e.g. parallel::mclapply()
On Sat, 13 Apr 2019 at 03:51, Kevin Ushey <kevinushey at gmail.com> wrote: > > I think it's worth saying that mclapply() works as documented Mostly, yes. But it says nothing about fork's copy-on-write and memory overcommitment, and that this means that it may work nicely or fail spectacularly depending on whether, e.g., you operate on a long vector. -- Iñaki Úcar
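An illustrative sketch of the copy-on-write point (Unix-alikes only; the sizes are arbitrary): forked children share the parent's pages until they write to them, so a read-only pass over a long vector costs little extra memory, while modifying it forces each child to materialise its own copy.
    library(parallel)
    big <- numeric(1e8)                   # roughly 800 MB, shared with the children
    read_only <- mclapply(1:4, function(i) sum(big), mc.cores = 4)
    copied <- mclapply(1:4, function(i) {
      big[1] <- i                         # the write triggers a per-child copy
      sum(big)
    }, mc.cores = 4)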
2019 Apr 12
2
SUGGESTION: Settings to disable forked processing in R, e.g. parallel::mclapply()
Just throwing my two cents in: I think removing/deprecating fork would be a bad idea for two reasons: 1) There are no performant alternatives 2) Removing fork would break existing workflows Even if replaced with something using the same interface (e.g., a function that automatically detects variables to export as in the amazing `future` package), the lack of copy-on-write functionality would
2013 Apr 11
1
parallel::mclapply does not return try-error objects with mc.preschedule=TRUE
Hello, Consider this: 1) library(parallel) res <- mclapply(1:2, stop) #Warning message: #In mclapply(1:2, stop) : # all scheduled cores encountered errors in user code is(res[[1]], 'try-error') #[1] FALSE 2) library(parallel) res <- mclapply(1:2, stop, mc.preschedule=FALSE) #Warning message: #In mclapply(1:2, stop, mc.preschedule = FALSE) : # 2 function calls resulted in an
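A hedged sketch of one way to make failures visible regardless of the scheduling mode: trap errors inside the worker with tryCatch() and test the results for a condition object, rather than relying on mclapply() returning try-error elements.
    library(parallel)
    safe_stop <- function(x) tryCatch(stop(x), error = function(e) e)
    res <- mclapply(1:2, safe_stop)
    vapply(res, inherits, logical(1), what = "condition")
    # [1] TRUE TRUE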
2013 Nov 11
2
problem using rJava with parallel::mclapply
Dear all, I got an issue trying to parse excel files in parallel using XLConnect, the process hangs forever. Martin Studer, the maintainer of XLConnect kindly investigated the issue, identified rJava as a possible cause of the problem: This does not work (hangs): library(parallel) require(rJava) .jinit() res <- mclapply(1:2, function(i) {
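A hedged workaround sketch, assuming the hang is tied to forking a process that has already initialised a JVM: use a PSOCK cluster instead, so each worker starts its own Java VM. The String/length call merely stands in for the real XLConnect work.
    library(parallel)
    cl <- makeCluster(2)                  # separate R processes, no fork
    clusterEvalQ(cl, { library(rJava); .jinit() })
    res <- parLapply(cl, 1:2, function(i) {
      s <- rJava::.jnew("java/lang/String", paste("worker", i))
      rJava::.jcall(s, "I", "length")
    })
    stopCluster(cl)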
2011 Aug 22
3
Ignoring loadNamespace errors when loading a file
On a Unix machine I ran caret::rfe using the multicore package, and I saved the resulting object using save(lm2, file = "lm2.RData"). [Reproducible example below.] When I try to load("lm2.RData") on my Windows laptop, I get Error in loadNamespace(name) : there is no package called 'multicore' I completely understand the error and I would like to ignore it and
2012 Dec 29
1
parallel error message extraction (in mclapply)?
dear R experts---I am looking at a fairly uninformative error in my program: Error in mclapply(1:nrow(opts), solveme) : (converted from warning) all scheduled cores encountered errors in user code the doc on ?mclapply tells me that In addition, each process is running the job inside try(..., silent=TRUE) so if errors occur they will be stored as try-error objects in the list. I looked up
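A sketch of one way to surface the per-element error messages, based on the documented behaviour quoted above: run without pre-scheduling so failing elements come back as individual try-error objects, then pull the message out of each one's condition attribute. The solveme() below is a stand-in for the function in the post.
    library(parallel)
    solveme <- function(i) if (i %% 2 == 0) stop("bad row ", i) else i   # stand-in
    res <- mclapply(1:4, solveme, mc.preschedule = FALSE)
    bad <- which(vapply(res, inherits, logical(1), what = "try-error"))
    vapply(res[bad], function(e) conditionMessage(attr(e, "condition")), character(1))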
2019 Apr 13
1
SUGGESTION: Settings to disable forked processing in R, e.g. parallel::mclapply()
On Sat, 13 Apr 2019 at 18:41, Simon Urbanek <simon.urbanek at r-project.org> wrote: > > Sure, but that's a completely bogus argument because in that case it would fail even more spectacularly with any other method like PSOCK, because you would *have to* allocate n times as much memory, so unlike mclapply it is guaranteed to fail. With mclapply it is simply much more efficient as it will
2019 Apr 15
2
SUGGESTION: Settings to disable forked processing in R, e.g. parallel::mclapply()
On Mon, 15 Apr 2019 at 08:44, Tomas Kalibera <tomas.kalibera at gmail.com> wrote: > > On 4/13/19 12:05 PM, Iñaki Ucar wrote: > > On Sat, 13 Apr 2019 at 03:51, Kevin Ushey <kevinushey at gmail.com> wrote: > >> I think it's worth saying that mclapply() works as documented > > Mostly, yes. But it says nothing about fork's copy-on-write and memory >
2019 Apr 11
2
SUGGESTION: Settings to disable forked processing in R, e.g. parallel::mclapply()
ISSUE: Using *forks* for parallel processing in R is not always safe. The `parallel::mclapply()` function uses forked processes to parallelize. One example where it has been confirmed that forked processing causes problems is when running R via RStudio. It is recommended to use PSOCK clusters (`parallel::makeCluster()`) rather than *forked* processes when running R from RStudio (
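A minimal sketch of the PSOCK alternative recommended above: worker processes are launched fresh rather than forked, so any data and packages they need must be exported explicitly. some_data is a hypothetical object standing in for whatever the workers use.
    library(parallel)
    some_data <- data.frame(x = 1:5)      # hypothetical input the workers need
    cl <- makeCluster(2)                  # PSOCK is the default cluster type
    clusterExport(cl, "some_data")
    res <- parLapply(cl, 1:10, function(i) i + nrow(some_data))
    stopCluster(cl)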
2019 Apr 13
3
SUGGESTION: Settings to disable forked processing in R, e.g. parallel::mclapply()
Hi Inaki, > "Performant"... in terms of what. If the cost of copying the data > predominates over the computation time, maybe you didn't need > parallelization in the first place. Performant in terms of speed. There's no copying in that example using `mclapply` and so it is significantly faster than other alternatives. It is a very simple and contrived example, but
2020 Jan 10
2
SUGGESTION: Settings to disable forked processing in R, e.g. parallel::mclapply()
I'd like to pick up this thread started on 2019-04-11 (https://hypatia.math.ethz.ch/pipermail/r-devel/2019-April/077632.html). Modulo all the other suggestions in this thread, would my proposal of being able to disable forked processing via an option or an environment variable make sense? I've prototyped a working patch that works like: > options(fork.allowed = FALSE) >
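The fork.allowed option shown above exists only in the proposed patch, not in released R. A hypothetical user-level guard with the same effect might look like this:
    mclapply_guarded <- function(...) {
      if (!isTRUE(getOption("fork.allowed", TRUE)))
        stop("forked processing disabled via options(fork.allowed = FALSE)")
      parallel::mclapply(...)
    }
    options(fork.allowed = FALSE)
    try(mclapply_guarded(1:2, sqrt))      # errors instead of forking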
2020 Jan 10
2
SUGGESTION: Settings to disable forked processing in R, e.g. parallel::mclapply()
If I understand the thread correctly this is an RStudio issue and I would suggest that the developers consider using pthread_atfork() so RStudio can handle forking as they deem fit (bail out with an error or make RStudio work). Note that in principle the functionality requested here can be easily implemented in a package so R doesn't need to be modified. Cheers, Simon
2018 Sep 19
5
segfault issue with parallel::mclapply and download.file() on Mac OS X
I have an lapply function call that I want to parallelize. Below is a very simplified version of the code: url_base <- "https://cloud.r-project.org/src/contrib/" files <- c("A3_1.0.0.tar.gz", "ABC.RAP_0.9.0.tar.gz") res <- parallel::mclapply(files, function(s) download.file(paste0(url_base, s), s)) Instead of downloading a couple of files in parallel, I get a
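A hedged alternative that sidesteps forking altogether for this particular case: with method = "libcurl", download.file() accepts vectors of URLs and destination files and fetches them in one call. The file names are the ones from the post and may no longer exist on CRAN.
    url_base <- "https://cloud.r-project.org/src/contrib/"
    files <- c("A3_1.0.0.tar.gz", "ABC.RAP_0.9.0.tar.gz")
    res <- download.file(paste0(url_base, files), destfile = files, method = "libcurl")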
2020 Jan 11
1
SUGGESTION: Settings to disable forked processing in R, e.g. parallel::mclapply()
> On Jan 10, 2020, at 3:10 PM, Gábor Csárdi <csardi.gabor at gmail.com> wrote: > > On Fri, Jan 10, 2020 at 7:23 PM Simon Urbanek > <simon.urbanek at r-project.org> wrote: >> >> Henrik, >> >> the example from the post works just fine in CRAN R for me - the post was about homebrew build so it's conceivably a bug in their libraries. > > I
2020 Jan 11
2
SUGGESTION: Settings to disable forked processing in R, e.g. parallel::mclapply()
Henrik, the whole point and only purpose of mc* functions is to fork. That's what the multicore package was about, so if you don't want to fork, don't use mc* functions - they don't have any other purpose. I really fail to see the point - if you use mc* functions you're very explicitly asking for forking - so your argument is like saying that print() should have an option to
2012 Dec 11
1
Bug in mclapply?
I've been using mclapply and have encountered situations where it gives errors or returns incorrect results. Here's a minimal example, which gives the error on R 2.15.2 on Mac and Linux: library(parallel) f <- function(x) NULL mclapply(1, f, mc.preschedule = FALSE, mc.cores = 1) # Error in sum(sapply(res, inherits, "try-error")) : # invalid 'type' (list) of argument
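A hedged workaround sketch for the situation above: fall back to plain lapply() when only one core is requested, which avoids the failing code path entirely.
    library(parallel)
    apply_maybe_parallel <- function(X, FUN, cores = 1, ...) {
      if (cores <= 1) lapply(X, FUN, ...)
      else mclapply(X, FUN, ..., mc.cores = cores, mc.preschedule = FALSE)
    }
    f <- function(x) NULL
    apply_maybe_parallel(1, f, cores = 1)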