Displaying 20 results from an estimated 2000 matches similar to: "Memory limitations for parallel::mclapply"
2020 Apr 29
2
mclapply returns NULLs on MacOS when running GAM
Thanks Simon,
I will take note of the sensible default for core usage. I'm trying to achieve small-scale parallelism, where tasks take 1-5 seconds, and make fuller use of consumer hardware. It's not an HPC-worthy computation, but even laptops these days come with 4 cores and I don't see a reason not to make use of it.
The goal for the current piece of code I'm working on is to bootstrap many
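A minimal sketch of how such a bootstrap could be parallelised with mclapply; the data, statistic, and replicate count below are placeholders, not the poster's code:

  library(parallel)

  RNGkind("L'Ecuyer-CMRG"); set.seed(1)   # reproducible RNG streams across forked workers
  dat <- rnorm(100)                       # placeholder data
  n_boot <- 1000                          # placeholder number of resamples

  # Each task resamples the data and recomputes the statistic of interest.
  boot_stats <- mclapply(seq_len(n_boot), function(i) {
    mean(sample(dat, replace = TRUE))
  }, mc.cores = 4)

  boot_stats <- unlist(boot_stats)
  quantile(boot_stats, c(0.025, 0.975))   # simple percentile interval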
2020 Jan 11
2
SUGGESTION: Settings to disable forked processing in R, e.g. parallel::mclapply()
Henrik,
the whole point and only purpose of mc* functions is to fork. That's what the multicore package was about, so if you don't want to fork, don't use mc* functions - they don't have any other purpose. I really fail to see the point - if you use mc* functions you're very explicitly asking for forking - so your argument is like saying that print() should have an option to
2012 Dec 13
1
possible bug in function 'mclapply' of package parallel
Dear parallel users and developers,
I might have encountered a bug in the function 'mclapply' of package
'parallel'. I construct a matrix using the same input data and code with a
single difference: Once I use mclapply and the other time lapply.
Shockingly the result is NOT the same.
To evaluate please unpack the attached archive and execute
Rscript mclapply_test.R
I put the
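The attached archive is not reproduced here, but a generic version of the comparison described (same input, lapply versus mclapply) might look like this; when results differ and random numbers are involved, per-worker RNG streams are a common explanation to rule out before suspecting a bug:

  library(parallel)

  x <- 1:8
  f <- function(i) i^2            # placeholder for the matrix-building step

  serial <- lapply(x, f)
  forked <- mclapply(x, f, mc.cores = 2)

  identical(serial, forked)       # TRUE for a deterministic f; for random f,
                                  # set RNGkind("L'Ecuyer-CMRG") first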
2013 Apr 11
1
parallel::mclapply does not return try-error objects with mc.preschedule=TRUE
Hello,
Consider this:
1)
library(parallel)
res <- mclapply(1:2, stop)
#Warning message:
#In mclapply(1:2, stop) :
# all scheduled cores encountered errors in user code
is(res[[1]], 'try-error')
#[1] FALSE
2)
library(parallel)
res <- mclapply(1:2, stop, mc.preschedule=FALSE)
#Warning message:
#In mclapply(1:2, stop, mc.preschedule = FALSE) :
# 2 function calls resulted in an
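A small check, not from the original post, for seeing which scheduling mode hands back 'try-error' elements; the comments describe the behaviour reported above, which has varied across R versions:

  library(parallel)

  has_try_error <- function(prescheduled) {
    res <- mclapply(1:2, stop, mc.preschedule = prescheduled)
    vapply(res, inherits, logical(1), what = "try-error")
  }

  has_try_error(TRUE)    # in the report above: FALSE, FALSE
  has_try_error(FALSE)   # each call is wrapped in try() individually: TRUE, TRUE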
2013 Nov 11
2
problem using rJava with parallel::mclapply
Dear all,
I ran into an issue trying to parse Excel files in parallel using XLConnect: the
process hangs forever.
Martin Studer, the maintainer of XLConnect kindly investigated the issue,
identified rJava as a possible cause of the problem:
This does not work (hangs):
library(parallel)
require(rJava)
.jinit()
res <- mclapply(1:2, function(i) {
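One commonly suggested way to sidestep JVM-plus-fork interactions is to avoid forking altogether and use a PSOCK cluster, where every worker starts its own JVM. A sketch under that assumption, with the XLConnect-specific work omitted:

  library(parallel)

  cl <- makeCluster(2)              # separate R processes, no fork()
  clusterEvalQ(cl, {
    library(rJava)
    .jinit()                        # each worker initialises its own JVM
  })

  res <- parLapply(cl, 1:2, function(i) {
    # Java-backed work (e.g. reading a workbook with XLConnect) goes here
    i
  })

  stopCluster(cl)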
2012 Dec 29
1
parallel error message extraction (in mclapply)?
dear R experts---I am looking at a fairly uninformative error in my program:
Error in mclapply(1:nrow(opts), solveme) :
(converted from warning) all scheduled cores encountered errors in user code
the doc on ?mclapply tells me that
In addition, each process is running the job inside try(...,
silent=TRUE) so if errors occur they will be stored as try-error
objects in the list.
I looked up
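A sketch of how the stored 'try-error' objects can be pulled out and their messages inspected (the toy function and names here are illustrative, not the poster's code):

  library(parallel)

  res <- mclapply(1:4, function(i) if (i %% 2) stop("bad input ", i) else i,
                  mc.preschedule = FALSE)

  failed <- vapply(res, inherits, logical(1), what = "try-error")
  which(failed)                      # indices that errored

  # try() attaches the condition to each 'try-error' object.
  lapply(res[failed], function(e) conditionMessage(attr(e, "condition")))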
2019 Apr 11
2
SUGGESTION: Settings to disable forked processing in R, e.g. parallel::mclapply()
ISSUE:
Using *forks* for parallel processing in R is not always safe. The
`parallel::mclapply()` function uses forked processes to parallelize.
One example where it has been confirmed that forked processing causes
problems is when running R via RStudio. It is recommended to use
PSOCK clusters (`parallel::makeCluster()`) rather than *forked*
processes when running R from RStudio (
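For reference, the PSOCK-based alternative recommended there looks roughly like this (a sketch; the data and task are placeholders):

  library(parallel)

  my_data <- rnorm(1e3)                   # placeholder data in the master session

  cl <- makeCluster(4)                    # PSOCK workers: fresh R processes, no fork()
  clusterExport(cl, "my_data")            # objects must be shipped to workers explicitly
  res <- parLapply(cl, 1:10, function(i) sum(my_data) + i)
  stopCluster(cl)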
2012 Dec 11
1
Bug in mclapply?
I've been using mclapply and have encountered situations where it gives
errors or returns incorrect results. Here's a minimal example, which gives
the error on R 2.15.2 on Mac and Linux:
library(parallel)
f <- function(x) NULL
mclapply(1, f, mc.preschedule = FALSE, mc.cores = 1)
# Error in sum(sapply(res, inherits, "try-error")) :
# invalid 'type' (list) of argument
2019 Apr 13
3
SUGGESTION: Settings to disable forked processing in R, e.g. parallel::mclapply()
Hi Inaki,
> "Performant"... in terms of what. If the cost of copying the data
> predominates over the computation time, maybe you didn't need
> parallelization in the first place.
Performant in terms of speed. There's no copying in that example
using `mclapply` and so it is significantly faster than other
alternatives.
It is a very simple and contrived example, but
2019 Apr 12
2
SUGGESTION: Settings to disable forked processing in R, e.g. parallel::mclapply()
Just throwing my two cents in:
I think removing/deprecating fork would be a bad idea for two reasons:
1) There are no performant alternatives
2) Removing fork would break existing workflows
Even if replaced with something using the same interface (e.g., a
function that automatically detects variables to export as in the
amazing `future` package), the lack of copy-on-write functionality
would
2019 Apr 13
1
SUGGESTION: Settings to disable forked processing in R, e.g. parallel::mclapply()
On Sat, 13 Apr 2019 at 18:41, Simon Urbanek <simon.urbanek at r-project.org> wrote:
>
> Sure, but that's a completely bogus argument because in that case it would fail even more spectacularly with any other method like PSOCK because you would *have to* allocate n times as much memory so unlike mclapply it is guaranteed to fail. With mclapply it is simply much more efficient as it will
2013 Feb 02
1
best practice for packages using mclapply to avoid tcltk
Dear R-devel friends:
I'm back to bother you again about the conflict between mclapply and
tcltk. I've been
monitoring several packages that want to use mclapply to parallelize
computations and
need to figure out what should be done.
It appears tcltk cannot be safely unloaded, so the best we can do is
check for the presence of tcltk and stop if it is found before
mclapply() is used.
I
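A sketch of the kind of guard described, i.e. checking for tcltk before mclapply() is used; the exact test a package adopts may differ:

  if ("tcltk" %in% loadedNamespaces()) {
    stop("tcltk is loaded; mclapply() may be unsafe here -- ",
         "restart R without tcltk or fall back to lapply()")
  }
  res <- parallel::mclapply(1:10, sqrt, mc.cores = 2)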
2020 Jan 11
0
SUGGESTION: Settings to disable forked processing in R, e.g. parallel::mclapply()
On Fri, Jan 10, 2020 at 7:23 PM Simon Urbanek
<simon.urbanek at r-project.org> wrote:
>
> Henrik,
>
> the whole point and only purpose of mc* functions is to fork. That's what the multicore package was about, so if you don't want to fork, don't use mc* functions - they don't have any other purpose.
But, with that same argument I'm surprised we have fake
2023 May 16
1
mclapply enters into an infinite loop....
Dear members,
I am using arfima in an mclapply construction (from the parallel package):
Browse[2]> LYG <- mclapply(LYGH, FUN = arfima, mc.cores = detectCores())
^C
Browse[2]> LYG <- mclapply(LYGH[1:10], FUN = arfima, mc.cores = detectCores())
^C
Browse[2]> LYG <- mclapply(LYGH[1:2], FUN = arfima, mc.cores = detectCores())
^C
You can see that I am
2019 Apr 13
4
SUGGESTION: Settings to disable forked processing in R, e.g. parallel::mclapply()
On Sat, 13 Apr 2019 at 03:51, Kevin Ushey <kevinushey at gmail.com> wrote:
>
> I think it's worth saying that mclapply() works as documented
Mostly, yes. But it says nothing about fork's copy-on-write and memory
overcommitment, and that this means that it may work nicely or fail
spectacularly depending on whether, e.g., you operate on a long
vector.
--
Iñaki Úcar
2020 Apr 29
0
mclapply returns NULLs on MacOS when running GAM
On Tue, Apr 28, 2020 at 9:00 PM Shian Su <su.s at wehi.edu.au> wrote:
>
> Thanks Simon,
>
> I will take note of the sensible default for core usage. I'm trying to achieve small-scale parallelism, where tasks take 1-5 seconds, and make fuller use of consumer hardware. It's not an HPC-worthy computation, but even laptops these days come with 4 cores and I don't see a reason not to make
2023 May 17
1
mclapply enters into an infinite loop....
Dear Jeff,
There was a problem in LYGH and lapply threw an error, but mclapply got stuck in an infinite loop. The doc for mclapply says that mclapply runs each call under try() with silent = TRUE. So that means mclapply should complete normally, i.e. return a 'try-error' object and exit. But it didn't. Can you shed some light on why this happened?
Thanking you,
Yours sincerely,
AKSHAY M
2011 Mar 22
2
Problem with mclapply -- losing output/data
Hello,
I am running large simulations, which unfortunately I can't really
replicate here because the code is so extensive. I rely heavily on
mclapply, but I realize that I'm losing data somewhere.
There are two worrisome symptoms:
1) I am getting 'NULL' as a return value for some (but not all) elements
of the output when I use mclapply, but not if I use lapply
> tmp2[1:3]
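A quick way to locate such NULL elements; the result-generating function below is a stand-in, not the poster's simulation code:

  library(parallel)

  sim_fun <- function(i) if (i == 3) NULL else i^2      # stand-in worker function
  tmp2 <- mclapply(1:5, sim_fun, mc.cores = 2)

  bad <- which(vapply(tmp2, is.null, logical(1)))
  bad                                  # indices that came back NULL
  tmp2[bad] <- lapply(bad, sim_fun)    # re-run just those serially to see whether
                                       # the NULL is genuine or a lost result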
2020 Jan 10
2
SUGGESTION: Settings to disable forked processing in R, e.g. parallel::mclapply()
I'd like to pick up this thread started on 2019-04-11
(https://hypatia.math.ethz.ch/pipermail/r-devel/2019-April/077632.html).
Modulo all the other suggestions in this thread, would my proposal of
being able to disable forked processing via an option or an
environment variable make sense? I've prototyped a working patch that
works like:
> options(fork.allowed = FALSE)
>
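Until something along those lines exists in base R, a user-level approximation might look like the following; safe_mclapply and the environment variable name are hypothetical, not the patch from the thread:

  # Hypothetical wrapper: honour an environment variable that disables forking.
  safe_mclapply <- function(X, FUN, ...) {
    if (nzchar(Sys.getenv("R_DISABLE_FORK"))) {
      lapply(X, FUN, ...)               # sequential fallback, no fork()
    } else {
      parallel::mclapply(X, FUN, ...)
    }
  }

  Sys.setenv(R_DISABLE_FORK = "true")
  safe_mclapply(1:4, sqrt)              # runs via lapply() here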
2023 May 18
1
mclapply enters into an infinite loop....
On Wed, 17 May 2023 13:55:59 +0000
akshay kulkarni <akshay_e4 at hotmail.com> wrote:
> So that means mclapply should complete normally, i.e. return a
> 'try-error' object and exit. But it didn't. Can you shed some light on why this
> happened?
What's your sessionInfo()? Are you using a GUI frontend?
mclapply() relies on the fork() system call, which is tricky to get
right in a