similar to: Passing command line arguments using parallel package

Displaying 20 results from an estimated 2000 matches similar to: "Passing command line arguments using parallel package"

2018 Mar 09
0
parallel:::newPSOCKnode(): background worker fails immediately if socket on master is not set up in time (BUG?)
I'm happy to look at a patch that does this. I'd start with a small interval and increase it by 50%, say, on each try, with a maximum total retry time. This isn't eliminating the problem, only reducing the probability, but it is still worth it. I had considered doing something like this but it didn't seem necessary at the time. You don't want to retry indefinitely since the connection
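
The retry policy sketched in this message could look roughly like the helper below. This is only a sketch with made-up names and defaults, not the actual patch: a small initial interval that grows by 50% per attempt, bounded by a total time budget so a worker never waits indefinitely.

    with_retry <- function(f, interval = 0.1, growth = 1.5, max_total = 10) {
      waited <- 0
      repeat {
        result <- tryCatch(f(), error = function(e) e)
        if (!inherits(result, "error")) return(result)        # success
        if (waited + interval > max_total) stop(result)        # budget used up: re-raise
        Sys.sleep(interval)
        waited   <- waited + interval
        interval <- interval * growth                          # grow the wait by 50%
      }
    }

    ## e.g., on the worker side:
    ## con <- with_retry(function() socketConnection("localhost", port = 11000,
    ##                                               blocking = TRUE, open = "a+b"))
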
2019 Mar 27
0
SUGGESTION: Proposal to mitigate problem with stray processes left behind by parallel::makeCluster()
The problem causing the stray worker processes when the master fails to open a server socket to listen for connections from workers is not related to the timeout in socketConnection(), because socketConnection() will fail right away. It is caused by a bug in checking the setup timeout (PR 17391). Fixed in R-devel, revision 76275. Best Tomas On 3/18/19 2:23 AM, Henrik Bengtsson wrote: > (Bcc: CRAN) > >
2019 Mar 18
2
SUGGESTION: Proposal to mitigate problem with stray processes left behind by parallel::makeCluster()
(Bcc: CRAN) This is a proposal to help CRAN and the like, as well as individual developers, avoid stray R processes being left behind when an example or a package test fails to set up a parallel::makeCluster(). ISSUE If a package test sets up a PSOCK cluster and the master process then dies for one reason or another, the PSOCK worker processes will remain running for 30
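
Independently of any change to parallel itself, a defensive pattern along these lines (an illustration, not the proposal under discussion) keeps workers from being left behind when a test fails partway:

    library(parallel)

    run_example <- function() {
      cl <- makeCluster(2L)
      on.exit(stopCluster(cl), add = TRUE)   # workers are shut down even if an error occurs
      parLapply(cl, 1:2, function(x) x^2)
    }

    run_example()
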
2018 Mar 10
1
parallel:::newPSOCKnode(): background worker fails immediately if socket on master is not set up in time (BUG?)
Great. For the record of this thread, I've submitted patch PR17391 (https://bugs.r-project.org/bugzilla3/show_bug.cgi?id=17391). The patch is against the latest R-devel in SVN, it passes 'make check-all', and I've verified that it works with the above tests. /Henrik On Fri, Mar 9, 2018 at 4:37 AM, <luke-tierney at uiowa.edu> wrote: > I'm happy to look at a
2018 Mar 09
2
parallel:::newPSOCKnode(): background worker fails immediately if socket on master is not set up in time (BUG?)
A solution is to have parallel:::.slaveRSOCK() attempt to connect multiple times before failing, e.g. makeSOCKmaster <- function(master, port, timeout, useXDR, maxTries = 10L, interval = 1.0) { port <- as.integer(port) for (i in seq_len(maxTries)) { con <- tryCatch({ socketConnection(master, port = port, blocking = TRUE,
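
The archive cuts the code off; a hedged completion of the retry loop it describes might look as follows. The argument names follow the quoted snippet; the loop body and the trailing comment are assumptions, not the actual proposal.

    makeSOCKmaster <- function(master, port, timeout, useXDR,
                               maxTries = 10L, interval = 1.0) {
      port <- as.integer(port)
      con <- NULL
      for (i in seq_len(maxTries)) {
        con <- tryCatch(
          socketConnection(master, port = port, blocking = TRUE,
                           open = "a+b", timeout = timeout),
          error = function(e) NULL
        )
        if (!is.null(con)) break                     # connected; stop retrying
        if (i == maxTries) stop("giving up after ", maxTries, " attempts")
        Sys.sleep(interval)                          # wait before the next attempt
      }
      con   # the real function goes on to wrap the connection in a node object
    }
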
2018 Mar 09
0
parallel:::newPSOCKnode(): background worker fails immediately if socket on master is not set up in time (BUG?)
I just noticed that parallel:::.slaveRSOCK() passes 'timeout' to socketConnection() as a character, i.e. there's a missing timeout <- as.integer(timeout), cf. port <- as.integer(port) and useXDR <- as.logical(value): > parallel:::.slaveRSOCK function () { makeSOCKmaster <- function(master, port, timeout, useXDR) { port <- as.integer(port) con
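
For context, the worker receives these settings as command-line arguments, so everything starts out as a character string; a small illustration (the values below are made up):

    args <- c(MASTER = "localhost", PORT = "11000", TIMEOUT = "2592000")
    str(args[["TIMEOUT"]])                     # chr "2592000"
    timeout <- as.integer(args[["TIMEOUT"]])   # the coercion reported as missing
    str(timeout)                               # int 2592000
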
2017 Dec 04
1
PSOCK cluster and renice
Hi Henrik, Thanks for the detailed and fast reply! My guess would be that the confusion comes from the different use of nice and renice. The workaround you provided works fine! Thanks a lot. Best, Andreas Henrik Bengtsson <henrik.bengtsson at gmail.com> writes: > Looks like a bug to me due to wrong assumptions about 'nice' > arguments, but could be because a
2017 Dec 04
0
PSOCK cluster and renice
Looks like a bug to me due to wrong assumptions about 'nice' arguments, but could be because a "non-standard" 'nice' is used. If we do: > trace(system, tracer = quote(print(command))) Tracing function "system" in package "base" we see that the system call used is: > cl <- parallel::makePSOCKcluster(2L, renice = 19) Tracing system(cmd, wait
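
A workaround sketch (not the fix being discussed here): skip 'renice' when creating the cluster and lower each worker's priority from inside the worker itself via tools::psnice().

    library(parallel)
    cl <- makePSOCKcluster(2L)
    clusterEvalQ(cl, {
      tools::psnice(value = 19)   # renice this worker process
      tools::psnice()             # report the resulting niceness
    })
    stopCluster(cl)
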
2005 Nov 11
1
Snow parLapply
Dear R-user, I am trying to use the function 'parLapply' from the 'snow' package, which is supposed to work the same way as 'lapply' but on a parallelized cluster of computers. The function I am trying to call in parallel is 'dudi.pca' (from the 'ade4' package), which performs principal component analyses. When I call this function on a list of
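
A hedged sketch of this pattern using the parallel package (which absorbed snow); the toy data and the dudi.pca arguments are assumptions, and ade4 must be installed where the workers run. Note that each worker needs the package loaded before the parallel call.

    library(parallel)
    cl <- makePSOCKcluster(2L)
    clusterEvalQ(cl, library(ade4))   # load the package on every worker

    dfs <- replicate(4, as.data.frame(matrix(rnorm(100), nrow = 20)),
                     simplify = FALSE)
    res <- parLapply(cl, dfs, function(d) dudi.pca(d, scannf = FALSE, nf = 2))

    stopCluster(cl)
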
2013 Dec 24
2
Parallel computing: how to transmit multiple parameters to a function in parLapply?
Hi R-developers In the 'parallel' package, the function parLapply(cl, x, f) seems to allow only one parameter (x) to be passed to the function f. Hence, in order to compute f(x, y) in parallel, I had to define f(x, y) as f(x) and access y from inside the function, with y defined outside of f(x). Script: library(parallel) f <- function(x) { z <- 2 * x + .GlobalEnv$y # Try to
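
The usual answers to this, as a minimal sketch: extra arguments can be passed through parLapply's '...', or exported to the workers explicitly with clusterExport().

    library(parallel)
    cl <- makeCluster(2L)

    f <- function(x, y) 2 * x + y

    # 1. pass 'y' through parLapply's '...'
    parLapply(cl, 1:4, f, y = 10)

    # 2. or export 'y' so it is found as a global on each worker
    y <- 10
    clusterExport(cl, "y")
    parLapply(cl, 1:4, function(x) 2 * x + y)

    stopCluster(cl)
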
2018 Mar 09
2
parallel:::newPSOCKnode(): background worker fails immediately if socket on master is not set up in time (BUG?)
BACKGROUND: While troubleshooting random, occasionally occurring, errors from parallel::makePSOCKcluster("localhost", port = 11000); Error in socketConnection("localhost", port = port, server = TRUE, blocking = TRUE, : cannot open the connection I had another look at parallel:::newPSOCKnode(), which is used internally to set up each background worker. It is designed to,
2017 Dec 11
0
document environment passing in parallel::parLapply
The runtime of parallel::parLapply depends on variables unrelated to the parLapply call. However, this is not clearly documented. Therefore I would like to suggest expanding the relevant documentation to explain this behaviour. Consider this example: parallel_demo <- function(random_values_count) { some_data <- runif(random_values_count) dummy_function <- function(x) { x }
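
The quoted example appears to be along these lines (a reconstruction, not the original script). The anonymous function's enclosing environment, which contains 'some_data', is serialized to the workers, so the call slows down as 'random_values_count' grows even though the data is never used.

    library(parallel)

    parallel_demo <- function(random_values_count) {
      some_data <- runif(random_values_count)
      dummy_function <- function(x) x        # never touches 'some_data'
      cl <- makeCluster(2L)
      on.exit(stopCluster(cl))
      system.time(parLapply(cl, 1:10, dummy_function))
    }

    parallel_demo(10)    # fast
    parallel_demo(1e7)   # markedly slower: the unused vector is shipped to both workers
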
2006 Nov 07
1
variable problem
Hi everyone, I am not sure this is possible, so I would be interested in your responses. Say I have a variable 'v' containing the string "myargument", and I have a function 'f' that takes this argument as follows: f <- function( myargument=5 ) { ... does something... } Is there any way I can say something like f( v=10 ) such that it will be evaluated as f(
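
One way to do this, as a minimal sketch (the body of 'f' is a stand-in): build the argument list by name with setNames() and invoke the function through do.call().

    f <- function(myargument = 5) {
      myargument * 2                     # stand-in for "does something"
    }

    v <- "myargument"
    do.call(f, setNames(list(10), v))    # equivalent to f(myargument = 10)
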
2018 Mar 15
0
clusterApply arguments
On Thu, Mar 15, 2018 at 3:39 AM, <FlorianSchwendinger at gmx.at> wrote: > Thank you for your answer! > I agree with you except for the 3 (Error) example and > I realize now I should have started with that in the explanation. > > From my point of view > parLapply(cl = clu, X = 1:2, fun = fun, c = 1) > shouldn't give an error. > > This could be easily avoided by
2018 Mar 14
2
clusterApply arguments
Hi! I noticed that the argument matching of clusterApply (and therefore parLapply) goes wrong when one of the arguments of the function is called "c". In this case, the argument "c" is used as the cluster, and the functions give the following error message: "Error in checkCluster(cl) : not a valid cluster". Of course, "c" is for many reasons an unfortunate
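
A hedged sketch of the clash and a user-side workaround: in the parallel versions discussed here, the 'c = 1' supplied through '...' ends up partially matching the 'cl' argument of the inner clusterApply() call, hence "not a valid cluster". Wrapping the call in an anonymous function keeps 'c' out of '...'.

    library(parallel)
    clu <- makeCluster(2L)

    fun <- function(x, c) x + c

    ## parLapply(cl = clu, X = 1:2, fun = fun, c = 1)   # reportedly failed as described

    parLapply(clu, 1:2, function(x) fun(x, c = 1))      # workaround: 'c' stays local

    stopCluster(clu)
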
2018 Mar 15
1
clusterApply arguments
On 03/15/2018 05:25 PM, Henrik Bengtsson wrote: > On Thu, Mar 15, 2018 at 3:39 AM, <FlorianSchwendinger at gmx.at> wrote: >> Thank you for your answer! >> I agree with you except for the 3 (Error) example and >> I realize now I should have started with that in the explanation. >> >> From my point of view >> parLapply(cl = clu, X = 1:2, fun = fun, c =
2018 Mar 15
2
clusterApply arguments
Thank you for your answer! I agree with you except for example 3 (Error), and I realize now I should have started with that in the explanation. From my point of view parLapply(cl = clu, X = 1:2, fun = fun, c = 1) shouldn't give an error. This could easily be avoided by using all the argument names in the clusterApply call of parLapply, which means changing, parLapply <-
2018 Sep 12
0
Environments and parallel processing
This is all normal: a fork cluster works with processes that do not share memory. When you create a fork cluster, you create a new process that has the same memory layout as the parent, but from that moment on its memory is independent of the parent process. When parLapply is done, the results are serialized and copied back to the parent process. The serialized environment is independent of the
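
A small illustration of that point (fork clusters are available on Unix-alikes only; the variable names are made up): a worker modifies its own copy of 'x', and only the returned values travel back to the parent.

    library(parallel)

    x <- 1
    cl <- makeForkCluster(2L)
    res <- parLapply(cl, 1:2, function(i) { x <<- x + 100; x })
    stopCluster(cl)

    unlist(res)   # each worker updated its own copy of 'x' (e.g. 101 101)
    x             # still 1 in the parent process
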
2010 Jul 22
0
snow: hierarchical parallelization
I'm parallelizing some computation on hierarchical data, and would find it natural to do something like this (where a call to parLapply is embedded in an outer call to parLapply): cl <- makeCluster(rep.int('localhost', 5), type='SOCK') clusterExport(cl, 'cl') parLapply(cl, 1:5, function(i) parLapply(cl, 1:5, function(j) i * j)) Snow
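
A cluster object cannot usefully be exported to, and used from, its own workers, so the nested call above is unlikely to work as intended. A common alternative (a sketch using the parallel package, not snow-specific) is to flatten the two levels into one list of index pairs and parallelize over that once:

    library(parallel)
    cl <- makePSOCKcluster(5L)

    pairs <- Map(c, rep(1:5, each = 5), rep(1:5, times = 5))   # all (i, j) combinations
    res <- parLapply(cl, pairs, function(p) p[1] * p[2])

    stopCluster(cl)
    matrix(unlist(res), nrow = 5)   # the same i * j table the nested calls aimed for
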
2019 Jun 05
0
MacOS parallel::makeCluster fails
Hi Dominik, from the output, the master process could not "listen" on the port where it expects a connection from the worker. We need to find out why. I'd recommend first creating a minimal reproducible example (one that does not use future, only parallel, and a minimal number of workers, ideally just 2). Then I'd recommend checking whether the problem still exists with
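
A minimal reproducible example of the kind suggested here could be as small as the sketch below (two workers, parallel only); setting outfile = "" makes worker-side output visible on the master console, which often helps when the setup hangs or fails.

    library(parallel)
    cl <- makePSOCKcluster(2L, outfile = "")   # show worker output on the master console
    parLapply(cl, 1:2, function(x) Sys.getpid())
    stopCluster(cl)
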