Displaying 20 results from an estimated 1000 matches similar to: "error in parallel:::sendMaster"
2019 Dec 04
0
error in parallel:::sendMaster
Hi all,
With the help of Tomas, I was able to track the issue down: prior to R 3.6.0, the parallel package passed an uninitialized variable as the file descriptor argument to the close() system call.
In my particular R session this uninitialized variable (reproducibly) was holding the value 7, which corresponded to the file descriptor of the write end of the pipe the second child would use to
2019 Nov 28
1
error in parallel:::sendMaster
Hi Andreas,
thank you very much, good job finding it was EBADF. Now the question is
why the pipe was closed prematurely; it could have been closed accidentally by R
(a race condition in the cleanup code in fork.c) or possibly by some
other code running in the same process (maybe the R program itself or
some other code it runs). Maybe we can take this off the list and come
back when we know the cause
2019 Nov 28
0
error in parallel:::sendMaster
Hi Tomas,
Thanks for your prompt reply and your offer to help. I might need to get back to this since I am not too experienced in debugging these kinds of issues. Anyway, I gave it a try and I think I have found the immediate cause:
I installed the debug symbols (r-base-core-dbg), placed https://github.com/wch/r-source/blob/tags/R-3-5-2/src/library/parallel/src/fork.c in cwd and changed the
2019 Nov 27
2
error in parallel:::sendMaster
Hi Andreas,
the error is reported when some child process cannot send results to the
master process, which originates from an error returned by write() -
when write() returns -1 or 0. The logic around the writing has not
changed since R 3.5.2. It should not be related to the printing in the
child, only to returning the value. The problem may be originating from
the execution environment,
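A defensive pattern on the calling side, sketched under the assumption that failed deliveries surface as missing results (mclapply() hands back NULL for elements whose child could not send them):

library(parallel)
res <- mclapply(seq_len(8), function(i) i^2, mc.cores = 4L)
# elements whose child failed to deliver come back as NULL;
# check them before trusting the output
missing <- vapply(res, is.null, logical(1L))
if (any(missing))
  warning("no result delivered for jobs: ",
          paste(which(missing), collapse = ", "))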
2019 Nov 27
0
error in parallel:::sendMaster
Hi again,
One important correction of my first message: I misinterpreted the output. Actually, in that R session two input files were processed one after the other in a loop. The first (with 88 parts) went fine. The second (with 85 parts) produced the sendMaster errors and failed. If (in a new session via Rscript) I only process the second input file, it works. The other observations on R vs
2019 Nov 27
2
error in parallel:::sendMaster
Hi,
I am facing a very weird problem with parallel::mclapply. I have a script which does some data wrangling on an input dataset in parallel and then writes the results to disk. I have been using this script daily for more than a year, always on an EC2 instance launched from the same AMI (no updates installed after launch), and have processed thousands of different input data sets successfully. I now
2012 Apr 10
1
multicore/mcparallel error
Hello everyone,
I'm trying to parallelize an R script I have written. To do this, I am
first trying to use the multicore package, because I've had some previous
success with that.
The function I'm trying to parallelize is illumqc. I'd like to create a
separate process for each of 8 files, contained in the vector "files".
Below is my code:
for(i in
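A minimal sketch of the per-file fork the poster describes, assuming illumqc() takes a single file name ("files" and "illumqc" are from the original post; multicore's functions now live in the parallel package):

library(parallel)
# one forked child per file; mccollect() blocks until all of them finish
jobs <- lapply(files, function(f) mcparallel(illumqc(f)))
results <- mccollect(jobs)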
2012 Mar 23
1
serialization regression in 2.15.0 beta
Hi,
I am experiencing a problem related to serialization behavior in
2.15.0 beta (binary installed from Debian unstable) and 2.16.0 (from
svn) that is not present in 2.14.2 (binary from Debian testing).
I don't fully understand the problem. Also, I tried but have not yet
been able to create a small, self-contained example that reproduces
the problem. However, I do have a large, not
2018 Jun 21
1
DOCUMENTATION(?): parallel::mcparallel() gives various types of "Error in unserialize(r) : ..." errors if value is of type raw
I stumbled upon the following:
f <- parallel::mcparallel(raw(0L))
parallel::mccollect(f)
# $`77083`
# NULL
but
f <- parallel::mcparallel(raw(1L))
parallel::mccollect(f)
# Error in unserialize(r) : read error
traceback()
# 2: unserialize(r)
# 1: parallel::mccollect(f)
(restarting because the above appears to corrupt the R session)
f <- parallel::mcparallel(raw(2L))
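A possible workaround until this is fixed, assuming it is the bare raw return value that trips the transfer: wrap the result so it is serialized like any other object, and unwrap it after collection.

f <- parallel::mcparallel(list(raw(1L)))
r <- parallel::mccollect(f)
r[[1]][[1]]  # the raw vector, unwrapped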
2013 May 31
1
R 3.0.1 : parallel collection triggers "long memory not supported yet"
Dear R developers:
...
7: lapply(seq_len(cores), inner.do)
8: FUN(1:3[[3]], ...)
9: sendMaster(try(lapply(X = S, FUN = FUN, ...), silent = TRUE))
Selection: .....................Error in sendMaster(try(lapply(X = S, FUN = FUN, ...), silent = TRUE)) :
  long vectors not supported yet: memory.c:3100
Admittedly, my outcome will be a very big list, with 30,000 elements, each containing data frames.
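A common workaround when each child's result serializes to something enormous is to spill the result to disk and send back only the path; a sketch, with big_computation() standing in for the real per-element work:

library(parallel)
paths <- mclapply(seq_len(3L), function(i) {
  out <- big_computation(i)   # placeholder for the real work
  p <- tempfile(fileext = ".rds")
  saveRDS(out, p)             # keeps the payload sent to the master tiny
  p
}, mc.cores = 3L)
results <- lapply(paths, readRDS)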
2015 Jun 21
0
Listing all spawned jobs/processes after parallel::mcparallel()?
On 20/06/2015 22:21, Henrik Bengtsson wrote:
> QUESTION:
> Is it possible to query the number of active jobs running after launching
> them with parallel::mcparallel()?
>
> For example, if I launch 3 jobs using:
>
>> library(parallel)
>> f <- lapply(1:3, FUN=mcparallel)
>
> then I can inspect them as:
>
>> str(f)
> List of 3
> $ :List of 2
>
2015 Jun 20
2
Listing all spawned jobs/processes after parallel::mcparallel()?
QUESTION:
Is it possible to query the number of active jobs running after launching
them with parallel::mcparallel()?
For example, if I launch 3 jobs using:
> library(parallel)
> f <- lapply(1:3, FUN=mcparallel)
then I can inspect them as:
> str(f)
List of 3
$ :List of 2
..$ pid: int 142225
..$ fd : int [1:2] 8 13
..- attr(*, "class")= chr [1:3] "parallelJob"
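For inspection, parallel does keep its own registry of forked children; a sketch using the unexported accessor (an undocumented internal, so it may change between R versions):

library(parallel)
jobs <- lapply(1:3, function(i) mcparallel(Sys.sleep(30)))
parallel:::children()  # unexported: the child processes parallel knows about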
2018 Aug 31
0
Detecting whether a process exists or not by its PID?
On 08/31/2018 03:13 PM, Gábor Csárdi wrote:
> On Fri, Aug 31, 2018 at 2:51 PM Tomas Kalibera <tomas.kalibera at gmail.com> wrote:
> [...]
>> kill(sig=0) is specified by POSIX but indeed as you say there is a race
>> condition due to PID-reuse. In principle, detecting that a worker
>> process is still alive cannot be done correctly outside base R.
> I am not sure
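A sketch of the kill(sig = 0) probe from R, with the caveat discussed above that PID reuse makes the answer unreliable; it assumes tools::pskill() reports success as a logical, as it does in recent R:

# signal 0 performs the existence/permission check without delivering a
# signal; TRUE means "some process with this PID exists", which is not
# necessarily the worker we started (PID-reuse race)
pid_alive <- function(pid) isTRUE(tools::pskill(pid, 0L))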
2010 Sep 16
2
problem reading Matlab file into R
Hi,
I'm trying to read a .mat file into R (2.11.1) with medium success so far. The file I have is a MATLAB 5.0 MAT-file exported from RiverSurveyor LIVE software (http://www.sontek.com/software.php). I have R.matlab and Rcompression installed and readMat() starts reading the file, as can be seen in verbose mode (hence medium success), but then gives the following error:
Error in dim(matrix)
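For reference, a minimal sketch of the reading attempt described above (the file name is a placeholder):

library(R.matlab)
mat <- readMat("riversurveyor_export.mat", verbose = TRUE)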
2013 Feb 07
1
How to NAMESPACE OS-specific importFrom?
I'd like to importFrom(parallel, mccollect, mcparallel) but on Windows these are
not exported because this
if(tools:::.OStype() == "unix") {
export(mccollect, mcparallel, mc.reset.stream, mcaffinity)
}
appears at src/library/parallel/NAMESPACE:6 of svn r61857. So should I be doing
if (tools:::.OStype() == "unix") {
importFrom(parallel, mccollect, mcparallel)
}
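An alternative that avoids conditional NAMESPACE directives entirely is to resolve the symbols at run time, only on platforms where they exist; a sketch:

# no importFrom() needed: guard at run time and rely on lazy :: lookup
run_job <- function(expr) {
  if (.Platform$OS.type != "unix")
    stop("mcparallel() requires a Unix-alike")
  parallel::mcparallel(expr)
}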
2019 May 19
2
Race condition on parallel package's mcexit and rmChild
I've been hacking with the parallel package for some time and built a
parallel processing framework with it. However, although very rarely,
I did notice an "ignoring SIGPIPE signal" error every now and then.
After a deep dig into the source code, I think I found something worth
noticing.
In short, writing to the pipe in the C function mc_exit(SEXP sRes) may cause
a SIGPIPE. Code from
2012 Aug 30
4
Leading plus in numeric fields
Hello R experts,
I have got this data frame:
'data.frame': 1 obs. of 20 variables:
$ Anno : chr "PREVISIONI VS TARGET"
$ OreTot: num 41
$ GioTot: logi NA
$ OrGTot: logi NA
$ OreCli: num 99
$ GioCli: logi NA
$ OrGCli: logi NA
$ OreFor: num -27
$ GioFor: logi NA
$ OrGFor: logi NA
$ OreOrt: num -18
$ GioOrt: logi NA
$ OrGOrt: logi NA
$ OreSpo: num -6
$ GioSpo: logi
2016 Aug 30
1
mcparallel / mccollect
Hi there,
I've tried to implement an asynchronous job scheduler using
parallel::mcparallel() and parallel::mccollect(..., wait=FALSE). My
goal was to send processes to the background, leaving the R session
open for interactive use while all jobs store their results on the
file system. To keep track of the running jobs I've stored the process
ids and written a little helper to not spawn
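The non-blocking poll described above, in minimal form:

library(parallel)
job <- mcparallel({ Sys.sleep(5); "done" })
res <- mccollect(job, wait = FALSE)  # NULL while the child is still running
while (is.null(res)) {
  Sys.sleep(0.5)                     # interactive work would happen here
  res <- mccollect(job, wait = FALSE)
}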
2020 Jan 10
0
SUGGESTION: Settings to disable forked processing in R, e.g. parallel::mclapply()
On 1/10/20 7:33 AM, Henrik Bengtsson wrote:
> I'd like to pick up this thread started on 2019-04-11
> (https://hypatia.math.ethz.ch/pipermail/r-devel/2019-April/077632.html).
> Modulo all the other suggestions in this thread, would my proposal of
> being able to disable forked processing via an option or an
> environment variable make sense?
I don't think R should be doing
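While no such official setting exists, a package can honor a user-level escape hatch of its own; a sketch with a hypothetical environment variable name:

# R_DISABLE_FORK is hypothetical, not an official R setting
no_fork <- isTRUE(as.logical(Sys.getenv("R_DISABLE_FORK", "FALSE")))
my_apply <- if (no_fork) lapply else
  function(X, FUN, ...) parallel::mclapply(X, FUN, ..., mc.cores = 2L)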
2019 May 20
1
Race condition on parallel package's mcexit and rmChild
I have read the latest code, but I still don't understand why mc_exit
needs to write a zero on exit. If a child closes its pipe, the parent will
know that on the next select().
Best,
Yijiang
Tomas Kalibera <tomas.kalibera at gmail.com> wrote on Mon, May 20, 2019 at 10:52 PM:
>
> This issue has already been addressed in 76462 (R-devel) and also ported
> to R-patched. In fact rmChild() is used in