similar to: BUG: tools::pskill() returns incorrect values or non-initiated garbage values [PATCH]

Displaying 20 results from an estimated 600 matches similar to: "BUG: tools::pskill() returns incorrect values or non-initiated garbage values [PATCH]"

2018 Aug 31
2
Detecting whether a process exists or not by its PID?
On Fri, Aug 31, 2018 at 2:51 PM Tomas Kalibera <tomas.kalibera at gmail.com> wrote:
[...]
> kill(sig=0) is specified by POSIX but indeed as you say there is a race
> condition due to PID-reuse. In principle, detecting that a worker
> process is still alive cannot be done correctly outside base R.
I am not sure why you think so.
> At user-level I would probably consider some
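A minimal sketch of the kill(sig=0) probe discussed here, assuming a unix-alike and an R version in which tools::pskill() returns a logical (the fix referenced in the thread title):
    pid_exists <- function(pid) {
      # Signal 0 performs error checking only; no signal is delivered.
      isTRUE(tools::pskill(pid, signal = 0L))
    }
    pid_exists(Sys.getpid())  # TRUE for the current process
The PID-reuse race described above still applies: TRUE only means *some* process with that PID exists.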
2018 Aug 30
3
Detecting whether a process exists or not by its PID?
Hi, I'd like to test whether a (localhost) PSOCK cluster node is still running or not by its PID, e.g. it may have crashed / core dumped. I'm ok with getting false-positive results because *another* process with the same PID has since started. I can get the PID of each cluster node by querying it for its Sys.getpid(), e.g.
pids <- parallel::clusterEvalQ(cl, Sys.getpid())
Is there
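For context, a self-contained sketch of collecting worker PIDs as the excerpt describes (the cluster size here is illustrative):
    library(parallel)
    cl <- makeCluster(2L, type = "PSOCK")
    # Ask each worker for its own PID, as in the excerpt above.
    pids <- unlist(clusterEvalQ(cl, Sys.getpid()))
    print(pids)
    stopCluster(cl)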
2018 Aug 31
0
Detecting whether a process exists or not by its PID?
On 08/31/2018 03:13 PM, Gábor Csárdi wrote:
> On Fri, Aug 31, 2018 at 2:51 PM Tomas Kalibera <tomas.kalibera at gmail.com> wrote:
> [...]
>> kill(sig=0) is specified by POSIX but indeed as you say there is a race
>> condition due to PID-reuse. In principle, detecting that a worker
>> process is still alive cannot be done correctly outside base R.
> I am not sure
2018 Aug 31
0
Detecting whether a process exists or not by its PID?
On 08/31/2018 01:18 AM, Henrik Bengtsson wrote:
> Hi, I'd like to test whether a (localhost) PSOCK cluster node is still
> running or not by its PID, e.g. it may have crashed / core dumped.
> I'm ok with getting false-positive results due to *another* process
> with the same PID has since started.
kill(sig=0) is specified by POSIX but indeed as you say there is a race
2019 Nov 27
2
error in parallel:::sendMaster
Hi, I am facing a very weird problem with parallel::mclapply. I have a script which does some data wrangling on an input dataset in parallel and then writes the results to disk. I have been using this script daily for more than one year always on an EC2 instance launched from the same AMI (no updates installed after launch) and processed thousands of different input data sets successfully. I now
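When a child cannot send its results back, the master typically receives NULL for that element (with a warning) rather than an error; a hedged sketch of checking mclapply() output defensively, assuming a unix-alike:
    library(parallel)
    res <- mclapply(1:4, sqrt, mc.cores = 2L)
    # Workers that failed to deliver (e.g. after a sendMaster error)
    # show up as NULL entries in the result list.
    failed <- vapply(res, is.null, logical(1L))
    if (any(failed))
      warning("no result from workers: ", paste(which(failed), collapse = ", "))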
2019 Nov 27
2
error in parallel:::sendMaster
Hi Andreas, the error is reported when some child process cannot send results to the master process, which originates from an error returned by write() - when write() returns -1 or 0. The logic around the writing has not changed since R 3.5.2. It should not be related to the printing in the child, only to returning the value. The problem may be originating from the execution environment,
2019 Nov 28
1
error in parallel:::sendMaster
Hi Andreas, thank you very much, good job finding it was EBADF. Now the question is why the pipe has been closed prematurely; it could be accidentally by R (a race condition in the cleanup code in fork.c) or possibly by some other code running in the same process (maybe the R program itself or some other code it runs). Maybe we can take this off the list and come back when we know the cause
2018 Jun 21
1
DOCUMENTATION(?): parallel::mcparallel() gives various types of "Error in unserialize(r) : ..." errors if value is of type raw
I stumbled upon the following:
f <- parallel::mcparallel(raw(0L))
parallel::mccollect(f)
# $`77083`
# NULL
but
f <- parallel::mcparallel(raw(1L))
parallel::mccollect(f)
# Error in unserialize(r) : read error
traceback()
# 2: unserialize(r)
# 1: parallel::mccollect(f)
(restarting because the above appears to corrupt the R session)
f <- parallel::mcparallel(raw(2L))
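A workaround consistent with the behaviour described here is to avoid returning a bare raw vector from the child, e.g. by wrapping it in a list; a hedged sketch, assuming a unix-alike:
    library(parallel)
    # Wrapping the raw vector in a list sidesteps the unserialize() error,
    # since the payload sent back is no longer a bare raw vector.
    f <- mcparallel(list(raw(2L)))
    str(mccollect(f))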
2015 Jun 20
2
Listing all spawned jobs/processed after parallel::mcparallel()?
QUESTION: Is it possible to query the number of active jobs running after launching them with parallel::mcparallel()? For example, if I launch 3 jobs using:
> library(parallel)
> f <- lapply(1:3, FUN=mcparallel)
then I can inspect them as:
> str(f)
List of 3
 $ :List of 2
  ..$ pid: int 142225
  ..$ fd : int [1:2] 8 13
  ..- attr(*, "class")= chr [1:3] "parallelJob"
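One answer relies on parallel's unexported bookkeeping: parallel:::children() lists the child processes the session still knows about. A sketch only, since ::: reaches into internals that may change:
    library(parallel)
    jobs <- lapply(1:3, function(i) mcparallel(Sys.sleep(i)))
    length(parallel:::children())  # children still registered
    mccollect(jobs)                # reap the jobs when done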
2013 Feb 07
1
How to NAMESPACE OS-specific importFrom?
I'd like to importFrom(parallel, mccollect, mcparallel) but on Windows these are not exported because this
if(tools:::.OStype() == "unix") {
    export(mccollect, mcparallel, mc.reset.stream, mcaffinity)
}
appears at src/library/parallel/NAMESPACE:6 of svn r61857. So should I be doing
if (tools:::.OStype() == "unix") {
    importFrom(parallel, mccollect, mcparallel)
}
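NAMESPACE files are parsed as R code, so OS-conditional directives of this form do work; a sketch of the importing package's NAMESPACE using the public .Platform check rather than the unexported tools:::.OStype():
    if (.Platform$OS.type == "unix") {
      importFrom(parallel, mccollect, mcparallel)
    }
Any code calling these functions then needs the same run-time guard on Windows.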
2020 Jan 10
2
SUGGESTION: Settings to disable forked processing in R, e.g. parallel::mclapply()
I'd like to pick up this thread started on 2019-04-11 (https://hypatia.math.ethz.ch/pipermail/r-devel/2019-April/077632.html). Modulo all the other suggestions in this thread, would my proposal of being able to disable forked processing via an option or an environment variable make sense? I've prototyped a working patch that works like:
> options(fork.allowed = FALSE)
>
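fork.allowed is the option name proposed in the patch, not an existing base R option; at user level the same effect can be approximated with a guard, as in this sketch:
    safe_mclapply <- function(X, FUN, ...) {
      # Fall back to sequential lapply() when forking is disallowed.
      if (!isTRUE(getOption("fork.allowed", TRUE)))
        lapply(X, FUN, ...)
      else
        parallel::mclapply(X, FUN, ...)
    }
    options(fork.allowed = FALSE)
    res <- safe_mclapply(1:3, sqrt)  # runs sequentially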
2016 Aug 30
1
mcparallel / mccollect
Hi there, I've tried to implement an asynchronous job scheduler using parallel::mcparallel() and parallel::mccollect(..., wait=FALSE). My goal was to send processes to the background, leaving the R session open for interactive use while all jobs store their results on the file system. To keep track of the running jobs I've stored the process ids and written a little helper to not spawn
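A minimal sketch of that pattern, assuming a unix-alike: launch with mcparallel(), then poll with mccollect(wait = FALSE), which returns NULL while the job is still running:
    library(parallel)
    job <- mcparallel({ Sys.sleep(2); "done" })
    repeat {
      r <- mccollect(job, wait = FALSE)  # NULL until the job finishes
      if (!is.null(r)) break
      Sys.sleep(0.5)  # the session stays usable between polls
    }
    r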
2012 Apr 10
1
multicore/mcparallel error
Hello everyone, I'm trying to parallelize an R script I have written. To do this, I am first trying to use the multicore package, because I've had some previous success with that. The function I'm trying to parallelize is illumqc. I'd like to create a separate process for each of 8 files, contained in the vector "files". Below is my code: for(i in
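For a fixed set of inputs, mclapply() is usually simpler than managing one process per file by hand; a hedged sketch in which 'files' and 'illumqc' are the poster's objects and assumed to exist:
    library(parallel)
    # One worker per file, up to 8 concurrent processes.
    results <- mclapply(files, illumqc, mc.cores = 8L)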
2020 Jan 10
2
SUGGESTION: Settings to disable forked processing in R, e.g. parallel::mclapply()
If I understand the thread correctly this is an RStudio issue and I would suggest that the developers consider using pthread_atfork() so RStudio can handle forking as they deem fit (bail out with an error or make RStudio work). Note that in principle the functionality requested here can be easily implemented in a package so R doesn't need to be modified.
Cheers,
Simon
Sent from my iPhone
2005 Jul 20
1
R function to kill a process
Is there an R function to kill a process? I found one in package fork but it is specific to UNIX and I want something that also works on Windows. The XP console command, taskkill, will do it so I can easily get the effect but it won't work on other Windows systems, even 2000 and NT. I found a free utility pskill.exe by googling around that does work across 2000/NT/XP but was still
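In modern R this is covered by tools::pskill(), which postdates the 2005 question and works on both unix-alikes and Windows; a sketch with a hypothetical PID:
    pid <- 12345L  # hypothetical PID of the process to terminate
    tools::pskill(pid, tools::SIGTERM)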
2018 Aug 31
1
Detecting whether a process exists or not by its PID?
On Fri, Aug 31, 2018 at 3:35 PM Tomas Kalibera <tomas.kalibera at gmail.com> wrote:
>
> On 08/31/2018 03:13 PM, Gábor Csárdi wrote:
> > On Fri, Aug 31, 2018 at 2:51 PM Tomas Kalibera <tomas.kalibera at gmail.com> wrote:
> > [...]
> >> kill(sig=0) is specified by POSIX but indeed as you say there is a race
> >> condition due to PID-reuse. In
2019 May 03
1
Strange error messages from parallel::mcparallel family under 3.6.0
Dear All, Since upgrading to 3.6.0, I've been getting strange error messages from the child process when using mcparallel/mccollect. Before filing a report in the Bugzilla, I want to figure out whether I had been doing something wrong all this time and R 3.6.0 has exposed it, or whether something else is going on.
# Background #
Ultimately, what I want to do is to be able to set a time
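A sketch of the time-limit pattern described here, assuming a unix-alike; mccollect()'s timeout argument applies when wait = FALSE:
    library(parallel)
    job <- mcparallel(Sys.sleep(10))
    r <- mccollect(job, wait = FALSE, timeout = 2)  # NULL after a 2s timeout
    if (is.null(r)) {
      tools::pskill(job$pid)  # give up and kill the child
      mccollect(job)          # reap it so it does not linger
    }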
2017 Apr 25
3
tempdir() may be deleted during long-running R session
>>>>> Jeroen Ooms <jeroenooms at gmail.com>
>>>>>     on Tue, 25 Apr 2017 15:05:51 +0200 writes:

> On Tue, Apr 25, 2017 at 1:00 PM, Martin Maechler
> <maechler at stat.math.ethz.ch> wrote:
>> As I've found it is not at all hard to add an option
>> which checks the existence and if the directory is no
>>
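This discussion led to the check argument added in R 3.5.0: tempdir(check = TRUE) recreates the per-session temporary directory if something has removed it:
    td <- tempdir(check = TRUE)  # recreated if it had been cleaned up
    file.exists(td)              # TRUE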
2020 Jan 11
1
SUGGESTION: Settings to disable forked processing in R, e.g. parallel::mclapply()
> On Jan 10, 2020, at 3:10 PM, Gábor Csárdi <csardi.gabor at gmail.com> wrote:
>
> On Fri, Jan 10, 2020 at 7:23 PM Simon Urbanek
> <simon.urbanek at r-project.org> wrote:
>>
>> Henrik,
>>
>> the example from the post works just fine in CRAN R for me - the post was about homebrew build so it's conceivably a bug in their libraries.
>
> I
2013 Aug 02
1
segfault and RunSnowWorker: not found
Hi, While I suspect that this is an issue peculiar to my machine (Debian squeeze amd64, R version 3.0.1, up-to-date packages), I'm hoping that somebody on this list may be able to give me suggestions on how to troubleshoot and fix the following:
> library (snow)
> cl <- makeSOCKcluster(c("localhost","localhost"))
sh: 1: RunSnowWorker: not found
I presume/hope
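The error means the RunSnowWorker launch script is not on the PATH of the shell that starts the workers; one workaround (a sketch that sidesteps snow rather than fixing it) is the equivalent cluster type in base R's parallel package, whose workers are started via Rscript directly:
    library(parallel)
    cl <- makePSOCKcluster(c("localhost", "localhost"))
    clusterEvalQ(cl, Sys.getpid())
    stopCluster(cl)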