similar to: Detecting whether a process exists or not by its PID?

Displaying 20 results from an estimated 1000 matches similar to: "Detecting whether a process exists or not by its PID?"

2018 Aug 31
2
Detecting whether a process exists or not by its PID?
On Fri, Aug 31, 2018 at 2:51 PM Tomas Kalibera <tomas.kalibera at gmail.com> wrote: [...] > kill(sig=0) is specified by POSIX but indeed as you say there is a race > condition due to PID-reuse. In principle, detecting that a worker > process is still alive cannot be done correctly outside base R. I am not sure why you think so. > At user-level I would probably consider some
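
A minimal sketch of the kill(sig = 0) idea discussed in this thread, assuming a POSIX system and an R version whose tools::pskill() reliably reports success (see the pskill() bug report further down this page); the PID-reuse race and permission errors for other users' processes still apply:

    pid_exists <- function(pid) {
      # signal 0 performs the existence/permission check without delivering a signal
      isTRUE(tools::pskill(pid, signal = 0L))
    }
    pid_exists(Sys.getpid())          # TRUE: the current process certainly exists
    pid_exists(.Machine$integer.max)  # almost certainly FALSE
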
2018 Aug 31
0
Detecting whether a process exists or not by its PID?
On 08/31/2018 01:18 AM, Henrik Bengtsson wrote: > Hi, I'd like to test whether a (localhost) PSOCK cluster node is still > running or not by its PID, e.g. it may have crashed / core dumped. > I'm ok with getting false-positive results because *another* process > with the same PID has since started. kill(sig=0) is specified by POSIX but indeed as you say there is a race
2018 Aug 31
0
Detecting whether a process exists or not by its PID?
On 08/31/2018 03:13 PM, Gábor Csárdi wrote: > On Fri, Aug 31, 2018 at 2:51 PM Tomas Kalibera <tomas.kalibera at gmail.com> wrote: > [...] >> kill(sig=0) is specified by POSIX but indeed as you say there is a race >> condition due to PID-reuse. In principle, detecting that a worker >> process is still alive cannot be done correctly outside base R. > I am not sure
2018 Aug 31
1
Detecting whether a process exists or not by its PID?
On Fri, Aug 31, 2018 at 3:35 PM Tomas Kalibera <tomas.kalibera at gmail.com> wrote: > > On 08/31/2018 03:13 PM, Gábor Csárdi wrote: > > On Fri, Aug 31, 2018 at 2:51 PM Tomas Kalibera <tomas.kalibera at gmail.com> wrote: > > [...] > >> kill(sig=0) is specified by POSIX but indeed as you say there is a race > >> condition due to PID-reuse. In
2018 Mar 18
1
BUG: tools::pskill() returns incorrect values or uninitialized garbage values [PATCH]
For the record, I've just filed the following bug report with a patch to https://bugs.r-project.org/bugzilla3/show_bug.cgi?id=17395: tools::pskill() returns either random garbage or incorrect values, because the internal ps_kill() (a) does not initialize the returned logical, and (b) assigns the returned logical the 0/-1 return value of C's kill(). # Example 1: returns garbage due to
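
As a hedged, Linux-only aside (not part of the patch above): the check can also sidestep pskill()'s return value entirely by consulting /proc, which exists for a process exactly as long as it is alive:

    # Illustrative alternative on Linux only; does not depend on pskill()'s
    # return value on R versions affected by the bug described above.
    pid_alive_linux <- function(pid) dir.exists(file.path("/proc", pid))
    pid_alive_linux(Sys.getpid())   # TRUE on Linux
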
2025 May 11
2
Is it possible to gracefully interrupt a child R process on MS Windows?
In help("pskill", package = "tools") is says: Only SIGINT and SIGTERM will be defined on Windows, and pskill will always use the Windows system call TerminateProcess. As far as I understand it, TerminateProcess [1] terminates the process "quite abruptly". Specifically, it is not possible for the process to intercept the termination and gracefully shutdown. In R
2025 May 12
1
Is it possible to gracefully interrupt a child R process on MS Windows?
I think that for reliability and portability of termination, one needs to implement an application-specific termination protocol on both ends. Only within specific application constraints can one also define what graceful termination means. Typically, one also has other expectations of the termination process - such as that the processes will terminate in some finite time, i.e. soon. In some cases one
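
A minimal sketch of such an application-level protocol, assuming a POSIX system (parallel::mcparallel() does not fork on Windows); the sentinel-file convention is purely illustrative, not anything defined by the parallel package, and the same idea carries over to PSOCK workers on Windows:

    library(parallel)
    stopfile <- tempfile("stop-")          # sentinel the parent creates to request shutdown
    job <- mcparallel({
      repeat {
        if (file.exists(stopfile)) break   # the child polls between units of work
        Sys.sleep(0.2)                     # stand-in for one unit of real work
      }
      "stopped gracefully"                 # the child gets to clean up and choose its result
    })
    Sys.sleep(1)                           # let the child get going
    file.create(stopfile)                  # ask it to stop
    mccollect(job)                         # list containing "stopped gracefully"
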
2015 Mar 25
2
nested parallel workers
Hi Simon, I'm having trouble with nested parallel workers, specifically, forking inside socket connections. When mclapply is called inside a SOCK, PSOCK or FORK worker I get an error in unserialize(). cl <- makeCluster(1, "SOCK") fun = function(i) { library(parallel) mclapply(1:2, sqrt) } Failure occurs after multiple calls to clusterApply: > clusterApply(cl, 1,
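
A formatted version of the reproduction sketched in this message, using type = "PSOCK" (the "SOCK" type requires the snow package); on affected setups the unserialize() error shows up only after repeated clusterApply() calls:

    library(parallel)
    cl <- makeCluster(1, type = "PSOCK")
    fun <- function(i) {
      library(parallel)
      mclapply(1:2, sqrt)                  # fork inside a socket worker
    }
    for (k in 1:10) print(clusterApply(cl, 1, fun))
    stopCluster(cl)
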
2015 Mar 30
2
nested parallel workers
On 03/25/2015 07:48 PM, Simon Urbanek wrote: > On Mar 25, 2015, at 3:46 PM, Valerie Obenchain <vobencha at fredhutch.org> wrote: > >> Hi Simon, >> >> I'm having trouble with nested parallel workers, specifically, forking inside socket connections. >> > > You simply can't by definition - when you fork *all* the workers share the same connection
2018 Jun 21
1
DOCUMENTATION(?): parallel::mcparallel() gives various types of "Error in unserialize(r) : ..." errors if value is of type raw
I stumbled upon the following: f <- parallel::mcparallel(raw(0L)) parallel::mccollect(f) # $`77083` # NULL but f <- parallel::mcparallel(raw(1L)) parallel::mccollect(f) # Error in unserialize(r) : read error traceback() # 2: unserialize(r) # 1: parallel::mccollect(f) (restarting because the above appears to corrupt the R session) f <- parallel::mcparallel(raw(2L))
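
A hedged workaround sketch (POSIX only), assuming the behaviour reported above is that a raw result reaches the master unserialized and then trips up unserialize(): wrapping the value in a list makes it get serialized like any other object:

    f <- parallel::mcparallel(list(raw(1L)))
    r <- parallel::mccollect(f)
    r[[1L]][[1L]]                          # raw(1L), recovered intact
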
2019 Apr 30
2
mccollect with NULL in R 3.6
Dear All, I'm running into issues with calling mccollect on a list containing NULL using R 3.6 (this used to work in 3.5.3): jobs <- lapply( list(NULL, 'foobar'), function(x) mcparallel(identity(x))) mccollect(jobs, wait = FALSE, timeout = 0) #> Error in names(res) <- pnames[match(s, pids)] : #> 'names' attribute [2] must be the same length as the vector
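
The reproduction from this message, reformatted (POSIX only); the short pause merely gives both children time to deliver their results before the non-waiting collect:

    library(parallel)
    jobs <- lapply(list(NULL, "foobar"), function(x) mcparallel(identity(x)))
    Sys.sleep(0.5)
    mccollect(jobs, wait = FALSE, timeout = 0)
    # On affected R 3.6.x this stops with:
    #   Error in names(res) <- pnames[match(s, pids)] :
    #     'names' attribute [2] must be the same length as the vector [1]
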
2019 May 03
2
mccollect with NULL in R 3.6
On Thu, May 2, 2019 at 7:24 PM Tomas Kalibera <tomas.kalibera at gmail.com> wrote: > > On 5/1/19 12:25 AM, Gergely Daróczi wrote: > > Dear All, > > > > I'm running into issues with calling mccollect on a list containing NULL > > using R 3.6 (this used to work in 3.5.3): > > > > jobs <- lapply( > > list(NULL, 'foobar'), >
2013 Feb 07
1
How to NAMESPACE OS-specific importFrom?
I'd like to importFrom(parallel, mccollect, mcparallel) but on Windows these are not exported because this if(tools:::.OStype() == "unix") { export(mccollect, mcparallel, mc.reset.stream, mcaffinity) } appears at src/library/parallel/NAMESPACE:6 of svn r61857. So should I be doing if (tools:::.OStype() == "unix") { importFrom(parallel, mccollect, mcparallel) }
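
One commonly used alternative (a sketch, not necessarily the answer given in this thread): skip importFrom() altogether and call the functions fully qualified behind a runtime OS check, so the NAMESPACE stays platform-independent:

    # Hypothetical package function; parallel is listed in Imports,
    # and nothing from it is importFrom()'d in NAMESPACE.
    run_jobs <- function(xs) {
      if (.Platform$OS.type != "unix")
        stop("forked jobs are only supported on POSIX systems")
      jobs <- lapply(xs, function(x) parallel::mcparallel(identity(x)))
      parallel::mccollect(jobs)
    }
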
2015 Jun 20
2
Listing all spawned jobs/processed after parallel::mcparallel()?
QUESTION: Is it possible to query the number of active jobs running after launching them with parallel::mcparallel()? For example, if I launch 3 jobs using: > library(parallel) > f <- lapply(1:3, FUN=mcparallel) then I can inspect them as: > str(f) List of 3 $ :List of 2 ..$ pid: int 142225 ..$ fd : int [1:2] 8 13 ..- attr(*, "class")= chr [1:3] "parallelJob"
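
A hedged sketch of one way to answer this: parallel keeps an internal list of fork children which the unexported parallel:::children() returns; being internal, it may change without notice:

    library(parallel)
    f <- lapply(1:3, function(i) mcparallel(Sys.sleep(5)))
    length(parallel:::children())   # typically 3 while the jobs are still running
    mccollect(f)                    # reap them (waits for the sleeps to finish)
    length(parallel:::children())   # typically 0 afterwards
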
2016 Aug 30
1
mcparallel / mccollect
Hi there, I've tried to implement an asynchronous job scheduler using parallel::mcparallel() and parallel::mccollect(..., wait=FALSE). My goal was to send processes to the background, leaving the R session open for interactive use while all jobs store their results on the file system. To keep track of the running jobs I've stored the process ids and written a little helper to not spawn
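
A minimal sketch of the kind of helper described here (all names are illustrative, POSIX only); for brevity it drains jobs in batches rather than backfilling free slots:

    library(parallel)
    run_throttled <- function(exprs, max_jobs = 2L) {
      running <- list(); done <- list()
      for (e in exprs) {
        if (length(running) >= max_jobs) { # simple barrier: collect the batch first
          done    <- c(done, mccollect(running))
          running <- list()
        }
        running[[length(running) + 1L]] <- mcparallel(eval(e))
      }
      c(done, mccollect(running))          # wait for the stragglers
    }
    run_throttled(list(quote(sqrt(2)), quote(Sys.getpid()), quote(1 + 1)))
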
2019 May 03
1
Strange error messages from parallel::mcparallel family under 3.6.0
Dear All, Since upgrading to 3.6.0, I've been getting strange error messages from the child process when using mcparallel/mccollect. Before filing a report in Bugzilla, I want to figure out whether I had been doing something wrong all this time and R 3.6.0 has exposed it, or whether something else is going on. # Background # Ultimately, what I want to do is to be able to set a time
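
A hedged sketch of the time-limit pattern the message hints at (POSIX only): poll with a timeout and terminate the child if it has not delivered a result in time:

    library(parallel)
    job <- mcparallel(Sys.sleep(10))                 # stand-in for a long-running task
    res <- mccollect(job, wait = FALSE, timeout = 2) # poll for up to 2 seconds
    if (is.null(res)) {
      tools::pskill(job$pid, tools::SIGTERM)         # give up on the job
      res <- mccollect(job)                          # reap the terminated child (NULL result)
    }
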
2014 May 21
2
issue with parallel package
Dear maintainers of the parallel package, I ran into an issue with the parallel package in R-3.1.0. The following code prints the message "NULL!" quite a lot. library(parallel) for (n in 1:1000) { p <- mcparallel(sqrt(n)) res <- mccollect(p, wait=FALSE, timeout=1000) mccollect(p) if (is.null(res)) cat(n," NULL!\n") } It does not happen in
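
The loop from this message, reformatted for readability (POSIX only):

    library(parallel)
    for (n in 1:1000) {
      p   <- mcparallel(sqrt(n))
      res <- mccollect(p, wait = FALSE, timeout = 1000)
      mccollect(p)
      if (is.null(res)) cat(n, " NULL!\n")
    }
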
2017 Nov 09
2
check does not check that package examples remove tempdir()
I think recreating tempdir() is ok in an emergency situation, but package code should not be removing tempdir() - it may contain important information. Bill Dunlap TIBCO Software wdunlap tibco.com On Wed, Nov 8, 2017 at 4:55 PM, Henrik Bengtsson <henrik.bengtsson at gmail.com > wrote: > Related to this problem - from R-devel NEWS >
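
A small illustration, assuming the quoted NEWS item is the tempdir(check = TRUE) addition in R 3.5.0; run it only in a throwaway session, since the unlink() really does remove the session's temporary directory to simulate a misbehaving example:

    old <- tempdir()
    unlink(old, recursive = TRUE)   # simulate package example code removing tempdir()
    dir.exists(old)                 # FALSE
    new <- tempdir(check = TRUE)    # re-creates the session temporary directory
    dir.exists(new)                 # TRUE
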
2020 Apr 29
2
mclapply returns NULLs on MacOS when running GAM
Thanks Simon, I will take note of the sensible default for core usage. I'm trying to achieve small-scale parallelism, where tasks take 1-5 seconds, and make fuller use of consumer hardware. It's not an HPC-worthy computation, but even laptops these days come with 4 cores and I don't see a reason not to make use of them. The goal for the current piece of code I'm working on is to bootstrap many
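
One commonly suggested workaround in this situation (a sketch, not the conclusion of this thread; mgcv is assumed as the GAM package): switch from the fork-based mclapply() to a PSOCK cluster, which starts fresh R processes instead of forking a parent that has GUI or graphics frameworks loaded:

    library(parallel)
    cl <- makeCluster(4L)                  # adjust to the hardware
    clusterEvalQ(cl, library(mgcv))        # load the GAM machinery on each worker
    res <- parLapply(cl, seq_len(100), function(i) i^2)   # placeholder for the bootstrap task
    stopCluster(cl)
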
2020 Nov 01
2
parallel PSOCK connection latency is greater on Linux?
I'm exploring latency overhead of parallel PSOCK workers and noticed that serializing/unserializing data back to the main R session is significantly slower on Linux than it is on Windows/MacOS with similar hardware. Is there a reason for this difference and is there a way to avoid the apparent additional Linux overhead? I attempted to isolate the behavior with a test that simply returns
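
A hedged micro-benchmark in the spirit of the test described (absolute numbers will vary with OS and hardware): time many trivial round trips to a single PSOCK worker so that per-call latency dominates the measurement:

    library(parallel)
    cl <- makeCluster(1L)
    rt <- system.time(
      for (i in 1:1000) clusterEvalQ(cl, NULL)   # 1000 tiny request/response round trips
    )
    stopCluster(cl)
    rt[["elapsed"]] / 1000                       # average seconds per round trip
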