Displaying 20 results from an estimated 30 matches for "prescheduling".
2012 Dec 11
1
Bug in mclapply?
I've been using mclapply and have encountered situations where it gives
errors or returns incorrect results. Here's a minimal example, which gives
the error on R 2.15.2 on Mac and Linux:
library(parallel)
f <- function(x) NULL
mclapply(1, f, mc.preschedule = FALSE, mc.cores = 1)
# Error in sum(sapply(res, inherits, "try-error")) :
# invalid 'type' (list) of argument
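A minimal workaround sketch for calls like the one above (the wrapper name is hypothetical and this is an assumption, not an official fix): fall back to a plain lapply() when only one core is requested, which avoids the forked code path that triggers the error.
```
library(parallel)

# Hypothetical wrapper: use the serial path when mc.cores <= 1, so the
# mc.preschedule = FALSE fork machinery is never entered for trivial cases.
mclapply_safe <- function(X, FUN, ..., mc.cores = 1L, mc.preschedule = TRUE) {
  if (mc.cores <= 1L) {
    lapply(X, FUN, ...)
  } else {
    mclapply(X, FUN, ..., mc.cores = mc.cores, mc.preschedule = mc.preschedule)
  }
}

mclapply_safe(1, function(x) NULL, mc.preschedule = FALSE, mc.cores = 1)
# [[1]]
# NULL
```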
2013 Apr 11
1
parallel::mclapply does not return try-error objects with mc.preschedule=TRUE
Hello,
Consider this:
1)
library(parallel)
res <- mclapply(1:2, stop)
#Warning message:
#In mclapply(1:2, stop) :
# all scheduled cores encountered errors in user code
is(res[[1]], 'try-error')
#[1] FALSE
2)
library(parallel)
res <- mclapply(1:2, stop, mc.preschedule=FALSE)
#Warning message:
#In mclapply(1:2, stop, mc.preschedule = FALSE) :
# 2 function calls resulted in an
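A hedged sketch of one way to detect failed elements regardless of the mc.preschedule setting (the helper name is made up; it simply treats anything inheriting from "try-error" or "condition" as a failure):
```
library(parallel)

# Hypothetical helper: flag results that look like errors under either
# scheduling mode.
failed <- function(res) {
  vapply(res, function(x) inherits(x, c("try-error", "condition")), logical(1))
}

res <- mclapply(1:2, stop, mc.preschedule = FALSE)
which(failed(res))
# [1] 1 2
```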
2012 Nov 16
0
Bug in parallel / mclapply
Hi,
there seem to be some (small) bugs in the mclapply function in parallel.
I discovered this in the current R release version, and I checked that it is
still present in R-devel.
I think it only occurs in the code path that is used when
mc.preschedule = FALSE.
Here are two examples:
a)
library(parallel)
mclapply(list(), identity, mc.preschedule=FALSE)
Error in
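A small defensive sketch, assuming the empty input is what triggers the error (the wrapper name is made up): short-circuit the zero-length case before entering the fork path.
```
library(parallel)

# Hypothetical guard: mirror what lapply(list(), ...) returns instead of
# forking when there is nothing to do.
mclapply_empty_ok <- function(X, FUN, ...) {
  if (length(X) == 0L) return(vector("list", 0L))
  mclapply(X, FUN, ...)
}

mclapply_empty_ok(list(), identity, mc.preschedule = FALSE)
# list()
```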
2023 Jun 09
2
inconsistency in mclapply.....
Dear members,
I am using pbmcapply to parallelise my code. But the following code doesn't work:
> LYG <- pbmclapply(LYGH,FUN = arfima,mc.cores = 2,mc.preschedule = FALSE)
| | 0%, ETA NA^
It just hangs.
But the
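A diagnostic sketch with made-up placeholder data (pbmclapply() is, as far as I can tell, a progress-bar wrapper around parallel::mclapply()): running the same shape of call through plain mclapply() helps separate a hang in the wrapper from a hang in the worker function or data.
```
library(parallel)

slow_fit <- function(x) { Sys.sleep(0.2); mean(x) }      # stand-in for arfima()
inputs   <- replicate(8, rnorm(100), simplify = FALSE)   # stand-in for LYGH

# If this completes but the pbmclapply() call does not, the wrapper is the
# likely culprit; if this also hangs, look at the worker function or the data.
res <- mclapply(inputs, slow_fit, mc.cores = 2, mc.preschedule = FALSE)
str(res, list.len = 2)
```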
2010 Apr 13
0
Multicore mapply
Quick question regarding multicore versions of mapply. Package 'multicore'
provides a parallelized version of 'lapply', called 'mclapply'. I haven't
found any parallelized versions of 'mapply', however (although one can use
the lower level function 'parallel', it becomes harder to control the number
of spawned processes etc).
Is anyone aware of a
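For reference, a sketch of the two usual answers today (the data and arithmetic are invented): parallel::mcmapply(), which ships with the parallel package in current R, or emulating mapply() by iterating over an index with mclapply().
```
library(parallel)

x <- 1:5
y <- 6:10

# Option 1: the multicore mapply that now ships in the parallel package.
res1 <- mcmapply(function(a, b) a + b, x, y, mc.cores = 2)

# Option 2: hand-rolled equivalent on top of mclapply(), iterating over indices.
res2 <- mclapply(seq_along(x), function(i) x[i] + y[i], mc.cores = 2)

identical(res1, unlist(res2))
# [1] TRUE
```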
2023 Jun 09
1
inconsistency in mclapply.....
On Fri, 9 Jun 2023 18:01:44 +0000
akshay kulkarni <akshay_e4 at hotmail.com> wrote:
> > LYG <- pbmclapply(LYGH,FUN = arfima,mc.cores = 2,mc.preschedule =
> > FALSE)
> | | 0%, ETA NA^
>
> It just hangs.
My questions from the last time still stand:
0) What is your
2013 Aug 21
1
[LLVMdev] PrescheduleNodesWithMultipleUses() probable mistake.
...st.cpp b/lib/CodeGen/SelectionDAG/ScheduleDAGRRList.cpp
index f5fe168..6e888da
--- a/lib/CodeGen/SelectionDAG/ScheduleDAGRRList.cpp
+++ b/lib/CodeGen/SelectionDAG/ScheduleDAGRRList.cpp
@@ -2850,7 +2850,7 @@ void RegReductionPQBase::PrescheduleNodesWithMultipleUses() {
continue;
// Avoid prescheduling to copies from virtual registers, which don't behave
// like other nodes from the perspective of scheduling heuristics.
- if (SDNode *N = SU->getNode())
+ if (SDNode *N = PredSU->getNode())
if (N->getOpcode() == ISD::CopyFromReg &&
TargetRegisterInf...
2012 Feb 23
1
segfault when using data.table package in conjunction with foreach
Hi all,
I'm trying to use the data.table package within a foreach loop. I'm
grabbing 500M rows of data at a time from two different files and then
doing an aggregate/tapply-like operation in data.table after that. I
had planned on running the foreach loop over all 39 files at once, but
obviously that won't work until I figure out why the
segfault is occurring. The
2013 Aug 21
0
[LLVMdev] PrescheduleNodesWithMultipleUses() causing failure in PickNodeToScheduleBottomUp() ???
Here is a bit more data.
After PrescheduleNodesWithMultipleUses has been run, the following Predecessor/Successor links are 'dumpAll'ed.
(I attach the full dumpAll before & after "Prescheduling SU #7 next to PredSU #4 to guide scheduling in the presence of multiple uses")
SU(3)
Predecessors:
val SU(5): Latency=1
ch SU(7): Latency=1
val SU(7): Latency=1
SU(7):
ch SU(3): Latency=1
val SU(3): Latency=1
val SU(5): Latency=1
It looks odd but seems to be fine as al...
2004 Jan 11
3
newbie question on contrasts and aov
I am moving from SPSS to R/S and trying to reproduce the results of SPSS
in R. I calculated a one-way ANOVA with "spk" as the experimental factor and erp
as the dependent variable.
The results of the ANOVA are the same concerning the mean square, F and p
values. But I also wanted to calculate the contr.sdif(4) contrast on spk. The
results are completely different now. I hope anybody can
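A hedged sketch with simulated data (spk, erp and the group means are all invented): the usual R route is to attach MASS::contr.sdif() to the factor before fitting and read the coefficients from the linear-model summary of the aov fit; whether this matches SPSS depends on how SPSS parameterises the same contrast.
```
library(MASS)   # for contr.sdif()

set.seed(1)
d <- data.frame(spk = gl(4, 25),
                erp = rnorm(100, mean = rep(c(1, 2, 4, 7), each = 25)))

contrasts(d$spk) <- contr.sdif(4)     # successive-difference coding
fit <- aov(erp ~ spk, data = d)
summary.lm(fit)$coefficients          # rows named spk2-1, spk3-2, spk4-3
```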
2013 Aug 20
2
[LLVMdev] PrescheduleNodesWithMultipleUses() causing failure in PickNodeToScheduleBottomUp() ???
Hi,
I have an assert firing due to PickNodeToScheduleBottomUp():
1. having a CallResource in use pushing an interference of current SUnit.
2. having no more SUnits in the AvailableQueue
3. The only interference being the SUnit that just failed due to a Call Resource.
4. An attempt to duplicate this node which has the 'Call Resource' as a physical register.
Thus the call
2013 Aug 21
2
[LLVMdev] PrescheduleNodesWithMultipleUses() causing failure in PickNodeToScheduleBottomUp() ???
Here is a bit more data.
After PrescheduleNodesWithMultipleUses has been run, the following Predecessor/Successor links are 'dumpAll'ed.
(I attach the full dumpAll before & after "Prescheduling SU #7 next to PredSU #4 to guide scheduling in the presence of multiple uses")
SU(3)
Predecessors:
val SU(5): Latency=1
ch SU(7): Latency=1
val SU(7): Latency=1
SU(7):
ch SU(3): Latency=1
val SU(3): Latency=1
val SU(5): Latency=1
It looks odd but seems to be fine as al...
2020 Oct 08
2
exiting mclapply early on error
Hey folks,
Is there any way to exit an mclapply early on error?
For example, in the following mclapply loop, I have to wait for all the processes to finish before the error is returned.
```
mclapply(X = 1:12, FUN = function(x) {Sys.sleep(0.1); if(x == 4) stop()}, mc.cores = 4, mc.preschedule = F)
```
When there are many calculations in FUN, it takes a long time before the error is returned.
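As far as I know there is no built-in early exit in mclapply(); a hedged sketch of one way to approximate it with the lower-level mcparallel()/mccollect() interface (all names and the polling interval are illustrative): launch the jobs, poll for finished results, and kill the remaining children as soon as an error comes back.
```
library(parallel)

# Hypothetical inputs: 12 tasks of ~0.1 s each, the 4th of which errors.
jobs    <- lapply(1:12, function(x)
  mcparallel({ Sys.sleep(0.1); if (x == 4) stop("boom"); x }))
pending <- jobs
results <- list()

repeat {
  done <- mccollect(pending, wait = FALSE, timeout = 0.05)  # poll, don't block
  if (!is.null(done)) {
    results <- c(results, done)
    pending <- Filter(function(j) !(j$pid %in% as.integer(names(done))), pending)
    if (any(vapply(done, inherits, logical(1), "try-error"))) {
      # an error came back: stop the remaining workers and bail out early
      if (length(pending))
        tools::pskill(vapply(pending, function(j) j$pid, integer(1)))
      break
    }
  }
  if (length(pending) == 0L) break
}
```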
2020 Jun 06
0
R 4.0.1 is released
The build system rolled up R-4.0.1.tar.gz (codename "See Things Now") this morning.
The list below details the changes in this release.
You can get the source code from
http://cran.r-project.org/src/base/R-4/R-4.0.1.tar.gz
or wait for it to be mirrored at a CRAN site nearer to you.
Binaries for various platforms will appear in due course.
For the R Core Team,
Peter Dalgaard
2011 Jul 12
2
MC-Simulation with foreach: Some cores finish early
Dear R-Users,
I run a MC simulation using the packages "foreach" and "doMC" on a
PowerMac with 24 cores. There are roughly a hundred parameter sets and I
parallelized the program so that each core computes one of these
parameter sets completely.
The problem is that some parameter sets take a lot longer to compute than
others. After a while there are only a quarter
2019 Nov 27
2
error in parallel:::sendMaster
Hi Andreas,
the error is reported when some child process cannot send results to the
master process, which originates from an error returned by write() -
when write() returns -1 or 0. The logic around the writing has not
changed since R 3.5.2. It should not be related to the printing in the
child, only to returning the value. The problem may be originating from
the execution environment,
2013 Sep 24
0
[LLVMdev] MI Scheduler Update (was Experimental Evaluation of the Schedulers in LLVM 3.3)
On Sep 17, 2013, at 11:04 AM, Ghassan Shobaki <ghassan_shobaki at yahoo.com> wrote:
> 1. The SD schedulers significantly impact the spill counts and the execution times for many benchmarks, but the machine instruction (MI) scheduler in 3.3 has very limited impact on both spill counts and execution times. Is this because most of your work on MI did not make it into the 3.3 release?
2019 Nov 27
2
error in parallel:::sendMaster
Hi,
I am facing a very weird problem with parallel::mclapply. I have a script which does some data wrangling on an input dataset in parallel and then writes the results to disk. I have been using this script daily for more than one year always on an EC2 instance launched from the same AMI (no updates installed after launch) and processed thousands of different input data sets successfully. I now