similar to: foreach with registerDoMC on R 2.12.0 OSX 10.6 --- errors and warnings

Displaying 20 results from an estimated 1000 matches similar to: "foreach with registerDoMC on R 2.12.0 OSX 10.6 --- errors and warnings"

2011 Oct 17
2
Foreach (doMC)
Hello, I am trying to run a small example with foreach, but I am having some problems. Here is the code: library(doMC); registerDoMC(); zappa = list(); frank = list(); foreach (i = 1:4) %dopar% { zappa[[i]] = kmeans(iris[-5], 4); frank[[i]] = warnings() } The code runs without error. However, zappa and frank will be empty lists. If I use a regular for loop instead, the lists will be filled up
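The behaviour described here is expected: with the doMC backend each iteration runs in a forked child process, so assignments into zappa and frank made inside %dopar% never reach the parent session. A minimal sketch of the usual fix (the core count is an assumption) is to return the fitted object from the loop body and let foreach collect the results:

library(foreach)
library(doMC)        # Unix-alike only; doMC forks the current R session
registerDoMC(2)      # assumption: 2 cores available

# The value of the last expression in the body is what foreach collects,
# so no assignment into an outer list is needed.
zappa <- foreach(i = 1:4) %dopar% {
  kmeans(iris[-5], 4)
}
length(zappa)        # 4 kmeans objects, one per iteration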
2012 Feb 18
3
foreach %do% and %dopar%
Hi everyone, I'm working on a script trying to use foreach %dopar%, but without success, so I managed to run the code with foreach %do%; it looks like this: The code is part of an MCMC model for project valuation, returning the most important results (VPN, TIR, EVA, etc.) of the simulation. foreach (simx = NsimT, .combine=cbind, .inorder=FALSE, .verbose=TRUE) %do% { MCPVMPA = MCVAMPA[simx]
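For reference, switching a working %do% loop to %dopar% mainly requires registering a backend first; everything else stays the same. A sketch under stated assumptions (stand-in data, doMC on a Unix-alike; on Windows doParallel or doSNOW would be used instead):

library(foreach)
library(doMC)
registerDoMC(4)                          # assumption: 4 worker processes

NsimT   <- 1:1000                        # stand-ins for the post's simulation objects
MCVAMPA <- rnorm(1000)

res <- foreach(simx = NsimT, .combine = cbind, .inorder = FALSE) %dopar% {
  MCPVMPA <- MCVAMPA[simx]
  c(VPN = MCPVMPA, TIR = MCPVMPA^2)      # placeholder for the real valuation outputs
}
dim(res)                                 # 2 x 1000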
2011 Jul 04
1
writeLines + foreach/doMC
Hi, I'm processing sequencing data, trying to collapse the locations of each unique sequence and write the results to a file (storing them in a table would require at least 10GB of memory), so I wrote a function that, given a sequence id, provides the line to be stored. library(doMC) # load library registerDoMC(12) # set the number of CPUs
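One common pattern for this situation (a sketch with hypothetical inputs, not the poster's function) is to have each worker return its formatted lines and do the writing once in the master process, since letting 12 forked workers share one open file connection invites interleaved or lost output:

library(foreach)
library(doMC)
registerDoMC(12)

seq_ids <- paste0("seq", 1:100)                        # hypothetical sequence ids
collapse_locations <- function(id)                     # hypothetical stand-in
  paste(id, "chr1:100-200", sep = "\t")

lines <- foreach(id = seq_ids, .combine = c) %dopar% {
  collapse_locations(id)       # each task returns its line(s); nothing is written here
}
writeLines(lines, "collapsed_locations.txt")           # single write in the master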
2011 Jun 28
1
doMC - compiler - concatenate an expression vector into a single expression?
Hi, this post is about foreach operators, the compiler package and the latest update of doMC, which adds support for the compiler functionality. I am using a home-made %dopar%-like operator that adds some custom expressions to be executed before the foreach loop expression itself (see sample code below). It used to work perfectly with doMC 1.2.1, but with the introduction of the compiler
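On the narrow question in the subject line, expressions held in an expression vector can be combined into a single braced expression and then compiled; a small sketch, independent of the doMC internals being discussed:

library(compiler)

exprs <- expression(x <- 1, y <- x + 1, y * 2)        # an expression vector of length 3

# Wrap the individual expressions in a `{` call so they form one expression.
single <- as.call(c(as.name("{"), as.list(exprs)))
eval(single)                                          # 4
eval(compile(single))                                 # 4, via the byte-code compiler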
2011 Aug 17
1
R cmd check and multicore foreach loop
Hi, in R 2.12.1, R CMD check hangs when building a vignette that uses a foreach loop with the doMC parallel backend. This does not happen in R 2.13.1, nor if I use doSEQ instead of doMC. All versions of multicore, doMC and foreach are the same on both my R installations. Has anybody encountered a similar issue? Thank you. Renaud
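A sketch of a common workaround (not a confirmed fix for the R 2.12.1 hang): make the backend registration explicit in the vignette and fall back to foreach's built-in sequential backend when forking is unwanted, so the same %dopar% code runs either way:

library(foreach)

use_parallel <- FALSE                           # flip to TRUE for interactive runs
if (use_parallel && requireNamespace("doMC", quietly = TRUE)) {
  doMC::registerDoMC(2)                         # assumption: 2 cores
} else {
  registerDoSEQ()                               # sequential backend shipped with foreach
}
foreach(i = 1:3, .combine = c) %dopar% sqrt(i)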
2012 Jan 19
1
converting a for loop into a foreach loop
Dear all, Just wondering if someone could help me out converting my code from a for() loop into a foreach() loop or using one of the apply() functions. I have a very large dataset, so I'm hoping to make use of a parallel backend to speed up the processing time. I'm having trouble selecting three variables in the dataset to use in the foreach() loop. My for() loop code is:
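Since the original loop body is not shown, here is a generic sketch of the conversion with made-up data: each foreach iteration returns one piece of the result and .combine assembles the pieces, replacing the incremental assignment a for() loop would do:

library(foreach)
library(doMC)
registerDoMC(2)                                    # assumption: 2 cores

dat <- data.frame(a = rnorm(10), b = rnorm(10), g = rep(1:2, 5))

# for() version: grows the result by assignment
out_for <- NULL
for (i in unique(dat$g)) {
  out_for <- rbind(out_for, colMeans(dat[dat$g == i, c("a", "b")]))
}

# foreach() version: the body returns the piece, .combine stacks the pieces
out_foreach <- foreach(i = unique(dat$g), .combine = rbind) %dopar% {
  colMeans(dat[dat$g == i, c("a", "b")])
}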
2011 Jul 02
5
%dopar% parallel processing experiment
Dear R experts --- I am experimenting with multicore processing, so far with pretty disappointing results. Here is my simple example: A <- 100000; randvalues <- abs(rnorm(A)); minfn <- function(x, i) { log(abs(x)) + x^3 + i/A + randvalues[i] } ## an arbitrary function; ARGV <- commandArgs(trailingOnly=TRUE); if (ARGV[1] == "do-onecore") { library(foreach); discard <-
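The disappointment is typical for this example: each minfn() call is far too cheap, so fork and communication overhead dominates. A sketch of the usual remedy, chunking the index range so each task does thousands of evaluations (the core count is an assumption):

library(foreach)
library(doMC)
registerDoMC(4)                                   # assumption: 4 cores

A <- 100000
randvalues <- abs(rnorm(A))
minfn <- function(x, i) log(abs(x)) + x^3 + i / A + randvalues[i]

chunks <- split(1:A, cut(1:A, 4))                 # 4 big blocks instead of 100000 tiny tasks
res <- foreach(idx = chunks, .combine = c) %dopar% {
  vapply(idx, function(i) minfn(0.5, i), numeric(1))
}
length(res)                                       # 100000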
2012 Feb 23
1
segfault when using data.table package in conjunction with foreach
Hi all, I'm trying to use the data.table package within a foreach loop. I'm grabbing 500M rows of data at a time from two different files and then doing an aggregate/tapply-like operation in data.table after that. I had planned on having the foreach loop run over all 39 of my files at once, but obviously that won't work until I figure out why the segfault is occurring. The
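For what it's worth, a self-contained sketch of the intended pattern (file contents and column names are made up, and this does not reproduce or explain the segfault): one data.table aggregation per file, one file per foreach task:

library(foreach)
library(doMC)
library(data.table)
registerDoMC(2)                                   # assumption: 2 cores

# small stand-in files so the sketch runs as-is
files <- replicate(2, tempfile(fileext = ".csv"))
for (f in files)
  fwrite(data.table(key = sample(letters[1:3], 100, TRUE), value = rnorm(100)), f)

agg <- foreach(f = files, .combine = rbind) %dopar% {
  dt <- fread(f)                                  # needs a data.table version with fread()
  dt[, .(total = sum(value)), by = key]
}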
2010 Nov 03
1
Auto-killing processes spawned by foreach::doMC
Hi all, Sometimes I'll find myself "ctrl-c"-ing like a madman to kill some code that's parallelized via foreach/doMC when I realize that I've just set my CPUs off to do something boneheaded, and they will keep doing that thing for a while. In these situations, since I interrupted its normal execution, foreach/doMC doesn't "clean up" after itself by killing the
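There is no doMC facility for this that I know of; one blunt, Unix-only sketch (the pgrep call and PID handling are assumptions about the local system, not a doMC API) is to kill any forked children of the current session after an interrupt:

# Kill forked R workers that survived a Ctrl-C, by asking the OS for the
# children of this session's process id. Unix-only; relies on pgrep being present.
kill_orphans <- function() {
  kids <- suppressWarnings(system(sprintf("pgrep -P %d", Sys.getpid()), intern = TRUE))
  for (pid in as.integer(kids)) tools::pskill(pid, tools::SIGTERM)
  invisible(kids)
}
kill_orphans()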
2012 Jul 24
1
untaring files in parallel with foreach and doSNOW?
Hello, I'm running some code that requires untarring many files in the first step. This takes a lot of time and I'd like to do it in parallel, if possible. If the disk reading speed is the bottleneck I guess I should not expect an improvement, but perhaps it's the processor, so I want to try this out. I'm working on Windows 7 with R 2.15.1 and the latest foreach
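A sketch along the lines being asked about, using a SOCK cluster since this is Windows (the directory layout is hypothetical, and whether it helps depends on whether decompression or disk I/O is the bottleneck):

library(foreach)
library(doSNOW)
cl <- makeCluster(4, type = "SOCK")               # assumption: 4 workers
registerDoSNOW(cl)

tarfiles <- list.files("archives", pattern = "\\.tar\\.gz$", full.names = TRUE)
status <- foreach(f = tarfiles, .combine = c) %dopar% {
  untar(f, exdir = sub("\\.tar\\.gz$", "", f))    # untar() returns 0 on success
}
stopCluster(cl)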
2010 Dec 08
2
Parallel Scan of Large File
Is it possible to scan a large file into a character vector in parallel, in 1M-record chunks, using scan() with the "doMC" package? Furthermore, can I specify the tasks for each child? I.e. I'm working on a Linux box with 8 cores and would like to scan in 8M records at a time (all 8 cores scanning 1M records each) from a file with 40M records total. file <-
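One way to sketch this (the file name and record layout are assumptions, one record per line): hand each worker a skip/nlines window of the file. Note that each worker still has to read past the lines it skips, so this mainly pays off when parsing rather than raw disk I/O is the bottleneck:

library(foreach)
library(doMC)
registerDoMC(8)                                   # the 8 cores mentioned in the post

file       <- "records.txt"                       # hypothetical: 40M records, one per line
chunk_size <- 1e6
starts     <- (seq_len(40) - 1) * chunk_size      # 40 chunks of 1M records

pieces <- foreach(s = starts) %dopar% {
  scan(file, what = character(), skip = s, nlines = chunk_size, quiet = TRUE)
}
all_records <- unlist(pieces)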
2012 Feb 20
1
bigmemory not really parallel
Hi, all, I have a really big matrix that I want to run k-means on. I tried: >data <- read.big.matrix('mydata.csv',type='double',backingfile='mydata.bin',descriptorfile='mydata.desc') I'm using doMC to register multicore. >library(doMC) >registerDoMC(cores=8) >ans<-bigkmeans(data,k) In the system monitor it seems only one thread is running R. Is
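Before looking at bigkmeans itself, it is worth confirming that the backend really is registered in the session doing the work; a quick generic check (not specific to bigmemory):

library(doMC)
library(foreach)
registerDoMC(cores = 8)

getDoParWorkers()     # should report 8
getDoParName()        # should report "doMC"
getDoParRegistered()  # should report TRUE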
2015 Feb 09
2
R CMD check: Uses the superseded package: ‘doSNOW’
Dear list, When I run an R CMD check --as-cran on my package (pROC) I get the following note: > Uses the superseded package: ‘doSNOW’ The fact that it uses the doSNOW package is correct, as I have the following example in an .Rd file: > #ifdef windows > if (require(doSNOW)) { > registerDoSNOW(cl <- makeCluster(2, type = "SOCK")) > ci(roc2,
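A sketch of the usual way to silence that NOTE (assuming the example only needs a plain socket cluster): the doParallel backend covers the same use case with an almost identical call, so the .Rd example could read roughly as below (the #ifdef/#endif lines are Rd conditionals, not R code):

#ifdef windows
if (require(doParallel)) {
  registerDoParallel(cl <- makeCluster(2))    # PSOCK cluster by default
  # ci(roc2, ...) as in the existing example
  stopCluster(cl)
}
#endif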
2010 Nov 16
2
Debugging segfault in foreach
Hi, I'm using R-2.12 on a 64-bit Linux machine. When I run a chunk of code inside a foreach() %do% { ...} or %dopar% {...} (with the doMC backend) I keep getting a segfault. Running the *same* code within lapply(something, function(x) ... ) doesn't result in any segfaults. I'll paste the output below, but I'm not sure it would be helpful. I'm more curious how to go about smoking
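A generic first step when smoking out something like this (a sketch, not a diagnosis): run the identical %dopar% loop under the sequential backend, so the foreach machinery is exercised without forking, and only then reintroduce doMC:

library(foreach)
registerDoSEQ()                      # same %dopar% syntax, no forked workers
res <- foreach(x = 1:10) %dopar% {
  # ... the chunk that segfaults under doMC ...
  x^2
}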
2010 Nov 11
0
logging interim results using foreach/doMC
Dear all, I am converting a large process to a parallel backend using doMC and foreach. Basically, I have a long list of input graph files, and each of them calls some basic igraph package functions. I am parallelizing the run in order to save time. All works fine, and each %dopar% call ends with a vector of results that at the end gets fed into a data frame and saved as a csv table. When I
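One pattern that works reasonably well here (a sketch with hypothetical file names): have each task append to a log file keyed by its worker process id, since output written to the console from forked workers is easy to lose and a single shared log file gets interleaved writes:

library(foreach)
library(doMC)
registerDoMC(4)                                      # assumption: 4 cores

graph_files <- sprintf("graph_%02d.txt", 1:8)        # hypothetical input list
res <- foreach(f = graph_files, .combine = rbind) %dopar% {
  logfile <- sprintf("worker_%d.log", Sys.getpid())  # one log per forked worker
  cat(format(Sys.time()), "starting", f, "\n", file = logfile, append = TRUE)
  c(file = f, nodes = 42)                            # stand-in for the igraph results
}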
2010 Jun 16
2
Parallel computing on Windows (foreach) (Sergey Goriatchev)
foreach (or virtually anything you might use for concurrent programming) only really makes sense if the work the "clients" are doing is substantial enough to overwhelm the communication overhead. And there are many ways to accomplish the same task more or less efficiently (for example, doing blocks of tasks in chunks rather than passing each one as an individual job). But more to the
2011 Jul 12
2
MC-Simulation with foreach: Some cores finish early
Dear R-Users, I run an MC simulation using the packages "foreach" and "doMC" on a PowerMac with 24 cores. There are roughly a hundred parameter sets and I parallelized the program in such a way that each core computes one of these parameter sets completely. The problem is that some parameter sets take a lot longer to compute than others. After a while there are only a quarter
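Two remedies are commonly suggested for this shape of problem (a sketch; the parameter sets are hypothetical, and the .options.multicore switch is the doMC/multicore option as I recall it): split the work into many more tasks than cores, and disable prescheduling so tasks are handed out as cores free up rather than being dealt out in advance:

library(foreach)
library(doMC)
registerDoMC(24)                                      # the 24 cores mentioned in the post

params <- expand.grid(set = 1:100, rep = 1:10)        # 1000 small tasks, not 100 big ones
res <- foreach(k = seq_len(nrow(params)), .combine = rbind,
               .options.multicore = list(preschedule = FALSE)) %dopar% {
  p <- params[k, ]
  c(set = p$set, rep = p$rep, value = rnorm(1))       # stand-in for one simulation run
}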
2010 Feb 16
2
for loop Vs apply function Vs foreach (REvolution enhancement)
Dear all, I know this topic has already been covered in other posts (at least the for loop vs the apply family of functions), but I am looking for fresh, up-to-date opinion and feedback on those 3 methods of running unavoidable loops in R. I realise that it may be too general a question for many, so any feedback is appreciated. 1. apply vs for loop >> It seems apply is (was?) supposed to be faster than
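A small self-contained timing sketch of the three styles on one artificial task (the numbers will vary by machine and say nothing about the posters' workloads):

library(foreach)
library(doMC)
registerDoMC(2)                                       # assumption: 2 cores

x <- matrix(rnorm(1e6), ncol = 100)

system.time({                                         # 1. for loop
  out1 <- numeric(ncol(x))
  for (j in seq_len(ncol(x))) out1[j] <- sd(x[, j])
})
system.time(out2 <- apply(x, 2, sd))                  # 2. apply
system.time(                                          # 3. foreach %dopar%
  out3 <- foreach(j = seq_len(ncol(x)), .combine = c) %dopar% sd(x[, j])
)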
2010 Sep 16
2
parallel computation with plyr 1.2.1
Hi, I have been trying to use the new .parallel argument with the most recent version of plyr [1] to speed up some tasks. I can run the example in the NEWS file [1], and it seems to be working correctly. However, R will only use a single core when I try to apply this same approach with ddply(). 1. http://cran.r-project.org/web/packages/plyr/NEWS Watching my CPUs I see that in both cases
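For reference, the intended usage is just a registered backend plus .parallel = TRUE (a minimal sketch with made-up data, which does not explain why only one core is used in the reported case):

library(plyr)
library(doMC)
registerDoMC(4)                                       # assumption: 4 cores

d <- data.frame(g = rep(1:100, each = 100), y = rnorm(10000))
res <- ddply(d, "g", summarise, m = mean(y), .parallel = TRUE)
head(res)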
2011 Feb 27
0
foreach() package for parallel computing
Dear R experts---I have been experimenting with the foreach package (with doMC) for a while. My first impression is that it is a very easy way to acquire parallel processing capabilities. (Thanks, Revolution R.) The only two gotchas were the installation (it required an exit and restart) and the precedence of the foreach operator (higher than '+', I think), but once I understood this,
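The precedence point in the last sentence is easy to trip over, so a toy illustration (not the original code): %do% and %dopar% bind tighter than binary '+', so anything after the operator that is not braced or parenthesised applies to the loop's result rather than being evaluated inside the loop:

library(foreach)

# Parsed as (foreach(i = 1:3) %do% i) + 1: the loop returns a list, and
# adding 1 to a list is an error.
try(foreach(i = 1:3) %do% i + 1)

# With braces (or parentheses) the whole of i + 1 is the loop body.
foreach(i = 1:3) %do% { i + 1 }      # list(2, 3, 4)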