search for: chunk1

Displaying 13 results from an estimated 13 matches for "chunk1".

2007 Mar 13
2
Sweave question: prevent expansion of unevaluated reused code chunk
Hi, Consider the following (much simplified) Sweave example: -------------- First, we set the value of $x$: <<chunk1,eval=FALSE>>= x <- 1 @ Then we set the value of $y$: <<chunk2,eval=FALSE>>= y <- 2 @ Thus, the overall algorithm has this structure: <<combined,eval=FALSE>>= <<chunk1>> <<chunk2>> @ <<justDoIt,echo=FALSE>>= <<combined...
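Laid out as an .Rnw file, the poster's minimal example reads as follows (the final chunk reference, cut off in the snippet above, is completed as <<combined>> from context):

  First, we set the value of $x$:
  <<chunk1,eval=FALSE>>=
  x <- 1
  @
  Then we set the value of $y$:
  <<chunk2,eval=FALSE>>=
  y <- 2
  @
  Thus, the overall algorithm has this structure:
  <<combined,eval=FALSE>>=
  <<chunk1>>
  <<chunk2>>
  @
  <<justDoIt,echo=FALSE>>=
  <<combined>>
  @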
2012 Jul 06
2
Maximum number of patterns and speed in grep
...get an error message when I do the following:
data <- array()
for (j in 1:length(x)) {
  data[j] <- length(grep(paste(patterns[1:7700], collapse = "|"), x[j], value = T))
}
When I break this up into 4 chunks of patterns it works:
data <- list()
for (j in 1:length(x)) {
  data$chunk1[j] <- length(grep(paste(patterns[1:2500], collapse = "|"), x[j], value = T))
  data$chunk2[j] <- length(grep(paste(patterns[2501:5000], collapse = "|"), x[j], value = T))
  data$chunk3[j] <- length(grep(paste(patterns[5001:7500], collapse = "|"), x[j], value...
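A minimal sketch of the chunked-pattern approach, assuming 'patterns' is a character vector of regular expressions and 'x' is a character vector of lines (both from the original post); it flags each line matched by at least one pattern chunk, avoiding the single oversized alternation:

  # split the pattern vector into chunks of at most 2500
  pattern_chunks <- split(patterns, ceiling(seq_along(patterns) / 2500))
  data <- vapply(x, function(line) {
    # TRUE if any chunk's alternation matches this line
    any(vapply(pattern_chunks, function(p)
      length(grep(paste(p, collapse = "|"), line)) > 0, logical(1)))
  }, logical(1))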
2011 Dec 21
1
Looping over files
Hi, I have a list of files in one of my working directories: "chr17.chunk1.dose.fvd" "chr17.chunk1.dose.fvi" "chr17.chunk1.prob.fvd" "chr17.chunk1.prob.fvi" ........... ......... ........ "chr17.chunk10.dose.fvd" "chr17.chunk10.dose.fvi" "chr17.chunk10.prob.fvd" "chr17.chunk10.prob.fvi"...
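A minimal sketch of the looping pattern, assuming the goal is simply to visit every matching chunk file in the working directory; process_file() is a hypothetical placeholder for whatever is done with each file:

  files <- list.files(pattern = "^chr17\\.chunk[0-9]+\\.(dose|prob)\\.(fvd|fvi)$")
  for (f in files) {
    message("processing ", f)
    # process_file(f)  # hypothetical per-file work goes here
  }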
2011 Nov 15
1
gsub help
Hi, I am working with the following list of files:
[1] "study_chr1.one.phased.impute2.chunk1"
[2] "study_chr1.one.phased.impute2.chunk1_info"
[3] "study_chr1.one.phased.impute2.chunk1_info_by_sample"
[4] "study_chr1.one.phased.impute2.chunk1_summary"
[5] "study_chr1.one.phased.impute2.chunk1_warnings"
The f...
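A minimal sketch: one gsub() call with an anchored alternation strips every suffix variant, recovering the common chunk name (file names taken from the list above):

  files <- c("study_chr1.one.phased.impute2.chunk1",
             "study_chr1.one.phased.impute2.chunk1_info",
             "study_chr1.one.phased.impute2.chunk1_info_by_sample",
             "study_chr1.one.phased.impute2.chunk1_summary",
             "study_chr1.one.phased.impute2.chunk1_warnings")
  unique(gsub("_(info(_by_sample)?|summary|warnings)$", "", files))
  # [1] "study_chr1.one.phased.impute2.chunk1"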
2010 Dec 11
5
(S|odf)weave : how to intersperse (\LaTeX{}|odf) comments in source code ? Delayed R evaluation ?
...pedagogical reasons, I wish to produce a document presenting some source code with interspersed comments in the source (see Knuth's books rendering TeX and metafont sources to see what I mean). I seem to remember that a code chunk could be defined piecewise, like in
Comments...
<<Chunk1, eval=FALSE, echo=TRUE>>=
SomeCode
@
Some other comments...
<<Chunk2, eval=FALSE, echo=TRUE>>=
MoreCode
@
And finally,
<<Chunk3, eval=TRUE, echo=TRUE>>=
<<Chunk1>>
<<Chunk2>>
EndOfTheCode
@
That works ... as long as SomeCode, MoreCode and E...
2007 Oct 23
0
Residuals from biglm package
...arge dataset that is too big for my computer memory, and I found the package biglm quite useful. Now everything is working perfectly. But if I want the residuals, how can I do it? Let's say that we are running the example:
> data(trees)
> ff <- log(Volume) ~ log(Girth) + log(Height)
> chunk1 <- trees[1:10,]
> chunk2 <- trees[11:20,]
> chunk3 <- trees[21:31,]
> a <- biglm(ff, chunk1)
> a <- update(a, chunk2)
> a <- update(a, chunk3)
> summary(a)
Large data regression model: biglm(ff, chunk1)
Sample size = 31
          Coef (95% CI) SE p
(Intercep...
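biglm keeps only an incremental QR decomposition, not the data, so residuals have to be rebuilt chunk by chunk from the final coefficients. A sketch reusing the trees example above (model.matrix() and coef() are standard R here, not a biglm-specific API):

  library(biglm)
  data(trees)
  ff <- log(Volume) ~ log(Girth) + log(Height)
  chunk1 <- trees[1:10, ]; chunk2 <- trees[11:20, ]; chunk3 <- trees[21:31, ]
  a <- biglm(ff, chunk1)
  a <- update(a, chunk2)
  a <- update(a, chunk3)
  res <- unlist(lapply(list(chunk1, chunk2, chunk3), function(ch) {
    X <- model.matrix(ff, ch)              # design matrix for this chunk
    log(ch$Volume) - drop(X %*% coef(a))   # observed minus fitted
  }))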
2023 Feb 11
1
scan(..., skip=1e11): infinite loop; cannot interrupt
...OF ever arrives. (We never skip lines when reading from the console? I suppose it makes sense. I think this needs to be documented and can write a documentation patch.) If you actually have 1e11 lines in your file and would like to read it in chunks, it may help to use
f <- file('...')
chunk1 <- scan(f, n = n1, skip = nskip1)
# the following will continue reading where chunk1 had ended
chunk2 <- scan(f, n = n2, skip = nskip2)
...in order to avoid having to skip over chunks you have already read, which otherwise makes the algorithm quadratic in the number of lines instead of linear. (...
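A minimal runnable sketch of the connection-based approach, with a hypothetical path and chunk sizes; because the connection stays open, each scan() resumes where the previous one stopped and no lines are ever skipped twice:

  f <- file("bigfile.txt", open = "r")          # hypothetical path
  chunk1 <- scan(f, what = double(), n = 1e6)
  chunk2 <- scan(f, what = double(), n = 1e6)   # continues after chunk1
  close(f)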
2012 Oct 25
2
Regarding the memory allocation problem
...n computing the distances for the first 3 chunks, I obtained a similar error (cannot allocate vector of size 102.3Mb).
Q) What I could not understand here is: how come memory becomes insufficient when dealing with the 4th chunk?
Q) Suppose I computed a matrix 'm' during the calculation associated with chunk1; is this matrix not replaced when I again compute 'm' when dealing with chunk 2?
Regards, Purna
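On the second question: assigning to 'm' does replace the old value, but the old copy is only reclaimed at the next garbage collection, and a large allocation can still fail if no sufficiently large contiguous block is free. A sketch of the usual workaround, where chunks, compute_dist() and save_result() are hypothetical stand-ins for the poster's per-chunk work:

  for (i in 1:4) {
    m <- compute_dist(chunks[[i]])  # hypothetical per-chunk distance matrix
    save_result(m, i)               # hypothetical: persist before discarding
    rm(m); gc()                     # free the old copy before the next allocation
  }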
2012 Jan 24
1
Sweave driver extension
Almost all of the coxme package and an increasing amount of the survival package are now written in noweb, i.e., .Rnw files. It would be nice to process these using the Sweave function + a special driver, which I can do using a modified version of Sweave. The primary change is to allow the following type of construction:
<<coxme>>
coxme <- function(formula, data, subset, blah blah
2008 Feb 07
6
Buffer flushing
...building a file transferring app. I send Marshal.dump'ed metadata first, and then - the file contents (chunked). I found a silly bug: receive_data() gets the marshalled metadata and the first chunk of the file in a single variable. Like that:
c1.send_data("meta")
c1.send_data("chunk1")
c1.send_data("chunk2")
receiver.receive_data(data): data == "metachunk1chunk2"
I have two possible solutions: 1) Some kind of flush between some of the #send_data calls 2) Explicitly split incoming data The first one looks better, but I don't know if it is a righ...
2006 May 17
1
Re : Large database help
...product matrix). It also computes the Huber/White sandwich variance estimate in the same single pass over the data. Assuming I haven't messed up the package checking, it will appear in the next couple of days on CRAN. The syntax looks like
a <- biglm(log(Volume) ~ log(Girth) + log(Height), chunk1)
a <- update(a, chunk2)
a <- update(a, chunk3)
summary(a)
where chunk1, chunk2, chunk3 are chunks of the data. -thomas
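The same update() pattern generalizes to any number of chunks with a loop; a sketch using split() on the built-in trees data purely for illustration:

  library(biglm)
  data(trees)
  chunks <- split(trees, rep(1:3, length.out = nrow(trees)))  # three toy chunks
  a <- biglm(log(Volume) ~ log(Girth) + log(Height), chunks[[1]])
  for (ch in chunks[-1]) a <- update(a, ch)   # feed remaining chunks one at a time
  summary(a)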
2023 Feb 11
1
scan(..., skip=1e11): infinite loop; cannot interrupt
Hello, All: I have a 4.54 GB file that I'm trying to read in chunks using "scan(..., skip=__)". It works as expected for small values of "skip" but goes into an infinite loop for "skip=1e11" and similar large values of skip: I cannot even interrupt it; I must kill R. Below please find sessionInfo() with a toy example. My real problem is a large
2012 Jan 22
2
Best practices?
Suppose I start building nodes with (say) 24 drives each. Would the standard/recommended approach be to make each drive its own filesystem, and export 24 separate bricks, server1:/data1 .. server1:/data24 ? Making a distributed replicated volume between this and another server would then have to list all 48 drives individually. At the other extreme, I could put all 24 drives into some