Displaying 9 results from an estimated 9 matches for "chunk2".
2007 Mar 13
2
Sweave question: prevent expansion of unevaluated reused code chunk
Hi,
Consider the following (much simplified) Sweave example:
--------------
First, we set the value of $x$:
<<chunk1,eval=FALSE>>=
x <- 1
@
Then we set the value of $y$:
<<chunk2,eval=FALSE>>=
y <- 2
@
Thus, the overall algorithm has this structure:
<<combined,eval=FALSE>>=
<<chunk1>>
<<chunk2>>
@
<<justDoIt,echo=FALSE>>=
<<combined>>
@
---------------
I'd like to be able to do something like this,...
2010 Dec 11
5
(S|odf)weave : how to intersperse (\LaTeX{}|odf) comments in source code ? Delayed R evaluation ?
...h interspersed
comments in the source (see Knuth's books rendering TeX and metafont
sources to see what I mean).
I seemed to remember that a code chunk could be defined piecewise, like in
Comments...
<<Chunk1, eval=FALSE, echo=TRUE>>=
SomeCode
@
Some other comments...
<<Chunk2, eval=FALSE, echo=TRUE>>=
MoreCode
@
And finally,
<<Chunk3, eval=TRUE, echo=TRUE>>=
<<Chunk1>>
<<Chunk2>>
EndOfTheCode
@
That works ... as long as SomeCode, MoreCode and EndOfTheCode are self-
standing pieces of R code, but *not* code fragments. You can...
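To illustrate what "code fragment" means here (my own sketch, not from the post): under that reading, a chunk that holds only the opening of a loop is not self-standing R code, since it does not parse on its own, and the counterpart chunk holding the closing brace is equally incomplete:
<<LoopHead, eval=FALSE, echo=TRUE>>=
for (i in seq_along(files)) {   # only the opening of the loop: incomplete expression
@
<<LoopBody, eval=FALSE, echo=TRUE>>=
  process(files[i])             # 'files' and 'process' are made-up names for illustration
}                               # the closing brace lives in a different chunk
@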
2007 Oct 23
0
Residuals from biglm package
...for my computer memory, and I found the biglm package quite useful. Now everything is working perfectly. But if I want the residuals, how can I get them?
Let's say that we are running the example:
> data(trees)
> ff <- log(Volume) ~ log(Girth) + log(Height)
> chunk1 <- trees[1:10,]
> chunk2 <- trees[11:20,]
> chunk3 <- trees[21:31,]
> a <- biglm(ff, chunk1)
> a <- update(a, chunk2)
> a <- update(a, chunk3)
> summary(a)
Large data regression model: biglm(ff, chunk1)
Sample size = 31
              Coef   (95%     CI)     SE p
(Intercept) -6.632 -8.231  -5.032  0.80...
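One way to get residuals in this setting (my suggestion, not from the thread, and it assumes the chunks are still available): biglm() does not keep the per-observation data, but the residuals can be recomputed chunk by chunk from the final coefficients:
library(biglm)
data(trees)
ff <- log(Volume) ~ log(Girth) + log(Height)
chunk1 <- trees[1:10, ]; chunk2 <- trees[11:20, ]; chunk3 <- trees[21:31, ]
a <- biglm(ff, chunk1)
a <- update(a, chunk2)
a <- update(a, chunk3)
# chunk_resid is a made-up helper: observed response minus fitted values
# built from the final coefficients
chunk_resid <- function(fit, chunk) {
  X <- model.matrix(ff, data = chunk)         # design matrix for this chunk
  log(chunk$Volume) - drop(X %*% coef(fit))   # log(Volume) is the response in ff
}
res <- unlist(lapply(list(chunk1, chunk2, chunk3), chunk_resid, fit = a))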
2007 Mar 15
1
Sweave bug using 'FDR' in chunk label (PR#9567)
...or message:
> Stangle("bug.Rnw")
Writing to file bug.R
Warning message:
reference to unknown chunk 'getFDR' in: Sweave(file = file, driver = driver,
...)
Here is the relevant part of the "bug.R" file produced by Stangle. Note that
the label has been truncated on chunk2 (it should be getFDR) but chunk3 is not
affected. Also note that chunk4 has not been expanded properly.
###################################################
### chunk number 2: getF
###################################################
x <- 1
###################################################
##...
2008 Feb 07
6
Buffer flushing
.... I send Marshal.dump'ed metadata
first, and then the file contents (chunked). I found a silly bug:
receive_data() gets the marshalled metadata and the first chunk of the
file in a single variable.
Like that:
c1.send_data("meta")
c1.send_data("chunk1")
c1.send_data("chunk2")
receiver.receive_data(data): data == "metachunk1chunk2"
I have two possible solutions:
1) Some kind of flush between some of the #send_data calls
2) Explicitly split the incoming data
The first one looks better, but I don't know whether it is the right design
decision.
Thanks in adv...
2006 May 17
1
Re : Large database help
...utes the Huber/White sandwich variance estimate in the same
single pass over the data.
Assuming I haven't messed up the package checking, it will appear on CRAN in
the next couple of days. The syntax looks like
a <- biglm(log(Volume) ~ log(Girth) + log(Height), chunk1)
a <- update(a, chunk2)
a <- update(a, chunk3)
summary(a)
where chunk1, chunk2, chunk3 are chunks of the data.
-thomas
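For more than a few chunks the same pattern extends to a loop; a minimal sketch of mine (not from the post), splitting the built-in trees data into three row groups just for illustration:
library(biglm)
data(trees)
blocks <- split(trees, rep(1:3, length.out = nrow(trees)))   # three groups of rows
fit <- biglm(log(Volume) ~ log(Girth) + log(Height), blocks[[1]])
for (b in blocks[-1]) fit <- update(fit, b)                  # one update per remaining group
summary(fit)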
2023 Feb 11
1
scan(..., skip=1e11): infinite loop; cannot interrupt
...is needs to be documented and can write a
documentation patch.)
If you actually have 1e11 lines in your file and would like to read it
in chunks, it may help to use
f <- file('...', open = 'r')  # open the connection so its position persists between calls
chunk1 <- scan(f, n = n1, skip = nskip1)
# the following will continue reading where chunk1 had ended
chunk2 <- scan(f, n = n2, skip = nskip2)
...in order to avoid having to skip over chunks you have already read,
which otherwise makes the algorithm quadratic in number of lines
instead of linear. (I couldn't determine whether you're already doing
this, sorry.)
Skipping a fixed number of lines...
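Along the same lines, a small sketch of mine (with placeholder names) that reads a whole file of numbers in fixed-size chunks from one open connection, so nothing is skipped twice:
con <- file("big.txt", open = "r")              # "big.txt" is a placeholder path
repeat {
  chunk <- scan(con, what = double(), n = 1e6, quiet = TRUE)
  if (length(chunk) == 0) break                 # nothing left: end of file
  # ... process this chunk ...
}
close(con)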
2012 Jan 22
2
Best practices?
Suppose I start building nodes with (say) 24 drives each in them.
Would the standard/recommended approach be to make each drive its own
filesystem, and export 24 separate bricks, server1:/data1 ..
server1:/data24 ? Making a distributed replicated volume between this and
another server would then have to list all 48 drives individually.
At the other extreme, I could put all 24 drives into some
2023 Feb 11
1
scan(..., skip=1e11): infinite loop; cannot interrupt
Hello, All:
I have a 4.54 GB file that I'm trying to read in chunks using
"scan(..., skip=__)". It works as expected for small values of "skip"
but goes into an infinite loop for "skip=1e11" and similar large values
of skip: I cannot even interrupt it; I must kill R. Below please find
sessionInfo() with a toy example.
My real problem is a large
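For reference, a reconstructed illustration of the call shape described above (this is not the poster's actual toy example; the problematic call is left commented out):
tf <- tempfile()
writeLines(as.character(1:100), tf)   # small stand-in file, not the real 4.54 GB one
x <- scan(tf, skip = 5)               # small skip: works as expected
# x <- scan(tf, skip = 1e11)          # very large skip: reportedly loops forever, uninterruptible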