Displaying 13 results from an estimated 13 matches similar to: "gsub help"
2011 Dec 21
1
Looping over files
Hi,
I have a list of files in one of my working directories:
"chr17.chunk1.dose.fvd"
"chr17.chunk1.dose.fvi"
"chr17.chunk1.prob.fvd"
"chr17.chunk1.prob.fvi"
...
"chr17.chunk10.dose.fvd"
"chr17.chunk10.dose.fvi"
"chr17.chunk10.prob.fvd"
"chr17.chunk10.prob.fvi"
And I am
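The excerpt breaks off here; a sketch of one common approach, assuming the goal is to visit every chunk file (the four suffixes and the 1..10 chunk range are taken from the listing above):

# Build the file names explicitly...
for (chunk in 1:10) {
  for (suffix in c("dose.fvd", "dose.fvi", "prob.fvd", "prob.fvi")) {
    f <- sprintf("chr17.chunk%d.%s", chunk, suffix)
    # ... process f here ...
  }
}
# ...or simply collect everything matching the pattern:
files <- list.files(pattern = "^chr17\\.chunk[0-9]+\\.")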
2007 Mar 13
2
Sweave question: prevent expansion of unevaluated reused code chunk
Hi,
Consider the following (much simplified) Sweave example:
--------------
First, we set the value of $x$:
<<chunk1,eval=FALSE>>=
x <- 1
@
Then we set the value of $y$:
<<chunk2,eval=FALSE>>=
y <- 2
@
Thus, the overall algorithm has this structure:
<<combined,eval=FALSE>>=
<<chunk1>>
<<chunk2>>
@
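For reference, a file like this is processed with Sweave(); when the combined chunk is echoed, the <<chunk1>> and <<chunk2>> references are expanded into their code, which is exactly the behaviour the poster wants to prevent. A minimal run, assuming the file is saved as example.Rnw:

Sweave("example.Rnw")   # weave to LaTeX; chunk references get expanded
Stangle("example.Rnw")  # tangle: extract just the R code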
2012 Jul 06
2
Maximum number of patterns and speed in grep
Hi,
I am using R's grep function to find patterns in vectors of strings. The
number of patterns I would like to match is 7,700 (of different sizes). I
noticed that I get an error message when I do the following:
data <- integer(length(x))
for (j in seq_along(x)) {
  # count matches of the combined pattern in x[j]
  data[j] <- length(grep(paste(patterns[1:7700], collapse = "|"), x[j],
                         value = TRUE))
}
When I break this up into 4 chunks of
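The chunking workaround the post is leading up to can be sketched as follows: split the patterns into smaller groups so each combined regular expression stays below the engine's limits, then OR the per-group results (the group size of 500 is illustrative; patterns and x are the poster's objects):

groups <- split(patterns, ceiling(seq_along(patterns) / 500))
hit <- Reduce(`|`, lapply(groups, function(p)
  grepl(paste(p, collapse = "|"), x)))
data <- as.integer(hit)  # 1 if any pattern matches x[j], else 0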
2007 Oct 23
0
Residuals from biglm package
Hi all,
First of all, I'm not an expert in R; I'm still learning, so sorry if this is a stupid question...
I have a large dataset that is too big for my computer's memory, and I found the biglm package quite useful. Now everything is working perfectly. But if I want the residuals, how can I do it?
Let's say that we are running the example:
> data(trees)
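A biglm fit does not keep the data, so residuals are not stored in the object. One workaround, sketched here on the trees example, is to rebuild the fitted values from the coefficients; for data too large for memory the same computation would be done chunk by chunk:

library(biglm)
data(trees)
fit <- biglm(Volume ~ Girth + Height, data = trees)
# Reconstruct the design matrix and compute residuals by hand:
X <- model.matrix(Volume ~ Girth + Height, data = trees)
res <- trees$Volume - as.vector(X %*% coef(fit))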
2010 Dec 11
5
(S|odf)weave : how to intersperse (\LaTeX{}|odf) comments in source code ? Delayed R evaluation ?
Dear list,
Inspired by the original Knuth tools, and for pedagogical reasons, I wish
to produce a document presenting some source code with interspersed
comments in the source (see Knuth's books rendering the TeX and METAFONT
sources to see what I mean).
I seemed to remember that a code chunk could be defined piecewise, like in
Comments...
<<Chunk1, eval=FALSE, echo=TRUE>>=
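The excerpt breaks off here; a minimal sketch of the piecewise style being described, using standard Sweave chunk references (the labels are illustrative):

Comments on the first step...
<<Chunk1, eval=FALSE, echo=TRUE>>=
x <- 1
@
Comments on the second step...
<<Chunk2, eval=FALSE, echo=TRUE>>=
y <- 2
@
Finally the pieces are assembled and evaluated together:
<<Combined, echo=FALSE>>=
<<Chunk1>>
<<Chunk2>>
@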
2006 May 17
1
Re : Large database help
Thanks for doing this, Thomas; I have been thinking about what it would
take to do this, but if it were left to me, it would have taken a lot
longer.
Back in the '80s there was a statistical package called RUMMAGE that did
all computations based on sufficient statistics and did not keep the
actual data in memory. Memory for computers became cheap before
datasets turned huge, so there
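As an illustration of the sufficient-statistics idea (a sketch, not RUMMAGE itself): a mean and variance can be accumulated from n, sum(x), and sum(x^2) without ever holding the full data in memory:

n <- 0; s <- 0; s2 <- 0
# The chunks here are simulated stand-ins for blocks read from a file.
for (chunk in list(rnorm(1e5), rnorm(1e5))) {
  n  <- n + length(chunk)
  s  <- s + sum(chunk)
  s2 <- s2 + sum(chunk^2)
}
xbar <- s / n                        # mean
varx <- (s2 - n * xbar^2) / (n - 1)  # sample variance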
2023 Feb 11
1
scan(..., skip=1e11): infinite loop; cannot interrupt
On Fri, 10 Feb 2023 23:38:55 -0600
Spencer Graves <spencer.graves at prodsyse.com> wrote:
> I have a 4.54 GB file that I'm trying to read in chunks using
> "scan(..., skip=__)". It works as expected for small values of
> "skip" but goes into an infinite loop for "skip=1e11" and similar
> large values of skip: I cannot even interrupt it; I
2012 Oct 25
2
Regarding the memory allocation problem
Dear All,
My main objective was to compute the distances of 100,000 vectors from a
set of 900 other vectors. I have a file named "seq_vec" containing
100,000 records and 256 columns.
While computing, memory was insufficient and the run failed with the error
"cannot allocate vector of size 152.1Mb".
So I approached the problem as follows:
Rather than reading the data
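The excerpt breaks off, but the block-wise strategy it is leading up to can be sketched as follows (X is the 100000 x 256 matrix, R the 900 x 256 reference set; both names are illustrative):

# Euclidean cross-distances, computed one block of rows at a time so
# that only a block x 900 result is ever in memory.
cross_dist <- function(A, B) {
  aa <- rowSums(A^2)
  bb <- rowSums(B^2)
  sqrt(pmax(outer(aa, bb, "+") - 2 * tcrossprod(A, B), 0))
}
block <- 5000
for (start in seq(1, nrow(X), by = block)) {
  idx <- start:min(start + block - 1, nrow(X))
  d <- cross_dist(X[idx, , drop = FALSE], R)
  # ... write d to disk or summarise it here ...
}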
2008 Feb 07
6
Buffer flushing
Short question: is there a way to tell EM to actually send data after a
send_data call?
I'm building a file-transfer app. I send Marshal.dump'ed metadata
first, and then the file contents (chunked). I found a silly bug:
receive_data() gets the marshalled metadata and the first chunk of the
file in a single variable.
Like this:
c1.send_data("meta")
2006 Jan 27
0
How to put peers into Realtime
I have something like the following in my sip.conf. How can I put this into
Realtime?
[voipbuster]
type=friend ; (or "peer" if we don't need incoming calls, or if there is
a separate section with "type=user")
host=sip1.voipbuster.com
disallow=all
allow=ulaw
allow=alaw
allow=gsm
allow=g726
username=abcd1 ;={{YOURUSERNAME}}
fromuser=abcd1
2012 Jan 24
1
Sweave driver extension
Almost all of the coxme package and an increasing amount of the survival
package are now written in noweb, i.e., .Rnw files. It would be nice to
process these using the Sweave function + a special driver, which I can
do using a modified version of Sweave. The primary change is to allow
the following type of construction
<<coxme>>
coxme <- function(formula, data, subset, blah blah
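For context, Sweave() already takes a driver argument, which is the hook such a modified version would use. A sketch (myNowebDriver is hypothetical; RweaveLatex() is the shipped default):

Sweave("coxme.Rnw", driver = RweaveLatex())    # the default, spelled out
Sweave("coxme.Rnw", driver = myNowebDriver())  # a custom driver goes here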
2023 Feb 11
1
scan(..., skip=1e11): infinite loop; cannot interrupt
Hello, All:
I have a 4.54 GB file that I'm trying to read in chunks using
"scan(..., skip=__)". It works as expected for small values of "skip"
but goes into an infinite loop for "skip=1e11" and similar large values
of skip: I cannot even interrupt it; I must kill R. Below please find
sessionInfo() with a toy example.
My real problem is a large
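One way to sidestep huge skip= values entirely (a sketch; "big_file.txt" is a stand-in path) is to read from an open connection, since successive scan() calls on a connection resume where the previous one stopped:

con <- file("big_file.txt", open = "r")
repeat {
  chunk <- scan(con, what = double(), nlines = 10000, quiet = TRUE)
  if (length(chunk) == 0) break
  # ... process this chunk ...
}
close(con)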
2012 Jan 22
2
Best practices?
Suppose I start building nodes with (say) 24 drives each in them.
Would the standard/recommended approach be to make each drive its own
filesystem, and export 24 separate bricks, server1:/data1 ..
server1:/data24 ? Making a distributed replicated volume between this and
another server would then have to list all 48 drives individually.
At the other extreme, I could put all 24 drives into some