Displaying 16 results from an estimated 16 matches for "davidkatzconsult".
2007 Mar 06 | 4 | Memory Limits in Ubuntu Linux
...e the problem)
2) Recompile R to get bigger memory capability? (I'll have to cross-post to
some R forums too)
This will be a challenge for a Linux newbie...like me.
3) Any other suggestions? My goal is to create a bigger neural network than
fits in my Windows R version.
--
David Katz
www.davidkatzconsulting.com
541 482-1137
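On Linux there is no memory.limit() knob to raise (that call is Windows-specific); R is bounded by RAM, swap, any shell ulimit, and the 32-bit address space if R was built 32-bit, so a 64-bit build usually helps more than recompiling with special flags. A minimal sketch for checking consumption from inside R:

gc()                                  # allocations so far, with Mb columns
x <- matrix(rnorm(1e6), ncol = 100)   # a toy large-ish object (~8 Mb)
print(object.size(x), units = "Mb")   # size of a single object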
2007 Jan 30 | 1 | SparseM and Stepwise Problem
...the user is willing to specify the design matrix in matrix.csr form.
This is often advantageous in large problems to reduce memory requirements.
I need some help or a reference that will show how to create the design matrix from
data in matrix.csr form.
Thanks for any help.
--
David Katz
www.davidkatzconsulting.com
541 482-1137
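One route, assuming a dense model matrix fits in memory at least once (the data frame below is made up): build it with model.matrix() and coerce with SparseM's as.matrix.csr().

library(SparseM)

dat <- data.frame(y = rnorm(10), f = factor(rep(letters[1:5], 2)))
X  <- model.matrix(y ~ f, data = dat)   # ordinary dense design matrix
Xs <- as.matrix.csr(X)                  # compressed sparse row form
class(Xs)                               # "matrix.csr"

For problems too large to densify even once, the matrix would have to be assembled in sparse form directly; the coercion above only shows the final step.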
2010 Mar 22 | 1 | Bayesian Networks and Bayesian Survival Analysis
Looking for help with a project for the US Navy; it requires knowledge of
Bayesian statistics, Bayesian networks, and survival analysis. Please
respond with a CV. Thanks.
--
David Katz
www.davidkatzconsulting.com
2007 Nov 11 | 4 | Largest N Values Efficiently?
What is the most efficient alternative to x[order(x)][1:n] where
length(x) >> n? I also need the positions of the mins/maxes, perhaps by
preserving names.
Thanks for any suggestions.
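One efficient approach is a partial sort, which finds the n-th largest value in roughly linear time and then recovers positions (and names) with which(); the helper name top_n below is made up:

top_n <- function(x, n) {
  stopifnot(n <= length(x))
  # a partial sort only guarantees the element at the requested position
  kth <- sort(x, partial = length(x) - n + 1)[length(x) - n + 1]
  idx <- which(x >= kth)               # candidate positions (ties may add extras)
  idx[order(x[idx], decreasing = TRUE)][seq_len(n)]
}

x <- c(a = 3, b = 9, c = 1, d = 7, e = 5)
top_n(x, 2)      # positions of the two largest values
x[top_n(x, 2)]   # the named values themselves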
2010 Feb 02 | 1 | subset function unexpected behavior
I was surprised to see this unexpected behavior of subset in a for loop. I
looked in subset.data.frame and it seemed to me that both versions should
work, since the subset call should be evaluated in the global environment -
but perhaps I don't understand environments well enough. Can someone
enlighten me? In any case, this is a bit of a gotcha for naive users of
subset.
input.data <-
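The quoted code is cut off above, so the following is a hypothetical reconstruction of one classic way subset() surprises people in a loop (input.data here is invented):

input.data <- data.frame(x = 1:5, y = (1:5) * 10)

for (x in c(2, 4)) {
  # Intended: rows where column x equals the loop value.
  # Actual: subset() evaluates 'x == x' inside the data frame first, so
  # the loop variable is masked by the column and every row matches.
  print(subset(input.data, x == x))
}

# Plain indexing sidesteps the non-standard evaluation:
for (v in c(2, 4)) print(input.data[input.data$x == v, ])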
2008 Dec 18 | 4 | autologistic modelling in R
Hi,
I have spatially autocorrelated data (with a binary response variable and
continuous predictor variables). I believe I need to fit an autologistic
model; does anyone know a method for doing this in R?
Many thanks
C Bell
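One common recipe is an ordinary logistic GLM with an added autocovariate term (a distance-weighted mean of the response at neighbouring sites), which spdep can compute; the data frame dat and its columns below are hypothetical:

library(spdep)

xy <- cbind(dat$lon, dat$lat)                       # site coordinates
dat$ac <- autocov_dist(dat$presence, xy,
                       nbs = 10, type = "inverse")  # radius of 10 map units
fit <- glm(presence ~ temp + elev + ac, family = binomial, data = dat)
summary(fit)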
2011 Jan 11 | 3 | list concatenation
Dear R gurus,
first let me apologize for a question that might have been answered
before. I was not able to find the solution yet. I want to concatenate
two lists of lists at their lowest level.
Suppose I have two lists of lists:
list.1 <- list("I"=list("A"=c("a", "b", "c"), "B"=c("d", "e", "f")),
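The excerpt is cut off above; the sketch below supplies a hypothetical list.2 of the same shape and pairs the sublists with Map() so that c() joins the leaf vectors:

list.1 <- list("I" = list("A" = c("a", "b", "c"), "B" = c("d", "e", "f")))
list.2 <- list("I" = list("A" = c("g", "h"),      "B" = c("i")))

concat2 <- function(l1, l2) Map(function(a, b) Map(c, a, b), l1, l2)
str(concat2(list.1, list.2))   # $I$A gains "g","h"; $I$B gains "i"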
2009 Apr 06 | 3 | how to subsample all possible combinations of n species taken 1:n at a time?
Hello
I apologise for the length of this entry but please bear with me.
In short:
I need a way of subsampling communities from all possible communities of n
taxa taken 1:n at a time without having to calculate all possible
combinations (because this gives me a memory error - using combn() or
expand.grid() at least). Does anyone know of a function? Or can you help me
edit the combn or
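If enumerating all combinations is the only obstacle, random communities can be drawn directly: a random community of k taxa is just a uniform random k-subset, so no call to combn() or expand.grid() is needed (the taxon list below is made up):

taxa <- paste0("sp", 1:30)

set.seed(1)
# e.g. ten random communities at each richness level k = 1..n
communities <- lapply(seq_along(taxa), function(k)
  replicate(10, sample(taxa, k), simplify = FALSE))

communities[[3]][[1]]   # one random 3-taxon community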
2007 Aug 15 | 1 | RFclustering - is it available in R?
Several searches turned up nothing. Perhaps I will try to implement it if
nobody else has. Thanks.
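Nothing packaged under that name turns up, but Breiman/Horvath-style RF clustering can be pieced together from existing packages: grow an unsupervised random forest, then cluster on 1 - proximity. A sketch, with iris as a stand-in dataset:

library(randomForest)
library(cluster)

set.seed(1)
rf <- randomForest(iris[, 1:4], ntree = 1000, proximity = TRUE)
d  <- as.dist(1 - rf$proximity)   # RF dissimilarity
cl <- pam(d, k = 3)               # medoid clustering on the RF distance
table(cl$clustering, iris$Species)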
2008 Apr 06 | 0 | mgcv::gam prediction using lpmatrix
The documentation for predict.gam in library mgcv gives an example of using
an "lpmatrix" to do approximate prediction via interpolation. However, the
code is specific to the example with respect to the number of smooth terms,
the df's for each, etc. (which is entirely appropriate for an example).
Has anyone generalized this to directly generate code from a gam object
(e.g. SAS or C code)? I wanted to
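The core identity any such code generator would build on is simple: a row of the lpmatrix times coef(b) is the prediction, so exporting a model reduces to reproducing Xp's rows plus one dot product. A toy sketch (gamSim() just simulates example data):

library(mgcv)
set.seed(1)
dat <- gamSim(1, n = 200, verbose = FALSE)
b   <- gam(y ~ s(x0) + s(x1), data = dat)

newd <- dat[1:5, ]
Xp   <- predict(b, newd, type = "lpmatrix")
drop(Xp %*% coef(b))   # identical to predict(b, newd)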
2008 Apr 09 | 1 | mgcv::predict.gam lpmatrix for prediction outside of R
This is in regards to the suggested use of type="lpmatrix" in the
documentation for mgcv::predict.gam. Could one not get the same result more
simply by using type="terms" and interpolating each term directly? What is
the advantage of the lpmatrix approach for prediction outside R? Thanks.
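One concrete advantage of the lpmatrix over type = "terms": it carries the full basis, so the coefficient covariance can be propagated to anything linear in the coefficients, e.g. pointwise standard errors computed outside R. A sketch on a toy model:

library(mgcv)
set.seed(1)
dat <- gamSim(1, n = 200, verbose = FALSE)
b   <- gam(y ~ s(x0) + s(x1), data = dat)
Xp  <- predict(b, dat[1:5, ], type = "lpmatrix")

eta <- drop(Xp %*% coef(b))                   # point predictions
se  <- sqrt(rowSums((Xp %*% vcov(b)) * Xp))   # diag(Xp V Xp'), pointwise SEs
cbind(eta, se)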
2008 Jun 11 | 1 | mgcv::gam error message for predict.gam
Sometimes, for specific models, I get this error from predict.gam in library
mgcv:
Error in complete.cases(object) : negative length vectors are not allowed
Here's an example:
> model.calibrate <-
+   gam(meansalesw ~ s(tscore, bs = "cs", k = 4),
+       data = toplot,
+       weights = weight,
+       gam.method = "perf.magic")
> test <- predict(model.calibrate, newdata)
Error in
2009 Jul 10 | 0 | Windows Graphics Device Lockups with Rterm
I've been using Rterm with ESS to run R for some time. Recently I've
experienced lockups when displaying graphics; the first display seems to
work, but the device then refuses to respond and must be killed with
dev.off(). Rgui has no problems. I've tried eliminating all other processes
that might cause conflicts, to no avail.
I'm using win XP and R 2.9.0. Here's a transcript using rterm:
2008 May 06 | 1 | mgcv::gam shrinkage of smooths
In Dr. Wood's book on GAM, he suggests in section 4.1.6 that it might be
useful to shrink a single smooth by adding S=S+epsilon*I to the penalty
matrix S. The context was the need to be able to shrink the term to zero if
appropriate. I'd like to do this in order to shrink the coefficients towards
zero (irrespective of the penalty for "wiggliness") - but not necessarily
all the
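mgcv's shrinkage bases (bs = "ts" and bs = "cs") implement the section 4.1.6 idea directly, penalizing the otherwise-unpenalized null space so a whole term can shrink to zero; they are not a plain ridge penalty on all coefficients, but they are the closest built-in tool. A sketch with a deliberately useless predictor (the junk column is invented):

library(mgcv)
set.seed(1)
dat <- gamSim(1, n = 300, verbose = FALSE)
dat$junk <- runif(300)   # no relationship to y

b <- gam(y ~ s(x0, bs = "ts") + s(x1, bs = "ts") + s(junk, bs = "ts"),
         data = dat)
summary(b)   # the junk term's effective df should land near zero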
2008 May 15 | 1 | proto naming clash?
Trying to learn proto. This threw me:

> library(proto)
> a <- proto(x = 10)
> a$x
[1] 10
> x <- proto(x = 100)
> x$x
Error in get("x", env = x, inherits = TRUE) : invalid 'envir' argument
Do I simply need to be careful to name proto objects and proto components
uniquely? Is this the desired behavior for proto objects?
Thanks.
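The practical answer appears to be yes: keep proto object names distinct from component names. The error itself shows get() receiving something other than an environment in its env argument, evidently the component's value (100) landing where the proto object was expected once the names collide. A minimal sketch of the workaround:

library(proto)

p <- proto(x = 100)   # object name 'p' differs from component name 'x'
p$x                   # [1] 100, no clash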
2008 Jul 05 | 1 | Random Forest %var(y)
The verbose option gives a display like:
> rf.500 <-
+   randomForest(new.x, trn.y, do.trace = 20, ntree = 100, nodesize = 500,
+                importance = TRUE)
     |    Out-of-bag     |
Tree |    MSE    %Var(y) |
  20 | 0.9279     100.84 |
What is the meaning of %Var(y) > 100%? I expected that to correspond to a
model that was worse than random, but the predictions seem much better than
that on
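For what it's worth, %Var(y) in the trace is (judging from the randomForest sources) 100 * OOB MSE / Var(y), so values above 100 mean the out-of-bag error exceeds the variance of the response, i.e. OOB predictions do worse than simply predicting mean(y); in-bag predictions can still look much better. A sketch computing the same quantity by hand, with stand-ins for new.x and trn.y:

library(randomForest)
set.seed(1)
trn.y <- rnorm(200)
new.x <- matrix(rnorm(200 * 10), nrow = 200)   # pure-noise predictors

rf  <- randomForest(new.x, trn.y, ntree = 100)
mse <- rf$mse[length(rf$mse)]                  # final OOB MSE
100 * mse / mean((trn.y - mean(trn.y))^2)      # ~%Var(y); > 100 for noise fits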