Displaying 20 results from an estimated 72 matches for "parsimonious".
2011 Dec 19
0
Global model more parsimonious (lowest QAICc)
...The global model has
52 estimable parameters and sample size is made of about 1500 observations.
The global model does not seem to have problems with parameter estimability,
nor do these models have trouble converging.
I have run all the models and at the end I get the global model as the more
parsimonious one (least QAICc -> I have set c-hat=1.15 according to goodness
of fit on the global model).
This is the first time this has happened to me and I am somewhat confused.
I believe the data set is not that poor with respect to the number of
parameters (it should not be Freedman's paradox) and the only thi...
2011 Jan 05
0
Nnet and AIC: selection of a parsimonious parameterisation
Hi All,
I am trying to use a neural network for my work, but I am not sure about my
approach to selecting a parsimonious model. In R with nnet, the AIC has
not been defined for a feed-forward neural network with a single hidden layer.
Is this because it does not make sense mathematically in this case?
For example, is this pseudo code sensible?
Thanks in advance for your help. I am sorry if this has been answered bef...
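The excerpt above asks for a pseudo-code sketch of an AIC for a single-hidden-layer network. Below is one plausible hand-rolled version under a Gaussian-error assumption: use the residual sum of squares as the fit criterion and count every weight as a parameter. The data, network size, and the formula itself are illustrative assumptions, not from the post, and this is one common convention rather than an official definition.

```r
# Hedged sketch: an AIC-like score for a least-squares nnet fit.
# Assumes Gaussian errors; counts all weights as free parameters.
library(nnet)

set.seed(1)
d <- data.frame(x = runif(100))
d$y <- sin(2 * pi * d$x) + rnorm(100, sd = 0.1)

fit <- nnet(y ~ x, data = d, size = 3, linout = TRUE, trace = FALSE)

n   <- nrow(d)
k   <- length(fit$wts)          # number of weights = parameter count
rss <- sum(fit$residuals^2)     # residual sum of squares
aic <- n * log(rss / n) + 2 * k # Gaussian-likelihood AIC, up to a constant
```

Comparing this score across candidate values of `size` (and decay) is one way to pick a parsimonious parameterisation, though the effective number of parameters in a regularised network is debatable.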
2017 Jun 06
2
Subject: glm and stepAIC selects too many effects
If AIC is giving you a model that is too large, then use BIC (log(n) as the penalty for adding a term in the model). This will yield a more parsimonious model. Now, if you ask me which is the better option, I have to refer you to the huge literature on model selection.
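The suggestion above can be sketched with stepAIC's `k` argument, which sets the per-parameter penalty (k = 2 gives AIC, k = log(n) gives BIC). The data and model below are placeholders, not from the original thread.

```r
# Hedged sketch: stepwise selection with the BIC penalty k = log(n).
library(MASS)

set.seed(1)
n <- 200
d <- data.frame(y  = rbinom(n, 1, 0.5),
                x1 = rnorm(n), x2 = rnorm(n), x3 = rnorm(n))

fit <- glm(y ~ x1 + x2 + x3, family = binomial, data = d)

# Default stepAIC uses k = 2 (AIC); k = log(n) applies the heavier
# BIC penalty and so tends to retain fewer terms.
fit_bic <- stepAIC(fit, k = log(n), trace = FALSE)
```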
Best,
Ravi
2012 Jul 24
1
Linear mixed-effect models and model selection
...els?
2.The effects of TREATMENT and TIME and their interaction were all significant (Table 5). Because of the significant interaction, the analysis was split by TIME.
Comment: Given that you have interactions, you should do model selection to show whether the interaction model is in fact more parsimonious.
Can someone explain these and tell me how and when should I do model selection?
Thanks,
F
2008 Feb 26
2
AIC and anova, lme
Dear listers,
Here we have a strange result we can hardly cope with. We want to
compare a null mixed model with a mixed model with one independent
variable.
> lmmedt1<-lme(mediane~1, random=~1|site, na.action=na.omit, data=bdd2)
> lmmedt9<-lme(mediane~log(0.0001+transat), random=~1|site,
na.action=na.omit, data=bdd2)
Using the Akaike Criterion and selMod of the package pgirmess
2000 Sep 28
0
Occam's Razor Was: lm -- significance of x ...
> From: Peter Dalgaard BSA <p.dalgaard at biostat.ku.dk>
> Date: 28 Sep 2000 13:58:22 +0200
>
> Peter Dalgaard BSA <p.dalgaard at biostat.ku.dk> writes:
>
> > I think Occam/Ockham himself wrote in Latin. By my failing memory, the
> > quote is
> >
> > "Entia non sunt multiplicanda praeter necessitatem"
> >
> > give or take
2009 Jan 06
1
Selecting variables from a list of data.frames
.... The structure of all data
frames is the same, only the values are different.
I then want to aggregate the various runs. Currently I use the following
method (for three runs):
means = (df.list[[1]]$variable + df.list[[2]]$variable +
df.list[[3]]$variable)/3
I would like to do this in a more parsimonious way, for example using lapply
or related commands, but I can't seem to figure out the magic touch. Any
thoughts on the best way to accomplish this?
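One way to generalise the hard-coded three-term sum above is to pull the column out of each data frame with lapply and combine the runs with Reduce. `df.list` and `$variable` are the names used in the post; the toy values below are illustrative.

```r
# Hedged sketch: average one variable across an arbitrary number of runs.
df.list <- list(data.frame(variable = c(1, 2, 3)),
                data.frame(variable = c(4, 5, 6)),
                data.frame(variable = c(7, 8, 9)))

# Extract the column from each run, then sum element-wise and divide.
means <- Reduce(`+`, lapply(df.list, `[[`, "variable")) / length(df.list)

# An equivalent alternative: rowMeans(sapply(df.list, `[[`, "variable"))
```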
Thanks and regards,
Magnus
2010 Nov 13
1
truncate integers
.../100) to get 4 or floor(x/10) to get 45, but I'd
like to return only 5 or only 6, for example, in cases where I don't know
what the numbers are going to be.
I'm sure there is something very logical that I am missing, but my code is
getting too complicated and I can't seem to find a parsimonious solution.
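A compact way to get single digits without knowing the number in advance is to combine integer division with modulo. The helper name and the example value 456 are assumptions based on the excerpt's `floor(x/100)` and `floor(x/10)` examples.

```r
# Hedged sketch: extract the k-th digit from the right (k = 0 is last).
digit <- function(x, k) (x %/% 10^k) %% 10

digit(456, 0)  # 6
digit(456, 1)  # 5
digit(456, 2)  # 4
```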
Tyler
2011 Apr 28
1
Nomograms from rms' fastbw output objects
...n a
subscript out of range error.
That I can't do this may speak to technical failings, but I suspect it
is because Prof Harrell thinks/knows it to be injudicious. However, I can't
think of a reason why nomograms should be restricted to the full models,
if the purpose of fastbw is to generate parsimonious models with
appropriate standard errors.
I'd welcome comments on either the technical or the theoretical issues.
Many thanks in advance,
Rob James
2017 Jun 06
0
Subject: glm and stepAIC selects too many effects
...________
From: Ravi Varadhan
Sent: Tuesday, June 6, 2017 10:16 AM
To: r-help at r-project.org
Subject: Subject: [R] glm and stepAIC selects too many effects
If AIC is giving you a model that is too large, then use BIC (log(n) as the penalty for adding a term in the model). This will yield a more parsimonious model. Now, if you ask me which is the better option, I have to refer you to the huge literature on model selection.
Best,
Ravi
2010 Oct 12
2
repeating an analysis
...because of its
incredible statistical resources.
My problem is this .........I am running a regression tree analysis using
"rpart" and I need to run the calculation repeatedly (say n=50 times) to
obtain a distribution of results from which I will pick the median one to
represent the most parsimonious tree size. Unfortunately rpart does not
provide this ability, so it will have to be coded by hand.
Could anyone help me with this? I have provided the code (and relevant
output) for the analysis I am running. I need to run it n=50 times and from
each output pick the appropriate tree size and post it to...
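Since the poster's code is truncated in the archive, the loop can only be sketched: rpart's internal cross-validation is re-randomised on each call, so replicate() gives a distribution of selected tree sizes. The formula and data below are placeholders, not the poster's.

```r
# Hedged sketch: rerun rpart n = 50 times and take the median tree size
# selected by minimum cross-validated error.
library(rpart)

set.seed(42)
n_runs <- 50
sizes <- replicate(n_runs, {
  fit <- rpart(Species ~ ., data = iris)   # placeholder model
  cp  <- fit$cptable
  # number of splits in the tree with the lowest cross-validated error
  cp[which.min(cp[, "xerror"]), "nsplit"]
})
median(sizes)
```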
2003 Aug 14
1
Re: Samba vs. Windows : significant difference in timestamphandling ?
...is by far
the best on all counts. I've read postings that xfs
excells with very large files, e.g. movies, but I
couldn't see any difference - reiserfs was just as
fast). It's incredibly fast in directory manipulations,
especially for very large directories with lakhs of
files. Very parsimonious too. It doesn't compress data,
not yet, but it doesn't waste space like other fs's.
A full reiserfs volume could probably not be restored
onto an equally large ext3 volume. It has worked
impeccably for about a year.
> As I am about to upgrade our nt4 domain, and this is
> the time...
2013 Apr 14
1
Model selection: On the use of the coefficient of determination (R2) versus the frequentist (AIC) and Bayesian (BIC) approaches
Dear all,
I'm modeling growth curve of some ecosystems with respect to their rainfall-productivity relationship using a simple linear regression (ANPP(t)=a+b*Rain(t)) and a modified version of the Brody Model ANPP(t)=a*(1-exp(-b*rain(t)))
I would like to know why the "best model" is a function of the criteria that I use (maximizing the fit using R2 or testing the Null hypothesis with
2007 Jul 17
1
Speed up computing: looping through data?
Dear all,
Please excuse my ignorance, but I am having difficulty with this, and am
unable to find help on the website/Google.
I have a series of explanatory variables that I am aiming to reduce to a
parsimonious model.
For example, if I have 10 variables, a-j, I am initially looking at the
linear relationships amongst them:
my.lm1 <- lm(a ~ b+c+d+e+f+g+h+i+j, data=my.data)
summary(my.lm1)
my.lm2
2008 Dec 17
1
pruning trees using rpart
...tree and rpart to build a classification tree to
predict a 0/1 outcome. The package rpart has the advantage that the function
plotcp gives a visual representation of the cross-validation results with a
horizontal line indicating the 1 standard error rule, i.e. the
recommendation to select the most parsimonious model (the smallest tree)
whose error is not more than one standard error above the error of the best
model.
However, in the rpart package I am not getting trees of all sizes but for
example three sizes are 1,2,5 in one example I am working with, while with
cv.tree in package tree it gives 1,2,3,4...
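The one-standard-error rule described in this excerpt can be sketched directly from rpart's cptable, which holds the cross-validated error (`xerror`) and its standard error (`xstd`) for each candidate tree. The data and the `cp` control value are illustrative assumptions.

```r
# Hedged sketch of the 1-SE rule: prune to the smallest tree whose
# cross-validated error is within one SE of the best tree's error.
library(rpart)

set.seed(1)
fit <- rpart(Species ~ ., data = iris,
             control = rpart.control(cp = 0.001))  # grow a deep tree
cp <- fit$cptable

best      <- which.min(cp[, "xerror"])
threshold <- cp[best, "xerror"] + cp[best, "xstd"]

# earliest (hence smallest) tree meeting the threshold
chosen     <- min(which(cp[, "xerror"] <= threshold))
fit_pruned <- prune(fit, cp = cp[chosen, "CP"])
```

Note this only offers the tree sizes that appear in the cptable, which may explain why rpart reports sizes like 1, 2, 5 while tree's cv.tree enumerates every size.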
2005 Sep 19
1
How to mimic pdMat of lme under lmer?
...my
models that worked fine with lme.
I have problems with the pdMat classes.
Below is a toy dataset with a fixed effect F and a random effect R. I also
give 2 similar lme models.
The one containing pdLogChol (lme1) is easy to translate (as it is an
explicit notation of the default model).
The more parsimonious model, with pdDiag replacing pdLogChol, I cannot
reproduce with lmer. The obvious choice for me would be my model lmer2,
but it yields a different result.
Does anybody have an idea?
Thanks,
Joris
I am using R version 2.1.0 for Linux
and the most recent downloads of Matrix and nlme
#dataset from Mc...
2004 Sep 12
2
Variable Importance in pls: R or B? (and in glpls?)
Dear R-users, dear Ron
I use pls from the pls.pcr package for classification. Since I need to
know which variables are most influential on the classification
performance, which criterion should I look at:
a) B, the array of regression coefficients for a certain model (means a
certain number of latent variables) (and: squared or absolute values?)
OR
b) the weight matrix RR (or R in the De
2005 Sep 05
2
model comparison and Wald-tests (e.g. in lmer)
...;Statistical
computing" one may consider supplying "traditional" ANOVA tables as an
additional explanation for the reader (e.g. field biologists).
An example:
one has fitted 5 models m1..m5 and after:
>anova(m1,m2,m3,m4,m5) # giving AIC and LRT-tests
he selects m3 as the most parsimonious model and calls anova on the
best model (Wald-test):
>anova(m3) # the additional explanatory table
My questions:
* Do people outside the S-PLUS/R world still understand us?
* Is it wise to add such an explanatory table (in particular when the
results are the same) to make th...
2006 Dec 20
2
RuleFit & quantreg: partial dependence plots; showing an effect
Dear List,
I would greatly appreciate help on the following matter:
The RuleFit program of Professor Friedman uses partial dependence plots
to explore the effect of an explanatory variable on the response
variable, after accounting for the average effects of the other
variables. The plot method [plot(summary(rq(y ~ x1 + x2,
t=seq(.1,.9,.05))))] of Professor Koenker's quantreg program
2018 Aug 31
1
ROBUSTNESS: x || y and x && y to give warning/error if length(x) != 1 or length(y) != 1
...>
> The 0-length case I don't think we should change as I do find
> NA (is logical!) to be an appropriate logical answer.
Can you explain your reasoning a bit more here? I'd like to understand
the general principle, because from my perspective it's more
parsimonious to say that the inputs to || and && must be length 1,
rather than to say that inputs could be length 0 or length 1, and in
the length 0 case they are replaced with NA.
Hadley
I would say the value NA would cause warnings later on, that are easy to track down, so a retu...