Displaying 20 results from an estimated 72 matches for "parsimony".
2011 Dec 19
0
Global model more parsimonious (lower QAICc)
Hi all,
I know this is a general question, not specific to any R package, but even so I
hope someone may give me his/her opinion on this.
I have a set of 20 candidate models in a binomial GLM. The global model has
52 estimable parameters and sample size is made of about 1500 observations.
The global model does not seem to have problems with parameter estimability, nor
trouble with the convergence of
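A hedged sketch of how QAICc could be computed by hand for such a model set; `global` stands in for the poster's global model (here replaced by a toy binomial GLM on the built-in `esoph` data), and taking c-hat from the global model's Pearson chi-square is a common convention, not something stated in the post:

```r
# QAICc by hand for a binomial GLM (sketch; `fit` is any candidate model,
# c_hat the overdispersion estimate taken from the global model)
qaicc <- function(fit, c_hat) {
  k <- length(coef(fit)) + 1                 # +1 for estimating c_hat
  n <- nobs(fit)
  -2 * as.numeric(logLik(fit)) / c_hat + 2 * k + 2 * k * (k + 1) / (n - k - 1)
}

# toy stand-in for the poster's global model
global <- glm(cbind(ncases, ncontrols) ~ agegp + tobgp + alcgp,
              data = esoph, family = binomial)
c_hat <- sum(residuals(global, type = "pearson")^2) / df.residual(global)
qaicc(global, c_hat)
```

Candidate models would each be scored with the same `c_hat`, then ranked by QAICc difference.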
2011 Jan 05
0
Nnet and AIC: selection of a parsimonious parameterisation
Hi All,
I am trying to use a neural network for my work, but I am not sure about my
approach to selecting a parsimonious model. In R with nnet, the AIC has
not been defined for a feed-forward neural network with a single hidden layer.
Is this because it does not make sense mathematically in this case?
For example, is this pseudo code sensible?
Thanks in advance for your help. I am sorry if this
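One way the pseudo-code idea might look in R, as a hedged sketch: for a two-level factor response, `nnet` fits by entropy, so `fit$value` should be the negative log-likelihood at the optimum (an assumption worth verifying); whether counting every weight as a free parameter is defensible for a neural network is exactly the open question in the post.

```r
library(nnet)

# binary toy problem so that nnet uses an entropy (log-likelihood) fit
dat <- subset(iris, Species != "virginica")
dat$Species <- factor(dat$Species)

set.seed(1)
fit <- nnet(Species ~ ., data = dat, size = 2, trace = FALSE)

# AIC-like score: 2 * (negative log-likelihood) + 2 * (number of weights)
aic_nnet <- 2 * fit$value + 2 * length(fit$wts)
```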
2017 Jun 06
2
Subject: glm and stepAIC selects too many effects
If AIC is giving you a model that is too large, then use BIC (log(n) as the penalty for adding a term in the model). This will yield a more parsimonious model. Now, if you ask me which is the better option, I have to refer you to the huge literature on model selection.
Best,
Ravi
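Ravi's suggestion in code, as a sketch on a built-in dataset (your own fitted model goes in place of `full`): `stepAIC` performs BIC-based selection when the penalty `k` is set to `log(n)`.

```r
library(MASS)

# toy full model on the built-in birthwt data
n <- nrow(birthwt)
full <- glm(low ~ age + lwt + smoke + ht + ui, data = birthwt,
            family = binomial)

# k = log(n) turns the AIC penalty into the BIC penalty
bic_model <- stepAIC(full, k = log(n), trace = FALSE)
```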
2012 Jul 24
1
Linear mixed-effect models and model selection
Hi,
I am looking at the effect of allelochemicals produced by two freshwater macrophyte species on two different algal species at different days. I am comparing the effect of each macrophyte on each alga at each day. I received help from someone doing the LMEM (linear mixed-effect models) and he told me to do an ANOVA to analyse the LMEM. However, I received this feedback from my examiner:
1. An
2008 Feb 26
2
AIC and anova, lme
...AICc deltAICc w_ic
2 log(1e-04 + transat) 44.63758 4  7.5 -81.27516 0.000000 0.65 -79.67516 0.000000 0.57
1 1                    43.02205 3 10.0 -80.04410 1.231069 0.35 -79.12102 0.554146 0.43
The usual conclusion would be that the two models are equivalent and to
keep the null model for parsimony (!).
However, an anova shows that the variable 'log(1e-04 + transat)' is
significantly different from 0 in model 2 (lmmedt9)
> anova(lmmedt9)
numDF denDF F-value p-value
(Intercept) 1 20 289.43109 <.0001
log(1e-04 + transat) 1 20 31....
2000 Sep 28
0
Occams Razor Was: lm -- significance of x ...
...icanda praeter necessitatem"
>
> The principle states that "Entities should not be multiplied
> unnecessarily." Sometimes it is quoted in one of its original Latin
> forms to give it an air of authenticity.
When I was a student this was referred to as the `principle of
parsimony', which has the advantage of having the essence in the name.
My (Concise Oxford) dictionary has this under
`law of parsimony'
that no more causes or forces should be assumed than are necessary to
account for the facts.
It seems that William of Ockham has become as generously
(mis-)acknowl...
2009 Jan 06
1
Selecting variables from a list of data.frames
I have a simulation program that generates a data frame for each run. I
aggregate the data.frames into a list (df.list). The structure of all data
frames is the same, only the values are different.
I then want to aggregate the various runs. Currently I use the following
method (for three runs):
means = (df.list[[1]]$variable + df.list[[2]]$variable +
df.list[[3]]$variable)/3
I would like
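A sketch that scales to any number of runs instead of hand-typing each element of `df.list` (the toy data below stands in for the poster's simulation output; the column name `variable` is from the post):

```r
# toy stand-in for the simulation output: three runs, same structure
df.list <- lapply(1:3, function(i) data.frame(variable = i * (1:5)))

# one column per run, then average across runs row by row
mat <- sapply(df.list, function(d) d$variable)
means <- rowMeans(mat)   # c(2, 4, 6, 8, 10)
```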
2010 Nov 13
1
truncate integers
Is there any really easy way to truncate integers with several consecutive
digits without rounding and without converting from numeric to character
(using strsplit, etc.)? Something along these lines:
e.g. = 456
truncfun(e.g., location=1)
= 4
truncfun(e.g., location=1:2)
= 45
truncfun(e.g., location=2:3)
= 56
truncfun(e.g., location=3)
= 6
It's one thing using floor(x/100) to get 4 or
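One possible `truncfun` matching the examples above, staying numeric throughout (no strsplit): pull out single digits with integer division and modulo, then reassemble them.

```r
truncfun <- function(x, location) {
  ndig <- floor(log10(x)) + 1                  # number of digits in x
  d <- (x %/% 10^(ndig - location)) %% 10      # digits at the requested positions
  sum(d * 10^(rev(seq_along(location)) - 1))   # glue them back into one number
}

truncfun(456, location = 1)     # 4
truncfun(456, location = 1:2)   # 45
truncfun(456, location = 2:3)   # 56
truncfun(456, location = 3)     # 6
```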
2011 Apr 28
1
Nomograms from rms' fastbw output objects
There is both a technical and a theoretical element to my question...
Should I be able to use the outputs which arise from the fastbw function
as inputs to nomogram()? I seem to be failing at this -- I obtain a
"subscript out of range" error.
That I can't do this may speak to technical failings, but I suspect it
is because Prof Harrell thinks/knows it injudicious. However, I can't
2017 Jun 06
0
Subject: glm and stepAIC selects too many effects
More principled would be to use a lasso-type approach, which combines selection and estimation in one fell swoop!
Ravi
________________________________
From: Ravi Varadhan
Sent: Tuesday, June 6, 2017 10:16 AM
To: r-help at r-project.org
Subject: Subject: [R] glm and stepAIC selects too many effects
If AIC is giving you a model that is too large, then use BIC (log(n) as the penalty for adding
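The lasso idea in code, as a sketch on simulated data (the poster's design matrix and response would replace `x` and `y`; `cv.glmnet` picks the penalty by cross-validation, and `lambda.1se` is the conventional parsimonious choice):

```r
library(glmnet)

set.seed(1)
x <- matrix(rnorm(200 * 10), 200, 10)          # 10 candidate predictors
y <- rbinom(200, 1, plogis(x[, 1] - x[, 2]))   # only two truly matter

# cross-validated lasso: selection and estimation in one step
cvfit <- cv.glmnet(x, y, family = "binomial")
coef(cvfit, s = "lambda.1se")                  # sparse coefficient vector
```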
2010 Oct 12
2
repeating an analysis
Hi All,
I have to say upfront that I am a complete neophyte when it comes to
programming. Nevertheless I enjoy the challenge of using R because of its
incredible statistical resources.
My problem is this .........I am running a regression tree analysis using
"rpart" and I need to run the calculation repeatedly (say n=50 times) to
obtain a distribution of results from which I will pick
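One way to repeat the rpart fit n = 50 times with `replicate`, sketched on rpart's built-in kyphosis data; resampling the rows each run is an assumption about what should vary between repetitions, so substitute whatever your repeated analysis actually changes:

```r
library(rpart)

set.seed(1)
results <- replicate(50, {
  idx <- sample(nrow(kyphosis), replace = TRUE)      # bootstrap the rows
  fit <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis[idx, ])
  min(fit$cptable[, "xerror"])                       # whatever summary you need
})
# `results` is now a length-50 vector to summarise or plot as a distribution
```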
2003 Aug 14
1
Re: Samba vs. Windows : significant difference in timestamphandling ?
>>>> Fine. Use reiserfs and don't worry about ctime.
>>>>
>>> But reiserfs doesn't support ACLs. Does it?
>>
>> Oh yes, it does. Big way.
>>
> ??
>
> I was under the impression that if I wanted ACLs, I
> should use xfs, ext3 (or jfs, I believe) but NOT
> reiserfs.
>
> Am I wrong? Does (for example) SuSE 8.2 with
>
2013 Apr 14
1
Model selection: on the use of the coefficient of determination (R2) versus the frequentist (AIC) and Bayesian (BIC) approaches
....
To compute the R2, I used the formula r2 = mss/(mss + rss), where mss = sum((fitted(model) - mean(fitted(model)))^2) and rss = sum(resid(model)^2).
I think the R2 is good enough for model selection here, knowing that the candidate models both have two parameters (so no need to worry about the principle of parsimony); my guess is that models need to have the same form (which is not the case here: linear form vs exponential form) or to be nested to be compared with frequentist or Bayesian approaches such as the AIC and BIC criteria.
Thank you very much in advance
Armel
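Armel's R2 formula wrapped as a function, as a sketch; for an ordinary linear model with an intercept this mss/(mss + rss) decomposition agrees with the usual R-squared, which gives a quick sanity check:

```r
r2 <- function(model) {
  mss <- sum((fitted(model) - mean(fitted(model)))^2)
  rss <- sum(resid(model)^2)
  mss / (mss + rss)
}

fit <- lm(dist ~ speed, data = cars)
r2(fit)                          # matches summary(fit)$r.squared for lm
```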
2007 Jul 17
1
Speed up computing: looping through data?
Dear all,
Please excuse my ignorance, but I am having difficulty with this, and am
unable to find help on the website/Google.
I have a series of explanatory variables from which I am trying to obtain a
parsimonious description.
For example, if I have 10 variables, a-j, I am initially looking at the
linear relationships amongst them:
my.lm1 <- lm(a ~ b+c+d+e+f+g+h+i+j, data=my.data)
summary(my.lm1)
my.lm2 <- lm(b ~ a+c+d+e+f+g+h+i+j, data=my.data)
etc
Instead of repeatedly typing this in, is there a way t...
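A sketch of the loop the poster is after: fit each variable against all of the others without retyping the formula. The simulated `my.data` below stands in for the poster's data frame with columns a through j.

```r
# stand-in for the poster's data: 10 columns named a ... j
set.seed(42)
my.data <- as.data.frame(matrix(rnorm(200 * 10), ncol = 10,
                                dimnames = list(NULL, letters[1:10])))

vars <- letters[1:10]
fits <- lapply(vars, function(v) {
  f <- reformulate(setdiff(vars, v), response = v)  # e.g. a ~ b + c + ... + j
  lm(f, data = my.data)
})
names(fits) <- vars

summary(fits[["a"]])   # plays the role of summary(my.lm1)
```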
2008 Dec 17
1
pruning trees using rpart
Hi,
I am using the packages tree and rpart to build a classification tree to
predict a 0/1 outcome. The package rpart has the advantage that the function
plotcp gives a visual representation of the cross-validation results with a
horizontal line indicating the 1 standard error rule, i.e. the
recommendation to select the most parsimonious model (the smallest tree)
whose error is not more than one
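The 1-SE rule that plotcp visualises can also be applied programmatically, sketched here on rpart's built-in kyphosis data: pick the smallest tree whose cross-validated error is within one standard error of the minimum, then prune to that cp.

```r
library(rpart)

set.seed(1)
fit <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis,
             control = rpart.control(cp = 0.001))

tab    <- fit$cptable
best   <- which.min(tab[, "xerror"])               # lowest cross-validated error
thresh <- tab[best, "xerror"] + tab[best, "xstd"]  # 1-SE threshold
cp_1se <- tab[min(which(tab[, "xerror"] <= thresh)), "CP"]

pruned <- prune(fit, cp = cp_1se)                  # most parsimonious tree
```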
2005 Sep 19
1
How to mimic pdMat of lme under lmer?
Dear members,
I would like to switch from nlme to lme4 and try to translate some of my
models that worked fine with lme.
I have problems with the pdMat classes.
Below a toy dataset with a fixed effect F and a random effect R. I gave
also 2 similar lme models.
The one containing pdLogChol (lme1) is easy to translate (as it is an
explicit notation of the default model)
The more parsimonious
2004 Sep 12
2
Variable Importance in pls: R or B? (and in glpls?)
Dear R-users, dear Ron
I use pls from the pls.pcr package for classification. Since I need to
know which variables are most influential onto the classification
performance, what criteria shall I look at:
a) B, the array of regression coefficients for a certain model (means a
certain number of latent variables) (and: squared or absolute values?)
OR
b) the weight matrix RR (or R in the De
2005 Sep 05
2
model comparison and Wald-tests (e.g. in lmer)
Dear expeRts,
there is obviously a general trend to use model comparisons, LRT and AIC
instead of Wald-test-based significance, at least in the R community.
I personally like this approach. And, when using LME's, it seems to be
the preferred way (concluded from postings of Brian Ripley and Douglas
Bates' article in R-News 5(2005)1), esp. because of problems with the
d.f. approximation.
2006 Dec 20
2
RuleFit & quantreg: partial dependence plots; showing an effect
Dear List,
I would greatly appreciate help on the following matter:
The RuleFit program of Professor Friedman uses partial dependence plots
to explore the effect of an explanatory variable on the response
variable, after accounting for the average effects of the other
variables. The plot method [plot(summary(rq(y ~ x1 + x2,
t=seq(.1,.9,.05))))] of Professor Koenker's quantreg program
2018 Aug 31
1
ROBUSTNESS: x || y and x && y to give warning/error if length(x) != 1 or length(y) != 1
?On 30/08/2018, 20:15, "R-devel on behalf of Hadley Wickham" <r-devel-bounces at r-project.org on behalf of h.wickham at gmail.com> wrote:
On Thu, Aug 30, 2018 at 10:58 AM Martin Maechler
<maechler at stat.math.ethz.ch> wrote:
>
> >>>>> Joris Meys
> >>>>> on Thu, 30 Aug 2018 14:48:01 +0200 writes:
>