Tom,
I've never used lm.gls, but a quick look at the help page suggests
that, unlike gls, it doesn't have built-in functionality to estimate
parameters to structure the response/error covariance matrix. I
believe it assumes the covariance matrix is known (hopefully someone
will correct me if I'm wrong about that).
So, building off the answer I sent to you on April 29 (pasted below
rather than hyper-linked because parts of the conversation appear to
be missing from the archive), if y ~ N(XB, Sigma), where Sigma = sigma^2 * Lambda
and Lambda is decomposed into VCV, where V is diagonal
and describes a variance structure and C has 1's on the diagonal and
the off-diagonals describe the correlation of the errors, then the
'weights' argument to gls will allow you to estimate a variance
structure for V (see ?varClasses) and the 'correlation' argument
allows you to estimate the correlation structure for C (see
?corClasses). You'll notice the first corClass listed on the help
page is corAR1 (there's also corCAR1 if you don't have discrete lags).
See example(gls) for an example of how it's used, and Pinheiro and
Bates (2000) for detailed examples.
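For instance, here's a self-contained sketch (simulated data, so none of the variable names match your problem) that combines both arguments to estimate V and C together:

```r
## Sketch with simulated data: AR(1) errors whose spread grows with the
## mean, modeled via the 'correlation' and 'weights' arguments to gls.
library(nlme)

set.seed(1)
n <- 200
x <- seq(1, 10, length.out = n)
mu <- 2 + 3 * x
## AR(1) errors, scaled so the sd grows with the mean (a power relationship)
e <- as.numeric(arima.sim(list(ar = 0.6), n = n))
y <- mu + e * mu^0.5
d <- data.frame(y = y, x = x, time = 1:n)

f1 <- gls(y ~ x, data = d,
          correlation = corAR1(form = ~ time),  # estimates C
          weights = varPower())                 # estimates V
summary(f1)  # reports the estimated AR(1) phi and the power parameter
```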
Frank Harrell has commented on this list that gls is underused (
http://tolstoy.newcastle.edu.au/R/help/06/08/32487.html ). Given that
it can drastically relax the constant-variance and independence
assumptions of the ubiquitous linear model, I agree.
good luck,
Kingsford Jones
Discussion on weights in gls from 2008-Apr-29
Thanks Kingsford! I will cc r-help.
varPower() works for me. I want to use lm because predict.gls does not
give lwr and upr values, and also the bptest and ncv.test only work
with lm models...
On 4/29/08, Kingsford Jones <kingsfordjones at gmail.com> wrote:
hi Tom,
Basically, a simple way to model non-constant variance when the
variance increases (the common case) or decreases with the conditional
mean of your response is to use "weights = varPower()", and gls will
estimate the parameter \alpha that defines the power relationship
between the regression line and the error variances. If for some
reason you wanted to use lm rather than gls for your final model, then
after estimating the gls model you could supply a weights argument to
lm of the form:
weights = fitted(your.gls.model)^(-2*the.alpha.estimate.from.the.gls.model)
and you should get the same (or very close) estimates as from the gls
model but have a model of class 'lm'.
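[A hypothetical sketch of that hand-off, assuming a fitted gls object f1 of the form gls(y ~ x, data = d, weights = varPower()); the extraction of the power parameter from the varFunc structure is my reconstruction, so check it against your fit:]

```r
## Assumes f1 <- gls(y ~ x, data = d, weights = varPower()) was fit already.
## Pull the estimated power (alpha) out of the varFunc structure...
alpha <- as.numeric(coef(f1$modelStruct$varStruct))

## ...and refit with lm, weighting by the inverse of the estimated
## variances: Var(e_i) is proportional to fitted_i^(2*alpha)
f2 <- lm(y ~ x, data = d, weights = fitted(f1)^(-2 * alpha))
```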
There are many different ways to model non-constant variance and
you'll need to choose one that is appropriate for your data. If the
model I described isn't appropriate then you should look at Ch 5 of
P&B to learn about the other varFunc classes.
good luck,
Kingsford
ps - would you mind forwarding to r-help in case others have the
same question.
On Tue, Apr 29, 2008 at 3:24 PM, tom soyer <tom.soyer at gmail.com>
wrote:
> Thanks Kingsford! What are the varClasses? I don't know how to use
> them...
> If I choose varPower(), then does it mean the function would generate
> the weights, w, so that w gives the most likely explanation of the
> relationship between the variance and the independent variable?
>
>
>
>
> On 4/29/08, Kingsford Jones <kingsfordjones at gmail.com> wrote:
> > On Tue, Apr 29, 2008 at 6:20 AM, tom soyer <tom.soyer at gmail.com>
> > wrote:
> > > Hi,
> > >
> > > I would like to use a weighted lm model to reduce
> > > heteroscedasticity. I am wondering if the only way to generate the
> > > weights in R is through the laborious process of trial and error
> > > by hand. Does anyone know if R has a function that would
> > > automatically generate the weights needed for lm?
> >
> > Hi Tom,
> >
> > The 'weights' argument to the 'gls' function in the nlme package
> > provides a great deal of flexibility in estimating weighting
> > parameters and model coefficients. For example, if you want to model
> > monotonic heteroscedasticity by estimating the weights
> > $E(Y)^{-2\alpha}$, you can use the varPower variance function class.
> > E.g., something like
> >
> > f1 <- gls(y ~ x1 + x2, data = your.data, weights = varPower())
> >
> > will estimate the regression coefficients and alpha parameter
> > together via maximum likelihood. (Note that the usual specification
> > for varPower is varPower(form = ~ your.formula), but by default the
> > mean is used. See Ch 5 of the Pinheiro and Bates Mixed-effects
> > Models book for details.)
> >
> > Kingsford Jones
> >
>
>
>
> --
> Tom
On Fri, May 9, 2008 at 8:05 AM, tom soyer <tom.soyer at gmail.com> wrote:
> Hi,
>
> I need to correct for ar(1) behavior of my residuals of my model. I noticed
> that there are multiple gls models in R. I am wondering if anyone
> has experience in choosing between gls models. For example, how
> should one decide whether to use lm.gls in MASS, or gls in nlme for
> correcting ar(1)? Does anyone have a preference? Any advice is appreciated!
>
> Thanks,
>
> --
> Tom
>
> [[alternative HTML version deleted]]
>
> ______________________________________________
> R-help at r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide
> http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
>