Dear List:
I am estimating a gls model and am having to make some rather unconventional
modifications to handle a particular problem I have identified. My aim is to fit
a GLS with an AR1 structure, obtain the variance-covariance matrix (V), modify
it as needed given my research problem, and then reestimate the GLS by brute
force using matrix operations. All seems to be working almost perfectly, yet
there is one small issue that I cannot seem to resolve and would appreciate any
thoughts on my problem.
I have developed some code for simulating a longitudinal analysis of student
achievement test scores. For the current model, I want constant variance over
time, a correlation of r = .2 between time points, and means of 100, 150, 200,
and 250 across the four grades. The code is below.
library(MASS)
library(nlme)
# These are the generating values
Sigma<-matrix(c(400,80,80,80,80,400,80,80,80,80,400,80,80,80,80,400),4,4)
mu<-c(100,150,200,250)
sample.size<-100
# Simulate data and restructure for longitudinal analysis
data <- as.data.frame(cbind(1:sample.size,
                            mvrnorm(n = sample.size, mu, Sigma)))
colnames(data) <- c("stuid", "Grade 2", "Grade 3", "Grade 4", "Grade 5")
long <- reshape(data, idvar = "stuid", varying = list(names(data)[2:5]),
                v.names = "score", direction = "long")
long$time<-long$time-1
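As a quick sanity check, the sample means and correlations of the simulated
data can be compared with the generating values; they should be near
100/150/200/250 and r = .2:
colMeans(data[, -1])        # should be close to 100, 150, 200, 250
round(cor(data[, -1]), 2)   # off-diagonal entries should be close to 0.2
head(long)                  # long format: one row per student-by-grade score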
With these data, I then use the gls() function to estimate the following model:
Y_{ti} = beta_0 + beta_1 * time_{ti} + e_{ti}
fm1 <- gls(score ~ time, data = long, correlation = corAR1(form = ~ 1 | stuid),
           method = "ML")
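For reference, the fitted AR(1) coefficient and residual standard deviation can
be read off the model object; with the generating values above they should land
near 0.2 and 20 (= sqrt(400)):
summary(fm1)   # coefficient table plus the estimated Phi and residual SE
fm1$sigma      # residual standard deviation, should be near 20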
From here I can obtain V, the variance-covariance matrix, using the getVarCov
function as follows:
var.mat<-getVarCov(fm1)
# The next two steps expand var.mat into a block-diagonal V that is
# conformable for the matrix operations below
I <- diag(1, sample.size)
V <- kronecker(I, var.mat)
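To illustrate what this construction produces: kronecker(I, var.mat) stacks one
4 x 4 block per student down the diagonal of a 4N x 4N matrix, which assumes
that each student's four rows sit next to each other in the data:
dim(V)                  # 4*sample.size by 4*sample.size
V[1:4, 1:4]             # first diagonal block equals var.mat
all(V[1:4, 5:8] == 0)   # blocks linking different students are zero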
I then need to modify V and re-estimate the GLS by brute force using matrix
operations. None of the current corClass or varFunc options can handle the
modification I have in mind, which is why I am doing this manually for now; the
modification itself is outside the scope of this email, so I don't show that
code here. Below is the brute-force GLS code, which should replicate the
estimates obtained from the built-in gls() function:
score <- long$score
intercept <- rep(1, nrow(long))   # one row per student-by-time observation
time <- long$time
X.mat <- cbind(intercept, time)
# Estimate the GLS by brute force with the obtained V
V.inv <- solve(V)
# Point estimates: (X'V^{-1}X)^{-1} X'V^{-1} y
pe.original <- solve(t(X.mat) %*% V.inv %*% X.mat) %*% t(X.mat) %*% V.inv %*% score
# Variance-covariance matrix of the estimates: (X'V^{-1}X)^{-1}
# The standard errors are the square roots of its diagonal
varcov.original <- solve(t(X.mat) %*% V.inv %*% X.mat)
cat("Original GLS Estimates\n",
    "Value\tStd. Error\n",
    pe.original[1], "\t", sqrt(varcov.original[1, 1]), "\n",
    pe.original[2], "\t", sqrt(varcov.original[2, 2]), "\n")
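For the comparison described next, the built-in fit's coefficient table holds
the values the brute-force results should reproduce, for example:
summary(fm1)$tTable     # Value and Std.Error columns from gls()
cbind(Estimate  = pe.original[, 1],
      Std.Error = sqrt(diag(varcov.original)))   # brute-force counterparts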
All of this works perfectly when I execute the brute force method using the
Orthodont dataset available in the nlme package. That is, the point estimates
and standard errors are exactly the same as those estimated using the built-in
functionality. This gives me confidence that the code I have above for the brute
force model is accurate.
However, when I run the simulated data above through the exact same code, the
point estimates match but the standard errors differ by a hair. The difference
shrinks with larger sample sizes, but it never disappears, and I can't identify
why. I first thought it was rounding error, but the differences in the standard
errors are generally in the range of .1 to .3 between the built-in gls()
function and the brute-force model. In one of the many runs the SEs were off by
almost .8, but this occurred only once.
In Orthodont, N = 27 and the SEs are exactly the same, so I'm not convinced
that rounding error or sample size is the issue here. The estimates differ by
just enough to seem "different" but are so close that I am confident most
things are working as expected. Even when I rounded the simulated scores to
integers with round(), the standard errors were still off.
So, at this point, I am rather confused and cannot account for the difference.
Can anyone offer any reason why my standard errors are ALMOST exactly the same
between the built-in gls() function and the brute-force operations above when
using the simulated data, but exactly the same when using Orthodont?
I also need to clean up the code above and would appreciate any thoughts. For
example, I know I should be using model.matrix instead of the steps outlined
above for constructing X.mat.
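Something along these lines should produce the same design matrix, with columns
"(Intercept)" and "time":
X.mat <- model.matrix(~ time, data = long)
score <- long$score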
Thank you for any thoughts,
Harold
R 1.9.1, Windows 95
Hi R-masters!

First of all: Happy New Year!

I have one question. Once a week I run update.packages() in R to get new
versions of the packages I have installed, and I also subscribe to R-pkgs to
get notices of new packages. But I keep discovering other packages on CRAN that
were never announced on R-pkgs. How can I keep my R system up to date with
everything on CRAN, both new versions of my installed packages and newly
released packages?

Thanks in advance

Bernardo Rangel Tura, MD, MSc
National Institute of Cardiology Laranjeiras
Rio de Janeiro, Brazil
One possible solution is given by the function packageStatus(). In order to
check and install new packages on our system here, we use:

x <- packageStatus(repositories = "http://cran.br.r-project.org/src/contrib")
st <- x$avail["Status"]
install.packages(rownames(st)[which(st$Status == "not installed")])

best
P.J.

Paulo Justiniano Ribeiro Jr
LEG (Laboratório de Estatística e Geoinformação)
Departamento de Estatística, Universidade Federal do Paraná
Caixa Postal 19.081, CEP 81.531-990, Curitiba, PR - Brasil
Tel: (+55) 41 361 3573   Fax: (+55) 41 361 3141
e-mail: paulojus at est.ufpr.br
http://www.est.ufpr.br/~paulojus
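A shorter alternative, assuming a CRAN mirror has been set via options("repos"),
is to combine new.packages(), which lists packages available on CRAN but not yet
installed, with install.packages(), and then run update.packages() for the
packages already installed:
options(repos = "http://cran.br.r-project.org")
install.packages(new.packages())   # install CRAN packages not yet present
update.packages(ask = FALSE)       # then update packages already installed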