Dear R People:

Here is some output from the ar() and arima() functions:

> xb <- arima.sim(n = 120, model = list(ar = 0.85))
> xb.ar <- ar(xb)
> xb.ar

Call:
ar(x = xb)

Coefficients:
     1
0.6642

Order selected 1  sigma^2 estimated as 1.094

> xb.arima <- arima(xb, order = c(1, 0, 0), include.mean = FALSE)
> xb.arima

Call:
arima(x = xb, order = c(1, 0, 0), include.mean = FALSE)

Coefficients:
        ar1
     0.6909
s.e. 0.0668

sigma^2 estimated as 1.04:  log likelihood = -172.94,  aic = 349.88

My question: shouldn't the coefficient and sigma^2 estimates from ar() and arima() be the same, or at least closer than they are?

Thanks,
Erin

--
Erin Hodgess
Associate Professor
Department of Computer and Mathematical Sciences
University of Houston - Downtown
mailto: erinm.hodgess at gmail.com
WARNING: The following might be **complete baloney** (and my apologies if so).

Erin: I hope you get a definitive reply on this from a real expert, but if memory serves, they may be using two different estimation algorithms: ar() is just doing the Yule-Walker recursive calculation as described in Box-Jenkins, while arima() is using numerical optimization. You can probably make them closer by changing the convergence criteria for arima(), which would be a good test of my "explanation."

Cheers,
Bert

--
"Men by nature long to get on to the ultimate truths, and will often be impatient with elementary studies or fight shy of them. If it were possible to reach the ultimate truths without the elementary studies usually prefixed to them, these would not be preparatory studies but superfluous diversions."
   -- Maimonides (1135-1204)

Bert Gunter
Genentech Nonclinical Biostatistics
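A minimal sketch of the check Bert suggests, on a freshly simulated stand-in series (the seed below is illustrative; Erin's exact xb cannot be reproduced). If the gap comes from the estimation method rather than from incomplete convergence, tightening arima()'s optimizer tolerance should barely move the estimate, while refitting ar() with method = "mle" should close most of it:

## Stand-in series, not Erin's original xb
set.seed(42)
xb2 <- arima.sim(n = 120, model = list(ar = 0.85))

## Default fits: Yule-Walker for ar(), maximum likelihood for arima()
fit.yw <- ar(xb2, order.max = 1, aic = FALSE)
fit.ml <- arima(xb2, order = c(1, 0, 0), include.mean = FALSE)

## 1) Tighter convergence criteria for arima(): estimate should barely change
fit.ml.tight <- arima(xb2, order = c(1, 0, 0), include.mean = FALSE,
                      optim.control = list(reltol = 1e-12, maxit = 1000))

## 2) Same estimation method for both: ar(method = "mle") should be close to arima()
fit.armle <- ar(xb2, order.max = 1, aic = FALSE, method = "mle")

c(yw = fit.yw$ar,
  arima = coef(fit.ml)[["ar1"]],
  arima.tight = coef(fit.ml.tight)[["ar1"]],
  ar.mle = fit.armle$ar)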
On Thu, 7 Jul 2011, peter dalgaard wrote:

> On Jul 7, 2011, at 19:52 , Prof Brian Ripley wrote:
>
>> Yes, ar and arima are using different estimation methods: arima is
>> mle whereas the default for ar is method-of-moments.
>>
>> With such a large ar coefficient the end effects will matter, and
>> the mle (done by arima or ar.mle or ar(method="mle")) is the more
>> accurate method since it makes maximal use of the ends of the
>> series.
>
> Yes, but...
>
> MLE also has subtly stronger assumptions, namely that the whole
> series is stationary. This boils down to the first observation(s)
> having the stationary mean and variance. This is not always the case
> if, e.g., the system is measured following some initial
> perturbation.

But Yule-Walker (as distinct from OLS) also makes that assumption.

--
Brian D. Ripley, ripley at stats.ox.ac.uk
Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/
University of Oxford, Tel: +44 1865 272861 (self)
1 South Parks Road, +44 1865 272866 (PA)
Oxford OX1 3TG, UK, Fax: +44 1865 272595
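For illustration, a small sketch of the three ML routes named above, again on a simulated stand-in series (the demean = FALSE arguments mirror include.mean = FALSE and are an assumption of this sketch). All three should essentially coincide, for both the coefficient and sigma^2:

## Stand-in series (Erin's xb is not reproducible without her seed)
set.seed(42)
xb2 <- arima.sim(n = 120, model = list(ar = 0.85))

## Three routes to the full-series MLE; ar(method = "mle") just calls ar.mle()
fit.arima  <- arima(xb2, order = c(1, 0, 0), include.mean = FALSE)
fit.armle  <- ar.mle(xb2, order.max = 1, aic = FALSE, demean = FALSE)
fit.armle2 <- ar(xb2, order.max = 1, aic = FALSE, method = "mle", demean = FALSE)

rbind(arima         = c(ar1 = coef(fit.arima)[["ar1"]], sigma2 = fit.arima$sigma2),
      ar.mle        = c(ar1 = fit.armle$ar,             sigma2 = fit.armle$var.pred),
      ar.mle.via.ar = c(ar1 = fit.armle2$ar,            sigma2 = fit.armle2$var.pred))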
On Jul 7, 2011, at 22:21 , Prof Brian Ripley wrote:

> But Yule-Walker (as distinct from OLS) also makes that assumption.

Right. I was thinking of the conditional MLE given the first p observations, which AFAIR is equivalent to regressing on the lagged values (except probably for the residual df).

--
Peter Dalgaard
Center for Statistics, Copenhagen Business School
Solbjerg Plads 3, 2000 Frederiksberg, Denmark
Phone: (+45)38153501
Email: pd.mes at cbs.dk   Priv: PDalgd at gmail.com
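A rough check of that equivalence, again on a simulated stand-in series: arima() with method = "CSS" maximizes the Gaussian likelihood conditional on the first observation (i.e. minimizes the conditional sum of squares), so its AR(1) coefficient should match a plain no-intercept regression of x[t] on x[t-1] up to numerical tolerance; the variance estimates differ only in the divisor, which is the residual-df point above.

## Stand-in series, not Erin's original xb
set.seed(42)
x <- as.numeric(arima.sim(n = 120, model = list(ar = 0.85)))

## Conditional-sum-of-squares fit: conditions on the first observation
fit.css <- arima(x, order = c(1, 0, 0), include.mean = FALSE, method = "CSS")

## Ordinary regression of x[t] on x[t-1], no intercept
fit.ols <- lm(x[-1] ~ x[-length(x)] - 1)

## Coefficients should agree to numerical tolerance
c(css = coef(fit.css)[["ar1"]], ols = unname(coef(fit.ols)))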