Hello!
Looking at how people use optim() to get MLEs, I also noticed that one can
use the returned Hessian to get the corresponding standard errors, i.e.
something like
result <- optim(<< snip >>, hessian = TRUE)
result$par                   # point estimates
vc <- solve(result$hessian)  # var-cov matrix
se <- sqrt(diag(vc))         # standard errors
What is the Hessian actually representing here? I apologize for my lack of
knowledge, but ... The attached PDF shows the problem I am facing with this
issue.
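
For concreteness, here is a minimal self-contained version of the pattern
above. The normal sample and log-likelihood are just something I made up for
illustration, not my actual model:

set.seed(1)
x <- rnorm(200, mean = 5, sd = 2)

## negative log-likelihood of a normal model, parameterised as (mean, log sd)
## so that the optimisation is unconstrained
negll <- function(par, x) {
  -sum(dnorm(x, mean = par[1], sd = exp(par[2]), log = TRUE))
}

fit <- optim(c(0, 0), negll, x = x, method = "BFGS", hessian = TRUE)
fit$par                   # point estimates (mean, log sd)
vc <- solve(fit$hessian)  # inverse Hessian as estimated var-cov matrix
sqrt(diag(vc))            # standard errors, on the same scale as fit$par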
Thank you very much in advance.
--
Lep pozdrav / With regards,
Gregor Gorjanc
----------------------------------------------------------------------
University of Ljubljana PhD student
Biotechnical Faculty
Zootechnical Department URI: http://www.bfro.uni-lj.si/MR/ggorjan
Groblje 3 mail: gregor.gorjanc <at> bfro.uni-lj.si
SI-1230 Domzale tel: +386 (0)1 72 17 861
Slovenia, Europe fax: +386 (0)1 72 17 888
----------------------------------------------------------------------
"One must learn by doing the thing; for though you think you know it,
you have no certainty until you try." Sophocles ~ 450 B.C.
----------------------------------------------------------------------
Attachment: standardError.pdf (application/pdf, 65330 bytes)
https://stat.ethz.ch/pipermail/r-help/attachments/20060321/79ced8f5/attachment-0003.pdf
On Tue, 21 Mar 2006, Gregor Gorjanc wrote:

> Looking at how people use optim() to get MLEs, I also noticed that one
> can use the returned Hessian to get the corresponding standard errors
> [...]
> What is the Hessian actually representing here?

The Hessian is the matrix of second derivatives of the objective function,
so if the objective function is minus a log-likelihood, the Hessian is the
observed Fisher information. The inverse of the Hessian is thus an estimate
of the variance-covariance matrix of the parameters. For some models this is
exactly I/n in your notation; for others it is just close (and there are in
fact theoretical reasons to prefer the observed information). I don't
remember whether the two-parameter gamma family is one where the observed
and expected information are identical.

	-thomas

PS: \stackrel{d}{\to}
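
A quick numerical sketch of the point above, under assumptions of my own (a
simulated two-parameter gamma sample in the shape/rate parametrisation, not
the data from the thread): the standard errors from the inverse of optim()'s
Hessian are compared with those from the analytic expected Fisher information
evaluated at the MLE.

set.seed(2)
y <- rgamma(500, shape = 3, rate = 1.5)
n <- length(y)

## negative log-likelihood of the gamma(shape, rate) model
negll <- function(par, y) {
  -sum(dgamma(y, shape = par[1], rate = par[2], log = TRUE))
}

fit <- optim(c(1, 1), negll, y = y, method = "L-BFGS-B",
             lower = c(1e-6, 1e-6), hessian = TRUE)
shape <- fit$par[1]; rate <- fit$par[2]

## observed information = Hessian of the negative log-likelihood at the MLE
se.obs <- sqrt(diag(solve(fit$hessian)))

## expected information per observation for the gamma(shape, rate) family
I1 <- matrix(c(trigamma(shape), -1 / rate,
               -1 / rate,       shape / rate^2), 2, 2)
se.exp <- sqrt(diag(solve(n * I1)))

rbind(observed = se.obs, expected = se.exp)  # the two should agree closely here

In this parametrisation the second derivatives of the gamma log-likelihood do
not involve the data directly, so the observed and expected information
should coincide here apart from numerical error in the finite-difference
Hessian.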