Dear All,

I am trying to solve a Generalized Method of Moments problem, which requires computing the gradient of the moment conditions to get the standard errors of the estimates. I know optim does not output the gradient, but I can use numericDeriv to get it. My question is: is this the best function for the job?

Thank you,
Jean
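For concreteness, here is a minimal sketch of the kind of numericDeriv call in question; the objective f, the data x and y, and the parameter values are made-up placeholders, not from the thread. numericDeriv evaluates an expression and attaches a finite-difference "gradient" attribute with respect to the named parameters.

## hypothetical objective and data, just to show the call
f <- function(a, b) sum((y - a * exp(b * x))^2)
x <- 1:10
y <- 2 * exp(0.3 * x)
a <- 2; b <- 0.3
numericDeriv(quote(f(a, b)), c("a", "b"))  # value with a "gradient" attribute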
optim(..., hessian=TRUE, ...) outputs a list with a component hessian, which is the matrix of second derivatives of the log(likelihood) at the minimum. If your objective function is -log(likelihood), then optim(..., hessian=TRUE)$hessian is the observed information matrix. If eigen(...$hessian)$values are all positive, with at most a few orders of magnitude between the largest and smallest, then the matrix is invertible, and the square roots of the diagonal elements of its inverse give standard errors for the normal approximation to the distribution of the parameter estimates.

With objective functions that may not always be well behaved, I find that optim sometimes stops short of the optimum. I run it with method = "Nelder-Mead", "BFGS", and "CG", then restart the algorithm, giving the best answer to one of the other algorithms. Doug Bates and Brian Ripley could probably suggest something better, but this has produced acceptable answers for me in several cases, and I did not push it beyond that.

Hope this helps.

Spencer Graves
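In code, the recipe above might look like the following sketch, where negll (the negative log-likelihood) and start (the starting values) are placeholders for the user's own objects:

fit <- optim(start, negll, method = "BFGS", hessian = TRUE)

## check that the Hessian is positive definite and not too ill-conditioned
ev <- eigen(fit$hessian, symmetric = TRUE)$values
if (all(ev > 0) && max(ev) / min(ev) < 1e8) {
    se <- sqrt(diag(solve(fit$hessian)))  # normal-approximation std. errors
}

## if convergence is doubtful, restart from the best point so far
## with a different method
fit2 <- optim(fit$par, negll, method = "Nelder-Mead", hessian = TRUE)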
True, true. However, I am not estimating via MLE. The objective function is a bunch of moment conditions weighted according to the uncertainty of the moments (i.e., an estimate of the asymptotic variance-covariance matrix of the moments, not of the estimates). Technically it looks more like a weighted nonlinear least squares problem. I have a bunch of moments that look like

E(e_{ik} z_i) = 0,

where e_{ik} is the error term, a nonlinear function of the parameters at observation i, and z_i is an instrument (the model has endogenous covariates). The subscript k indicates that there is more than one functional form for the residuals (a simultaneous equation system that is nonlinear). One of them looks like

e_{ik} = \ln(p - {1 \over \alpha} \Delta^{-1}) - W\theta.

There are two more. I am interested in estimating \alpha and \theta (with \theta \in R^k), in addition to other parameters in the other equations. I only want to use these moment conditions rather than assuming knowledge of the distribution of the error term. At the end of the day, I need to use the delta method to get an estimate of the standard errors.

Hope this clarifies things a bit more.
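A hedged sketch of the kind of objective this describes; resid_fn (returning the n x k matrix of residuals e_{ik}), the instrument matrix Z, and the weight matrix W (an estimate of the inverse variance-covariance matrix of the moments) are placeholders for the poster's model, not code from the thread:

gmm_obj <- function(theta, resid_fn, Z, W) {
    e <- resid_fn(theta)                       # n x k residuals, e[i, k] = e_{ik}
    g <- as.vector(crossprod(Z, e) / nrow(Z))  # stacked sample moments E(z_i e_{ik})
    drop(t(g) %*% W %*% g)                     # quadratic form in the moments
}

## minimized with, e.g., optim(start, gmm_obj, resid_fn = ..., Z = Z, W = W)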
'Best' depends on what you want. If you want an accurate numerical estimate of the gradient, then you might look at the function gradRichardson in the curve package of the dseplus bundle in the devel area of CRAN. It uses Richardson extrapolation to improve the accuracy. However, this is not the program to use if you want a quick numerical estimate of the gradient; most anything else you might think of using will be quicker.

There are some other programs around too. A year or two ago there was some discussion of putting various gradient calculation techniques together in one place. I don't think anything has happened yet.

Paul Gilbert
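To illustrate the idea (a minimal sketch of Richardson extrapolation, not the dseplus code): combining central differences at steps h and h/2 cancels the leading O(h^2) error term, trading extra function evaluations for accuracy.

richardson_grad <- function(f, x, h = 1e-4) {
    cd <- function(hh) sapply(seq_along(x), function(i) {
        e <- replace(numeric(length(x)), i, hh)
        (f(x + e) - f(x - e)) / (2 * hh)   # central difference in coordinate i
    })
    (4 * cd(h / 2) - cd(h)) / 3            # Richardson combination
}

richardson_grad(function(p) sum(p^2), c(1, 2))  # gradient of sum(p^2): c(2, 4)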
Hi All:

Along the lines of this thread, I was wondering about the usefulness of putting together a package for numerical differentiation, to perform tasks such as gradient, Jacobian, and Hessian calculations for exact functions, as well as for noisy functions (via some type of smoothing). Based on finite difference calculus, I have written a number of (simple) R functions for gradient, Jacobian, and Hessian computations of different orders of accuracy. If such functions aren't already available, I could supply these to whoever may be interested in undertaking this project.

Best,
Ravi.
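As an example of the kind of finite-difference routine described here (a sketch assuming a smooth scalar function f, with O(h^2) accuracy; not Ravi's actual code):

fd_hessian <- function(f, x, h = 1e-4) {
    n <- length(x)
    H <- matrix(0, n, n)
    for (i in 1:n) for (j in 1:n) {
        ei <- replace(numeric(n), i, h)
        ej <- replace(numeric(n), j, h)
        ## central cross-difference for the (i, j) second partial
        H[i, j] <- (f(x + ei + ej) - f(x + ei - ej) -
                    f(x - ei + ej) + f(x - ei - ej)) / (4 * h^2)
    }
    (H + t(H)) / 2  # symmetrize to clean up rounding error
}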