Doing a Hessian estimate at each Nelder-Mead iteration is rather like going from
den Haag to Delft as a pedestrian, walking and swimming via San Francisco. The
structure of the algorithm means the Hessian estimate is done in addition to the
NM work.
While my NM code was used for optim(), I didn't do the interfacing. The reporting
choices are reasonably good, but they don't necessarily suit your current needs.
I'd recommend going to r-forge and installing my updated BFGS code. See
http://r-forge.r-project.org/R/?group_id=395 for a list of the codes; Rvmmin is
the one you want. It is all in R, so you can put output where you choose. It also
has bounds constraints, which are quite useful for keeping the search from roaming
into unsuitable areas of the parameter space. While Rvmmin works best with analytic
gradients, it does OK most of the time with numerical approximations. It keeps an
approximate inverse Hessian, but I would not assume that bears much resemblance to
the real Hessian.
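To give the flavour, something like the following should work (a sketch written
from memory, so do check ?Rvmmin for the exact argument and control names in the
version you install from r-forge). It minimizes the Rosenbrock function with
bounds and a trace level so progress is printed:

  install.packages("Rvmmin", repos = "http://R-Forge.R-project.org")
  library(Rvmmin)

  ## Rosenbrock test function and its analytic gradient
  fr <- function(x) 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2
  grr <- function(x) c(-400 * x[1] * (x[2] - x[1]^2) - 2 * (1 - x[1]),
                       200 * (x[2] - x[1]^2))

  ans <- Rvmmin(par = c(-1.2, 1), fn = fr, gr = grr,
                lower = c(-2, -2), upper = c(2, 2),
                control = list(trace = 1))  ## trace > 0 prints progress
  ans$par    ## parameter estimates at the end
  ans$value  ## function value at the minimum

Because the whole routine is R code, you can also edit it directly to print or
save the current parameter vector wherever you like.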
Package ucminf uses essentially the same algorithm (unconstrained only), and its
detailed tactics seem well thought out. However, I don't know how well the
reporting can be controlled (it is an R interface to Fortran code).
A derivative-free method that may be worth a try is bobyqa in the minqa package
at the same site as above. This is Mike Powell's code. The output can be made
quite detailed by pushing the reporting control (iprint) higher. Again it is an
R -> Fortran interface (thanks to Kate Mullen).
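A sketch along the same lines (again, check ?bobyqa for the control settings in
the version you get):

  install.packages("minqa", repos = "http://R-Forge.R-project.org")
  library(minqa)

  fr <- function(x) 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2  ## Rosenbrock again

  ans <- bobyqa(par = c(-1.2, 1), fn = fr,
                lower = c(-2, -2), upper = c(2, 2),
                control = list(iprint = 3))  ## 3 is the most detailed reporting
  ans$par   ## parameter estimates
  ans$fval  ## function value at the minimum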
Ravi Varadhan has several NM versions in R also, but I don't think they are yet
on r-forge.
If you try any of these, you can help us improve them by reporting
success/failure off-list. We believe that they are in pretty good shape, but
there are always interfacing and tuning issues.
Cheers, JN
> Date: Thu, 10 Dec 2009 12:40:17 +0100
> From: Lisanne Sanders <lisan_sanders at hotmail.com>
> Subject: [R] obtain intermediate estimate using optim
> To: <r-help at r-project.org>
>
>
> Hi,
>
> Currently I am trying to solve a minimization problem using optim with method
> Nelder-Mead. However, Nelder-Mead needs many iterations until it finally
> converges. I have set control$trace and control$REPORT so that I can see the
> value of the function at each iteration. I do see that I have set the
> convergence criteria too strictly, in the sense that the function value hardly
> changes any more. However, before loosening my convergence criteria, I was
> wondering how to program this so that I can see the intermediate estimates of
> the parameters and of the Hessian, to check whether they have also stopped
> changing much. Then I can adjust my convergence criteria so that the algorithm
> stops at that point. I do know how to adjust the convergence parameters, but I
> do not know how to obtain intermediate estimates of the parameters. I was
> wondering whether someone can help me with this.
>
> Kind regards,
>
> Lisanne Sanders