It's difficult to say much at this level of generality, but I have
four suggestions:
1. Have you tried creating a reasonable grid of starting values
using 'expand.grid' and then plotting the resulting likelihood
surface? If you have more than 2 parameters, you may want to use
'lattice' graphics. This should tell you whether the function seems
unimodal, convex, etc., in the region covered by your grid and at its
resolution (see the first sketch after this list).
2. Have you tried method = "SANN", i.e., simulated annealing? I
might try one pass with SANN, then refine the solution found by SANN
using BFGS (a second sketch below shows the idea).
3. After you have a solution, you can then try the profile
likelihood (a brief sketch follows this list). Unfortunately, my
experience with profile.mle has been mixed. I actually made local
copies of mle and profile.mle and found and fixed some of the
deficiencies of each. I didn't test them enough to offer the results
to the R Core Team, however.
4. Have you looked at Venables and Ripley (2002) Modern Applied
Statistics with S, 4th ed. (Springer)? It's a great book for many
things, including the use of expand.grid and 'optim'.
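To make suggestion 1 concrete, here is a minimal sketch using a
made-up two-parameter normal model; the data, parameter ranges and
grid resolution are invented for illustration and are not your
actual problem:

library(lattice)

## fake data and a toy negative log-likelihood, for illustration only
set.seed(1)
y <- rnorm(100, mean = 5, sd = 2)
negll <- function(mu, sigma) -sum(dnorm(y, mean = mu, sd = sigma, log = TRUE))

## grid of candidate starting values
grid <- expand.grid(mu    = seq(3, 7, length.out = 41),
                    sigma = seq(0.5, 4, length.out = 41))
grid$nll <- mapply(negll, grid$mu, grid$sigma)

## likelihood surface; with more than 2 parameters, condition on the
## extras, e.g. levelplot(nll ~ mu * sigma | theta, data = grid)
levelplot(nll ~ mu * sigma, data = grid, contour = TRUE)

## the grid minimum is a sensible starting value for the optimizer
grid[which.min(grid$nll), ]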
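For suggestion 2, a sketch of the SANN-then-BFGS idea, continuing
with the made-up negll() and data from the previous sketch:

library(stats4)

## one pass of simulated annealing: fairly robust to poor starting
## values, but slow and not very precise
rough <- optim(c(mu = 4, sigma = 1),
               function(p) negll(p[1], p[2]),
               method = "SANN")

## refine the SANN solution with BFGS via mle(); if a parameter must
## stay positive, you may prefer to optimize its log instead
fit <- mle(negll,
           start = list(mu = unname(rough$par["mu"]),
                        sigma = unname(rough$par["sigma"])),
           method = "BFGS")
summary(fit)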
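And for suggestion 3, a brief sketch of examining the profile
likelihood of the hypothetical fit above, using the stats4 methods
for 'mle' objects:

pr <- profile(fit)   # profile.mle; can fail if the surface is nasty
plot(pr)             # profile traces; kinks or asymmetry are warning signs
confint(fit)         # likelihood-based confidence intervals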
Hope this helps.
Spencer Graves
Rainer M Krug wrote:
> Hi
>
> I hope this is the right forum - if not, please point me to a better one.
>
> I am using R 2.3.0 on Linux, SuSE 10.
>
>
> I have a question concerning mle (method="BFGS").
>
> I have a few models which I am fitting to existing data points. I
> realised that the likelihood is quite sensitive to the start values
> for one parameter.
>
> I am wondering: what is the best approach to identify the right initial
> values? Do I have to do it recursively, and if so, how can I automate
> it? Or do I just have to experiment by hand?
>
> I am quite confident that the resulting parameters are optimal for
> my problem - but can I verify it?
>
> Thanks,
>
> Rainer