optimx with BFGS calls optim underneath, so you actually incur some overhead
unnecessarily. And BFGS really needs good gradients (as do Rvmmin and Rcgmin,
which are updated BFGS and CG codes, but written entirely in R and with bounds
or box constraints).
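If you can supply an analytic gradient, pass it to optim via the gr argument.
A minimal sketch, using a made-up quadratic in place of your objective (obj.fy
was not posted, so this is purely illustrative):

  ## illustrative stand-in for the real objective
  f.test <- function(x) sum((x - c(1, 2))^2)
  g.test <- function(x) 2 * (x - c(1, 2))   # analytic gradient of f.test
  optim(par = c(0, 0), fn = f.test, gr = g.test, method = "BFGS")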
From the Hessian, your function is one (of many!) that has pretty bad
numerical properties. With all 0s, Newton is spinning in his grave. Probably
the gradient is small also, so the optimizers decide they are at a minimum.
As a first step, I'd suggest:
- checking that the function is computed correctly, i.e., does your function
  return the correct value?
- trying a few other points "nearby": are any "lower" than your first point?
- using numDeriv to get the gradient (and possibly the Hessian) at each of
  these nearby points (a sketch follows this list).
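A sketch of those checks with numDeriv (theta0 is your starting vector from
the post below; the random perturbations are just one simple choice):

  library(numDeriv)
  theta0 <- c(0.6, 1.6, 0.6, 1.6, 0.7)
  ## gradient and Hessian at the starting point
  print(grad(obj.fy, theta0))
  print(hessian(obj.fy, theta0))
  ## probe a few nearby points for lower function values
  for (i in 1:5) {
    th <- theta0 + runif(length(theta0), -0.05, 0.05)
    cat("f =", obj.fy(th),
        " |grad| =", sqrt(sum(grad(obj.fy, th)^2)), "\n")
  }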
These steps may reveal either that you have a bug in the function, or that it
is pretty nasty numerically. In the latter case, you really need to try to
find an equivalent function, e.g., log(f), that can be minimized more easily.
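That transformation only makes sense if f is strictly positive over the region
searched; a minimal sketch (f and x0 stand in for your objective and start):

  ## assumes f(x) > 0 wherever the optimizer looks
  log.f <- function(x) log(f(x))
  optim(par = x0, fn = log.f, method = "BFGS")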
For information, I'm rather slowly working on a function test suite to do
this. Also, a lot of changes are going on in optimx to try to catch some of
the various nasties. These appear first in the R-forge development versions.
Use and comments welcome.
If you DO find a lower point, then I'd give Nelder-Mead a try. Ravi Varadhan
has a variant of this in dfoptim that may do a little better (sketch below).
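A sketch with dfoptim's nmk (theta0 and obj.fy as in the post below):

  library(dfoptim)
  res <- nmk(par = theta0, fn = obj.fy)   # derivative-free Nelder-Mead variant
  res$par
  res$value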
You could also be a bit lazy and try optimx with the control
"all.methods=TRUE". Not recommended for production use, but often helpful in
seeing if any method can get some traction.
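For example (same objective and start as in your post; expect it to be slow):

  library(optimx)
  optimx(par = theta0, fn = obj.fy,
         control = list(all.methods = TRUE, maxit = 10000))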
Cheers,
JN
On 08/13/2011 06:00 AM, r-help-request at r-project.org wrote:
> Date: Sat, 13 Aug 2011 01:12:09 -0700 (PDT)
> From: Kathie <kathryn.lord2000 at gmail.com>
> To: r-help at r-project.org
> Subject: [R] optimization problems
>
> Dear R users,
>
> I am trying to use OPTIMX (OPTIM) for nonlinear optimization. There is no
> error in my code, but the results are weird (see below). When I ran it via
> OPTIM, the results were as follows. Initial values are theta0 = 0.6, 1.6,
> 0.6, 1.6, 0.7. (In fact, the true values are 0.5, 1.0, 0.8, 1.2, 0.6.)
>
--------------------------------------------------------------------------------------------
> > optim(par=theta0, fn=obj.fy, method="BFGS",
> +       control=list(trace=1, maxit=10000), hessian=T)
> initial value -0.027644
> final value -0.027644
> converged
> $par
> [1] 0.6 1.6 0.6 1.6 0.7
>
> $value
> [1] -0.02764405
>
> $counts
> function gradient
> 1 1
>
> $convergence
> [1] 0
>
> $message
> NULL
>
> $hessian
> [,1] [,2] [,3] [,4] [,5]
> [1,] 0 0 0 0 0
> [2,] 0 0 0 0 0
> [3,] 0 0 0 0 0
> [4,] 0 0 0 0 0
> [5,] 0 0 0 0 0
>
--------------------------------------------------------------------------------------------
>
> When I ran it via OPTIMX, the results were:
>
>
--------------------------------------------------------------------------------------------
> > optimx(par=theta0, fn=obj.fy, method="BFGS",
> +        control=list(maxit=10000), hessian=T)
>                       par     fvalues method fns grs itns conv KKT1 KKT2 xtimes
> 1 0.6, 1.6, 0.6, 1.6, 0.7 -0.02764405   BFGS   1   1 NULL    0 TRUE   NA   8.71
>
--------------------------------------------------------------------------------------------
>
> Whenever I used different initial values, the initial values were returned
> as the answer by OPTIMX (OPTIM).
>
> Would you please explain why this happened? Any suggestion will be greatly
> appreciated.
>
> Regards,
>
> Kathryn Lord
>