Dear R users,
When I use optimx with BFGS, I get the following error message:
-----------------------------------------------------------------
> optimx(par=theta0, fn=obj.fy, gr=gr.fy, method="BFGS")
Error: Gradient function might be wrong - check it!
-----------------------------------------------------------------
So I checked my gradient function line by line, over and over, but I
could not find anything wrong.
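For completeness, this is essentially how I verified the gradient numerically as well (a minimal sketch; as far as I understand, optimx's start-up test makes a similar analytic-versus-numerical comparison using the numDeriv package):
-----------------------------------------------------------------
## Sketch: compare the analytic gradient gr.fy with a numerical
## approximation of obj.fy's gradient at the starting values.
library(numDeriv)
ga <- gr.fy(theta0)          # analytic gradient
gn <- grad(obj.fy, theta0)   # numerical gradient (Richardson by default)
max(abs(ga - gn))            # should be near zero if gr.fy is correct
-----------------------------------------------------------------
This comparison did not reveal any problem either.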
When I remove the gradient, I get:
-----------------------------------------------------------------
> optimx(par=theta0, fn=obj.fy, method="BFGS")
                                                     par     fvalues method
1 0.4423958, 0.9665069, 0.7920856, 1.1952092, 0.3083377 -0.01733672   BFGS
  fns grs itns conv KKT1  KKT2 xtimes
1  35  22 NULL    0 TRUE FALSE  76.02
-----------------------------------------------------------------
where the true theta is (0.5, 1.0, 0.8, 1.2, 0.6); the last component in
particular is far off.
However, I got noticeably better results when I tried optim with the same gradient:
-----------------------------------------------------------------
> optim(par=theta0, fn=obj.fy, gr=gr.fy, method="BFGS")
$par
[1] 0.5004394 0.9999669 0.8035140 1.1996053 0.5989842
$value
[1] -0.01717598
$counts
function gradient
54 8
$convergence
[1] 0
$message
NULL
-----------------------------------------------------------------
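To quantify "better", here is the maximum absolute deviation of each solution from the true theta, using the estimates printed above:
-----------------------------------------------------------------
theta.true   <- c(0.5, 1.0, 0.8, 1.2, 0.6)
theta.optimx <- c(0.4423958, 0.9665069, 0.7920856, 1.1952092, 0.3083377)
theta.optim  <- c(0.5004394, 0.9999669, 0.8035140, 1.1996053, 0.5989842)
max(abs(theta.optimx - theta.true))   # about 0.29 (the last component)
max(abs(theta.optim  - theta.true))   # about 0.0035
-----------------------------------------------------------------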
Of course, I tried several different data sets and got similar results.
If the gradient function were really wrong, why are the results of optim with
the gradient better? Weird, isn't it?
As far as I know, optimx is supposed to handle gradients at least as
carefully as optim. Would you please explain why these results happen?
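A workaround I am considering is to skip optimx's start-up tests so that the analytic gradient is used directly (a sketch; I am assuming the starttests control option in my version of optimx behaves as documented):
-----------------------------------------------------------------
## Sketch: disable optimx's initial checks, including the
## analytic-versus-numerical gradient comparison.
library(optimx)
optimx(par=theta0, fn=obj.fy, gr=gr.fy, method="BFGS",
       control=list(starttests=FALSE))
-----------------------------------------------------------------
Still, I would like to understand why the check fails in the first place.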
Regards,
Kathryn Lord