On Mon, 11 Oct 2004, Laura Holt wrote:
> Dear R People:
>
> I am trying to duplicate the example from Dennis and Schnabel's
> "Numerical Methods for Unconstrained Optimization and Nonlinear
> Equations", which starts on page 149.
>
> My reason for doing so: to try to understand the "nlm" function.
>
> Here is the function:
>> mfun1
> function(x) {
> z <- matrix(0,nrow=2,ncol=1)
> z[1,1] <- x[1]^2 + x[2]^2 - 2
> z[2,1] <- exp(x[1]-1) + x[2]^3 - 2
> res <- 0.5*t(z)%*%z
> res
> }
>
> This function has a root at c(1,1).
> When I use the following:
>> nlm(mfun1,c(2,0.5))
> $minimum
> [1] 0.09168083
>
> $estimate
> [1] 1.485078e+00 -4.973395e-07
>
> $gradient
> [1] -8.120649e-09 1.096345e-09
>
> $code
> [1] 1
>
> $iterations
> [1] 19
>
> I get the solution of 1.485078 and zero.
>
You might want to look at the function surface, e.g.:
dd <- expand.grid(x = seq(0, 2, length = 20), y = seq(0, 2, length = 20))
yy <- apply(dd, 1, mfun1)   # evaluate the objective over the grid
contour(seq(0, 2, length = 20), seq(0, 2, length = 20), matrix(yy, 20), nlevels = 40)
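If you prefer a 3-D view of the same grid, something along these lines
should also work:

persp(seq(0, 2, length = 20), seq(0, 2, length = 20), matrix(yy, 20))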
The function is nearly flat on a fairly wide band where either x or y is 1
or slightly larger, and so is far from quadratic in that region. You need
better starting values, or some rescaling of the objective function.
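A rough sketch of both ideas (the starting value and the scaling
settings below are only illustrative, not taken from the book, so do
check what they actually return):

nlm(mfun1, c(1.2, 0.8))                         # start closer to the root at (1,1)
nlm(mfun1, c(2, 0.5), typsize = c(1, 1), fscale = 0.1)   # nlm's typsize/fscale scaling arguments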
Another optimisation algorithm might give different results, and in fact
if you try optim() you end up at (1,1) with method="Nelder-Mead" and
method="BFGS", but at (1.48,0) with method="CG" or method="L-BFGS-B".
-thomas