1) Why use Nelder-Mead through optimx when it is already a native optim() method? You are going from New York to Philadelphia via Beijing because of the extra overhead. The NM method is included in optimx mainly for convenience in comparisons.

2) NM cannot work with NA when it wants to compute the centroid of the simplex points and the search direction. So you have to find a way to make sure your likelihood is properly defined (i.e. finite) wherever the optimizer may evaluate it. In my recent experience this is the issue in roughly 90% of failures with optim()/optimx or other ML estimation methods. Note that returning a large finite value instead (a good deal smaller than .Machine$double.xmax, say that number * 1e-6, to avoid computational trouble) often works, but it is a quick-and-dirty fix; a minimal sketch of that approach follows the quoted message below.

JN

On 13-08-13 06:00 AM, r-help-request at r-project.org wrote:
> Message: 36
> Date: Tue, 13 Aug 2013 10:38:05 +0200
> From: Carlos Nasher <carlos.nasher at googlemail.com>
> To: r-help at r-project.org
> Subject: [R] [optim/bbmle] function returns NA at ... distance from x
>
> Dear R helpers,
>
> I try to find the model parameters using mle2 (bbmle package). When I try to
> optimize the likelihood function, the following error message occurs:
>
> Error in grad.default(objectivefunction, coef) :
>   function returns NA at
>   1e-040.001013016911639890.0003166929388711890.000935163594829395 distance
>   from x.
> In addition: Warning message:
> In optimx(par = c(0.5, 10, 0.7, 10), fn = function (p) :
>   Gradient not computable after method Nelder-Mead
>
> I can't figure out what that means exactly and how to fix it. I understand
> that mle2 uses optim (or in my case optimx) to optimize the likelihood
> function. As I use the Nelder-Mead method, it should not be a problem if the
> function returns NA at any iteration (as long as the initial values don't
> return NA). Can anyone help me with that?
>
> Here is a small example of my code that reproduces the problem:
>
> library(plyr)
> library(optimx)
>
> ### Sample data ###
> x <- c(1,1,4,2,3,0,1,6,0,0)
> tx <- c(30.14, 5.14, 24.43, 10.57, 25.71, 0.00, 14.14, 32.86, 0.00, 0.00)
> T <- c(32.57, 29.14, 33.57, 34.71, 27.71, 38.14, 36.57, 37.71, 35.86, 30.57)
> data <- data.frame(x=x, tx=tx, T=T)
>
> ### Likelihood function ###
> Likelihood <- function(data, r, alpha, s, beta) {
>   with(data, {
>     if (r<=0 | alpha<=0 | s<=0 | beta<=0) return(NaN)
>     f <- function(x, tx, T) {
>       g <- function(y) (y + alpha)^(-(r + x))*(y + beta)^(-(s + 1))
>       integrate(g, tx, T)$value
>     }
>     integral <- mdply(data, f)
>     L <- exp(lgamma(r+x)-lgamma(r)+r*(log(alpha)-log(alpha+T))-x*log(alpha+T)+s*(log(beta)-log(beta+T))) +
>          exp(lgamma(r+x)-lgamma(r)+r*log(alpha)+log(s)+s*log(beta)+log(integral$V1))
>     f <- sum(log(L))
>     return(f)
>   })
> }
>
> ### ML estimation function ###
> Estimate_parameters_MLE <- function(data, initValues) {
>   llhd <- function(r, alpha, s, beta) {
>     return(Likelihood(data, r, alpha, s, beta))
>   }
>   library(bbmle)
>   fit <- mle2(llhd, initValues, skip.hessian=TRUE, optimizer="optimx",
>               method="Nelder-Mead", control=list(maxit=1e8))
>   return(fit)
> }
>
> ### Parameter estimation ###
> Likelihood(data=data, r=0.5, alpha=10, s=0.7, beta=10) ### check initial parameters --> -72.75183 --> initial parameters do return a value
> MLE_estimation <- Estimate_parameters_MLE(data=data, list(r=0.5, alpha=10, s=0.7, beta=10))
>
> 'Error in grad.default(objectivefunction, coef) :
>   function returns NA at
>   1e-040.001013016911639890.0003166929388711890.000935163594829395 distance
>   from x.
> In addition: Warning message:
> In optimx(par = c(0.5, 10, 0.7, 10), fn = function (p) :
>   Gradient not computable after method Nelder-Mead'
>
> Best regards,
> Carlos
>
> -----------------------------------------------------------------
> Carlos Nasher
> Buchenstr. 12
> 22299 Hamburg
>
> tel: +49 (0)40 67952962
> mobil: +49 (0)175 9386725
> mail: carlos.nasher at gmail.com
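
P.S. For what it is worth, here is a minimal sketch of the quick-and-dirty penalty idea from point 2 above. It is not Carlos's model: negll() is a made-up negative log-likelihood and the wrapper name safe_negll() is invented purely for illustration. The only point is that the optimizer always receives a finite value, so Nelder-Mead can build its simplex and the post-fit gradient check does not hit NA.

library(optimx)

big <- .Machine$double.xmax * 1e-6     # large, but well below double.xmax

## made-up negative log-likelihood; not finite for non-positive parameters
negll <- function(p) {
  -sum(dgamma(c(1.2, 0.8, 2.5), shape = p[1], rate = p[2], log = TRUE))
}

## wrapper: never hand NA/NaN/Inf back to the optimizer
safe_negll <- function(p) {
  if (any(p <= 0)) return(big)         # reject infeasible parameters outright
  val <- negll(p)
  if (!is.finite(val)) return(big)     # map any NA/NaN/Inf to a large finite penalty
  val
}

fit <- optimx(par = c(1, 1), fn = safe_negll, method = "Nelder-Mead")

The cleaner fix is still to keep the likelihood properly defined in the first place, for example by using a bounds-constrained method or log-transforming the parameters so they stay positive, rather than papering over the NAs with a penalty.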