Displaying 20 results from an estimated 30000 matches similar to: "calculating a gradient in optim without calling a separate function"
2003 Mar 27
1
How to obtain final gradient estimation from optim
I use optim to compute maximum likelihood estimates without giving an
analytical gradient to optim. However, I would like to
get an output of the final numerical gradient vector and the final matrix of
contributions to the gradient. But I did not
find any mention of this kind of output in the help pages. Does anyone know how to
do that?
Stephane Luchini
GREQAM
Marseille, France
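One way to get this after the fact, sketched here with a toy normal likelihood and the numDeriv package (illustrative only, not code from the thread), is to evaluate the numerical gradient at optim()'s solution yourself:

library(numDeriv)   # numerical gradients; install.packages("numDeriv") if needed

set.seed(1)
y_obs <- rnorm(200, mean = 5, sd = 2)

negll <- function(par, y) -sum(dnorm(y, mean = par[1], sd = exp(par[2]), log = TRUE))

fit <- optim(c(0, 0), negll, y = y_obs, method = "BFGS")

## Final numerical gradient of the objective at the optimum (should be near 0).
grad(negll, fit$par, y = y_obs)

## n x k matrix of per-observation contributions to the gradient.
G <- t(sapply(y_obs, function(yi) grad(negll, fit$par, y = yi)))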
2011 Nov 30
1
How can I pick a matrix from a function? (Outer Product of Gradient)
Hi all,
I would like to use optim() to estimate the equation by the log-likelihood
function and gradient function which I had written. I am trying to use OPG (Outer
Product of Gradients) to calculate the Hessian matrix, since the Hessian
matrix is sometimes difficult to calculate. Thus I want to pick the gradient matrix
from the gradient function.
Moreover, could R show the process of calculation on gradient
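For context, a minimal sketch of the OPG (outer product of gradients) idea, on a toy exponential likelihood with numDeriv (illustrative only, not the poster's model): the information matrix is approximated by the cross-product of the per-observation score contributions, so no second derivatives are needed.

library(numDeriv)

set.seed(1)
y <- rexp(200, rate = 2)
negll_i <- function(lrate, yi) -dexp(yi, rate = exp(lrate), log = TRUE)  # one observation
negll   <- function(lrate) sum(sapply(y, negll_i, lrate = lrate))

fit <- optim(0, negll, method = "BFGS")

## n x 1 matrix of per-observation score contributions at the estimate.
G <- as.matrix(sapply(y, function(yi) grad(negll_i, fit$par, yi = yi)))

vcov_opg <- solve(crossprod(G))   # OPG approximation to the covariance matrix
sqrt(diag(vcov_opg))              # approximate standard error (log-rate scale)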
2009 Nov 29
1
optim or nlminb for minimization, which to believe?
I have constructed the function mml2 (below) based on the likelihood function described in the minimal LaTeX I have pasted below for anyone who wants to look at it. This function finds parameter estimates for a basic Rasch (IRT) model. Without the gradient, either nlminb or optim returns the correct parameter estimates and, in the case of optim, the correct standard
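Running the two optimisers side by side on the same objective is the usual way to compare them; a toy sketch (not the Rasch code from the post):

set.seed(2)
y <- rnorm(100, 3, 1.5)
nll <- function(par) -sum(dnorm(y, mean = par[1], sd = exp(par[2]), log = TRUE))

fit_optim  <- optim(c(0, 0), nll, method = "BFGS", hessian = TRUE)
fit_nlminb <- nlminb(c(0, 0), nll)

rbind(optim = fit_optim$par, nlminb = fit_nlminb$par)   # should agree closely
sqrt(diag(solve(fit_optim$hessian)))                    # standard errors from optim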
2011 Sep 23
0
Error message when using 'optim' for numerical maximum likelihood
Hello All,
I am trying to estimate the parameters of a stochastic differential equation
(SDE) using quasi-maximum likelihood methods but I am having trouble with
the 'optim' function that I am using to optimise the log-likelihood
function.
After simulating the SDE I generated samples of the simulated data of
varying size (I want to see what effect adding more observations has on the
2003 Sep 08
1
Probit and optim in R
I have had some weird results using the optim() function. I wrote a
probit likelihood and wanted to run it with optim() with simulated
data. I did not include a gradient at first and found that optim()
would not even iterate using BFGS and would only occasionally work
using SANN. I programmed in the gradient and it iterates fine, but the
estimates it returns are wrong. The simulated data work
2003 Jan 17
1
supplying gradient to constrOptim()
Hi, I'm very interested in using the constrOptim() function currently in
the R-devel sources. In particular, I'm trying to fit point process
conditional intensity models via maximum likelihood. However, I noticed
that the gradient of the objective function must be supplied for all but
the Nelder-Mead method. I was wondering why this is, since optim()
itself does not require a gradient
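For anyone trying this, constrOptim() takes the gradient as its grad argument; a toy sketch with the linear constraint x + y >= 1 (not related to the point-process models above):

## Minimise f(x, y) = (x - 2)^2 + (y - 3)^2 subject to x + y >= 1.
f  <- function(p) (p[1] - 2)^2 + (p[2] - 3)^2
gr <- function(p) c(2 * (p[1] - 2), 2 * (p[2] - 3))

constrOptim(theta = c(2, 2),                   # must start strictly inside the region
            f = f, grad = gr,
            ui = matrix(c(1, 1), nrow = 1), ci = 1,
            method = "BFGS")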
2009 Mar 02
1
initial gradient and vmmin not finite
Dear Rhelpers
I have a problem with initial values; could you please tell me how to solve it?
Thank you
June
> p = summary(maxLik(fr,start=c(0,0,0,1,0,-25,-0.2)))
Error in maxRoutine(fn = logLik, grad = grad, hess = hess, start = start, :
NA in the initial gradient
> p = summary(maxLik(fr,start=c(0,0,0,1,0,-25,-0.2),method="BFGS"))
Error in optim(start, func, gr =
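The error usually means the likelihood, or its numerical gradient, is not finite at the starting values. A generic illustration with a toy likelihood (the poster's fr is not reproduced here):

library(numDeriv)

loglik <- function(par) sum(dnorm(1:5, mean = par[1], sd = par[2], log = TRUE))

start_bad  <- c(0, 0)    # sd = 0 makes the log-likelihood non-finite
start_good <- c(3, 1)

loglik(start_bad)         # -Inf: the optimiser cannot build an initial gradient here
grad(loglik, start_good)  # finite gradient at a sensible start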
2002 Mar 21
2
optim with gradient
> Date: Wed, 20 Mar 2002 14:31:03 +0100 (CET)
> From: Göran Broström <gb at stat.umu.se>
> Subject: [R] optim with gradient
>
> I want to maximise a function using 'optim' with a method that requires
> the gradient, so I supply two functions, 'fun' for the function value
> and 'd.fun' for its gradient. My question is: Since
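A common sanity check in this setting is to compare the supplied analytic gradient with a numerical one before trusting the optimiser; a toy sketch (not Göran's functions):

library(numDeriv)

fun   <- function(p) sum((p - c(1, 2))^2)   # toy objective
d.fun <- function(p) 2 * (p - c(1, 2))      # its analytic gradient

p0 <- c(0.5, 0.5)
max(abs(d.fun(p0) - grad(fun, p0)))         # should be essentially zero

optim(p0, fun, gr = d.fun, method = "BFGS") # gradient-based method with gr supplied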
2003 Nov 17
0
gradient option in 'nlm' function
<FONT face="Default Sans Serif, Verdana, Arial, Helvetica, sans-serif" size=2><DIV>Dear list members,</DIV><DIV> </DIV><DIV>I am trying to use "nlm" function to maximize a mixture likelihood of beta densities. There are five unknown parameters in the likelihood. Since I can get the analytic gradient, I attach the "gradient"
2008 Sep 28
0
constrained logistic regression: Error in optim() with method = "L-BFGS-B"
Dear R Users/Experts,
I am using a function called logitreg() originally described in MASS (the
book, 4th ed.) by Venables & Ripley, p. 445. I used the code as provided but
made a couple of changes to run a 'constrained' logistic regression: I set the
method = "L-BFGS-B" and set lower/upper values for the variables.
Here is the function:
logitregVR <- function(x, y, wt =
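The gist of the approach, sketched on simulated data with a plain binomial negative log-likelihood (this is not the logitreg() code from MASS):

set.seed(4)
n <- 300
X <- cbind(1, rnorm(n))
y <- rbinom(n, 1, plogis(X %*% c(0.5, 1)))

negll <- function(b) {
  eta <- drop(X %*% b)
  -sum(y * eta - log1p(exp(eta)))           # binomial negative log-likelihood
}
optim(c(0, 0), negll, method = "L-BFGS-B",
      lower = c(-1, 0), upper = c(1, 2))    # illustrative bounds on the coefficients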
2008 Sep 29
0
Logistic Regression using optim() give "L-BFGS-B" error, please help
Sorry, I deleted my old post. Pasting the new query below.
Dear R Users/Experts,
I am using a function called logitreg() originally described in MASS (the
book, 4th ed.) by Venables & Ripley, p. 445. I used the code as provided but
made a couple of changes to run a 'constrained' logistic regression: I set the
method = "L-BFGS-B" and set lower/upper values for the variables.
Here
2006 Aug 09
2
optim error
Dear all,
There have been one or two questions posted to the list regarding the optim
error "non-finite finite-difference value [4]." The error apparently means
that the 4th element of the gradient is non-finite. My question is what
part(s) of my program should I fiddle with in an attempt to fix it?
Starting values? Something in the log-likelihood itself? Perhaps the data
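One generic workaround, independent of the poster's model, is to wrap the objective so that finite-difference gradients never see NaN or Inf; the names nll, my_nll and start below are placeholders:

safe_nll <- function(par, nll, big = 1e10) {
  val <- tryCatch(nll(par), error = function(e) NA_real_)
  if (is.finite(val)) val else big          # finite penalty instead of NaN/Inf
}
## Usage: optim(start, safe_nll, nll = my_nll, method = "BFGS")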
2012 Sep 27
0
problems with mle2 convergence and with writing gradient function
Dear R help,
I am trying to solve an MLE convergence problem: I would like to estimate
four parameters, p1, p2, mu1, mu2, which relate to the probabilities,
P1, P2, P3, of a multinomial (trinomial) distribution. I am using the
mle2() function and feeding it a time series dataset composed of four
columns: time point, number of successes in category 1, number of
successes in category 2, and
2002 Jun 28
1
Problem in optim(method="L-BFGS-B") (PR#1717)
Full_Name: Jörg Polzehl
Version: 1.5.1
OS: Windows 2000
Submission from: (NULL) (193.175.148.198)
When calculating MLEs in a variance component model using constrained
optimization, i.e. optim(..., method = "L-BFGS-B", ...), I observed improper
behaviour in cases where the likelihood function was evaluated at the constraint.
Parameters and value of the function at the constraint
2012 Apr 14
0
R-help: Censoring data (actually an optim issue)
Your function is giving NaNs during the optimization.
The R-forge version of optimx() has functionality specifically intended to deal with this.
NOTE: the CRAN version does not, and the R-forge version still has some glitches!
However, I easily ran the code you supplied by changing optim to optimx in the penultimate
line. Here's the final output.
KKT condition testing
Number of
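For reference, the drop-in change looks roughly like this (assuming the CRAN optimx package; a toy objective stands in for the poster's code):

library(optimx)

set.seed(5)
y <- rnorm(50, 10, 3)
nll <- function(par) -sum(dnorm(y, mean = par[1], sd = exp(par[2]), log = TRUE))

## optimx() reports each method's result along with convergence and KKT checks.
optimx(c(0, 0), nll, method = c("Nelder-Mead", "BFGS"))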
2009 Sep 30
1
Optim(...) estimate of stDev far too low
R-help,
I'm just trying to find the ML (maximum likelihood) estimates
of the mean and standard deviation of a set of observations:
xx <- c(2.5, 3.5, 4, 6, 6.5, 7.5)
fn <- function(params, x = xx) {
  media <- params[1]
  st    <- params[2]
  pdf <- -sum(dnorm(log(xx), log(media), st, TRUE))
  return(pdf)
}
optim(c(mu, stdev), fn, method = "L-BFGS-B", lower = c(0.001, 0.001),
      upper = rep(Inf, 2),
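For comparison, an editor's sketch (not a reply from the thread): fitting the normal likelihood to xx directly, rather than to log(xx), gives the usual ML estimates of the mean and standard deviation.

xx <- c(2.5, 3.5, 4, 6, 6.5, 7.5)
nll <- function(par) -sum(dnorm(xx, mean = par[1], sd = par[2], log = TRUE))

optim(c(mean(xx), sd(xx)), nll, method = "L-BFGS-B",
      lower = c(-Inf, 0.001))$par
## compare with c(mean(xx), sqrt(mean((xx - mean(xx))^2))), the ML estimates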
2004 Aug 11
0
always NaN after some running in R, but all fine in S-plus
Hello, S-plus and R helpers,(sorry for cross-post)
I wrote some simple C code for one likelihood to be optimized (using
optim(MASS)). I use same function, same data, same starting points and same
DLL in R and S-plus for comparison. (I compiled it with 'Rcmd SHLIB
likelihood.c' and the header files of it include only R.h and math.h). While
it works quite fine in S-plus, it forever returns
2001 Aug 28
2
fitting a mixture of distributions with optim and max log likelihood ?
hi
Suppose I have a mixture of 2 distributions generated by
rtwonormals <- function(npnt, m1, s1, m2, s2, p2) {
  rv <- vector(npnt, mode = "numeric")
  for (i in seq(1:npnt)) {
    if (runif(1, 0, 1) <= p2) {
      rv[i] <- rnorm(1, m2, s2)
    } else {
      rv[i] <- rnorm(1, m1, s1)
    }
  }
  return(rv)
}
x <- rtwonormals(50000,0,100,500,500,0.05)
#and I try to fit these with (based on thread: [R]
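A minimal sketch of the mixture negative log-likelihood that optim() could then be given (illustrative, not from the original thread); it reuses the x simulated above and reparameterises to keep the standard deviations positive and the mixing weight in (0, 1):

nll <- function(par) {
  m1 <- par[1]; s1 <- exp(par[2])
  m2 <- par[3]; s2 <- exp(par[4])
  p2 <- plogis(par[5])
  -sum(log((1 - p2) * dnorm(x, m1, s1) + p2 * dnorm(x, m2, s2)))
}
fit <- optim(c(0, log(50), 400, log(400), qlogis(0.1)), nll, method = "BFGS")
c(m1 = fit$par[1], s1 = exp(fit$par[2]),
  m2 = fit$par[3], s2 = exp(fit$par[4]), p2 = plogis(fit$par[5]))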
2010 Sep 30
3
how to avoid NaN in optim()
hi,
lik <- function(nO, nA, nB, nAB) {
  loglik <- function(par) {
    p <- par[1]
    q <- par[2]
    r <- 1 - p - q
    if (all(c(p, q, r) > 0) && all(c(p, q, r) < 1)) {  # all() handles the vector test
      -(2 * nO * log(r) + nA * log(p^2 + 2 * p * r)
        + nB * log(q^2 + 2 * q * r)
        + nAB * (log(2) + log(p) + log(q)))
    } else {
      NA
    }
  }
  loglik
}
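One common way to avoid the NA (an editor's sketch, not a reply from the thread) is to return a large finite penalty outside the admissible region; the counts below are hypothetical:

negll <- lik(nO = 176, nA = 182, nB = 60, nAB = 17)   # hypothetical counts

negll_safe <- function(par) {
  val <- negll(par)
  if (is.na(val)) 1e10 else val     # finite penalty outside the valid region
}
optim(c(0.3, 0.3), negll_safe)
## Reparameterising (e.g. p = plogis(a), q = (1 - p) * plogis(b)) avoids the
## boundary problem altogether.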
2008 Apr 22
2
optimization and gradient
Dear all,
I am using the functions 'optim' and 'nlminb'. For both, you can provide
a function which computes the gradient of the objective function (to
enhance speed and precision). In my case, both the objective function
and the gradient take time to compute and share many common
computations (similar matrix products, etc.). Therefore, I have to
compute these
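A standard trick (an illustrative sketch, not the poster's model) is to cache the shared quantities for the most recent parameter vector in a closure, so fn and gr only recompute them when the parameters actually change:

make_obj <- function(X, y) {
  last_par <- NULL
  cache    <- NULL
  update <- function(par) {
    if (is.null(last_par) || !identical(par, last_par)) {
      cache    <<- drop(X %*% par) - y    # the expensive shared computation
      last_par <<- par
    }
    cache
  }
  list(fn = function(par) { r <- update(par); sum(r^2) },
       gr = function(par) { r <- update(par); 2 * drop(crossprod(X, r)) })
}

## Toy least-squares usage:
set.seed(6)
X <- matrix(rnorm(200), 50, 4)
y <- drop(X %*% c(1, -1, 2, 0) + rnorm(50))
obj <- make_obj(X, y)
optim(rep(0, 4), obj$fn, gr = obj$gr, method = "BFGS")$par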