Displaying 20 results from an estimated 10000 matches similar to: "Interpreting negative diagonal values in a hessian"
2003 Oct 17
2
nlm, hessian, and derivatives in obj function?
I've been working on a new package and I have a few questions regarding the
behaviour of the nlm function. I've been (for better or worse) using the nlm
function to fit a linear model without supplying the hessian or gradient
attributes in the objective function. I'm curious as to why nlm requires
31 iterations (for the linear model), and then it doesn't work when I try to
add
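A minimal sketch of the setup as I read it (a linear model fitted by least squares through nlm(), with no gradient or hessian attributes supplied); the data below are invented for illustration and are not from the post:
set.seed(42)
x <- 1:20
y <- 1.5 + 2 * x + rnorm(20)
sse <- function(b) sum((y - b[1] - b[2] * x)^2)   # residual sum of squares
fit <- nlm(sse, p = c(0, 0), hessian = TRUE)
fit$iterations     # number of iterations the quasi-Newton search took
coef(lm(y ~ x))    # closed-form least-squares answer for comparison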
2011 Sep 22
1
nlm's Hessian update method
Hi R-help!
I'm trying to understand how R's nlm function updates its estimate of the Hessian matrix. The Dennis/Schnabel book cited in the references presents a number of different ways to do this, and seems to conclude that the positive-definite secant method (BFGS) works best in practice (p201). However, when I run my code through the optim function with the method as "BFGS",
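For reference, a minimal sketch of the positive-definite secant (BFGS) update from Dennis & Schnabel, written out in R purely to fix notation for the question; no claim is made that this matches nlm()'s internal code line for line:
bfgs_update <- function(B, s, y) {
  # B: current Hessian approximation
  # s: step taken,      s = x_new - x_old
  # y: gradient change, y = grad(x_new) - grad(x_old)
  Bs <- B %*% s
  B - (Bs %*% t(Bs)) / drop(crossprod(s, Bs)) + tcrossprod(y) / drop(crossprod(y, s))
}
# one update from the identity, on the quadratic 0.5 * x' A x (so y = A s):
A <- matrix(c(2, 0.3, 0.3, 1), 2, 2)
s <- c(-0.5, -0.2)
bfgs_update(diag(2), s = s, y = drop(A %*% s))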
2004 Feb 19
1
Obtaining SE from the hessian matrix
Dear R experts,
In R-intro, under the 'Nonlinear least squares and maximum likelihood
models', there are two examples showing how to use the 'nlm' function.
In 'Least squares', the standard errors are obtained as follows:
after the fitting, out$minimum is the SSE, and out$estimate gives the
least squares estimates of the parameters. To obtain the approximate
standard
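Filling in the rest of that recipe as I remember it from R-intro (approximate VCOV = 2 * SSE/(n - p) * solve(hessian)); the saturation-curve data below are toy values in the spirit of the manual's example, not copied from it:
x <- c(0.02, 0.06, 0.11, 0.22, 0.56, 1.10)
y <- c(76, 97, 123, 159, 191, 207)
fn <- function(p) sum((y - p[1] * x / (p[2] + x))^2)   # SSE to be minimized
out <- nlm(fn, p = c(200, 0.1), hessian = TRUE)
# approximate standard errors of the two parameter estimates:
sqrt(diag(2 * out$minimum / (length(y) - 2) * solve(out$hessian)))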
2003 May 28
0
supplying the Hessian to "nlm"
Dear all,
I am trying to minimize a function with 3 parameters using nlm. I have worked out the 2nd derivatives (incl the cross-product terms) and would like to supply them to nlm for evaluation. What I am not sure about is how to set up the Hessian matrix for nlm. That is,
attr(lhat, "hessian") <- c(???)
Do I have to enter all 9 of the entries or just the lower triangle of the
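As I read ?nlm, it is the full matrix: the objective's return value carries a length-n "gradient" attribute and an n x n "hessian" attribute (all nine entries for three parameters, not just a triangle). A minimal sketch with a stand-in quadratic objective, not the poster's function:
f <- function(p) {
  target <- c(1, 2, 3)
  val <- sum((p - target)^2)
  attr(val, "gradient") <- 2 * (p - target)   # length-3 gradient vector
  attr(val, "hessian")  <- diag(2, 3)         # full 3 x 3 Hessian matrix
  val
}
nlm(f, p = c(0, 0, 0), check.analyticals = TRUE)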
1999 Nov 24
0
nlm gradient and hessian
Out of curiosity, I have tried, without success, to use the new
facility in nlm to specify the gradient and hessian. (It is many years
since I had a problem simple enough to make analytic derivation of
these worthwhile.) The help now says that the function must have
attributes with these names but gives no indication as to what should
be in the attributes. The online example and demo do not use
2005 Dec 04
1
Understanding nonlinear optimization and Rosenbrock's banana valley function?
GENERAL REFERENCE ON NONLINEAR OPTIMIZATION?
What are your favorite references on nonlinear optimization? I like
Bates and Watts (1988) Nonlinear Regression Analysis and Its
Applications (Wiley), especially for its key insights regarding
parameter effects vs. intrinsic curvature. Before I spend time and
money on several of the references cited on the help pages for "optim",
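Since the subject line names it, here is the Rosenbrock banana-valley function in its standard textbook form (not quoted from the post), handy as a stress test for optim() and nlm():
rosenbrock <- function(p) (1 - p[1])^2 + 100 * (p[2] - p[1]^2)^2
optim(c(-1.2, 1), rosenbrock, method = "BFGS")   # minimum at c(1, 1)
nlm(rosenbrock, p = c(-1.2, 1))                  # same problem through nlm()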
2007 Sep 16
1
Problem with nlm() function.
In the course of revising a paper I have had occasion to attempt to maximize a rather
complicated log likelihood using the function nlm(). This is at the demand of a referee
who claims that this will work better than my proposed use of a home-grown
implementation of the Levenberg-Marquardt algorithm.
I have run into serious hiccups in attempting to apply nlm(). If I provide gradient and
2008 Jul 10
0
ace error because of missings?
Hello R users!
I am trying to use ace for an ancestral state reconstruction but got back an
error message.
ace(FacVar, Tree, type = "discrete")
Warning messages:
1: In nlm(function(p) dev(p), p = rep(ip, length.out = np), hessian = TRUE) :
  NA/Inf durch größte positive Zahl ersetzt (NA/Inf replaced by maximum
  positive value)
2: In nlm(function(p) dev(p), p = rep(ip, length.out = np), hessian
2007 Feb 16
1
optim() and resultant hessian
R users;
A question about optimization within R.
I've been using both optim() and nlminb() to estimate parameters and all
seems to be working fine. For context (but without getting into specifics -
sorry), I'm working with a problem that is known to have correlated
parameters, and parameter estimation can be difficult. I have a question on
optim() - I'm using
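In case it helps frame the question, a minimal sketch of asking optim() for the numerically differenced Hessian at the solution and turning it into an approximate covariance and correlation matrix; the strongly correlated toy objective is invented, not the poster's model:
negll <- function(p) 0.5 * (p[1]^2 + p[2]^2) + 0.9 * p[1] * p[2]   # correlated surface
fit <- optim(c(1, 1), negll, method = "BFGS", hessian = TRUE)
vc  <- solve(fit$hessian)   # approximate VCOV (valid when negll is a negative log-likelihood)
cov2cor(vc)                 # shows the parameter correlation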
2011 Dec 29
0
problem of "constrOptim.nl", no hessian and convergence values
Dear Helper,
I used "constrOptim.nl" and got the value of par. The estimations looks good
even if the number of iterations is only 16. But the values of hessian and
convergence are both "NULL".
I tested the objective function and gradient function by "optim" and didn't
see any problem there. With these functions, "optim" gives the convergence
value
2010 Mar 24
1
vcov.nlminb
Hello all,
I am trying to get the variance-covariance (VCOV) matrix of the
parameter estimates produced from the nlminb minimizing function, using
vcov.nlminb, but it seems to have been expunged from the MASS library.
The hessian from nlminb is also producing NaNs, although the estimates
seem to be right, so I can't get the VCOV that way either. I also tried using
the vcov function after minimizing
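One common workaround, sketched here with a toy negative log-likelihood standing in for the poster's objective: compute a numerical Hessian at the nlminb() solution with optimHess() and invert it for an approximate variance-covariance matrix:
set.seed(1)
dat <- rnorm(50, mean = 3, sd = 2)
negll <- function(p) {
  mu <- p[1]; sigma <- exp(p[2])               # parameterize sd on the log scale
  -sum(dnorm(dat, mean = mu, sd = sigma, log = TRUE))
}
fit <- nlminb(start = c(0, 0), objective = negll)
H   <- optimHess(fit$par, negll)               # numerical Hessian at the optimum
vc  <- solve(H)                                # approximate variance-covariance
sqrt(diag(vc))                                 # approximate standard errors (mu, log sigma)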
2003 Oct 24
1
first value from nlm (non-finite value supplied by nlm)
Dear expeRts,
first of all I'd like to thank you for the
quick help on my last which() problem.
Here is another one I could not tackle:
I have data on an absorption measurement which I want to fit
with a Voigt profile:
fn.1 <- function(p) {
  # ilong, f, ex, S, n, L, u, v, t and the voigt() function come from the workspace (not shown)
  for (i1 in ilong) {
    ff <- f[i1]
    ex[i1] <- exp(S * n * L * voigt(u, v, ff, p[1], p[2], p[3])[[1]])  # model value at f[i1]
  }
  sum((t - ex)^2)   # sum of squared residuals against the data t
}
out <-
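A commonly suggested guard for the "non-finite value supplied by nlm" error, sketched on a toy objective because the poster's fn.1 needs data not shown here: make sure the objective never hands NA/Inf back to nlm():
fn <- function(p) (log(p[1]) - 1)^2 + p[2]^2     # toy objective; NaN whenever p[1] <= 0
safe.fn <- function(p) {
  val <- fn(p)
  if (is.finite(val)) val else 1e10              # large finite penalty instead of NA/Inf
}
out <- nlm(safe.fn, p = c(0.5, 1))
out$estimate                                     # close to c(exp(1), 0)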
2003 Feb 10
1
Zero rows/cols in the hessian matrix
Dear R experts!
I am trying to minimize a function with an external C fitting function.
I get the hessian matrix. Here it is:
[,1] [,2] [,3] [,4]
[1,] 1.8816631 0 0.8859803 0
[2,] 0.0000000 0 0.0000000 0
[3,] 0.8859803 0 0.4859983 0
[4,] 0.0000000 0 0.0000000 0
The second and fourth rows/columns have zero values only. That's OK,
because those are related
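If the eventual goal is standard errors despite the zero rows and columns, one sketch (my reading, reusing the matrix printed above) is to invert only the non-degenerate sub-block and report NA for the parameters that do not enter:
H <- matrix(c(1.8816631, 0, 0.8859803, 0,
              0.0000000, 0, 0.0000000, 0,
              0.8859803, 0, 0.4859983, 0,
              0.0000000, 0, 0.0000000, 0), nrow = 4, byrow = TRUE)
active <- which(colSums(abs(H)) > 0)            # parameters that actually affect the fit
vc <- matrix(NA_real_, 4, 4)
vc[active, active] <- solve(H[active, active])  # invert just the 2 x 2 block
sqrt(diag(vc))                                  # SEs; NA for the degenerate parameters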
2011 Dec 29
0
problem of "constrOptim.nl", no hessian and convergence
Hi,
Use the `auglag' function in "alabama" if you want to get the Hessian at convergence. This typically tends to perform better than `constrOptim.nl'. Also, `constrOptim.nl' does not compute the Hessian. You should not specify method="L-BFGS-B". The default method "BFGS" is better in this setting.
Hope this helps,
Ravi
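A minimal sketch of that suggestion on an invented toy problem with one inequality constraint; per the reply above, the object returned by alabama::auglag() carries the Hessian at convergence:
library(alabama)                                 # provides auglag()
fn  <- function(x) (x[1] - 1)^2 + (x[2] - 2)^2   # toy objective
hin <- function(x) 1.5 - x[1] - x[2]             # feasible when hin(x) >= 0, i.e. x1 + x2 <= 1.5
fit <- auglag(par = c(0, 0), fn = fn, hin = hin)
fit$par          # constrained optimum
fit$hessian      # Hessian at convergence, as described in the reply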
2019 Feb 19
1
mle (stats4) crashing due to singular Hessian in covariance matrix calculation
Hi, R developers.
When running mle inside a loop I found some nasty behaviour. From time to
time, my model had a degenerate minimum and the loop just crashed. I
tracked it down to the "vcov <- if (length(coef)) solve(oout$hessian)" line,
the Hessian being singular.
Note that the minimum reached was good; it just did not make sense to
calculate the covariance matrix as the inverse of a
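One way to keep such a loop alive while the underlying issue is sorted out, sketched with an invented normal model: catch the error so a degenerate fit returns NULL instead of aborting the run:
library(stats4)
set.seed(1)
fits <- vector("list", 20)
for (i in seq_len(20)) {
  y <- rnorm(30, mean = 5, sd = 2)
  negll <- function(mu, sigma) -sum(dnorm(y, mu, sigma, log = TRUE))
  fits[[i]] <- tryCatch(
    mle(negll, start = list(mu = mean(y), sigma = sd(y))),
    error = function(e) { message("fit ", i, " skipped: ", conditionMessage(e)); NULL }
  )
}
sum(vapply(fits, is.null, logical(1)))   # how many fits failed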
2005 Oct 11
2
Sometimes having problems finding a minimum using optim(), optimize(), and nlm() (while searching for noncentral F parameters)
Hi everyone.
I have a problem for which I have been unable to determine either the best
way to proceed or why the methods I'm trying to use sometimes fail. I'm
using the pf() function in an optimization function to find a
noncentrality parameter that leads to a specific value at a specified
quantile. My goal is to have a general function that returns the
noncentrality parameter that
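As I read the goal (find the noncentrality parameter at which pf() reaches a target value for a given critical value), one alternative to pushing this through optim()/nlm() is root-finding on the monotone function directly; the numbers below are invented:
find_ncp <- function(crit, df1, df2, target) {
  # ncp such that pf(crit, df1, df2, ncp) equals 'target'
  uniroot(function(ncp) pf(crit, df1, df2, ncp = ncp) - target,
          lower = 0, upper = 1000)$root
}
# e.g. the ncp that puts 20% probability below the 95% critical value of F(3, 40):
find_ncp(crit = qf(0.95, 3, 40), df1 = 3, df2 = 40, target = 0.20)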
2000 Mar 06
1
nlm and optional arguments
It would be really nice if nlm took a set of "..." optional arguments
that were passed through to the objective function. This level of hacking
is probably slightly beyond me: is there a reason it would be technically
difficult/inefficient? (I have a vague memory that it used to work this
way either in S-PLUS or in some previous version of R, but I could easily
be wrong.)
Here's
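For what it is worth, in current versions of R the nlm() help does document "..." as additional arguments passed to the objective (this may well not have been the case when the message was written); a minimal sketch with invented data:
sse <- function(p, x, y) sum((y - p[1] - p[2] * x)^2)   # objective takes the data as extra arguments
set.seed(7)
x <- 1:10
y <- 2 + 3 * x + rnorm(10, sd = 0.1)
nlm(sse, p = c(0, 0), x = x, y = y)$estimate            # x and y are forwarded to sse()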
2004 Apr 14
1
How does nlm work?
Dear R users,
I have looked in the reference
Schnabel, R. B., Koontz, J. E. and Weiss, B. E. (1985) A modular
system of algorithms for unconstrained minimization. _ACM Trans.
Math. Software_, *11*, 419-440.
cited in the nlm help.
This article says that the algorithm permits the use of step selection
(line search, dogleg and optimal step), analytic or finite difference
gradient
2007 Mar 02
2
nlm() problem : extra parameters
Hello:
Below is a toy logistic regression problem. When I wrote my own code,
Newton-Raphson converged in three iterations using both the gradient
and the Hessian and the starting values given below. But I can't
get nlm() to work! I would much appreciate any help.
> x
[1] 10.2 7.7 5.1 3.8 2.6
> y
[1] 9 8 3 2 1
> n
[1] 10 9 6 8 10
derfs4=function(b,x,y,n)
{
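A sketch of one way to wire this up, using the data printed above but a plain binomial negative log-likelihood in place of the truncated derfs4(): pass x, y, n to nlm() as extra arguments and ask for the Hessian:
x <- c(10.2, 7.7, 5.1, 3.8, 2.6)
y <- c(9, 8, 3, 2, 1)
n <- c(10, 9, 6, 8, 10)
negll <- function(b, x, y, n) {
  prob <- plogis(b[1] + b[2] * x)                 # logistic mean response
  -sum(dbinom(y, size = n, prob = prob, log = TRUE))
}
fit <- nlm(negll, p = c(0, 0), x = x, y = y, n = n, hessian = TRUE)
fit$estimate    # compare with coef(glm(cbind(y, n - y) ~ x, family = binomial))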
2006 Nov 10
1
Variable limit in nlm?
Admittedly I am using an old version, 1.7.1, but can anyone tell me if this
is or was a problem? I can only get nlm (nonlinear minimization) to
adjust the first three components of the function's variable. No gradient or
hessian is supplied, e.g.:
fnoise
function(y) { y[5]/(y[4]*sp2) * exp(-((x[,3]-y[1]-y[2]*x[,1]-y[3]
*x[,2])/y[4])^2/2) + (1-y[5])/(y[9]*sp2) * exp(-((x[,3]-y[6]-y[7]*x[,1]-y[8]