similar to: R-beta: SEs for one-param MLE in R?

Displaying 20 results from an estimated 10000 matches similar to: "R-beta: SEs for one-param MLE in R?"

2001 Apr 27
3
nls question
I have a question about passing arguments to the function f that nlm minimizes. I have no problems if I do this: x<-seq(0,1,.1) y<-1.1*x + (1-1.1) + rnorm(length(x),0,.1) fn<-function(p) { yhat<-p*x+(1-p) sum((y-yhat)^2) } out<-nlm(fn,p=1.5,hessian=TRUE) But I would like to define fn<-function(x,y,p) { yhat<-p*x+(1-p) sum((y-yhat)^2) } so
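A minimal sketch of one way to do this in current versions of R, where nlm() forwards extra named arguments to the objective function (the data names x and y follow the post):

    x <- seq(0, 1, 0.1)
    y <- 1.1 * x + (1 - 1.1) + rnorm(length(x), 0, 0.1)
    fn <- function(p, x, y) {              # data passed in explicitly
      yhat <- p * x + (1 - p)
      sum((y - yhat)^2)
    }
    out <- nlm(fn, p = 1.5, x = x, y = y, hessian = TRUE)  # x, y forwarded to fn
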
2004 Feb 05
1
for help about MLE in R
Dear Sir, I am using R to estimate two parameters of a Normal distribution. I generated 100 normally distributed numbers on which to estimate the parameters. The syntax is: >fn<-function(x)-50*log((y)^2)+50*log(2*pi)-(1/2*(z^2))*(sum((x-y)^2)) >out<-nlm(fn, x, hessian=TRUE) but it does not work. Could you please help me compose the syntax to find the maximum
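For reference, a sketch of a working two-parameter (mean and standard deviation) negative log-likelihood that nlm() can minimize, assuming the 100 generated values are in a vector x:

    negll <- function(theta, x) {
      mu    <- theta[1]
      sigma <- theta[2]
      if (sigma <= 0) return(1e10)                       # keep sigma positive
      -sum(dnorm(x, mean = mu, sd = sigma, log = TRUE))  # minus log-likelihood
    }
    x   <- rnorm(100, mean = 5, sd = 2)                  # simulated data
    out <- nlm(negll, c(0, 1), x = x, hessian = TRUE)
    out$estimate                                         # MLEs of mu and sigma
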
2005 Mar 08
4
Non-linear minimization
hello, I am having some trouble with the R functions nlm(), nls() and optim(): I would like to fit 3 parameters which must stay within a given interval. For example with nlm(): fn<-function(p) sum((dN-estdata(p[1],p[2],p[3]))^2) out<-nlm(fn, p=c(4, 17, 5), hessian=TRUE,print.level=2) where estdata() is a function returning the values to be fitted to dN (the observed data vector). My problem is that only
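When parameters must stay inside fixed intervals, one option (a sketch; estdata(), dN and the bounds are taken from or invented around the post) is optim() with method = "L-BFGS-B", which supports box constraints:

    fn  <- function(p) sum((dN - estdata(p[1], p[2], p[3]))^2)
    out <- optim(par = c(4, 17, 5), fn = fn,
                 method = "L-BFGS-B",
                 lower = c(1, 10, 1),      # illustrative lower bounds
                 upper = c(10, 25, 10),    # illustrative upper bounds
                 hessian = TRUE)
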
2002 Apr 24
3
nonlinear least squares, multiresponse
I'm trying to fit a model to solve a biological problem. There are multiple independent variables, and also multiple responses. Each response is a function of all the independent variables plus a set of parameters. All the responses depend on the same variables and parameters - only the form of the function changes to define each separate response. Any ideas how I can fit
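One common approach (a sketch with made-up names f1, f2, y1, y2, p0) is to stack the residuals of all responses into one objective and minimize the combined sum of squares:

    ssq <- function(p, x, y1, y2) {
      r1 <- y1 - f1(x, p)          # residuals for response 1
      r2 <- y2 - f2(x, p)          # residuals for response 2
      sum(r1^2) + sum(r2^2)        # weight the terms if the scales differ
    }
    out <- optim(p0, ssq, x = x, y1 = y1, y2 = y2)
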
2009 Jul 01
2
Difficulty in calculating MLE through NLM
Hi R-friends, Attached is the SAS XPORT file that I have imported into R using the following code library(foreign) mydata<-read.xport("C:\\ctf.xpt") print(mydata) I am trying to maximize logL in order to find the Maximum Likelihood Estimates (MLEs) of 5 parameters (alpha1, beta1, alpha2, beta2, p) using the nlm function in R as follows. # Defining Log likelihood - In the function it is noted as
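The post does not show the likelihood itself; purely as an illustration, a five-parameter two-component mixture negative log-likelihood (gamma components are a stand-in for whatever the real model is) could be set up for nlm() like this:

    negll <- function(theta, x) {
      alpha1 <- theta[1]; beta1 <- theta[2]
      alpha2 <- theta[3]; beta2 <- theta[4]
      p      <- theta[5]
      if (any(theta[1:4] <= 0) || p <= 0 || p >= 1) return(1e10)  # keep parameters valid
      dens <- p * dgamma(x, shape = alpha1, rate = beta1) +
              (1 - p) * dgamma(x, shape = alpha2, rate = beta2)
      -sum(log(dens))
    }
    out <- nlm(negll, c(1, 1, 2, 2, 0.5), x = x, hessian = TRUE)  # x: the data vector
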
2003 Oct 17
2
nlm, hessian, and derivatives in obj function?
I've been working on a new package and I have a few questions regarding the behaviour of the nlm function. I've been (for better or worse) using the nlm function to fit a linear model without supplying the hessian or gradient attributes in the objective function. I'm curious as to why nlm requires 31 iterations (for the linear model), and then it doesn't work when I try to add
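For what it's worth, nlm() will use an analytic gradient if the objective returns it as a "gradient" attribute; a sketch for a straight-line least-squares objective (x and y stand for the data):

    fn <- function(b, x, y) {
      r   <- y - (b[1] + b[2] * x)
      val <- sum(r^2)
      attr(val, "gradient") <- c(-2 * sum(r), -2 * sum(r * x))  # analytic gradient
      val
    }
    out <- nlm(fn, c(0, 1), x = x, y = y, hessian = TRUE)
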
2004 Aug 25
3
Beginners Question: Make nlm work
Hello, I'm new to this and am trying to teach myself some R by plotting biological data. The growth curve in question is supposed to be fitted to the Verhulst equation, which may be transcribed as follows: f(x)=a/(1+((a-0.008)/0.008)*exp(-(b*x))) - for a known population density (0.008) at t(0). I am trying to rework the example from "An Introduction to R" (p. 72) for my case and
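One way to fit that curve (a sketch; t and dens stand for the observed times and population densities) is nls() with the Verhulst form written out directly:

    fit <- nls(dens ~ a / (1 + ((a - 0.008) / 0.008) * exp(-b * t)),
               start = list(a = 1, b = 0.5))   # illustrative starting values
    summary(fit)
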
1999 Dec 09
1
nlm() problem or MLE problem?
I am trying to do an MLE fit of the Weibull to some data, which I attach. fitweibull<-function() { rt<-scan("r/rt/data2/triam1.dat") rt<-sort(rt) plot(rt,ppoints(rt)) a<-9 b<-.27 fn<-function(p) -sum( log(dweibull(rt,p[1],p[2])) ) cat("starting -log like=",fn(c(a,b)),"\n") out<-nlm(fn,p=c(a,b), hessian=TRUE)
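As a cross-check on a hand-rolled fit like this, fitdistr() in the MASS package fits a Weibull by maximum likelihood directly (a sketch, assuming rt holds the data):

    library(MASS)
    fit <- fitdistr(rt, densfun = "weibull")
    fit$estimate   # shape and scale MLEs
    fit$sd         # approximate standard errors
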
2005 Dec 04
1
Understanding nonlinear optimization and Rosenbrock's banana valley function?
GENERAL REFERENCE ON NONLINEAR OPTIMIZATION? What are your favorite references on nonlinear optimization? I like Bates and Watts (1988) Nonlinear Regression Analysis and Its Applications (Wiley), especially for its key insights regarding parameter-effects vs. intrinsic curvature. Before I spend time and money on several of the references cited on the help pages for "optim",
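For concreteness, the banana valley function named in the subject is the classic test case; a textbook sketch of minimizing it with optim():

    banana <- function(p) {
      x <- p[1]; y <- p[2]
      100 * (y - x^2)^2 + (1 - x)^2        # minimum at (1, 1)
    }
    optim(c(-1.2, 1), banana, method = "BFGS")
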
2011 Sep 22
1
nlm's Hessian update method
Hi R-help! I'm trying to understand how R's nlm function updates its estimate of the Hessian matrix. The Dennis/Schnabel book cited in the references presents a number of different ways to do this, and seems to conclude that the positive-definite secant method (BFGS) works best in practice (p201). However, when I run my code through the optim function with the method as "BFGS",
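A quick way to put the two optimizers side by side on the same problem (a sketch; fn and p0 stand for the poster's objective and starting values):

    out.nlm   <- nlm(fn, p0, hessian = TRUE)
    out.optim <- optim(p0, fn, method = "BFGS", hessian = TRUE)
    out.nlm$estimate   # solution found by nlm
    out.optim$par      # solution found by optim/BFGS
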
2019 Feb 19
1
mle (stat4) crashing due to singular Hessian in covariance matrix calculation
Hi, R developers. When running mle inside a loop I found a nasty behavior. From time to time, my model had a degenerate minimum and the loop just crashed. I tracked it down to the "vcov <- if (length(coef)) solve(oout$hessian)" line, the Hessian being singular. Note that the minimum reached was good; it just did not make sense to calculate the covariance matrix as the inverse of a
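A common defensive pattern when fitting inside a loop (a sketch built around the quoted line) is to trap the failure instead of letting it stop the loop:

    vcov <- tryCatch(solve(oout$hessian),
                     error = function(e) {
                       warning("singular Hessian; covariance matrix not available")
                       matrix(NA_real_, nrow(oout$hessian), ncol(oout$hessian))
                     })
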
2004 Feb 19
1
Obtaining SE from the hessian matrix
Dear R experts, In R-intro, under 'Nonlinear least squares and maximum likelihood models', there are two examples showing how to use the 'nlm' function. In 'Least squares' the standard errors are obtained as follows: after the fitting, out$minimum is the SSE, and out$estimates are the least squares estimates of the parameters. To obtain the approximate standard
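For reference, the two recipes depend on what the objective was (a sketch; out is the nlm(..., hessian = TRUE) result, y the response, and 2 the number of parameters):

    # least-squares objective: out$minimum is the SSE
    sqrt(diag(2 * out$minimum / (length(y) - 2) * solve(out$hessian)))

    # negative log-likelihood objective: the inverse Hessian approximates
    # the covariance matrix of the MLEs
    sqrt(diag(solve(out$hessian)))
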
2007 Mar 02
2
nlm() problem : extra parameters
Hello: Below is a toy logistic regression problem. When I wrote my own code, Newton-Raphson converged in three iterations using both the gradient and the Hessian and the starting values given below. But I can't get nlm() to work! I would much appreciate any help. > x [1] 10.2 7.7 5.1 3.8 2.6 > y [1] 9 8 3 2 1 > n [1] 10 9 6 8 10 derfs4=function(b,x,y,n) {
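One workaround that works in any version of nlm() (a sketch; it assumes derfs4() returns the value to be minimized, and the starting values are illustrative) is to wrap the objective in a closure that captures x, y and n:

    out <- nlm(function(b) derfs4(b, x, y, n), p = c(0, 0), hessian = TRUE)
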
2006 Jan 25
2
Question about fitting power
Hi R users I'm trying to fit a model y=ax^b. I know that if I take ln(y)=ln(a)+b*ln(x) this is a linear regression. But I obtain different results with nls() and lm(). My commands are: nls(CV ~a*Est^b, data=limiares, start =list(a=100,b=0), trace = TRUE) for nonlinear regression and: lm(ln_CV~ln_Est, data=limiares) for linear regression Nonlinear
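The two fits are not expected to agree exactly: lm() on the logged data minimizes squared errors on the log scale, while nls() minimizes them on the original scale. A sketch comparing the coefficients on the original scale (the start value for b is made up):

    fit.nls <- nls(CV ~ a * Est^b, data = limiares,
                   start = list(a = 100, b = 0.5))
    fit.lm  <- lm(log(CV) ~ log(Est), data = limiares)
    coef(fit.nls)                            # a, b from the nonlinear fit
    c(a = unname(exp(coef(fit.lm)[1])),      # back-transform the intercept
      b = unname(coef(fit.lm)[2]))           # slope is b directly
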
2003 Oct 24
1
first value from nlm (non-finite value supplied by nlm)
Dear expeRts, first of all I'd like to thank you for the quick help on my last which() problem. Here is another one I could not tackle: I have data from an absorption measurement which I want to fit with a Voigt profile: fn.1 <- function(p){ for (i1 in ilong){ ff <- f[i1] ex[i1] <- exp(S*n*L*voigt(u,v,ff,p[1],p[2],p[3])[[1]]) } sum((t-ex)^2) } out <-
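nlm() raises that error when the objective returns NA, NaN or Inf at some trial point; a common guard (a sketch wrapped around the poster's fn.1; p.start stands for the starting values) is to return a large finite penalty instead:

    fn.safe <- function(p) {
      val <- fn.1(p)
      if (!is.finite(val)) 1e10 else val   # penalize undefined regions
    }
    out <- nlm(fn.safe, p = p.start, hessian = TRUE)
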
2007 Sep 16
1
Problem with nlm() function.
In the course of revising a paper I have had occasion to attempt to maximize a rather complicated log likelihood using the function nlm(). This is at the demand of a referee who claims that this will work better than my proposed use of a home-grown implementation of the Levenberg-Marquardt algorithm. I have run into serious hiccups in attempting to apply nlm(). If I provide gradient and
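Before blaming the optimizer it can help to compare the analytic gradient with a numerical one; a sketch using the numDeriv package (loglik() and p0 are placeholders for the poster's objective and starting values; nlm() itself does a similar check when check.analyticals = TRUE):

    library(numDeriv)
    g.analytic <- attr(loglik(p0), "gradient")              # gradient supplied to nlm
    g.numeric  <- grad(function(p) as.numeric(loglik(p)), p0)
    max(abs(g.analytic - g.numeric))                        # should be close to zero
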
2000 Mar 06
1
nlm and optional arguments
It would be really nice if nlm took a set of "..." optional arguments that were passed through to the objective function. This level of hacking is probably slightly beyond me: is there a reason it would be technically difficult/inefficient? (I have a vague memory that it used to work this way either in S-PLUS or in some previous version of R, but I could easily be wrong.) Here's
2011 Mar 19
2
problem running a function
Dear people, I'm trying to do some analysis of data using the models from Royle & Dorazio's fantastic book, in particular the following function: http://www.mbr-pwrc.usgs.gov/pubanalysis/roylebook/panel4pt1.fn which applied to my data in the console looks as follows: > `desman.y` <- structure(c(3L,4L,3L,2L,1L), .Names = c("1", "2", "3",
2006 Nov 10
1
Variable limit in nlm?
Admittedly I am using an old version, 1.7.1, but can anyone tell me if this is or was a problem. I can only get nlm (nonlinear minimization) to adjust the first three components of the function variable. No gradient or hessian is supplied. E.g.: fnoise <- function(y) { y[5]/(y[4]*sp2) * exp(-((x[,3]-y[1]-y[2]*x[,1]-y[3] *x[,2])/y[4])^2/2) + (1-y[5])/(y[9]*sp2) * exp(-((x[,3]-y[6]-y[7]*x[,1]-y[8]
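One thing worth checking when only some components move (a sketch; y0 stands for the starting vector) is the scaling of the parameters, which nlm() exposes through typsize and fscale:

    out <- nlm(fnoise, p = y0,
               typsize = abs(y0),           # typical size of each parameter (must be non-zero)
               fscale  = abs(fnoise(y0)),   # typical size of the objective
               print.level = 2)             # watch which components actually change
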
2003 Oct 20
1
Fitting a Weibull/NaNs
I'm trying to fit a Weibull distribution to some data via maximum likelihood estimation. I'm following the procedure described by Doug Bates in his "Using Open Source Software to Teach Mathematical Statistics" but I keep getting warnings about NaNs being converted to maximum positive value: > llfunc <- function (x) { -sum(dweibull(AM,shape=x[1],scale=x[2], log=TRUE))} >
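The NaN warnings usually come from the optimizer trying negative shape or scale values; a standard fix (a sketch using the poster's AM data vector) is to optimize on the log scale so both parameters stay positive:

    llfunc <- function(lp) -sum(dweibull(AM, shape = exp(lp[1]),
                                         scale = exp(lp[2]), log = TRUE))
    out <- nlm(llfunc, p = log(c(1, mean(AM))), hessian = TRUE)
    exp(out$estimate)    # back-transform to shape and scale
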