Yuchen Luo
2007-Sep-05 23:43 UTC
[R] 'singular gradient matrix' when using nls() and how to make the program skip nls() and run on
Dear friends,

I use nls() and encounter the following puzzling problem:

I have a function f(a,b,c,x), a data vector x, and a vector y of realized values of f.

Case 1

I tried to estimate c with (a=0.3, b=0.5) fixed:

nls(y ~ f(a,b,c,x), control = list(maxiter = 100000, minFactor = 0.5^2048), start = list(c = 0.5))

The error message is: "number of iterations exceeded maximum of 100000"

Case 2

I then thought maybe the values of a and b were not reasonable, so I let nls() estimate (a,b,c) all together:

nls(y ~ f(a,b,c,x), control = list(maxiter = 100000, minFactor = 0.5^2048), start = list(a = 0.3, b = 0.5, c = 0.5))

The error message is: "singular gradient matrix at initial parameter estimates"

This is what puzzles me: if the initial parameters (a=0.3, b=0.5, c=0.5) can create a 'singular gradient matrix', then why doesn't this 'singular gradient matrix' appear in Case 1? I have tried changing the initial values of (a,b,c), but the problem persists. I am wondering if there is a way out.

My other question is: I need to run 220 nls() fits in my program, with different y and x each time. When one of the nls() calls encounters a problem, the whole program stops. In my case, the 3rd nls() runs into a problem, but I would still need the program to run the remaining 217 fits. Is there a way to make the program skip the problematic nls() call and complete the remaining ones?

Your help will be highly appreciated!

Yuchen Luo
Gabor Grothendieck
2007-Sep-06 21:21 UTC
[R] 'singular gradient matrix' when using nls() and how to make the program skip nls() and run on
In case 1, graph your function and then use optimize() rather than nls().

In case 2, a and b may have the same effect on f as c, whereas in case 1 they don't vary, so it does not matter. For example, consider minimizing

f <- function(a, b) (a + b)^2

If a is fixed at zero, the minimum occurs at b = 0; but if a is not fixed, then increasing a and decreasing b by the same amount causes no change in the result, so the gradient in such a direction is zero.

On 9/5/07, Yuchen Luo <realityrandom at gmail.com> wrote:
> [...]
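Gabor's point about case 2 can be reproduced with a toy model in which a and b enter only through their sum, so their columns in the gradient matrix are identical. This is a minimal sketch: the function f below is invented for illustration and is not the original poster's actual model.

```r
set.seed(1)
x <- seq(0.1, 2, length.out = 50)

# Toy model: a and b enter only through a + b, so the partial derivatives
# of f with respect to a and b are identical columns of the gradient matrix
f <- function(a, b, c, x) (a + b) * exp(-c * x)
y <- f(0.3, 0.5, 1, x) + rnorm(length(x), sd = 0.01)

# Case 1 analogue: a and b fixed, only c estimated -- this fit works
a <- 0.3; b <- 0.5
fit1 <- nls(y ~ f(a, b, c, x), start = list(c = 0.5))

# Case 2 analogue: all three free -- the gradient matrix is rank-deficient
# at the starting values, so nls() stops immediately with the
# "singular gradient matrix at initial parameter estimates" error
fit2 <- try(nls(y ~ f(a, b, c, x),
                start = list(a = 0.3, b = 0.5, c = 0.5)),
            silent = TRUE)
inherits(fit2, "try-error")   # TRUE

# For the one-parameter problem of case 1, optimize() on the residual
# sum of squares is a robust alternative to nls()
rss <- function(cc) sum((y - f(a, b, cc, x))^2)
opt <- optimize(rss, interval = c(0.01, 10))
```

Both the nls() fit of c alone and the optimize() minimum land near the true value c = 1, while the three-parameter fit fails before taking a single step.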
Joerg van den Hoff
2007-Sep-07 09:44 UTC
[R] 'singular gradient matrix' when using nls() and how to make the program skip nls() and run on
On Wed, Sep 05, 2007 at 04:43:19PM -0700, Yuchen Luo wrote:
> [...]
> Is there a way to make the program skip the problematic nls() and
> complete the remaining nls()'s?

?try
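`?try` points at the standard idiom for this: wrap each nls() call in try() so a failure is recorded and skipped instead of aborting the loop. A minimal sketch with invented data -- the model f, the five toy datasets, and the deliberately degenerate 3rd dataset (fewer observations than parameters) are all made up for illustration:

```r
f <- function(a, b, c, x) a * exp(-c * x) + b

# Five toy datasets; the 3rd has fewer points than parameters, which
# makes nls() fail with a singular gradient at the initial estimates
set.seed(42)
xlist <- replicate(5, seq(0.1, 2, length.out = 30), simplify = FALSE)
ylist <- lapply(xlist, function(x) f(0.3, 0.5, 1, x) + rnorm(length(x), sd = 0.01))
xlist[[3]] <- xlist[[3]][1:2]
ylist[[3]] <- ylist[[3]][1:2]

results <- vector("list", length(ylist))
for (i in seq_along(ylist)) {
  fit <- try(nls(y ~ f(a, b, c, x),
                 data = list(y = ylist[[i]], x = xlist[[i]]),
                 start = list(a = 0.3, b = 0.5, c = 0.5)),
             silent = TRUE)
  if (inherits(fit, "try-error")) next   # skip the problematic fit, keep going
  results[[i]] <- fit
}
sum(!sapply(results, is.null))   # 4 of the 5 fits succeeded
```

tryCatch(..., error = function(e) NULL) works the same way and additionally lets you log the error message for each failed dataset.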
rsposto at yahoo.com
2007-Dec-22 00:36 UTC
[R] Apparent discordant solution using NLS() vs Optim or Excel for IWLS problem
I am writing a program for automated (i.e., no user intervention, for biologists) iteratively reweighted least-squares fitting of a function of the form "reading ~ exp(lm2)/(1 + (dose/exp(lm3))^exp(lm4))", using case weights proportional to the mean, i.e., E(reading). Because for some datasets the solution is sensitive to starting values, I first use optim() with Nelder-Mead to locate the solution, and then plug that solution into nls() (default algorithm) with the appropriate weights, as a way to retrieve SEs, deviance, etc., rather than computing these from the optim() results.

The problem I have discovered is the following. optim() arrives at the correct solution that minimizes the objective function, and I have confirmed this in numerous examples in Excel. When this solution is plugged into nls(), the first evaluation yields the same value of the objective function as optim(). nls() then attempts steps, and apparently finds a slightly different solution with a smaller value of the objective function. However, this new solution actually gives a larger, not smaller, objective function value when verified against an independent computation. Hence, it appears that nls() is getting confused and taking a step based on an erroneous computation of the objective function.

An example is below, with abridged optim() output followed by nls() output:

-- ************** START OUTPUT **************
-- [1] "Drug = Melphelan"
-- [1] "Cell line = CHLA-266"
--   Nelder-Mead direct search function minimizer
-- function value for initial parameters = 3.052411
--   Scaled convergence tolerance is 3.05241e-12
-- Stepsize computed as 1.801225
-- BUILD          4 288782670697.238953 3.052411
-- LO-REDUCTION   6 40.478286 3.052411
-- HI-REDUCTION   8 27.015234 3.052411
-- .
-- .
-- .
-- LO-REDUCTION 208 0.763370 0.763370
-- LO-REDUCTION 210 0.763370 0.763370
-- Exiting from Nelder Mead minimizer
-- 212 function evaluations used
--         m2           m3       m4
-- 1 80846342 1.190286e-05 1.049291

[*** NOTE: the correct solution parameter values are on the line above on the absolute scale; on the ln scale these are the values below, representing the first evaluation by nls(). This gives the correct objective function value of 0.763370, and hence agrees with optim() at this point. ***]

-- 0.7633697 : 18.20806090 -11.33873192 0.04811485
-- It. 1, fac= 1, eval (no.,total): ( 1, 1): new dev = 0.7504
-- 0.7504 : 18.19909405 -11.34554803 0.05560241
-- It. 2, fac= 1, eval (no.,total): ( 1, 2): new dev = 0.750387
-- 0.7503866 : 18.19933189 -11.34796180 0.05429375
-- It. 3, fac= 1, eval (no.,total): ( 1, 3): new dev = 0.750387
-- .
-- .   (multiple step-halving attempts)
-- .
-- It. 14, fac= 3.8147e-06, eval (no.,total): ( 4, 39): new dev = 0.750387
-- It. 14, fac= 1.90735e-06, eval (no.,total): ( 5, 40): new dev = 0.750387
--
-- Formula: reading ~ exp(lm2)/(1 + (dose/exp(lm3))^exp(lm4))
--
-- Parameters:
--      Estimate Std. Error  t value Pr(>|t|)
-- lm2  18.19934    0.02084  873.252   <2e-16 ***
-- lm3 -11.34795    0.08316 -136.459   <2e-16 ***
-- lm4   0.05435    0.04103    1.325    0.191
-- ---
-- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
--
-- Residual standard error: 0.1147 on 57 degrees of freedom
--
-- Number of iterations till stop: 13
-- Achieved convergence tolerance: 2.611e-08
-- Reason stopped: step factor 9.53674e-007 reduced below 'minFactor' of 1e-006
--
-- Warning message:
-- In nls(reading ~ exp(lm2)/(1 + (dose/exp(lm3))^exp(lm4)), start = list(lm2 =
--   log(beta_nm$m2), :
--   step factor 9.53674e-07 reduced below 'minFactor' of 1e-06
--
-- *** END OUTPUT **************************

Note the objective function value of 0.750387 at parameter values 18.19934, -11.34795, 0.05435.
However, the objective function value at this solution is actually 0.776426587 by Excel computation.

Can anyone suggest what I might be doing wrong, or is this a bug or an anomaly of the nls() algorithm?

Thanks in advance,

Richard Sposto (Los Angeles)
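One way such a discrepancy can arise: nls(weights = w) minimizes the weighted RSS with w held fixed, whereas an IRLS objective recomputes the weights from the current fit, so the two objectives coincide only at the point where the weights were computed. The sketch below is under assumptions -- the model formula is the one posted, but the weighting scheme (variance proportional to the mean, i.e. weights 1/mu), the toy dose/reading data, and the parameter values are guesses for illustration, not the poster's actual setup.

```r
model <- function(lm2, lm3, lm4, dose) exp(lm2) / (1 + (dose / exp(lm3))^exp(lm4))

# IRLS objective: weights recomputed from the current mean
irls_obj <- function(p, dose, reading) {
  mu <- model(p[1], p[2], p[3], dose)
  sum((reading - mu)^2 / mu)
}

# What nls(weights = w) effectively minimizes: same residuals, w held FIXED
fixed_obj <- function(p, w, dose, reading) {
  mu <- model(p[1], p[2], p[3], dose)
  sum(w * (reading - mu)^2)
}

# Invented toy data
set.seed(7)
dose <- 10^seq(-7, -3, length.out = 20)
p0 <- c(lm2 = 18.2, lm3 = -11.3, lm4 = 0.05)
reading <- model(p0[1], p0[2], p0[3], dose) * exp(rnorm(20, sd = 0.1))

# At the starting values the two objectives coincide exactly ...
w0 <- 1 / model(p0[1], p0[2], p0[3], dose)
all.equal(fixed_obj(p0, w0, dose, reading), irls_obj(p0, dose, reading))  # TRUE

# ... but at any other parameter value they generally differ, so a step that
# lowers the fixed-weight objective need not lower the reweighted one
p1 <- p0 + c(0.01, -0.01, 0.005)
fixed_obj(p1, w0, dose, reading) == irls_obj(p1, dose, reading)  # generally FALSE
```

This matches the observed pattern: the first nls() evaluation agrees with optim() (same weights, same point), while the "improved" nls() solution scores worse when the objective is recomputed independently with reweighting.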