Jared Blashka
2010-Dec-13 17:54 UTC
[R] Complicated nls formula giving singular gradient message
I'm attempting to calculate a regression in R that I normally use Prism for, because the formula isn't pretty by any means.

Prism presents the formula (which is in the Prism equation library as "Heterologous competition with depletion", if anyone is curious) in these segments:

KdCPM = KdnM*SpAct*Vol*1000
R = NS+1
S = (1+10^(X-LogKi))*KdCPM+Hot
a = -1*R
b = R*S+NS*Hot+BMax
c = -1*Hot*(S*NS+BMax)
Y = (-1*b+sqrt(b*b-4*a*c))/(2*a)

I'm only trying to solve for NS, LogKi, and BMax; I have everything else (KdnM, SpAct, Vol, Hot).

I would just use the simple formula at the bottom and backsolve for the terms I'm looking for, but that formula leaves out the X term, which is contained within S, which is itself contained in both b and c. So I tried to substitute all the terms back into Y and got the following:

formula <- as.formula("Y ~ (-1*(((NS+1)*((1+10^(X-LogKi))*(KdnM*SpAct*Vol*1000)+Hot))+NS*Hot+BMax)+sqrt((((NS+1)*((1+10^(X-LogKi))*(KdnM*SpAct*Vol*1000)+Hot))+NS*Hot+BMax)*(((NS+1)*((1+10^(X-LogKi))*(KdnM*SpAct*Vol*1000)+Hot))+NS*Hot+BMax)-4*(-1*(NS+1))*(-1*Hot*(((1+10^(X-LogKi))*(KdnM*SpAct*Vol*1000)+Hot)*NS+BMax))))/(2*-1*(NS+1))")

But trying to use that formula gives me the singular gradient message, which I wasn't entirely surprised by:

fit <- nls(formula=formula, data=data, start=list(NS=.01, LogKi=-7, BMax=33000))
Error in nls(formula = formula, data = all_no_outliers, start = list(NS = 0.01,  :
  singular gradient

I've never worked with a formula this complicated in R. Is it even possible to do something like this? Any ideas or pointers in the right direction would be greatly appreciated.

Thanks,
Jared
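As an aside not from the original post: the substituted one-line formula can also be written as a small R function built from the Prism segments and handed to a Levenberg-Marquardt fitter such as nlsLM() from the minpack.lm package, which is often less prone to the "singular gradient" stop that plain nls() produces. A minimal sketch, using the data and constants Jared posts later in this thread; the function name is illustrative and convergence is not guaranteed, it still depends on reasonable starting values:

library(minpack.lm)

## Constants and data taken from later messages in this thread
KdnM  <- 0.8687
SpAct <- 4884
Vol   <- 0.125
Hot   <- 10191.0

dat <- data.frame(
  X = c(-13, -11, -10, -9.5, -9, -8.5, -8, -7.5, -7, -6.5, -6, -5,
        -13, -11, -10, -9.5, -9, -8.5, -8, -7.5, -7, -6.5, -6, -5),
  Y = c(3146, 3321, 2773, 2415, 2183, 1091, 514, 191, 109, 65, 54, 50,
        3288, 3243, 2826, 2532, 2060, 896, 517, 275, 164, 106, 202, 53))

## The Prism segments written out as a function of X and the three free parameters
competition <- function(X, NS, LogKi, BMax) {
  KdCPM <- KdnM * SpAct * Vol * 1000
  R <- NS + 1
  S <- (1 + 10^(X - LogKi)) * KdCPM + Hot
  a <- -R
  b <- R * S + NS * Hot + BMax
  cc <- -Hot * (S * NS + BMax)
  (-b + sqrt(b * b - 4 * a * cc)) / (2 * a)
}

## Levenberg-Marquardt fit; starting values are the ones used in the thread
fit <- nlsLM(Y ~ competition(X, NS, LogKi, BMax), data = dat,
             start = list(NS = 0.01, LogKi = -7, BMax = 10 * max(dat$Y)))
summary(fit)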
Jared Blashka
2010-Dec-13 22:24 UTC
[R] Complicated nls formula giving singular gradient message
Phil,

This is great! I had no idea nls would accept functions in the formula position. My apologies for not including data to reproduce my issue.

dat <- data.frame(
  X = c(-13.0, -11.0, -10.0, -9.5, -9.0, -8.5, -8.0, -7.5, -7.0, -6.5, -6.0, -5.0,
        -13.0, -11.0, -10.0, -9.5, -9.0, -8.5, -8.0, -7.5, -7.0, -6.5, -6.0, -5.0),
  Y = c(3146, 3321, 2773, 2415, 2183, 1091, 514, 191, 109, 65, 54, 50,
        3288, 3243, 2826, 2532, 2060, 896, 517, 275, 164, 106, 202, 53))

With your suggestion, I've changed the formula in nls to the following function:

myfunc <- function(NS, LogKi, BMax) with(dat, {
  KdCPM <- KdnM*SpAct*Vol*1000
  R <- NS+1
  S <- (1+10^(X-LogKi))*KdCPM+Hot
  a <- -1*R
  b <- R*S+NS*Hot+BMax
  c <- -1*Hot*(S*NS+BMax)
  (-1*b+sqrt(b*b-4*a*c))/(2*a)
})

But to get it to compute without errors, I also had to relax the control settings, because the step factor kept being reduced below the minimum factor. Looking at the trace of the nls, though, I don't see any changes after the 10th iteration or so; would relaxing the controls like this cause any issue that I'm not thinking of?

KdnM  <- .8687
SpAct <- 4884
Vol   <- .125
Hot   <- 10191.0

nls(Y ~ myfunc(NS, LogKi, BMax), data=dat,
    start=list(NS=.01, LogKi=-7, BMax=10*max(dat['Y'])),
    control=nls.control(warnOnly=TRUE, minFactor=1e-5), trace=TRUE)

Also, I've found that if the start value I provide for BMax is too inaccurate (e.g. max(dat['Y'])), nls generates the 'singular gradient' error message, which isn't something I'm used to. Usually nls is kind enough to inform me that the initial parameter estimates are what caused the problem. Has the error message changed in a recent update, or is this a different error message than the one I'm thinking of?

Thanks again for all your help,
Jared

On Mon, Dec 13, 2010 at 1:23 PM, Phil Spector <spector@stat.berkeley.edu> wrote:

> Jared -
>    nls will happily accept a function on the right hand side of the ~ --
> you don't have to write out the formula in such detail.
> What you provided isn't reproducible because you didn't provide data,
> and it's not clear what "Y" in the formula represents. Let me provide
> you with an admittedly simpler reproducible example.
>
> Suppose we want to estimate the model
>
>     response = v * dose / (k + dose)
>
> where response and dose are variables in a data frame called "dat",
> and v and k are the parameters to be estimated.
>
> Here's the data:
>
> dat = data.frame(dose=c(0.027,0.044,0.073,0.102,0.175,0.257,0.483,0.670),
>                  response=c(12.7,16.0,20.4,22.3,26.0,28.2,29.6,31.4))
>
> Normally we would fit such a simple model as
>
> nls(response ~ v*dose / (k + dose), data=dat, start=list(v=30,k=.05))
>
> Nonlinear regression model
>   model: response ~ v * dose/(k + dose)
>    data: dat
>        v        k
> 32.94965  0.04568
>  residual sum-of-squares: 1.091
>
> Number of iterations to convergence: 4
> Achieved convergence tolerance: 8.839e-07
>
> Alternatively, I can write a function like this:
>
> myfunc = function(v,k) with(dat, v * dose / (k + dose))
>
> and use the following call to nls:
>
> nls(response ~ myfunc(v,k), data=dat, start=list(v=30,k=.05))
>
> Nonlinear regression model
>   model: response ~ myfunc(v, k)
>    data: dat
>        v        k
> 32.94965  0.04568
>  residual sum-of-squares: 1.091
>
> Number of iterations to convergence: 4
> Achieved convergence tolerance: 8.839e-07
>
> which gets identical results.
>
> Admittedly this function is trivial, but perhaps in your case
> you could use the segments from Prism to construct a function
> for the right-hand side of your nls call.
>
> Hope this helps.
>                                     - Phil Spector
>                                       Statistical Computing Facility
>                                       Department of Statistics
>                                       UC Berkeley
>                                       spector@stat.berkeley.edu
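A side note on the start-value sensitivity Jared mentions above: one low-tech option is to try a handful of candidate BMax starting values and keep whichever fit achieves the smallest residual sum of squares, discarding any that fail. A base-R sketch, assuming dat, myfunc, and the constants are defined as in Jared's message; the candidate multipliers are arbitrary:

## Try several starting values for BMax; tryCatch() turns a failed fit into NULL
fits <- lapply(c(1, 3, 10, 30) * max(dat$Y), function(b0) {
  tryCatch(nls(Y ~ myfunc(NS, LogKi, BMax), data = dat,
               start = list(NS = 0.01, LogKi = -7, BMax = b0)),
           error = function(e) NULL)
})

## Drop the failures and keep the fit with the smallest residual sum of squares
fits <- Filter(Negate(is.null), fits)
best <- fits[[which.min(sapply(fits, deviance))]]
summary(best)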
Jared Blashka
2010-Dec-20 18:26 UTC
[R] Complicated nls formula giving singular gradient message
Though my topic is slightly old already, I feel it is worth posting an update on my situation.

I ended up being able to estimate the parameters for this problem, without having to worry as much about initial parameter estimates, by using AD Model Builder. It calculates the exact gradient using automatic differentiation, so it is able to avoid the singular gradient problem nls can give. I also used the R package PBSadmb, which allowed me to run AD Model Builder and retrieve the results from within R. Then I could do what I liked with the results: generate graphs, more analysis, etc.

Thanks to everyone who helped,
Jared

On Wed, Dec 15, 2010 at 8:47 AM, dave fournier <davef@otter-rsch.com> wrote:

> Jared Blashka wrote:
>
> Hi,
>
> Can you write a little note to the R list saying something like
>
>   Re: SOLVED [R] Complicated nls formula giving singular gradient message
>
>   I was able to estimate the parameters for this problem using AD Model
>   Builder, which calculates the exact gradient for you using automatic
>   differentiation and is thus able to avoid the singular gradient
>   problem I encountered in nls.
>
> That way other R users who might be able to take advantage of the
> software will hear about it.
>
> Cheers,
>
> Dave
>
>> Dave,
>>
>> That's exactly what I was looking for!
>>
>> Thanks for all your help!
>> Jared
>>
>> On Tue, Dec 14, 2010 at 7:13 AM, dave fournier <davef@otter-rsch.com> wrote:
>>
>> Jared Blashka wrote:
>>
>> The source code for that is in jared.tpl.
>> I changed from least squares to a concentrated likelihood so that you
>> could get estimated std devs via the delta method; they are in jared.std.
>> I rescaled the parameters so that the condition number of the Hessian
>> is close to 1. You can see the eigenvalues of the Hessian in jared.eva.
>> Your data are in jared.dat and the initial parameter values are in jared.pin.
>>
>> The parameter estimates with their estimated std devs are:
>>
>>   index   name         value       std dev
>>       1   NS        1.1254e-02   7.1128e-03
>>       2   LogKi    -8.8933e+00   8.2411e-02
>>       3   LogKi    -5.2005e+00   9.2179e-02
>>       4   LogKi    -7.2677e+00   7.7047e-02
>>       5   BMax      2.1226e+05   5.1699e+03
>>
>> How does it look?
>>
>> Cheers,
>> Dave
>>
>> Dave - AD Model Builder looks like a great tool that I can use, but I'm
>> curious if it can also perform global parameter estimations across
>> multiple data sets.
>>
>> In regards to the example I have provided, I have two similar data sets
>> that also need to be analyzed, but the values for NS and BMax between
>> the three data sets should be the same. Each data set has a unique
>> LogKi value, however. In R, I accomplished this by merging the three
>> data sets and adding an additional field for each data point that
>> identified which set it was originally from. Then in the regression
>> formula I specified the LogKi term as a vector: LogKi[dset]. The
>> results of the regression gave me one value each for NS and BMax, but
>> three LogKi values. I haven't had much time to look through the AD
>> Model Builder documentation yet, but are you aware if such an analysis
>> method is possible?
>>
>> Here's one such example of a data set:
>>
>> all <- structure(list(
>>   X = c(-13, -11, -10, -9.5, -9, -8.5, -8, -7.5, -7, -6.5, -6, -5,
>>         -13, -11, -10, -9.5, -9, -8.5, -8, -7.5, -7, -6.5, -6, -5,
>>         -13, -11, -10, -9.5, -9, -8.5, -8, -7.5, -7, -6.5, -6, -5,
>>         -13, -11, -10, -9.5, -9, -8.5, -8, -7.5, -7, -6.5, -6, -5,
>>         -13, -11, -10, -9.5, -9, -8.5, -8, -7.5, -7, -6.5, -6, -5,
>>         -13, -11, -10, -9.5, -9, -8.5, -8, -7.5, -7, -6.5, -6, -5,
>>         -13, -11, -10, -9.5, -9, -8.5, -8, -7.5, -7, -6.5, -6, -5,
>>         -13, -11, -10, -9.5, -9, -8.5, -8, -7.5, -7, -6.5, -6, -5),
>>   Y = c(3146L, 3321L, 2773L, 2415L, 2183L, 1091L, 514L, 191L, 109L, 65L, 54L, 50L,
>>         3288L, 3243L, 2826L, 2532L, 2060L, 896L, 517L, 275L, 164L, 106L, 202L, 53L,
>>         3146L, 3502L, 2658L, 3038L, 3351L, 3238L, 2935L, 3212L, 3004L, 3088L, 2809L, 1535L,
>>         3288L, 2914L, 2875L, 2489L, 3104L, 2771L, 2861L, 3309L, 2997L, 2361L, 2687L, 1215L,
>>         3224L, 3131L, 3126L, 2894L, 2495L, 2935L, 2516L, 2994L, 3074L, 3008L, 2780L, 1454L,
>>         3146L, 2612L, 2852L, 2774L, 2663L, 3097L, 2591L, 2295L, 1271L, 1142L, 646L, 68L,
>>         3288L, 2606L, 2838L, 1320L, 2890L, 2583L, 2251L, 2155L, 1164L, 695L, 394L, 71L,
>>         3224L, 2896L, 2660L, 2804L, 2762L, 2525L, 2615L, 1904L, 1364L, 682L, 334L, 64L),
>>   dset = c(1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
>>            1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
>>            2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
>>            2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
>>            2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
>>            3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
>>            3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
>>            3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3)),
>>   .Names = c("X", "Y", "dset"), class = "data.frame", row.names = c(NA, 96L))
>>
>> Thanks for your time,
>> Jared
>>
>> On Mon, Dec 13, 2010 at 7:37 PM, dave fournier <davef@otter-rsch.com> wrote:
>>
>> I always enjoy these direct comparisons between different software
>> packages. I coded this up in AD Model Builder, which is freely
>> available at admb-project.org. ADMB calculates exact derivatives via
>> automatic differentiation, so it tends to be more stable for these
>> difficult problems.
>>
>> The parameter estimates are
>>
>> # Number of parameters = 3  Objective function value = 307873.  Maximum gradient component = 1.45914e-06
>> # NS:
>> 0.00865232633386
>> # LogKi:
>> -8.98700621813
>> # BMax:
>> 237135.365156
>>
>> The objective function is just least squares.
>> So it looks like SAS did pretty well before dying.
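For completeness, here is a sketch of the shared-parameter fit Jared describes above, done directly in nls with the 'all' data frame from this message: NS and BMax are common to the three data sets, while LogKi is indexed by the dset column so each set gets its own estimate. The function name compfun and the starting values are illustrative rather than taken from the thread; the constants are the ones posted earlier, and convergence is not guaranteed from these starts.

## Constants from earlier in the thread
KdnM  <- 0.8687
SpAct <- 4884
Vol   <- 0.125
Hot   <- 10191.0

## Same Prism model as before, but LogKi is a length-3 parameter vector
## indexed by the dset column, so each data set gets its own LogKi.
compfun <- function(X, dset, NS, LogKi, BMax) {
  KdCPM <- KdnM * SpAct * Vol * 1000
  R <- NS + 1
  S <- (1 + 10^(X - LogKi[dset])) * KdCPM + Hot
  a <- -R
  b <- R * S + NS * Hot + BMax
  cc <- -Hot * (S * NS + BMax)
  (-b + sqrt(b * b - 4 * a * cc)) / (2 * a)
}

globalfit <- nls(Y ~ compfun(X, dset, NS, LogKi, BMax), data = all,
                 start = list(NS = 0.01, LogKi = c(-7, -7, -7),
                              BMax = 10 * max(all$Y)),
                 control = nls.control(warnOnly = TRUE, minFactor = 1e-5))
summary(globalfit)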