On 5 January 2015 at 21:08, Ben Bolker <bbolker at gmail.com> wrote:
> Roger Coppock <rcoppock <at> cox.net> writes:
>
>> When will "R" implement the "se.fit" option to the
>> predict.nls() function? Is there some schedule?
>
> I think this is unlikely to happen, ever (sorry). The exact method
> for finding confidence intervals on nonlinear fits would be
> to compute likelihood profiles for each prediction, which would
> be rather tedious.

I understand profile likelihoods for parameters, but what do you mean
by a profile likelihood for yet unobserved observations, i.e.
predictions?

> Another reasonable approach would be to use bootstrapping (see
> linked r-help thread below).
>
> An approximate approach would be to use the delta method.
>
> The nlstools package might be useful.

Alternatively, the propagate package: it provides a function, predictNLS,
that computes uncertainty measures for nls predictions using (first- and
second-order) Taylor approximations as well as simulation methods.

I think the appropriateness of a simple (first-order) Taylor/delta method
depends on the application. I can think of two important aspects: (1) if
the model function is close to linear, you might be ok; (2) if you are
interested in a prediction-type (rather than confidence) interval and the
residual spread dominates the uncertainty, any inaccuracy in the
model-function uncertainty (where you apply the approximation) is swamped
by the residual spread anyway. In a recent application on shelf-life
estimation that I worked on, both of these aspects were applicable and a
simple approximation was fine.

Cheers,
Rune
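[Editor's note: the first-order Taylor/delta approach discussed above can be sketched as follows. This is an illustration on made-up data, not the propagate implementation; the model function `f` and helper `delta_se` are hypothetical names introduced for the example.]

```r
## First-order delta-method standard errors for nls() predictions:
## se(f(x0, beta)) ~= sqrt(g' V g), where g is the gradient of the model
## function in the parameters at the estimates and V = vcov(fit).
set.seed(1)
x <- seq(0, 10, length.out = 50)
y <- 3 * exp(-0.4 * x) + rnorm(50, sd = 0.1)
fit <- nls(y ~ a * exp(-b * x), start = list(a = 2, b = 0.5))

## the model function, written out so it can be evaluated at
## perturbed parameter values (predict() cannot do that)
f <- function(beta, x) beta[["a"]] * exp(-beta[["b"]] * x)

delta_se <- function(f, fit, x0) {
  beta <- coef(fit)
  V <- vcov(fit)
  ## central-difference gradient of f in each parameter, at each x0
  g <- sapply(names(beta), function(p) {
    h <- 1e-6 * max(abs(beta[[p]]), 1)
    bp <- beta; bp[[p]] <- bp[[p]] + h
    bm <- beta; bm[[p]] <- bm[[p]] - h
    (f(bp, x0) - f(bm, x0)) / (2 * h)
  })
  g <- matrix(g, nrow = length(x0))       # length(x0) x npar
  sqrt(rowSums((g %*% V) * g))            # pointwise delta-method SEs
}

x0 <- c(0, 2, 5)
cbind(fit = f(coef(fit), x0), se.fit = delta_se(f, fit, x0))
```

Per aspect (1) above, this is only trustworthy to the extent that f is close to linear in the parameters over the sampling spread of the estimates.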
On 06 Jan 2015, at 07:40, Rune Haubo <rune.haubo at gmail.com> wrote:

> I understand profile likelihoods for parameters, but what do you mean
> by a profile likelihood for yet unobserved observations, i.e.
> predictions?

For (pointwise) confidence intervals on a predicted value, you can in
principle reparametrize so that the predicted value becomes a parameter
which, together with (k-1) of the original parameters, characterizes the
model completely; you then profile over the (k-1) parameters.

For predicting new observations, i.e. obtaining tolerance limits, things
are trickier; presumably you should set up a test for whether a new
observation can be assumed to come from the same model as the original
data, then work out the test statistic (but against which alternative?),
and the acceptance region would be the answer.

(There probably is a literature...)

> I think the appropriateness of a simple (first order) Taylor/Delta
> method depends on the application. [...] In a recent application on
> shelf life estimation that I worked on, both of these aspects were
> applicable and a simple approximation was fine.

Code to do this might be accepted. ("R" surely won't implement it all by
itself...)

-pd

-- 
Peter Dalgaard, Professor,
Center for Statistics, Copenhagen Business School
Solbjerg Plads 3, 2000 Frederiksberg, Denmark
Phone: (+45)38153501
Email: pd.mes at cbs.dk   Priv: PDalgd at gmail.com
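[Editor's note: the bootstrap route mentioned in the thread can be sketched as a residual bootstrap: resample centred residuals, refit, and take pointwise percentile intervals of the refitted curves. An illustration on made-up data; `boot_pred` is a hypothetical helper, not code from the linked r-help thread or from nlstools.]

```r
## Residual bootstrap for pointwise uncertainty of nls() predictions.
set.seed(1)
x <- seq(0, 10, length.out = 50)
y <- 3 * exp(-0.4 * x) + rnorm(50, sd = 0.1)
fit <- nls(y ~ a * exp(-b * x), start = list(a = 2, b = 0.5))

boot_pred <- function(fit, x, x0, B = 999, level = 0.95) {
  mu <- fitted(fit)
  res <- residuals(fit) - mean(residuals(fit))   # centre the residuals
  start <- as.list(coef(fit))
  sims <- replicate(B, {
    ystar <- mu + sample(res, replace = TRUE)    # resampled response
    bfit <- try(nls(ystar ~ a * exp(-b * x), start = start), silent = TRUE)
    if (inherits(bfit, "try-error")) rep(NA_real_, length(x0))
    else coef(bfit)[["a"]] * exp(-coef(bfit)[["b"]] * x0)
  })
  sims <- matrix(sims, nrow = length(x0))        # length(x0) x B
  alpha <- (1 - level) / 2
  ## pointwise percentile intervals over the bootstrap predictions
  t(apply(sims, 1, quantile, probs = c(alpha, 1 - alpha), na.rm = TRUE))
}

boot_pred(fit, x, x0 = c(1, 3, 5), B = 499)  # 95% percentile intervals
```

For a prediction-type interval rather than a confidence interval, one would additionally add a resampled residual to each bootstrap prediction before taking the quantiles.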
All I had in mind was that *if* you can set up an optimization over the
parameters such that the prediction for a particular set of predictors is
constrained to come out to a specified target value (i.e., nonlinear
equality constraints), then you can compute profile confidence intervals
on the predictions in the usual way. The nloptr package supposedly offers
access to some nonlinear optimizers with nonlinear equality constraints,
but I haven't tried them out.

On Tue, Jan 6, 2015 at 8:17 AM, peter dalgaard <pdalgd at gmail.com> wrote:

> For (pointwise) confidence intervals on a predicted value, you can in
> principle reparametrize so that the predicted value becomes a parameter
> which together with (k-1) of the original parameters characterize the
> model completely; then you profile over the (k-1) parameters.
> [...]
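[Editor's note: for simple models, the reparametrization Peter describes can be done by hand, with no constrained optimizer. If y = a*exp(-b*x), the mean at x0 is mu0 = a*exp(-b*x0); substituting a = mu0*exp(b*x0) makes mu0 itself a model parameter, which the profile-based confint() method for nls fits can then profile directly. A sketch on made-up data; the variable names are invented for the example.]

```r
## Profile-likelihood CI for a predicted mean via reparametrization:
## with a = mu0 * exp(b * x0), the model y = a * exp(-b * x) becomes
## y = mu0 * exp(-b * (x - x0)), so mu0 (the mean at x0) is a parameter.
library(MASS)  # profile-based confint() method for nls fits
set.seed(1)
x <- seq(0, 10, length.out = 50)
y <- 3 * exp(-0.4 * x) + rnorm(50, sd = 0.1)
x0 <- 4

fit0 <- nls(y ~ a * exp(-b * x), start = list(a = 2, b = 0.5))
mu0_hat <- coef(fit0)[["a"]] * exp(-coef(fit0)[["b"]] * x0)

## the same model, reparametrized so the prediction at x0 is a parameter
fit1 <- nls(y ~ mu0 * exp(-b * (x - x0)),
            start = list(mu0 = mu0_hat, b = coef(fit0)[["b"]]))
confint(fit1)["mu0", ]  # profile CI for the predicted mean at x0
```

The two fits describe the same curve, so fit1 converges to the same optimum as fit0; only the parametrization, and hence what gets profiled, differs. For models where the prediction cannot be solved for one parameter in closed form, the equality-constrained optimizers in nloptr would be needed instead.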