Hi:
Things work if your x is actually the response and your y is the
covariate (i.e., the labels in your posted data are interchanged). To
use the approach I describe below, you need RStudio and its manipulate
package (which ships only with RStudio; you won't find it on CRAN). You
can download and install RStudio freely from http://rstudio.org/ ; it
is available for Windows, Linux and Mac. To quote an old TV commercial
line in the US: 'Try it, you'll like it' :)
In the script below, the covariate has to be named x since the script
calls the curve() function, which plots a mathematical function of a
single variable named x. As a result, you need to interchange the
names of your vectors. Within RStudio, copy and paste the following in
chunks; in particular, copy and paste the code starting with
'manipulate(' and ending in ')' to generate the sliders for the
parameter estimates. The idea is to tweak the parameter values until
you get a fitted model that fits the observed data fairly closely.
When you achieve that, kill the slider box (upper right corner); the
estimates at the point where the sliders were closed are then saved in
a list called start, which you use in the subsequent nls() call.
After the model is fit, a sequence of x values is generated as new
data, the predicted values at those points are computed, and a plot of
the observed data with overlaid fitted model is produced.
You have to be a bit careful; occasionally, you'll get an error
Error in nls(y ~ a - b * exp(-c * x^d), start = start) :
singular gradient
If so, just try again with a different set of initial values, trying
not to overdo it. You don't need to be exact, just close.
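Since you mention wanting to loop over many files, one defensive
pattern (my addition, not part of the script below) is to wrap the
nls() call in tryCatch() so that a singular-gradient failure returns
NULL rather than aborting the whole loop. fit_weibull() here is a
hypothetical helper, just a sketch:

```r
# Sketch of a hypothetical helper: wrap nls() in tryCatch() so a
# failed fit returns NULL instead of stopping a loop over many files.
fit_weibull <- function(x, y, start) {
  tryCatch(
    nls(y ~ a - b * exp(-c * x^d), start = start),
    error = function(e) {
      message("nls failed: ", conditionMessage(e))
      NULL
    }
  )
}

x <- c(1, 2, 3, 4, 10, 20)
y <- c(1, 7, 14, 25, 29, 30)

# With c = 0 the model no longer depends on d, so the gradient matrix
# is singular at the starting values and nls() fails; the wrapper
# catches the error and returns NULL instead.
bad <- fit_weibull(x, y, list(a = 30, b = 3, c = 0, d = 5))
is.null(bad)   # TRUE
```

In a loop you can then skip the NULL fits and carry on with the files
that converged.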
library('manipulate')
### Weibull model:
x <- c(1,2,3,4,10,20)
y <- c(1,7,14,25,29,30)
## Copy and paste the code chunk below into RStudio,
## stopping with the line of hash marks
start <- list()
# Generate sliders to find good initial parameter estimates
manipulate(
{
plot(y ~ x)
a <- a0; b <- b0; c <- c0; d <- d0
curve(a-b*exp(-c*x^d), add=TRUE)
start <<- list(a=a, b=b, c=c, d=d)
},
a0 = slider(10, 50, step=0.1, initial = 30),
b0 = slider(0, 100, step=1, initial = 3),
c0 = slider(0, 0.1, step=0.01, initial = 0.01),
d0 = slider(0, 10, step=0.1, initial = 5)
)
########## Stop here ##########
# Fit the model using the estimates from the sliders
weibm <- nls(y ~ a-b*exp(-c*x^d), start = start)
summary(weibm)
# Make predictions over a sequence of x values and plot
ndata <- data.frame(x = seq(0, 20, by = 0.1))
wpred <- predict(weibm, newdata = ndata)
plot(y ~ x, pch = 16)
lines(ndata$x, wpred, col = 'red')
### Logistic:
start <- list()
manipulate(
{
plot(y ~ x)
a <- a0; b <- b0; d <- d0
curve(a/(1+b*exp(-d*x)), add=TRUE)
start <<- list(a=a, b=b, d=d)
},
a0 = slider(0, 50, step = 1, initial = 30),
b0 = slider(0, 20, step = 0.1, initial = 10),
d0 = slider(0, 1, step = 0.01, initial = 0.1)
)
logism <- nls(y ~ a/(1+b*exp(-d*x)), start = start)
summary(logism)
ldata <- data.frame(x = seq(0, 20, by = 0.1))
lpred <- predict(logism, newdata = ldata)
plot(y ~ x, pch = 16)
lines(ldata$x, lpred, col = 'red')
This is a good exercise to learn how the various parameters affect the
shape of the curve associated with a particular nonlinear model in one
variable. It also helps to read about the model in question and
understand the interpretation associated with each of the parameters.
That way, you can use the sliders to visualize the effects of changes
in one parameter when the others are held constant. If you find that
the boundaries of the sliders are too restrictive, you can always
reset them and try again. The code above came about from a few
iterations of tweaking ranges for individual parameters (either wider
or narrower as the case may be). I always keep the code in an editor
so that it's easy to change, then copy and paste into the R console.
If you redo the slider fitting, it's easier to reset the start list
first, too.
You'll also notice that one parameter in each of the fitted models is
nonsignificant, but you need to take into account that you're fitting
models with three or four parameters to six data points.
Aside: If you really meant to use y and x as response and covariate,
respectively, in your posted data example, the sliders will show you
that the two models are way off the mark, since y would start out
slowly and then jump exponentially. That would require a completely
different nonlinear model. You'll also notice that the estimates of a
and b in the Weibull model are an order of magnitude (10 times) higher
than what you stated.
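You can see that for yourself by plotting the posted data with its
original labels (this little check is mine, not part of the fitting
script above):

```r
# The posted data with its original labels: y creeps up by 1 at a
# time, then jumps near the right end -- not a shape that either the
# Weibull or the logistic model above can follow.
x <- c(1, 7, 14, 25, 29, 30)
y <- c(1, 2, 3, 4, 10, 20)
plot(y ~ x, pch = 16)
diff(y)   # 1 1 1 6 10: the increments explode at the end
```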
HTH,
Dennis
On Tue, Aug 30, 2011 at 8:31 PM, Marcio <mresende at ufl.edu> wrote:
> Hi guys,
> I have a data-set that fits well into a Weibull model y = a-b*exp(-c*x^d).
> I want to estimate the coefficients a, b, c and d, given x and y.
> Can you guys help me?
>
> Just as an example, I fit the data
> y <- c(1,2,3,4,10,20)
> and
> x <- c(1,7,14,25,29,30)
>
> According to this model above, and using the software CurveExpert, I got
> the estimates of a (2.95), b (2.90), c (2.82e-2) and d (2.97).
>
> This is just sample data, but I would like to do it in R because I can
> loop multiple times through all my files.
>
> I would also like to fit a logistic model y = a/(1+b*exp(-c*x))
>
> Can you help me?
> Thanks
>
> --
> View this message in context:
> http://r.789695.n4.nabble.com/Fitting-my-data-to-a-Weibull-model-tp3780169p3780169.html
> Sent from the R help mailing list archive at Nabble.com.