Hi, I wonder if anyone could advise me with this:

I've been trying to make a standard curve in R with lm() from some standards measured on a spectrophotometer, so that I can express the curve as a formula and obtain values for my treated samples by plugging readings into it, instead of judging things by eye against a curve drawn by hand.

The relationship is a curve, so I used the following formula:

model <- lm(Approximate.Counts ~ X..Light.Transmission +
            I(Approximate.Counts^2), data = Standards)

It gives me a pretty decent graph:

xyplot(Approximate.Counts + fitted(model) ~ X..Light.Transmission,
       data = Standards)

I'm pretty happy with it, and looking at the model summary, to my inexperienced eyes it seems pretty good:

lm(formula = Approximate.Counts ~ X..Light.Transmission +
       I(Approximate.Counts^2), data = Standards)

Residuals:
    Min      1Q  Median      3Q     Max
 -91.75  -51.04   27.33   37.28   49.72

Coefficients:
                          Estimate Std. Error t value Pr(>|t|)
(Intercept)              9.868e+02  2.614e+01   37.75   <2e-16 ***
X..Light.Transmission   -1.539e+01  8.116e-01  -18.96   <2e-16 ***
I(Approximate.Counts^2)  2.580e-04  6.182e-06   41.73   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 48.06 on 37 degrees of freedom
Multiple R-squared: 0.9956,     Adjusted R-squared: 0.9954
F-statistic:  4190 on 2 and 37 DF,  p-value: < 2.2e-16

I tried to put some 95% confidence interval lines on a plot, as advised by my tutor, to see how they looked, using a function I found in "The R Book":

se.lines <- function(model){
  b1 <- coef(model)[2] + summary(model)[[4]][4]
  b2 <- coef(model)[2] - summary(model)[[4]][4]
  xm <- mean(model[[12]][2])
  ym <- mean(model[[12]][1])
  a1 <- ym - b1*xm
  a2 <- ym - b2*xm
  abline(a1, b1, lty = 2)
  abline(a2, b2, lty = 2)
}
se.lines(model)

but when I do this on a plot I get an odd result: the lines look to me to lie in the same kind of area that my regression line did before I used polynomial regression by squaring "Approximate.Counts":

lm(formula = Approximate.Counts ~ X..Light.Transmission +
       I(Approximate.Counts^2), data = Standards)

Is there something else I should be doing? I've seen several ways of dealing with non-linear relationships, from logs of certain variables, to quadratic regression, to sin and other mathematical devices. I'm not completely sure I'm "allowed" to square the y variable; the book only squares the x variable in quadratic regression, which I tried first, and it fit quite well, though not as well as squaring Approximate.Counts does:

model <- lm(Approximate.Counts ~ X..Light.Transmission +
            I(X..Light.Transmission^2), data = Standards)

Any advice is greatly appreciated; it's the first time I've really had to deal with regression on coursework data that isn't a straight line.

Thanks,
Ben Ward.
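For reference, a minimal sketch of the kind of fit the question seems to be aiming at, assuming a data frame Standards with the two columns used above (the object names quad.model, xgrid and ci are just for illustration): a quadratic in the predictor only, since the response should not appear on the right-hand side of the formula, with a 95% confidence band drawn via predict(..., interval = "confidence") rather than the hand-built abline() calls in se.lines(), which assume a simple straight-line fit.

    ## Sketch only: assumes a data frame 'Standards' with columns
    ## Approximate.Counts and X..Light.Transmission.
    quad.model <- lm(Approximate.Counts ~ X..Light.Transmission +
                       I(X..Light.Transmission^2), data = Standards)

    ## Evaluate the fitted curve and its 95% confidence band on a grid of x values.
    xgrid <- data.frame(
      X..Light.Transmission = seq(min(Standards$X..Light.Transmission),
                                  max(Standards$X..Light.Transmission),
                                  length.out = 200))
    ci <- predict(quad.model, newdata = xgrid,
                  interval = "confidence", level = 0.95)

    ## Data, fitted curve, and dashed confidence limits (base graphics).
    plot(Approximate.Counts ~ X..Light.Transmission, data = Standards)
    lines(xgrid$X..Light.Transmission, ci[, "fit"])
    lines(xgrid$X..Light.Transmission, ci[, "lwr"], lty = 2)
    lines(xgrid$X..Light.Transmission, ci[, "upr"], lty = 2)

The same predict() call with interval = "prediction" gives the wider band that is appropriate when reading individual sample values off the curve.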
I've just realised the couple of graphs I put on here have been stripped off. If anyone needs to see them and can't follow my problem from the code alone, I can send them directly to anyone who thinks they can help but wants to see them.

Thanks,
Ben W.

On 18/02/2011 23:29, Ben Ward wrote:
> [original message quoted in full; snipped]
Hi Graham,

Thanks, that does explain a lot. I've been experimenting with taking logs of the data in the models to make the relationship linear, which it does, and that suggests to me that lm() is the right way to go. However, if I try to predict y values beyond about 60% on the x axis (light transmission), the y value for bacterial numbers crosses the axis and goes negative, which isn't possible on a practical level, as one can't have less than no bacteria in a culture. Practically, when I include the curve in my appendix I could say that anything above around 60% is 0, note that the negative results from the standard curve's predictions are not literal, and treat any negative bacterial count obtained from the curve as 0.

I've not had to deal with such plateauing curves before. The values I have for the curve don't go above 50% transmission, so anything above that is extrapolation, and my experiment probably won't produce x values above 50%, because the death of the culture proceeds slowly; that said, depending on the relative amounts of culture and antimicrobial I use, the rate could go faster or slower, so it could go above 50%. I was wondering whether non-linear regression would be better for such a thing, but I'm hesitant to go into it in more detail for now, because of the danger of drastically increasing complexity when what I currently have works and is very accurate within the range I will most likely be using it in.

Thanks,
Ben.

On 19/02/2011 15:39, Graham Smith wrote:
> Ben,
>
> Does this help?
>
> http://r-eco-evo.blogspot.com/2011/01/confidence-intervals-for-regression.html
>
> Not sure if it will work with your particular model, but may be worth
> a try.
>
> Graham
>
> On 18 February 2011 23:29, Ben Ward <benjamin.ward@bathspa.org> wrote:
>
> [original message quoted in full; snipped]
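For reference, a minimal sketch of the two options described above, again assuming the Standards data frame and some hypothetical new transmission readings: a fit on the log of the counts, whose back-transformed predictions cannot go below zero, and a simple truncation of negative predictions from an untransformed fit.

    ## Sketch only: assumes Approximate.Counts is strictly positive.
    ## Modelling log(counts) keeps back-transformed predictions above zero,
    ## so the standard curve cannot cross the x axis.
    log.model <- lm(log(Approximate.Counts) ~ X..Light.Transmission,
                    data = Standards)
    new <- data.frame(X..Light.Transmission = c(40, 55, 70))  # hypothetical readings
    exp(predict(log.model, newdata = new))  # predictions back on the count scale

    ## Alternatively, keep an untransformed fit and clamp any negative
    ## predictions to zero, as described in the reply above.
    quad.model <- lm(Approximate.Counts ~ X..Light.Transmission +
                       I(X..Light.Transmission^2), data = Standards)
    pmax(predict(quad.model, newdata = new), 0)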
model <- lm(Approximate.Counts ~ X..Light.Transmission +
            I(Approximate.Counts^2), data = Standards)

This might not be addressing the problem, but don't you have Y ~ X + Y^2 here? Isn't that a violation of the assumptions of an lm?

Also, for plotting a CI on a curve, look into ggplot2::geom_ribbon; it's much nicer than just plotting lines and is easy to use. had.co.nz should set you right for setting this up.

--
View this message in context: http://r.789695.n4.nabble.com/Confidence-Intervals-on-Standard-Curve-tp3313850p3315071.html
Sent from the R help mailing list archive at Nabble.com.
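A minimal sketch of the geom_ribbon() suggestion, assuming the same Standards data frame and a quadratic fit in the predictor (the names quad.fit and grid are just for illustration); the ribbon is built from predict(..., interval = "confidence"). geom_smooth(method = "lm", formula = y ~ x + I(x^2)) would draw a similar band automatically.

    library(ggplot2)

    ## Sketch only: quadratic fit in the predictor, with a 95% confidence
    ## ribbon instead of hand-placed abline() calls.
    quad.fit <- lm(Approximate.Counts ~ X..Light.Transmission +
                     I(X..Light.Transmission^2), data = Standards)
    grid <- data.frame(
      X..Light.Transmission = seq(min(Standards$X..Light.Transmission),
                                  max(Standards$X..Light.Transmission),
                                  length.out = 200))
    grid <- cbind(grid, as.data.frame(
      predict(quad.fit, newdata = grid, interval = "confidence")))

    ggplot(Standards, aes(X..Light.Transmission, Approximate.Counts)) +
      geom_point() +
      geom_ribbon(data = grid,
                  aes(x = X..Light.Transmission, ymin = lwr, ymax = upr),
                  inherit.aes = FALSE, alpha = 0.3) +
      geom_line(data = grid, aes(x = X..Light.Transmission, y = fit),
                inherit.aes = FALSE)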