(1) So finally, thanks to your help I have this:

summary(lm(x ~ 0 + I(t^2)))

And then I get this result:

================================================
Call:
lm(formula = x ~ 0 + I(t^2))

Residuals:
       Min         1Q     Median         3Q        Max
-3.332e-02 -9.362e-03  1.169e-05  1.411e-02  3.459e-02

Coefficients:
        Estimate Std. Error t value Pr(>|t|)
I(t^2) 0.0393821  0.0001487   264.8   <2e-16 ***
---
Signif. codes:  0 `***' 0.001 `**' 0.01 `*' 0.05 `.' 0.1 ` ' 1

Residual standard error: 0.01945 on 18 degrees of freedom
Multiple R-Squared: 0.9997,     Adjusted R-squared: 0.9997
F-statistic: 7.014e+04 on 1 and 18 DF,  p-value: < 2.2e-16
================================================

I see in MuPad that Delta^2 is 0.006813. Now, is not the standard error the
square root of Delta^2? Should I not get 0.069 as the standard error?

(2) When I use the model

summary(lm(x ~ I(t^2)))

I get (of course) another result with a slightly smaller Delta^2. But I do
not expect such an error, as this would mean that there was a systematic
error in our measurement of the distance, and if I understand the result of
R correctly, the error was 0.04 m, which is impossible:

=========================================================
Call:
lm(formula = x ~ I(t^2))

Residuals:
       Min         1Q     Median         3Q        Max
-0.0202520 -0.0116533 -0.0006036  0.0036699  0.0432987

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.0427606  0.0161085   2.655   0.0167 *
I(t^2)      0.0379989  0.0005367  70.801   <2e-16 ***
---
Signif. codes:  0 `***' 0.001 `**' 0.01 `*' 0.05 `.' 0.1 ` ' 1

Residual standard error: 0.01683 on 17 degrees of freedom
Multiple R-Squared: 0.9966,     Adjusted R-squared: 0.9964
F-statistic: 5013 on 1 and 17 DF,  p-value: < 2.2e-16
====================================================

What is going on here?
(Sorry, but I am only a high school teacher and have not much idea of
statistics.)

TIA,

JB
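A minimal R sketch, with simulated t and x standing in for the measured
data, showing how the two numbers in question are computed: the "Residual
standard error" is the square root of the residual sum of squares divided
by the residual degrees of freedom, while the coefficient's "Std. Error"
measures the uncertainty of the estimated slope and is typically much
smaller.

set.seed(1)
t <- seq(0.5, 5, length.out = 19)        # hypothetical times, not the real data
x <- 0.04 * t^2 + rnorm(19, sd = 0.02)   # hypothetical distances

fit <- lm(x ~ 0 + I(t^2))
s   <- summary(fit)

## "Residual standard error": sqrt of sum(residuals^2) / degrees of freedom
sqrt(sum(resid(fit)^2) / df.residual(fit))
s$sigma                                  # the same number, as reported by summary()

## "Std. Error" of the coefficient: sampling uncertainty of the estimated slope
coef(s)[, "Std. Error"]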
Is there any function to measure different copulas in R?  Would appreciate
any code or whatever information you may have.  Thank you in advance.

Thomas Holm
thomas at holm.cn
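One possible starting point, assuming a current version of the CRAN copula
package (it is not mentioned anywhere in this thread, so treat this as a
sketch rather than a recommendation), is to fit candidate copula families
by maximum likelihood and compare them, roughly along these lines:

## Sketch only: assumes the CRAN "copula" package is installed.
library(copula)

set.seed(42)
u <- rCopula(500, claytonCopula(2, dim = 2))   # simulated data on [0,1]^2

## Fit two candidate copula families by maximum likelihood
fit_clayton  <- fitCopula(claytonCopula(dim = 2), u, method = "ml")
fit_gaussian <- fitCopula(normalCopula(dim = 2),  u, method = "ml")

## Compare the fits, e.g. by AIC (smaller is better)
AIC(fit_clayton)
AIC(fit_gaussian)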
Michael A. Miller
2003-Nov-06 21:37 UTC
[R] newbie's additional (probably to some extent OT) questions
>>>>> "JB" == JB <jblazi at gmx.de> writes:

  > (1) So finally, thanks to your help I have this:
  > summary(lm(x ~ 0+I(t^2)))

  [...]

Would you post your data set?  It is hard for me to sort out what is
going on without seeing the input.

Regards, Mike
Thomas W Blackwell
2003-Nov-06 23:24 UTC
[R] newbie's additional (probably to some extent OT) questions
JB and Michael -

I'm coming into this without having reviewed the earlier emails (if there
are any) in this thread.  But I will guess that the data come from a high
school physics experiment on gravitational acceleration which drops a
weight dragging a paper tape through a buzzer with a piece of carbon paper
in it.  This prints periodic marks on the paper tape.  The data x are the
distances traveled at successive time points following time zero.

I think it's DYNAMITE that you're actually doing this data analysis.
It's what I always wanted to do as a high school student, but didn't have
the technical background then to carry out.  In fact ... come to think of
it ... I'm pretty sure I STILL HAVE my high school ticker tapes folded up
among my high school papers somewhere, 35 years later, still waiting to be
properly analyzed!

It makes sense to fit a no-intercept model with no linear term and only a
quadratic term.  The model formula  x ~ 0 + I(t^2)  does this correctly.
(If one wanted to account for friction, the linear term would come back in.)

Question 1 involves a distinction between the standard deviation of the
residuals and the standard error of an estimate for the single coefficient
in the model.  These are not at all the same concept.  The coefficient
estimate behaves like a sample average, and has much smaller sampling
variation over repeated experiments than one observation would.

In the no-intercept model, the standard deviation of the residuals is
stated as 0.01945 on 18 df.  In the model WITH an intercept, it is stated
as 0.01683 on 17 df.  I do not understand 'MuPad', but I observe an
apparent typographical error in which the second residual standard
deviation is reported instead as 0.006813.  All three of these numbers
represent the residual standard deviation.  Naturally, this is much larger
than the standard error of an estimate: 0.0001487 or 0.0005367.

Question 2 refers to the estimated value for the intercept in a model with
constant and quadratic terms only (no linear term).  The estimated value is
0.043 +- 0.016 (no units are given).  Gosh, I'm not surprised.  The
observations and the predictors are all non-negative.  Linear regression
produces an unbiased estimate, given its assumptions, but when there is
uncertainty in the predictors as well, it is known to be biased downward.
(Think of the "two regression lines".)  If some of that bias shows up in
the intercept, it's no surprise.

If this were a mission-critical data set, I would certainly plot the
residuals against the fitted values and look for empirical evidence to
judge whether the quadratic-only model is adequate.

HTH  -  tom blackwell  -  u michigan medical school  -  ann arbor  -
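A minimal sketch of the residuals-versus-fitted check described above,
assuming the data are already in vectors x and t as in the original model
formula:

## Sketch: assumes x (distances) and t (times) already exist in the workspace.
fit <- lm(x ~ 0 + I(t^2))

plot(fitted(fit), resid(fit),
     xlab = "Fitted values", ylab = "Residuals",
     main = "Is the quadratic-only model adequate?")
abline(h = 0, lty = 2)

## A systematic curve in this plot would suggest a missing term
## (e.g. friction, which would bring the linear term back in).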
At 07.11.2003 (00:24), Thomas W Blackwell wrote:

> JB and Michael -
>
> But I will guess that the data come from a high school physics experiment
> on gravitational acceleration which drops a weight dragging a paper tape
> through a buzzer with a piece of carbon paper in it.  This prints periodic
> marks on the paper tape.  The data x are the distances traveled at
> successive time points following time zero.

No.  It is a body (slider?) that is sliding down an inclined plane on an
air cushion.  We can determine the position of the slider pretty exactly
(the error should be less than 0.01 m).  The clock starts when we release
the body and it stops when the body passes a photo cell.  There are two
data sets, as we experimented with two different angles between the plane
and the table.  The measurement of the angles is probably a bit less exact
than the measurement of the position.

Here are the two data sets.  The positions are in the dx-list and are the
same in both experiments:

dx-list = c(1.60, 1.55, 1.50, ..., 0.70)   (19 values)

The corresponding dt-lists are

dt-list1 = c(6.44, 6.29, 6.1, 6.09, 6.02, 5.87, 5.68, 5.65, 5.52, 5.43,
             5.30, 5.20, 5.01, 4.88, 4.74, 4.61, 4.44, 4.36, 4.12)
dt-list2 = c(3.98, 3.86, 3.78, 3.72, 3.65, 3.59, 3.51, 3.45, 3.37, 3.28,
             3.22, 3.14, 3.07, 2.96, 2.89, 2.81, 2.74, 2.61, 2.55)

During the first series of measurements, the body bumped against a boundary
that was fixed on the inclined plane.  By bumping against this boundary,
the inclined plane, which has a much bigger mass than the body, was
slightly pushed, and after 15 measurements the position of this boundary
had changed by 0.01 m:

A------------------------------B---C

Here B should be a fixed position and A should be changed.  Because of our
mistake, B was changed a bit too.  C is a boundary that stops the body from
leaving the air cushion (as those sliding bodies are expensive).  Then,
when we took the second series of measurements, I "ordered" a pupil to stop
the body with his hand before it bumped against C.  And really, it seems to
me that the second series is more precise.

> I think it's DYNAMITE that you're actually doing this data analysis.

Why?  I always do this, but this year I started to involve a bit more
statistics.  I talked about how the method of least squares gives an
"unbiased estimate" and that also some hypothesis testing is done (when I
check whether the points lie on a parabola).  The pupils are 16 to 18 years
old.  They have to draw dx against (dt)^2 as their homework and have to fit
in a straight line.  This is the way we do linear regression.

> It's what I always wanted to do as a high school student, but didn't have
> the technical background then to carry out.  In fact ... come to think of
> it ... I'm pretty sure I STILL HAVE my high school ticker tapes folded up
> among my high school papers somewhere, 35 years later, still waiting to
> be properly analyzed!

From your explanations which follow this point, I do not understand a
single word (the termini technici are all unknown to me), but I suspect
that I would very much like to understand them.  Sigh.  Probably I should
read some work on statistics.
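A short sketch that fits the quadratic-through-the-origin model from the
start of the thread to the second series; the dx values are reconstructed
under the assumption that the elided positions step down by 0.05 m, since
only the endpoints and the count of 19 values are given above.

## Sketch: second series of measurements.  The dx values are assumed to
## decrease in steps of 0.05 m (only the endpoints and "19 values" are given).
dx <- seq(1.60, 0.70, by = -0.05)
dt <- c(3.98, 3.86, 3.78, 3.72, 3.65, 3.59, 3.51, 3.45, 3.37, 3.28,
        3.22, 3.14, 3.07, 2.96, 2.89, 2.81, 2.74, 2.61, 2.55)

fit <- lm(dx ~ 0 + I(dt^2))
summary(fit)

## The homework version: dx plotted against dt^2 should lie close to a
## straight line through the origin.
plot(dt^2, dx, xlab = "dt^2 (s^2)", ylab = "dx (m)")
abline(a = 0, b = coef(fit))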