On Apr 18, 2010, at 4:55 PM, anon anon wrote:
> Hello,
>
> I'm using var.test to do a simple F-test for equality of variances.
> I think
> I'm missing something small here:
>
>> m<-rnorm(10,sd=1)
>> n<-rnorm(5,sd=1)
>> var.test(m,n)
>
> F test to compare two variances
>
> data: m and n
> F = 13.7438, num df = 9, denom df = 4, p-value = 0.02256
> alternative hypothesis: true ratio of variances is not equal to 1
> 95 percent confidence interval:
> 1.543430 64.844094
> sample estimates:
> ratio of variances
> 13.74375
>
>> qf(.0250, 9, 4) * var(m) / var(n)
> [1] 2.912997  <- correct degrees of freedom (I think!) and does not
> match the var.test lower bound
>> qf(.0250, 4, 9) * var(m) / var(n)
> [1] 1.543430  <- matches the var.test lower bound with the degrees
> of freedom reversed
The var.test code is available for inspection:
getAnywhere(var.test.default)
It can be seen to divide the estimate by the theoretical qf value
rather than multiply. Was there a reason you decided to use the product?
BETA <- (1 - conf.level)/2
CINT <- c(ESTIMATE/qf(1 - BETA, DF.x, DF.y),
          ESTIMATE/qf(BETA, DF.x, DF.y))
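For illustration, here is a minimal sketch that plugs simulated data
into exactly that formula (set.seed is added only so the numbers are
reproducible, and the names ESTIMATE, DF.x, DF.y and BETA simply
mirror the var.test.default source quoted above):

set.seed(1)                            # for reproducibility only
m <- rnorm(10, sd = 1)
n <- rnorm(5, sd = 1)
ESTIMATE <- var(m)/var(n)              # ratio of sample variances
DF.x <- length(m) - 1                  # numerator df = 9
DF.y <- length(n) - 1                  # denominator df = 4
BETA <- (1 - 0.95)/2                   # 0.025 for a 95 percent interval
c(ESTIMATE/qf(1 - BETA, DF.x, DF.y),   # lower bound
  ESTIMATE/qf(BETA, DF.x, DF.y))       # upper bound
var.test(m, n)$conf.int                # same two numbers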
>>
>
> It seems that the F-test in var.test is getting the degrees of
> freedom mixed up. Outside calculators seem to agree with the qf
> function.
I would think that inverting the estimate should reverse the "correct"
order of the degrees of freedom, so it is not clear that your choice
for the CI calculation is the right one.
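A quick sketch of the algebra behind the apparent reversal (this is a
standard property of the F distribution, not anything specific to
var.test): qf(p, a, b) equals 1/qf(1 - p, b, a), so dividing by the
upper quantile with (DF.x, DF.y) is the same as multiplying by the
lower quantile with the degrees of freedom reversed:

qf(0.025, 4, 9)     # lower-tail quantile with the df reversed
1/qf(0.975, 9, 4)   # reciprocal of the upper-tail quantile
## the same value (up to floating point), so
## ESTIMATE/qf(0.975, 9, 4) == ESTIMATE * qf(0.025, 4, 9),
## which is why the second calculation matched the var.test lower bound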
>
> So, am I misunderstanding something?
--
David Winsemius, MD
West Hartford, CT