On Thu, 2007-10-18 at 10:18 -0500, Bret Collier wrote:
> All,
>
> I have been digging around in the help files and found bsamsize in
> Hmisc, but I am wondering if i am using it right.
>
> So, here is the question: given a binomial response (success/failure)
> for 2 groups (treatment/control) and I want to estimate the necessary
> sample size (n) to determine if the magnitude of the difference between
> treatments and controls is a 25% increase in success probability.
>
> Pilot data indicated that treatment success was ~0.32, control success
> ~0.09. So, using bsamsize (code below), I am interested in determining
> what sample size (n) is needed such that I can detect a 25% change in
> success between treatments/controls.
>
> I tried this but I can't shake the feeling I am doing something wrong,
>
> > power_b<-bsamsize(.25, .0, fraction =0.5, alpha=0.10, power=0.80)
> > power_b<-as.data.frame(round(power_b, digits=1))
> > power_b
> round(power_b, digits = 1)
> n1 20.6
> n2 20.6
>
> Any suggestions on approaches, places I should have looked would be
> helpful,
Your code above suggests that you want to be able to detect a 25%
increase from 0%, which is not what you want.
Presumably you want an 80% probability of detecting a 25% relative
improvement over the 9% success rate in the control group, which means
you would be looking for 11.25% in the treatment group.
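In R terms, that target proportion is just:

> 0.09 * 1.25   # a 25% relative increase over the control rate
[1] 0.1125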
Presuming that your subjects are randomized 1:1, you would use:
> bsamsize(0.09, 0.1125)
      n1       n2
2820.493 2820.493
which means you need 2821 subjects in EACH arm of the study.
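Note that bsamsize()'s defaults are fraction = 0.5, alpha = 0.05 and
power = 0.80. Your original call used alpha = 0.10; if that was
intentional, just pass it explicitly (output not shown here, but a more
lenient alpha will of course reduce the required n):

> bsamsize(0.09, 0.1125, fraction = 0.5, alpha = 0.10, power = 0.80)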
You can also use power.prop.test(), which is in the base 'stats'
package:
> power.prop.test(p1 = 0.09, p2 = 0.1125, power = 0.8)
     Two-sample comparison of proportions power calculation

              n = 2820.493
             p1 = 0.09
             p2 = 0.1125
      sig.level = 0.05
          power = 0.8
    alternative = two.sided

 NOTE: n is number in *each* group
Same answer, and in both cases we are presuming a two-sided hypothesis.
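One other convenience of power.prop.test() is that exactly one of n, p1,
p2, power and sig.level may be left as NULL, and it will solve for that
one. For example, if you could only enroll, say, 500 subjects per arm (a
purely illustrative number), you can check how much power that would buy
you for the same proportions:

> power.prop.test(n = 500, p1 = 0.09, p2 = 0.1125)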
I might also note that, given the pilot study data (roughly 0.32 versus
0.09), a 25% relative increase in the treatment group seems rather
conservative. This suggests that if this is actually part of a study
design, you might want to revisit the relative improvement you seek
and/or consider implementing interim stopping rules.
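For example, plugging your pilot estimates directly into
power.prop.test() gives a sense of the scale involved (output omitted,
but the resulting n per arm is a tiny fraction of the 2821 above):

> power.prop.test(p1 = 0.09, p2 = 0.32, power = 0.8)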
HTH,
Marc Schwartz