A question about comparing symptom reduction over time and across treatment groups. Each respondent is asked twice whether they experience symptoms (coded 1), at baseline and again 6 months later, and is randomly assigned to either the control or the intervention group. The 2x2x2 table below shows the frequency counts. As can be seen in the table margins, 21% of the control group show symptoms at baseline, dropping to 7% six months later; 48% of the intervention group show symptoms at baseline, dropping to only 4% at 6 months. The question is: given the information below, how do I test whether the intervention group shows the greater improvement? Intuitively, I would expect a drop of 44 percentage points to beat a drop of 14 percentage points in samples of about 30 respondents per group. But that is not so according to Levin & Serlin's method (Journal of Statistics Education, 8(2), 2000). Levin and Serlin say that correlated proportions in two groups can be tested with a simple 2x2 table of the frequency of changes:

                     Control   Intervention
  -----------------------------------------
  0 --> 1 (worse)       2            0
  1 --> 0 (better)      6           12
  -----------------------------------------

fisher.test(matrix(c(2, 6, 0, 12), ncol = 2)) gives a p-value of 0.15. I'd appreciate suggestions for alternative methods, perhaps a test of conditional independence in loglin()? But I am not sure how to do that (a rough sketch of what I have in mind follows the tables below).

Yuelin Li.

--------- Table -------------

Control Group (baseline by 6 months)

Frequency|
Percent  |
Row Pct  |       0|       1|  Total
---------+--------+--------+
       0 |     20 |      2 |     22
         |  71.43 |   7.14 |  78.57
         |  90.91 |   9.09 |
---------+--------+--------+
       1 |      6 |      0 |      6
         |  21.43 |   0.00 |  21.43
         | 100.00 |   0.00 |
---------+--------+--------+
Total         26        2       28
           92.86     7.14   100.00

Intervention Group (baseline by 6 months)

Frequency|
Percent  |
Row Pct  |       0|       1|  Total
---------+--------+--------+
       0 |     14 |      0 |     14
         |  51.85 |   0.00 |  51.85
         | 100.00 |   0.00 |
---------+--------+--------+
       1 |     12 |      1 |     13
         |  44.44 |   3.70 |  48.15
         |  92.31 |   7.69 |
---------+--------+--------+
Total         26        1       27
           96.30     3.70   100.00
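
For concreteness, here is how I entered the 2x2x2 table in R, together with my guess at the loglin() call. The margin specification below is only my attempt at the "all two-way interactions" (homogeneous association) model, and I am not sure whether testing the three-way term is really the right way to compare improvement across groups:

## group x baseline x 6-month status, counts taken from the tables above
counts <- array(c(20, 14, 6, 12,    # 6-month status = 0
                   2,  0, 0,  1),   # 6-month status = 1
                dim = c(2, 2, 2),
                dimnames = list(group    = c("control", "intervention"),
                                baseline = c("0", "1"),
                                month6   = c("0", "1")))

## Levin & Serlin's table of changers, recovered from the array
changes <- rbind(worse  = counts[, "0", "1"],   # 0 --> 1
                 better = counts[, "1", "0"])   # 1 --> 0
fisher.test(changes)                            # p-value of about 0.15, as above

## My guess for loglin(): fit all two-way margins; the LR statistic then
## tests the three-way (group x baseline x 6-month) interaction on 1 df,
## i.e. whether the baseline-by-6-month association differs by group
fit <- loglin(counts, margin = list(c(1, 2), c(1, 3), c(2, 3)), fit = TRUE)
pchisq(fit$lrt, df = fit$df, lower.tail = FALSE)

Is that the right specification, and does the three-way test actually answer the question about differential improvement? Any pointers would be appreciated.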