I am interested in knowing whether and how I can test the significance of the relationship between my continuous predictor variable (a covariate) and my binary response variable across the two groups defined by my categorical predictor variable, in a logistic regression model (glm). Specifically, can I determine whether the relationships are identical (the hypothesis of coincidence), or whether there is a difference between the levels of the categorical variable but the effect of the covariate is the same (the hypothesis of parallelism)?

I have previously performed an ANCOVA on these data using proportions of the response variable, but I know this is an incorrect application of the technique, since proportions (bounded by 0 and 1) violate the assumptions of linear regression. My ANCOVA model had the form

    Y = a + b1*x + b2*z + b3*(x*z)

where 'Y' is the predicted value of the response, 'x' is my continuous covariate and 'z' is a dummy variable coded (0,1) for the two levels of the categorical predictor. I can test the hypothesis of coincidence (that a single regression line fits all the data) by testing the terms 'z' and 'x*z' simultaneously, using the ANOVA table to generate an F-value for the combined terms. I can test the hypothesis of parallelism (that two intercepts are required but a single slope fits the data) by testing the term 'x*z' alone, again using the ANOVA table to generate an F-value.

My logistic regression model has the same form, except that 'Y' is now the logit of the response. Since I don't know all the math behind logistic regression, how can I evaluate the two hypotheses? The ANOVA table produced for a glm - anova(model, test="Chisq") - reports deviances and degrees of freedom. I can see how to determine whether each term is predictive - 1 - pchisq(deviance, df) - but how can I tell whether the relationship between 'x' and 'Y' is the same or different at each level of 'z'? That is, is the change in the log-odds of Y per unit of 'x' significantly different for z = 1 than for z = 0?

I have had no luck tracking this down using Google. Many thanks to those who take up my question!
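In case it helps to make the question concrete, here is a small self-contained sketch of the two nested-model comparisons I think I am after. The data are simulated and the names 'x', 'z' and 'y' just stand in for my real variables; I am not certain that the anova() calls at the end are the correct tests, which is really what I am asking.

## Simulated stand-ins: 'x' continuous covariate, 'z' two-level factor, 'y' binary response
set.seed(1)
n   <- 200
x   <- runif(n, 0, 10)
z   <- factor(rep(c("A", "B"), each = n / 2))
eta <- -1 + 0.4 * x + 0.8 * (z == "B") - 0.2 * x * (z == "B")  # true log-odds
y   <- rbinom(n, 1, plogis(eta))

## Full model: separate intercept and slope for each level of z
m.full     <- glm(y ~ x * z, family = binomial)
## Parallelism: drop only the interaction (two intercepts, one common slope)
m.parallel <- glm(y ~ x + z, family = binomial)
## Coincidence: drop 'z' and 'x:z' together (one curve for all the data)
m.single   <- glm(y ~ x, family = binomial)

## Likelihood-ratio (analysis of deviance) tests of the nested models
anova(m.single,   m.full, test = "Chisq")  # coincidence: 2-df test of 'z' and 'x:z' together
anova(m.parallel, m.full, test = "Chisq")  # parallelism: 1-df test of 'x:z' alone

My understanding is that these deviance differences would play the role of the F-tests from the ANCOVA table, but please correct me if that is wrong.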
Peter M Milne
Dept of Linguistics, University of Ottawa
Tel: (613) 562-5800 x1125 | Fax: (613) 562-5141
aix2.uottawa.ca/~pmiln099