Hello,

I've fitted a parametric survival model by

> survreg(Surv(Week, Cens) ~ C(Treatment, srmod.contr),
>         data = poll.surv.wo3)

where srmod.contr is the following matrix of contrasts:

     prep auto       poll self home
[1,]    1    1  1.0000000  0.0    0
[2,]   -1    0  0.0000000  0.0    0
[3,]    0   -1  0.0000000  0.0    0
[4,]    0    0 -0.3333333  1.0    0
[5,]    0    0 -0.3333333 -0.5    1
[6,]    0    0 -0.3333333 -0.5   -1

The summary of the model looks like this:

[snip]
                                 Value Std. Error      z         p
(Intercept)                     1.4644     0.0552 26.536 3.68e-155
C(Treatment, srmod.contr)prep   0.2117     0.1268  1.669  9.50e-02
C(Treatment, srmod.contr)auto   0.1490     0.1265  1.178  2.39e-01
C(Treatment, srmod.contr)poll  -0.7242     0.1639 -4.420  9.89e-06
C(Treatment, srmod.contr)self  -0.2960     0.1141 -2.593  9.51e-03
C(Treatment, srmod.contr)home   0.0494     0.1068  0.462  6.44e-01
Log(scale)                     -0.4451     0.0517 -8.609  7.36e-18
[snip]

Now I'd like to test which of my contrasts are significantly different
from zero. I assume that the p-values given by the summary are not
corrected for multiple testing, so I might correct them with p.adjust().
But since the contrasts are not independent, I'm not sure whether the
adjustment methods are valid here.

On the other hand, I've come across a procedure called "Scheffe's
multiple comparisons" (or S test), which is said to be appropriate for
multiple contrasts like these. Before I try to implement it: has anybody
already done that, or are there good reasons not to use it?

BTW, I tried to extract the SEs of the contrasts with se.contrast(), but
it does not work for survival models. Would they be the same as those
that appear in the summary above?

Thanks for any hints

Kaspar Pflugshaupt

--
Kaspar Pflugshaupt
Geobotanical Institute
ETH Zurich, Switzerland
http://www.geobot.umnw.ethz.ch
mailto:pflugshaupt at geobot.umnw.ethz.ch
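P.S. In case it clarifies the question, this is roughly what I had in mind
with p.adjust() -- just a sketch, assuming the fitted model is stored in an
object called srmod.fit (the name is only for illustration):

    library(survival)
    srmod.fit <- survreg(Surv(Week, Cens) ~ C(Treatment, srmod.contr),
                         data = poll.surv.wo3)
    ctab  <- summary(srmod.fit)$table       # columns: Value, Std. Error, z, p
    pvals <- ctab[grep("Treatment", rownames(ctab)), "p"]   # the 5 contrast rows
    p.adjust(pvals, method = "holm")        # Holm-adjusted p-values

The idea is simply to pull the five contrast p-values out of the summary
table and adjust those; my worry is whether that is legitimate when the
contrasts are correlated.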
Kaspar Pflugshaupt <pflugshaupt at cirsium.ethz.ch> writes:

> Now I'd like to test which of my contrasts are significantly different
> from zero. I assume that the p-values given by the summary are not
> corrected for multiple testing, so I might correct them with p.adjust().
> But since the contrasts are not independent, I'm not sure whether the
> adjustment methods are valid here.
>
> On the other hand, I've come across a procedure called "Scheffe's
> multiple comparisons" (or S test), which is said to be appropriate for
> multiple contrasts like these. Before I try to implement it: has anybody
> already done that, or are there good reasons not to use it?

The p.adjust() methods do work for correlated tests, at least the Holm and
Bonferroni adjustments, where I understand the theory. The Hochberg method
has some amount of handwaving associated with it, but it too is intended to
be applied to testing contrasts.

Scheffe's method also works with correlated tests. It is more conservative
than the other procedures because it protects against multiple testing on
any number of linear contrasts, by projecting the (F-based) multivariate
confidence region onto the direction given by each contrast. This argues
both for and against its use...

> BTW, I tried to extract the SEs of the contrasts with se.contrast(), but
> it does not work for survival models. Would they be the same as those
> that appear in the summary above?

se.contrast() works for aov models only. You probably need to massage the
covariance matrix of the estimates yourself.

--
   O__  ---- Peter Dalgaard             Blegdamsvej 3
  c/ /'_ --- Dept. of Biostatistics     2200 Cph. N
 (*) \(*) -- University of Copenhagen   Denmark      Ph:  (+45) 35327918
~~~~~~~~~~ - (p.dalgaard at biostat.ku.dk)             FAX: (+45) 35327907
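To be a bit more concrete about "massaging the covariance matrix": an
untested sketch, assuming your fit is stored in an object called srmod.fit
(a name chosen for illustration) and that srmod.fit$var holds the
variance-covariance matrix of the estimates, with Log(scale) as the last
row and column:

    V    <- srmod.fit$var          # covariance of (coefficients, Log(scale))
    beta <- coef(srmod.fit)        # (Intercept) plus the 5 contrast coefficients
    i    <- seq_along(beta)        # drop the Log(scale) row/column
    L    <- c(0, 1, -1, 0, 0, 0)   # e.g. "prep" minus "auto"; intercept gets 0
    est  <- sum(L * beta)          # estimate of the linear combination
    se   <- sqrt(drop(t(L) %*% V[i, i] %*% L))   # its standard error
    c(estimate = est, se = se, z = est / se)     # Wald z for this combination

Since each of your contrasts is itself one of the model coefficients, the
simple diagonal case, sqrt(diag(V[i, i])), should reproduce the standard
errors in your summary().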
On Thu, 8 Feb 2001, Kaspar Pflugshaupt wrote:

> Hello,
>
> I've fitted a parametric survival model by
>
> > survreg(Surv(Week, Cens) ~ C(Treatment, srmod.contr),
> >         data = poll.surv.wo3)
>
> where srmod.contr is the following matrix of contrasts:
>
>      prep auto       poll self home
> [1,]    1    1  1.0000000  0.0    0
> [2,]   -1    0  0.0000000  0.0    0
> [3,]    0   -1  0.0000000  0.0    0
> [4,]    0    0 -0.3333333  1.0    0
> [5,]    0    0 -0.3333333 -0.5    1
> [6,]    0    0 -0.3333333 -0.5   -1
>
> The summary of the model looks like this:
>
> [snip]
>                                  Value Std. Error      z         p
> (Intercept)                     1.4644     0.0552 26.536 3.68e-155
> C(Treatment, srmod.contr)prep   0.2117     0.1268  1.669  9.50e-02
> C(Treatment, srmod.contr)auto   0.1490     0.1265  1.178  2.39e-01
> C(Treatment, srmod.contr)poll  -0.7242     0.1639 -4.420  9.89e-06
> C(Treatment, srmod.contr)self  -0.2960     0.1141 -2.593  9.51e-03
> C(Treatment, srmod.contr)home   0.0494     0.1068  0.462  6.44e-01
> Log(scale)                     -0.4451     0.0517 -8.609  7.36e-18
> [snip]
>
> Now I'd like to test which of my contrasts are significantly different
> from zero. I assume that the p-values given by the summary are not
> corrected for multiple testing, so I might correct them with p.adjust().
> But since the contrasts are not independent, I'm not sure whether the
> adjustment methods are valid here.

The adjustment procedures are valid for dependent p-values; they wouldn't
be much use otherwise. To be precise, the Holm method is valid universally,
while the Hochberg method can sometimes slightly exceed the nominal Type I
error rate.

> On the other hand, I've come across a procedure called "Scheffe's
> multiple comparisons" (or S test), which is said to be appropriate for
> multiple contrasts like these. Before I try to implement it: has anybody
> already done that, or are there good reasons not to use it?

The Scheffe procedure maintains the Type I error over all possible
contrasts, which makes it more conservative. On the other hand, it uses the
estimated covariance among the parameters, which might make it less
conservative.

> BTW, I tried to extract the SEs of the contrasts with se.contrast(), but
> it does not work for survival models. Would they be the same as those
> that appear in the summary above?

Yes, that's why they are there :)

        -thomas

Thomas Lumley
Asst. Professor, Biostatistics
tlumley at u.washington.edu
University of Washington, Seattle
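If you do want to try the Scheffe idea for a Wald-type fit like this, a
minimal sketch of the chi-square (large-sample) version, assuming the fit
is stored in an object called srmod.fit (a name chosen for illustration)
and treating the five contrast coefficients as jointly normal:

    k    <- 5                            # dimension of the contrast space
    crit <- sqrt(qchisq(0.95, df = k))   # Scheffe-type cutoff for |z|, about 3.33
    ctab <- summary(srmod.fit)$table
    z    <- ctab[grep("Treatment", rownames(ctab)), "z"]
    abs(z) > crit                        # which contrasts pass at an overall 5% level

With the z values in your summary, only the "poll" contrast (|z| = 4.42)
would clear that cutoff. In small samples one might prefer an F-based
cutoff, sqrt(k * qf(0.95, k, df)) with df the residual degrees of freedom,
but for a survreg() fit the chi-square form is the natural asymptotic
analogue.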