Hi, I'm designing an experiment to compare the growth of several clones of a tree species. It will be a randomized complete block design. How should I decide which mean-comparison method to use? LSD, HSD, Tukey HSD, Duncan, ...? Thanks in advance
If you have a priori planned comparisons, you can just test those using linear contrasts, with no need to correct for multiple testing. If you do not, and you are relying on looking at the data and analysis to tell you which treatment means to compare, and you are considering several tests, then you should consider correcting for multiple testing. There is a large literature on the properties of the various tests. (Tukey HSD usually works pretty well for me.)

<rant> Why do people design experiments with a priori hypotheses in mind, yet test them using post hoc comparison procedures? It's as if they are afraid to admit that they had hypotheses to begin with! Far better to test what you had planned to test using the more powerful methods for planned comparisons, and leave it at that. </rant>

On Mon, 2007-07-16 at 09:52 +0200, Adrian J. Montero Calvo wrote [original question quoted above].

--
Simon Blomberg, BSc (Hons), PhD, MAppStat.
Lecturer and Consultant Statistician
Faculty of Biological and Chemical Sciences, The University of Queensland
St. Lucia, Queensland 4072, Australia

"The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from a given body of data." - John Tukey
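A minimal sketch in R of both approaches, on simulated data (all object, variable and factor names below are made up for illustration; this is not from the original thread):

## Simulated randomized complete block design: 'clone' is the treatment
## factor, 'block' the blocking factor, 'height' a fake response.
set.seed(1)
dat <- expand.grid(clone = factor(paste0("C", 1:6)),
                   block = factor(paste0("B", 1:4)))
dat$height <- rnorm(nrow(dat), mean = 10 + 0.3 * as.numeric(dat$clone), sd = 1)

fit <- aov(height ~ block + clone, data = dat)

## Post hoc: all pairwise comparisons with family-wise error control
TukeyHSD(fit, which = "clone")

## A priori planned comparison via a linear contrast, e.g. clone C1 vs the
## average of the other clones (no multiplicity correction if truly planned)
contrasts(dat$clone) <- cbind("C1 vs rest" = c(5, -1, -1, -1, -1, -1))
fit2 <- aov(height ~ block + clone, data = dat)
summary(fit2, split = list(clone = list("C1 vs rest" = 1)))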
<follow-on rant> Stepwise regression variable selection methods make multiple post hoc comparisons. The number of comparisons may be very large, vastly more than the half-dozen post hoc comparisons that are common in an experimental design context. There is a disconnect here. The multiple testing issue is noted in pretty much every discussion of analysis of experimental data, but not commonly mentioned (at least in older texts) in discussions of stepwise regression, best subsets and related regression approaches. One reason for this silence may be that there is no ready HSD-like fix. The SEs and t-statistics that lm() gives for the finally selected model can be grossly optimistic. Running the analysis with the same model matrix, but with y-values that are noise, can give a useful wake-up call.

John Maindonald
Centre for Mathematics & Its Applications, Australian National University, Canberra ACT 0200

On 16 Jul 2007, at 8:00 PM, Simon Blomberg wrote [quoted above].
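To illustrate the wake-up call described above, a small simulation sketch in the same spirit (names made up; in practice you would reuse your own model matrix): the response is pure noise, yet the model that step() selects usually reports several "significant" coefficients.

## Pure-noise response regressed on 20 unrelated candidate predictors.
set.seed(42)
n <- 100; p <- 20
X <- as.data.frame(matrix(rnorm(n * p), n, p))
names(X) <- paste0("x", 1:p)
X$y <- rnorm(n)                  # y has no relationship to any predictor

full   <- lm(y ~ ., data = X)
chosen <- step(full, direction = "backward", trace = 0)

summary(chosen)  # the reported SEs and t-statistics ignore the selection
                 # step, so some predictors of pure noise will usually
                 # look "significant"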
Dear Adrian,

You can look at the agricolae package, which provides the functions LSD.test, HSD.test, and Waller.test (for Waller-Duncan). A common criterion is that LSD is used when there are few treatments and HSD when there are many (more than about 5); the Waller-Duncan test is Bayesian and balances the two types of error (Type I and Type II). In experiments with clones, we prefer Waller-Duncan.

Felipe de Mendiburu
Statistician, International Potato Center, Lima, Peru

-----Original Message-----
From: Adrian J. Montero Calvo
Sent: Monday, July 16, 2007 2:52 AM
To: r-help at stat.math.ethz.ch
Subject: [R] LSD, HSD,...
[original question quoted above]
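A sketch of how those agricolae calls might look, continuing the simulated RCBD fit from the earlier sketch (the data and names are made up, and the exact function names and return structure may vary with the agricolae version; in recent versions the Waller-Duncan function is waller.test):

library(agricolae)

fit <- aov(height ~ block + clone, data = dat)   # 'dat' as simulated above

lsd <- LSD.test(fit, "clone")      # Fisher's LSD
hsd <- HSD.test(fit, "clone")      # Tukey's HSD
wal <- waller.test(fit, "clone")   # Waller-Duncan k-ratio t test

## Depending on the agricolae version, results print directly or are
## returned as lists; recent versions store the letter groupings here:
lsd$groups; hsd$groups; wal$groups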