Kim Milferstedt
2008-May-22 16:50 UTC
[R] AMOVA results from ade4 different than in the reference publication
Hello,

I am trying to run some AMOVA analyses with the amova function in the package ade4.

When running the example dataset provided in ade4, I noticed a difference between the published results from the same data (Excoffier et al. 1992) and what ade4 calculates. Below are the data for "within sample/population" from ade4 and from the haplotypic distance matrix in the paper:

source  name                 variance  % total variance  p-value  phi
ade4    within samples       0.478     75.390            1.000    0.246
paper   within populations   0.478     75.390            0.000    0.246

In the R output, it seems that the Std.Obs for "within samples" in ade4 is very high, which may be the reason why the difference is not significant. Std.Obs is not given in the publication. Is the high p-value an error in ade4, or am I missing something?

Thanks already for your help!

Kim

--
___________________________________________
Kim Milferstedt, PhD
Postdoctoral Researcher
University of Illinois at Urbana-Champaign
Department of Microbiology
C207 CLSL
601 S. Goodwin Avenue
Urbana, IL 61801
phone: 001-217-244-0721
email: milferst at uiuc.edu
Ben Bolker
2008-May-22 17:18 UTC
[R] AMOVA results from ade4 different than in the reference publication
Kim Milferstedt <milferst <at> uiuc.edu> writes:

> Hello,
>
> I am trying to run some AMOVA analyses with the amova function in the
> package ade4.
>
> When running the example dataset provided in ade4, I noticed a
> difference between the published results from the same data (Excoffier
> et al. 1992) and what ade4 calculates.

A generic answer to this type of question is that you should contact the package maintainers (see help(package="ade4") for their e-mail addresses) -- especially for these kinds of specialized questions (most R-helpers are not population geneticists and wouldn't know an AMOVA if it bit them) ...

On the other hand, running example(randtest.amova) [which is what I assume you did, although you don't say], I do confirm your results, and furthermore the plot that is produced looks like the p-value *should* be very low. The Std.Obs is large in magnitude (which I think is a *good* thing from the point of view of rejecting the null hypothesis -- it seems to be the analogue of a t- or z-score), but negative. Perhaps the package authors did an upper one-tailed test by mistake? (I would follow through with this, but it looks like the important stuff goes through into C code and I can't take the time to dig that deep right now.)

Bottom line: looks like a mistake; take it up with the package maintainers.

good luck,
Ben Bolker
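[Editor's note: the tail-direction guess above is easy to illustrate in a language-neutral way. The sketch below is plain Python, not ade4's actual C code; all names and the simulated numbers are hypothetical. It shows how a statistic sitting far below its permutation distribution -- i.e. a large *negative* standardized observation, as in the ade4 output -- yields p close to 1 under an upper-tailed rule but a highly significant p under a lower-tailed rule.]

```python
# Hypothetical sketch of a permutation test, NOT ade4's implementation:
# demonstrates how choosing the wrong tail flips the p-value.
import random

random.seed(1)

# Simulated null distribution of the test statistic (999 permutations)
sims = [random.gauss(0.0, 1.0) for _ in range(999)]
obs = -5.0  # observed statistic: far in the LOWER tail of the null

def p_upper(obs, sims):
    """Upper-tailed p-value: fraction of permuted values >= observed."""
    return (sum(s >= obs for s in sims) + 1) / (len(sims) + 1)

def p_lower(obs, sims):
    """Lower-tailed p-value: fraction of permuted values <= observed."""
    return (sum(s <= obs for s in sims) + 1) / (len(sims) + 1)

print(p_upper(obs, sims))  # close to 1: testing the wrong tail
print(p_lower(obs, sims))  # very small: the correct tail here
```

Under this sketch's assumptions, an upper-tailed rule applied to a strongly negative statistic reports non-significance even though the observation is extreme, which matches the p = 1.000 vs. p = 0.000 discrepancy in the thread.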