Dear all,

I would like some opinions/advice and possible references on the following subject.

Example: if I draw, with replacement, 3 items out of a sample of size 10, there are 10^3 possible results. If I draw, with replacement, 6 items out of a sample of size 20, there are 20^6 possible results. So what I would like to know is this: if we increase the sample size (and increase the number of items drawn in the same proportion), does each single permutation become more robust, since the number of possible results grows exponentially?

By "more robust" I mean the following: if we want to test a hypothesis on a large sample and use permutations to build the null distribution that describes the by-chance situation, I think we can have more confidence doing n permutations on a sample of size 2x with 2a items than doing the same number of permutations on a sample of size x with a items.

Thanks in advance,
João Fadista
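P.S. To make the counting concrete, here is a minimal Python sketch (purely illustrative; the function and variable names are my own, and the sample mean stands in for whatever test statistic one actually uses). It counts the two resampling spaces from the example above and then approximates a null distribution with a fixed number of random with-replacement draws:

    import random

    def n_possible_draws(sample_size, k):
        """Number of ordered draws of k items, with replacement,
        from a sample of the given size: sample_size ** k."""
        return sample_size ** k

    # The two cases from the example above.
    print(n_possible_draws(10, 3))   # 10^3 = 1000
    print(n_possible_draws(20, 6))   # 20^6 = 64,000,000

    def null_distribution(sample, k, n_perm, statistic, seed=0):
        """Approximate a null distribution with n_perm random draws
        of k items, with replacement, from `sample`."""
        rng = random.Random(seed)
        return [statistic([rng.choice(sample) for _ in range(k)])
                for _ in range(n_perm)]

    def mean(xs):
        return sum(xs) / len(xs)

    small = list(range(10))   # sample of size x = 10, draw a = 3 items
    large = list(range(20))   # sample of size 2x = 20, draw 2a = 6 items
    null_small = null_distribution(small, 3, n_perm=1000, statistic=mean)
    null_large = null_distribution(large, 6, n_perm=1000, statistic=mean)

Note that both calls use the same number of Monte Carlo draws (n_perm = 1000); only the size of the underlying space differs (10^3 versus 20^6), which is exactly the comparison I am asking about.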