Dear Contributors,

I am conducting an epoch analysis and tried to test the significance of my result using a randomization test.

Since I have 71 events, I randomly selected another 71 events, making sure that none of the dates in the random events coincides with any of the dates of the real events. Following the code I found here (https://www.uvm.edu/~dhowell/StatPages/R/RandomizationTestsWithR/Random2Sample/TwoIndependentSamplesR.html), I combined these two data sets and used them to generate another 5000 resamples. I then plotted the mean differences for the 5000 randomly generated resamples and marked on the graph the mean difference between the 71 real epochs and the 71 randomly selected epochs.

Since the two-tailed test shows that the observed mean difference falls at the extreme of the randomly generated values, I concluded that my result is statistically significant.

I am attaching the graph to assist you with your suggestions, and I can attach both my code and the real and randomly generated events if you ask for them.

My request is that you help me to understand whether I am on the right track or not. This is the first time I am doing this, and until the experts decide, I am not quite sure whether I am right or not.

Many thanks for your kind concern.

Best
Ogbos
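[For illustration, a minimal sketch of the kind of two-sample randomization described above, in the spirit of the linked Howell page. The vectors real and random, the placeholder data, and the choice of 5000 resamples are assumptions made for this example, not taken from the original code.]

## Placeholder data standing in for the response at the 71 real and 71 random event dates
set.seed(1)
real   <- rnorm(71, mean = 0.5)
random <- rnorm(71, mean = 0)

combined <- c(real, random)
n1       <- length(real)
obs_diff <- mean(real) - mean(random)   # observed mean difference

## Re-randomize the group labels 4999 times; the observed assignment counts as the 5000th
nreps      <- 5000
perm_diffs <- replicate(nreps - 1, {
  idx <- sample(length(combined), n1)           # randomly reassign 71 values to the "real" group
  mean(combined[idx]) - mean(combined[-idx])    # mean difference under that relabelling
})
all_diffs <- c(obs_diff, perm_diffs)

## Randomization distribution with the observed difference marked
hist(all_diffs, breaks = 50,
     main = "Randomization distribution of the mean difference",
     xlab = "Mean difference")
abline(v = obs_diff, col = "red", lwd = 2)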
Ogbos,

You do not seem to have received a reply over the list yet, which might be due to the fact that this is rather a stats question than an R question. Nor did your attachment (figure) get through - see the posting guide.

I'm not familiar with epoch analysis, so I am not sure what exactly you are doing or trying to achieve, but some general thoughts:

* You do NOT want to restrict your re-randomizations so that "none of the dates corresponds with the ones in the real event" - as a general principle, the true data must themselves be an admissible re-randomization. You seem to have excluded that possibility (and, at the same time, a lot of other randomizations that might have occurred, e.g. dates 1 and 2 swapped but all others the same), thereby rendering the test invalid. Any restriction you place on your re-randomizations must have applied to the original randomization as well.

* If your data are observational (which I suspect, but am not sure of), Edgington & Onghena (2007) would rather call this a permutation test - the difference being that you have to make strong assumptions about the nature of the data (similar to parametric tests), assumptions which hold by design in randomization tests. It may be a merely linguistic distinction, but it is important to note which assumptions have to be made, even if only implicitly.

* I'm not sure what you mean by the "mean differences" of the events - are those two groups you are comparing? If so, that seems reasonable; just make sure the test statistic you use is sensible and sensitive to the alternatives you are most interested in. The randomization/permutation test will never prove that, e.g., the means are significantly different, only that there is SOME difference. By selecting an appropriate test statistic you can influence what will show up more easily and what will not, but you can never be sure (unless you make strong assumptions about everything else, as in many parametric tests).

* For any test statistic, you would then determine the proportion of its values among the 5000 samples that are as large as or larger than the one observed (or as small or smaller, or either, depending on the nature of the test statistic and whether you aim for a one- or a two-sided test). That is your p value; if it is small enough, conclude significance. One point that is at least conceptually important: the observed test statistic is always part of the re-randomizations (i.e. your 5000), so you really only generate 4999 plus the one you observed. Otherwise the test may be more or less liberal. Your p value is hence never smaller than 1/n, where n is the total number of samples you looked at (including the observed one); a p value of 0 is not possible in randomization tests (nor in other tests, of course). A short sketch of this computation follows below.

I hope this is helpful, but you will need to go through these points and check against your own setup whether you adhered to the principles, which is impossible for me to judge from the information provided (and I won't be able to go through extensive code to check either).

Michael
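[Continuing the illustrative sketch above - all_diffs and obs_diff are the hypothetical objects defined there - the last point could be implemented roughly as follows. Two-sidedness is obtained here via absolute values, which is one common choice rather than the only one.]

## Two-sided p value: proportion of all resamples (the observed one included) whose
## absolute mean difference is at least as large as the observed one
p_two_sided <- mean(abs(all_diffs) >= abs(obs_diff))
p_two_sided

## Smallest p value this test can ever report: 1 over the number of resamples, observed included
1 / length(all_diffs)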
Dear Michael,

This is great - thank you! I have not really received any response other than yours.

Some time ago I already included what I have in a paper submitted to a journal, and I am awaiting the reviewers' feedback. I will compare their comments with your input here, work out the corrections to make, and probably return to the list for additional help.

Best wishes
Ogbos