Lorenzo Isella wrote:
> Dear All,
> Apologies if my questions are too basic for this list.
> I am given a set of data corresponding to list of detection times (real,
> non-integer numbers in general) for some events, let us say nuclear
> decays to fix the ideas.
> It is a small dataset, corresponding to about 400 nuclear decay times.
> I would like to test the hypothesis that these decay times are
> Poissonian-distributed.
> What is the best way of dealing with the data? Should I consider the
> cumulative number of detections vs. time, the time intervals between two
> consecutive detections, or something else?
> Many thanks
>
>
Perhaps: sort the data, compute the inter-event times (which should be
exponentially distributed if the events follow a homogeneous Poisson
process), and compare them with an exponential distribution:

difft <- diff(sort(times))
ks.test(difft, "pexp", rate = 1/mean(difft))
This is not quite right, because we have estimated the rate from the
data -- from ?ks.test:
If a single-sample test is used, the parameters specified in '...'
must be pre-specified and not estimated from the data. There is
some more refined distribution theory for the KS test with
estimated parameters (see Durbin, 1973), but that is not
implemented in 'ks.test'.
But perhaps not a bad start.
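One way to work around the estimated-parameter caveat is a parametric
bootstrap (in the spirit of the Lilliefors correction): simulate many
exponential samples at the fitted rate, re-estimate the rate on each, and
compare the observed KS statistic to the simulated ones. A minimal sketch,
where `times` stands in for the real detection times (simulated here, since
the original data are not available):

```r
set.seed(1)
# Stand-in for the real detection times: 400 events of a Poisson process
times <- cumsum(rexp(400, rate = 2))

difft <- diff(sort(times))          # inter-event times
rate.hat <- 1 / mean(difft)         # MLE of the exponential rate
D.obs <- unname(ks.test(difft, "pexp", rate = rate.hat)$statistic)

# Parametric bootstrap: redo the whole fit-and-test on simulated data
B <- 1000
D.boot <- replicate(B, {
  x <- rexp(length(difft), rate = rate.hat)
  unname(ks.test(x, "pexp", rate = 1 / mean(x))$statistic)
})

# Bootstrap p-value: fraction of simulated statistics >= observed
p.boot <- mean(D.boot >= D.obs)
p.boot
```

This gives an honest p-value because each bootstrap replicate repeats the
rate estimation, mimicking what was done to the observed data.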
--
View this message in context:
http://www.nabble.com/Detection-Times-and-Poisson-Distribution-tp26080592p26083370.html
Sent from the R help mailing list archive at Nabble.com.