similar to: Confusion about ks.test() handling of ties and exact vs approximate results

Displaying 20 results from an estimated 10000 matches similar to: "Confusion about ks.test() handling of ties and exact vs approximate results"

2023 Mar 01
1
Incorrect behavior of ks.test and psmirnov functions with exact=TRUE
Hi, I've noticed what I think is incorrect behavior of the stats::psmirnov function, and consequently of ks.test when run in exact mode. For example: psmirnov(1, sizes=c(50, 50), z=1:100, two.sided = FALSE, lower.tail = F, exact=TRUE) produces 2.775558e-15. However, the exact value should be 1/choose(100, 50), which is about 9.9e-30. While the absolute error is small, the relative error is
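The report's own numbers can be checked directly in R. This is just a sketch reproducing the call quoted above (note that psmirnov()'s argument names, e.g. two.sided vs. alternative, have varied between R versions, so the spelling may need adjusting):

  psmirnov(1, sizes = c(50, 50), z = 1:100, two.sided = FALSE,
           lower.tail = FALSE, exact = TRUE)   # reported: 2.775558e-15
  1 / choose(100, 50)                          # approx. 9.91e-30, the value the poster expects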
2024 Apr 07
0
Questions about ks.test function {stats}
Dear R-help, I hope this email finds you well. My name is Ziyan, and I am a graduate student at Zhejiang University. My research involves ks.test in the stats package. Based on the code, I have two main questions; could you provide me with some more information? I downloaded different versions of the R source code from the R Project website (https://www.r-project.org/). By reading
2006 Mar 22
1
Modified KS test to handle ties.
Dear All, I wonder if there is now an implementation of a modified KS test that can handle ties? Steve.
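There is no answer in this snippet, but the behaviour being asked about is easy to reproduce. A minimal illustration with hypothetical, heavily tied data (how ks.test() reacts to the ties, a warning versus an exact conditional p-value, depends on the R version):

  set.seed(7)
  x <- sample(1:5, 30, replace = TRUE)   # discrete-looking data with many ties
  y <- sample(1:5, 30, replace = TRUE)
  ks.test(x, y)                          # older R warns about ties here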
1999 Apr 09
2
KS test from ctest package
This question is mainly aimed at Kurt Hornik as author of the ctest package, but I'm cc'ing it to r-help as I suspect there will be other valuable opinions out there. I have been attempting two-sample Kolmogorov-Smirnov tests using the ks.test function from the ctest package (ctest v.0.9-15, R v.0.63.3 win32). I am comparing fish length-frequency distributions. My main reference for the
2008 Mar 08
1
ks.test troubles
Hi there! I have two slightly different data sets. One is a computer-based test on people; the other is a paper-and-pencil test. Two boxplots show me that the data are almost the same. So now I'd like to know whether I can treat all the data as one sample, by testing with ks.test: ==== > ks.test(el$angststoer, fl$angststoer) Two-sample Kolmogorov-Smirnov test data: el$angststoer and fl$angststoer D =
2002 Mar 26
3
ks.test - continuous vs discrete
I frequently want to test for differences between animal size-frequency distributions. The obvious test (I think) to use is the Kolmogorov-Smirnov two-sample test (provided in R as the function ks.test in package ctest). The KS test is for continuous variables, and this obviously includes length, weight, etc. However, limitations in measuring (e.g. length to the nearest cm/mm, weight to the nearest
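A small sketch of the point being raised, with hypothetical data: rounding a continuous measurement to the nearest unit introduces ties, which is exactly what ks.test() objects to.

  set.seed(3)
  len1 <- round(rnorm(40, mean = 25, sd = 4))   # lengths to the nearest cm
  len2 <- round(rnorm(40, mean = 27, sd = 4))
  ks.test(len1, len2)                           # ties warning (version-dependent)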
2010 Aug 20
3
how to interpret KS test
Dear R users, I am using the KS test to compare two distributions of the same variable (temperature) from two different time periods. H0: the two distributions are equal. H1: the two distributions are different. ks.test(temp12, temp22) Two-sample Kolmogorov-Smirnov test data: temp12 and temp22 D = 0.2047, p-value < 2.2e-16 alternative hypothesis: two-sided Warning message: In
2007 Jan 14
2
ks.test not working?
Hi, I am trying the following: library(ismev) library(evd) fit <- gev.fit(x,show=FALSE) ks.test(x,pgev,fit$mle[1],fit$mle[2],fit$mle[3]) but I am getting: Warning message: cannot compute correct p-values with ties in: ks.test(x, pgev, fit$mle[1], fit$mle[2], fit$mle[3]) where x is: [1] 239 38 1 43 22 1 5 9 15 6 1 9 156 25 3 100 6 [18] 5 100
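For what it's worth (my note, not from the thread): the warning is triggered because x contains repeated values, which a continuous GEV model assigns probability zero. Using only the values visible in the snippet (the original vector may be longer):

  x <- c(239, 38, 1, 43, 22, 1, 5, 9, 15, 6, 1, 9, 156, 25, 3, 100, 6, 5, 100)
  any(duplicated(x))   # TRUE, hence the ties warning from ks.test()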
2009 Jul 22
0
ks.test - The two-sample two-sided Kolmogorov-Smirnov test with ties (PR#13848)
Full_Name: Thomas Waterhouse Version: 2.9.1 OS: OS X 10.5.7 Submission from: (NULL) (216.239.45.4) ks.test uses a biased approximation to the p-value in the case of the two-sample test with ties in the pooled data. This has been justified in R in the past by the argument that the KS test assumes a continuous distribution. But the two-sample test can be extended to arbitrary distributions by a
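One standard way to obtain a p-value that is valid in the presence of ties is a permutation test that recomputes D over random relabellings of the pooled sample, thereby conditioning on the observed ties. The sketch below is an illustration of that idea, not the reporter's proposed fix; perm_ks_pvalue and the data are hypothetical.

  perm_ks_pvalue <- function(x, y, B = 2000) {
    d_obs <- suppressWarnings(ks.test(x, y)$statistic)
    pooled <- c(x, y)
    n <- length(x)
    d_perm <- replicate(B, {
      idx <- sample(length(pooled), n)
      suppressWarnings(ks.test(pooled[idx], pooled[-idx])$statistic)
    })
    (1 + sum(d_perm >= d_obs)) / (B + 1)   # add-one convention for permutation p-values
  }

  set.seed(1)
  x <- sample(1:4, 30, replace = TRUE)     # tied, discrete data
  y <- sample(1:4, 30, replace = TRUE)
  perm_ks_pvalue(x, y)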
2001 Jul 01
0
ks.test doesn't compute correct empirical distribution if there are ties in the data (PR#1007)
Full_Name: Andrew Grant McDowell Version: R 1.1.1 (but source in 1.3.0 looks fishy as well) OS: Windows 2K Professional (Consumer) Submission from: (NULL) (194.222.243.209) In article <xeQ_6.1949$xd.353840@typhoon.snet.net>, johnt@tman.dnsalias.com writes >Can someone help? In R, I am generating a vector of 1000 samples from >Bin (1000, 0.25). I then do a Kolmogorov Smirnov test
2006 Jul 09
1
KS Test Warning Message
All, Happy World Cup and Wimbledon. This morning finds me with the first of my many daily questions. I am running ks.test on residuals obtained from a regression model. I use this code: > ks.test(Year5.lm$residuals, pnorm) and obtain this output: One-sample Kolmogorov-Smirnov test data: Year5.lm$residuals D = 0.7196, p-value < 2.2e-16 alternative hypothesis: two.sided Warning
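A likely explanation, sketched with hypothetical residuals: ks.test(x, pnorm) compares against a standard normal (mean 0, sd 1) unless parameters are supplied, and residuals with a very different scale will produce a huge D on their own.

  set.seed(1)
  res <- rnorm(100, mean = 0, sd = 5)          # hypothetical residuals
  ks.test(res, "pnorm")                        # against N(0, 1): large D, tiny p-value
  ks.test(res, "pnorm", mean(res), sd(res))    # against N(mean(res), sd(res))
  # Caveat: estimating the parameters from the same data invalidates the nominal
  # p-value (the Lilliefors setting), so the second call is only indicative.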
2009 Sep 08
1
Unexpected behavior in friedman.test and ks.test
I have to start by saying that I am new to R, so I might miss something crucial here. It seems to me that the results of friedman.test and ks.test are "wrong". Now, obviously, the first thing which crossed my mind was "it can't be, this is a package used by so many, someone should have observed", but I can't figure out what it might be. Problem: let's start with
2004 Nov 01
1
ks.test calculations incorrect (PR#7330)
Full_Name: t. avery Version: 2.0.0 OS: windows xp / Linux Submission from: (NULL) (131.162.134.159) ks.test does not produce the correct output. If given the script: d1 <- c(53.63984674,0.383141762,1.915708812,0.383141762,10.72796935,6.896551724,20.30651341,5.747126437,0) d1 d2 <- c(76.43312102,15.2866242,3.821656051,1.27388535,0,0.636942675,1.27388535,0.636942675,0.636942675) d2
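The snippet is cut off before the actual test call; presumably the report went on to run the two-sample test on the quoted vectors, something like the following (my reconstruction, not the reported output):

  d1 <- c(53.63984674, 0.383141762, 1.915708812, 0.383141762, 10.72796935,
          6.896551724, 20.30651341, 5.747126437, 0)
  d2 <- c(76.43312102, 15.2866242, 3.821656051, 1.27388535, 0,
          0.636942675, 1.27388535, 0.636942675, 0.636942675)
  ks.test(d1, d2)   # note: the pooled data contain ties, so a ties warning is expected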
2005 Jun 27
1
ks.test() output interpretation
I'm using ks.test() to compare two different measurement methods. I don't really know how to interpret the output in the absence of a critical value table for the D statistic. I guess I could use the p-value when available. But I also get the message "cannot compute correct p-values with ties ..."; does that mean I can't use ks.test() for these data, or can I still use the D
2006 Feb 03
2
Problems with ks.test
Hi everybody, while performing ks.test for an exponential distribution on samples of size 2500, generated anew each time, I had this strange behaviour: >data<-rexp(2500,0.4) >ks.test(data,"pexp",0.4) One-sample Kolmogorov-Smirnov test data: data D = 0.0147, p-value = 0.6549 alternative hypothesis: two.sided >data<-rexp(2500,0.4)
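A simple sanity check (my sketch, not from the post): if the data really come from Exp(rate = 0.4), the one-sample KS p-values should be roughly uniform on [0, 1] across repeated simulations, with about 5% falling below 0.05.

  set.seed(42)
  pvals <- replicate(200, ks.test(rexp(2500, 0.4), "pexp", 0.4)$p.value)
  hist(pvals)          # should look approximately flat
  mean(pvals < 0.05)   # should be close to 0.05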
2001 Jul 03
0
(PR#1007) ks.test doesn't compute correct empirical distribution if there are ties in the data
In message <Pine.GSO.4.31.0107010731110.7616-100000@auk.stats>, Prof Brian D Ripley <ripley@stats.ox.ac.uk> writes > >You do realize that the Kolmogorov tests (and the Kolmogorov-Smirnov >extension) assume continuous distributions, so the distribution theory >is not valid in this case? > >S-PLUS does stop you doing this: > >> ks.gof(o,
2010 Feb 05
1
Hodges-Lehmann EXACT confidence interval for small dataset with ties
Dear R-helpers, I have a small dataset (n < 50), and I want to compute the Hodges-Lehmann exact confidence interval. So far, I know that "pairwiseCI" has the function "HL.diff". The description is as follows: HL.diff calculates the Hodges-Lehmann confidence interval for the difference of locations by calling wilcox.exact in package exactRankTests. But when I check
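A hedged sketch of the underlying call, assuming the exactRankTests package is still installable (it is no longer actively developed, as another entry in this list notes); wilcox.exact() reports the Hodges-Lehmann estimate together with an exact confidence interval and handles ties via an exact conditional distribution. The data below are hypothetical.

  library(exactRankTests)
  x <- c(1.1, 2.3, 2.3, 3.0, 4.5, 4.5, 5.2)
  y <- c(0.8, 1.9, 2.3, 2.9, 3.8, 4.5, 6.0)
  wilcox.exact(x, y, conf.int = TRUE, conf.level = 0.95)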
2010 Aug 04
4
KS Test question (2)
Hi R Users, I have two vectors, x and y, of equal length, representing two types of data from two studies. I would like to test whether they are similar enough to use interchangeably. No assumptions about the distributions can be made (initial tests clearly show that they are not normal). Here is some output: Two-sample Kolmogorov-Smirnov test data: x and y D = 0.1091, p-value < 2.2e-16 alternative
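One thing worth keeping in mind here (an illustration with hypothetical data, not a comment on x and y themselves): with very large samples, even a small distributional difference drives the KS p-value to essentially zero, so the magnitude of D matters at least as much as the p-value when judging interchangeability.

  set.seed(2)
  a <- rnorm(50000)
  b <- rnorm(50000, mean = 0.1)   # slightly shifted
  ks.test(a, b)                   # small D, yet an extremely small p-value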
2006 Aug 25
1
exact Wilcoxon signed rank test with ties and the "no longer under development" exactRankTests package
Dear List, after updating the exactRankTests package I receive a warning that the package is no longer being developed and that one should consider the coin package. I don't find the signed rank test in the coin package, only the Wilcoxon-Mann-Whitney U test. I only found a signed rank test in the stats package (wilcox.test), which can calculate exact p-values, but unfortunately
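For reference (my pointer, not from the thread): in the coin package the signed-rank test is provided by wilcoxsign_test() rather than wilcox_test(), which may be why it is easy to miss; an exact distribution can be requested even with ties and zeros. The data below are hypothetical, and the call is worth checking against the coin documentation.

  library(coin)
  dd <- data.frame(before = c(125, 115, 130, 140, 142, 115, 140, 125, 140, 135),
                   after  = c(110, 122, 125, 120, 140, 124, 123, 137, 135, 145))
  wilcoxsign_test(before ~ after, data = dd, distribution = "exact")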
2010 Mar 13
1
What can I use instead of ks.test for the binomial distribution ?
Hello all, A friend just showed me how ks.test fails to work with pbinom for small "size". Example: x<-rbinom(10000,10,0.5) x2<-rbinom(10000,10,0.5) ks.test(x,pbinom,10,0.5) ks.test(x,pbinom,size = 10, prob= 0.5) ks.test(x,x2) The tests give significant p-values, even though x did come from a binomial distribution with size = 10 and prob = 0.5. What test should I use instead? Thanks, Tal
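For a fully specified discrete null such as Binomial(10, 0.5), one commonly suggested alternative is a chi-squared goodness-of-fit test against the binomial probabilities (a sketch, not an answer from the thread); the dgof package is also said to provide a ks.test() variant for discrete null distributions, which is worth verifying before relying on it.

  set.seed(1)
  x <- rbinom(10000, 10, 0.5)
  obs <- table(factor(x, levels = 0:10))        # observed counts for 0..10 successes
  chisq.test(obs, p = dbinom(0:10, 10, 0.5))    # expected Binomial(10, 0.5) probabilities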