similar to: Unexpected behavior in friedman.test and ks.test

Displaying 20 results from an estimated 1300 matches similar to: "Unexpected behavior in friedman.test and ks.test"

2010 Nov 29
2
Significance of the difference between two correlation coefficients
Hi, based on the sample size I want to calculate whether two correlation coefficients are significantly different or not. I know that as a first step both coefficients have to be converted to z values using Fisher's z transformation. I have done this already, but I don't know how to proceed from there. Unlike for correlation coefficients, I know that the difference for z values is
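A minimal sketch of the usual next step, assuming two independent samples (the correlations and sample sizes below are hypothetical): the difference of the Fisher z values, divided by sqrt(1/(n1-3) + 1/(n2-3)), is approximately standard normal.

# hypothetical correlations and sample sizes
r1 <- 0.62; n1 <- 50
r2 <- 0.45; n2 <- 60
z1 <- atanh(r1)                      # Fisher's z: 0.5*log((1+r)/(1-r))
z2 <- atanh(r2)
se <- sqrt(1/(n1 - 3) + 1/(n2 - 3))  # standard error of z1 - z2
zstat <- (z1 - z2)/se
c(z = zstat, p.value = 2*pnorm(-abs(zstat)))  # two-sided test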
2007 Oct 07
1
Question about aov
Hello R gurus, I am a beginner with R. I am doing an ANCOVA analysis using 'aov,' and need some help understanding how 'aov' works. I have a dataset (taken from http://faculty.vassar.edu/lowry/ch17pt2.html) looking at hypnotic induction. The variable 'X' is a measure of how susceptible the subject is to being hypnotized, the variable 'Y' is how well the
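For what the model call itself looks like, a minimal ANCOVA sketch (the data frame and variable names are hypothetical stand-ins for the Vassar example): listing the covariate before the factor matters, because aov uses sequential (Type I) sums of squares.

# hypothetical data: Y = outcome, X = covariate, Group = treatment factor
dat <- data.frame(Y = rnorm(30), X = rnorm(30),
                  Group = gl(3, 10, labels = c("A", "B", "C")))
fit <- aov(Y ~ X + Group, data = dat)  # covariate first, then the factor
summary(fit)                           # Group is tested after adjusting for X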
2008 Nov 07
1
kruskal test in R
Hi, I have a question in R: how would you run a Kruskal-Wallis test without using the built-in command 'kruskal.test'? Many thanks.
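One way to do it, as a sketch: compute the Kruskal-Wallis statistic directly from its definition, H = 12/(N(N+1)) * sum(Rj^2/nj) - 3(N+1), and refer it to a chi-squared distribution. This version assumes no ties (kruskal.test additionally applies a tie correction).

kw.by.hand <- function(y, g) {
  g <- factor(g)
  N <- length(y)
  r <- rank(y)                 # ranks over the pooled sample
  Rj <- tapply(r, g, sum)      # rank sum per group
  nj <- tapply(r, g, length)   # group sizes
  H <- 12/(N*(N + 1)) * sum(Rj^2/nj) - 3*(N + 1)
  c(H = H, p.value = pchisq(H, df = nlevels(g) - 1, lower.tail = FALSE))
}
# check against the built-in: kruskal.test(y, g)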
2010 Jul 14
1
Wilcox.test U values
I have been examining the Mann-Whitney test closely, and there are two features of the R implementation that puzzle me. The test statistic is reported as "W" and depends on the order of the arguments to the function.
> x <- c(1,3,5,7,9)
> y <- x-1
> x
[1] 1 3 5 7 9
> y
[1] 0 2 4 6 8
> wilcox.test(x, y)$statistic
 W
15
> wilcox.test(y,x)$statistic
 W
10
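The order dependence is explained by what W actually is: R reports the Mann-Whitney U statistic for its first argument, i.e. the rank sum of that sample minus its minimum possible value n(n+1)/2, so the two calls give complementary values with W(x,y) + W(y,x) = n*m. A sketch reproducing both numbers by hand:

x <- c(1, 3, 5, 7, 9)
y <- x - 1
n <- length(x); m <- length(y)
r <- rank(c(x, y))                          # ranks in the pooled sample
W.xy <- sum(r[seq_along(x)]) - n*(n + 1)/2  # U for x: 15
W.yx <- n*m - W.xy                          # U for the swapped call: 10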
2001 Oct 26
1
ks.test (PR#1004)
The note to 1004 says "fixed for 1.3.1". Uh. No. It ain't. The problem was more serious than guessed, as even the simplest testing would show. For example, Example 5.4 in Hollander and Wolfe (Nonparametric Statistical Methods, 2nd ed., Wiley, 1999, pp. 180-181), R Version 1.3.1 (SuSE Linux 7.1): > X <-
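The underlying issue was the small-sample p-value computation. In current versions of R the two computations can be inspected directly via the exact argument of the one-sample test; a sketch on hypothetical data:

set.seed(1)
x <- rnorm(10)                              # hypothetical small sample
ks.test(x, "pnorm", exact = TRUE)$p.value   # exact distribution of D
ks.test(x, "pnorm", exact = FALSE)$p.value  # asymptotic approximation; can differ noticeably at n = 10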
2012 Feb 17
3
portable parallel seeds project: request for critiques
I've got another edition of my simulation replication framework. I'm attaching 2 R files and pasting in the readme. I would especially like to know if I'm doing anything that breaks .Random.seed or other things that R's parallel uses in the environment. In case you don't want to wrestle with attachments, the same files are online in our SVN
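A minimal sketch of the kind of per-run seed bookkeeping such a framework needs, assuming the L'Ecuyer-CMRG generator that R's parallel package is built around: store one stream seed per run, so any run can be replayed exactly.

library(parallel)
RNGkind("L'Ecuyer-CMRG")
set.seed(12345)
seeds <- vector("list", 10)
seeds[[1]] <- .Random.seed                           # stream for run 1
for (i in 2:10) seeds[[i]] <- nextRNGStream(seeds[[i - 1]])
# replay run 7 exactly by restoring its stream first:
assign(".Random.seed", seeds[[7]], envir = .GlobalEnv)
runif(3)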
2004 Dec 13
1
Friedman test for replicated blocked data
Hi, I need to extend the Friedman test to a replicated design. Currently the function friedman.test(y, ...) only works for unreplicated designs. I found in Conover 1999, "Practical Nonparametric Statistics", an extension of the formula to my case. Nevertheless, other sources, like Sheskin 2000, "Parametric and Nonparametric Statistical Procedures", and Daniel 1990
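Base R still has no replicated-design extension, and friedman.test() enforces the restriction explicitly; a sketch demonstrating it on hypothetical data (Conover's replicated formula would have to be coded by hand or found in a contributed package):

set.seed(1)
dat <- data.frame(y = rnorm(20),
                  treatment = gl(4, 1, 20),
                  block = gl(5, 4))
friedman.test(y ~ treatment | block, data = dat)  # fine: one obs per cell
dat2 <- rbind(dat, dat[1, ])                      # duplicate one cell
try(friedman.test(y ~ treatment | block, data = dat2))  # error: not an unreplicated complete block design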
2009 Sep 21
1
Post-Hoc tests for Friedman Test?
Hi there all, This is my first post to the list and I'll first say a few things: - R is great! - The archives of this list have helped me solve all of my problems/questions so far - I only know enough statistics "to be dangerous" I'm looking for a way to do post-hoc tests for the Friedman test. I have a dataset from a within-subjects design with 5 conditions where some of
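One common approach, sketched below on hypothetical data (not the only option; Nemenyi- and Conover-type procedures exist in contributed packages): follow a significant Friedman test with all pairwise Wilcoxon signed-rank tests and a multiplicity correction.

set.seed(1)
scores <- matrix(rnorm(20*5), nrow = 20,  # hypothetical: 20 subjects x 5 conditions
                 dimnames = list(NULL, paste0("cond", 1:5)))
friedman.test(scores)                     # omnibus test (blocks = rows)
pairs <- combn(colnames(scores), 2)       # all 10 condition pairs
p <- apply(pairs, 2, function(cc)
  wilcox.test(scores[, cc[1]], scores[, cc[2]], paired = TRUE)$p.value)
names(p) <- paste(pairs[1, ], pairs[2, ], sep = " vs ")
p.adjust(p, method = "holm")              # Holm-adjusted post-hoc p-values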
2006 Dec 15
2
ks.test "greater" and "less"
Hello R-group, I have a question about ks.test. I would expect different values for "less" and "greater" between data1 and data2. Could anybody explain what I am misunderstanding about the function?
data1 <- c(8,12,43,70)
data2 <- c(70,43,12,8)
ks.test(data1,"pnorm")
ks.test(data1,"pnorm",alternative ="less") #expected < 0.001
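The point of confusion: the one-sample test compares the empirical CDF of the values against pnorm, and the ECDF is built from the sorted values, so the order of the data can never matter. The alternative argument refers to the direction of the CDF difference, not to the data: "greater" tests D+ = max(ECDF - F). A sketch:

data1 <- c(8, 12, 43, 70)
data2 <- c(70, 43, 12, 8)
identical(ks.test(data1, "pnorm")$statistic,
          ks.test(data2, "pnorm")$statistic)      # TRUE: sorting removes order
ks.test(data1, "pnorm", alternative = "greater")  # how far the ECDF rises above pnorm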
2007 Nov 16
2
ks.test
Hello, I want to run a normality test on my data. I wrote the following, but I don't understand the display of the results: ks.test(data,"pnorm"). In fact I want to know whether my data follow a normal distribution. Do I have to check the p-value or D? Thanks.
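The decision is made with the p-value; D is just the distance from which the p-value is computed. Note also that ks.test(data, "pnorm") compares against a standard normal, and that for a plain normality check shapiro.test is usually more appropriate. A sketch on hypothetical data:

set.seed(1)
x <- rnorm(50, mean = 10, sd = 2)    # hypothetical data
ks.test(x, "pnorm")                  # compares against N(0,1): almost always rejects
ks.test(x, "pnorm", mean(x), sd(x))  # p-value only approximate: parameters estimated
shapiro.test(x)                      # designed for exactly this question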
2006 Jul 09
1
KS Test Warning Message
All, Happy World Cup and Wimbledon. This morning finds me with the first of my many daily questions. I am running a ks.test on residuals obtained from a regression model. I use this code:
> ks.test(Year5.lm$residuals,pnorm)
and obtain this output:
One-sample Kolmogorov-Smirnov test
data: Year5.lm$residuals
D = 0.7196, p-value < 2.2e-16
alternative hypothesis: two.sided
Warning
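A D of 0.72 almost certainly means the residuals are being compared against a standard normal without supplying their mean and spread (the truncated warning is typically about ties). A sketch of the corrected call, keeping the model object name from the post:

r <- Year5.lm$residuals                          # residuals from the fitted model
ks.test(r, "pnorm")                              # implicitly N(0,1): huge D if sd(r) != 1
ks.test(r, "pnorm", mean = mean(r), sd = sd(r))  # scaled comparison; p-value approximate,
                                                 # since the parameters are estimated from r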
2008 Apr 18
1
2.2e-16 a magic number? ks.test help
Hello, I'm trying to test my data for normality. I enter the data (95ish species counts), run
> ks.test(data, pnorm)
and get a p-value < 2.2e-16. But this seems to be the p-value no matter what data I enter. (I have multiple datasets and am testing them all for normality.) [Actually, I just entered a vector of 1's and the p-value changed.] When I use the shapiro.test command,
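There is no magic: 2.2e-16 is .Machine$double.eps, and R prints "p-value < 2.2e-16" whenever the computed p-value underflows that threshold, which raw counts compared against a standard normal essentially always do. A sketch with hypothetical counts:

.Machine$double.eps      # 2.220446e-16: the source of the number
counts <- rpois(95, 20)  # hypothetical species counts
ks.test(counts, "pnorm")                            # vs N(0,1): p-value underflows
ks.test(counts, "pnorm", mean(counts), sd(counts))  # at least on the right scale
shapiro.test(counts)     # usually the better normality check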
1998 Nov 05
0
Server Security settings
Hello, I am having some difficulty getting Samba v.1.9.18p10 to authenticate using server security. I have a couple of users that do not have Unix accounts, and those are the users that cannot connect to the Samba server. The message that I get in the logfile is "server rejected the session". I have no problem logging in to the server, but I also have an account on the Unix server. Server
2017 Nov 15
2
ks.test() with 2 samples vs. 1 sample and a distr. function
Dear all, I have a question concerning the ks.test() function. I tried to calculate the example given on the German Wikipedia page.
xi <- c(9.41,9.92,11.55,11.6,11.73,12,12.06,13.3)
I get the right results when I calculate:
ks.test(xi,pnorm,11,1)
Now the question: shouldn't I obtain the same or a very similar result if I compare the sample with a calculated sample from the distribution?
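Only approximately. A sketch contrasting the two forms: the one-sample call uses the exact reference CDF, while any finite second sample carries sampling noise of its own, so a two-sample comparison agrees only up to that noise.

xi <- c(9.41, 9.92, 11.55, 11.6, 11.73, 12, 12.06, 13.3)
ks.test(xi, pnorm, 11, 1)       # one-sample: exact N(11,1) reference
set.seed(1)
ks.test(xi, rnorm(1e4, 11, 1))  # two-sample: reference sample is itself random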
2005 Mar 18
1
Problem with ks.test p-value
Hello, while doing tests of normality under R and SAS, in order to prove the efficiency of R to my company, I noticed that the Anderson-Darling, Cramér-von Mises and Shapiro-Wilk test results are quite the same under the two environments, but the Kolmogorov-Smirnov p-value really is different. Here is what I do:
> ks.test(w,pnorm,mean(w),sd(w))
One-sample Kolmogorov-Smirnov test
data: w
D
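The discrepancy is expected: ks.test assumes the reference distribution was fixed in advance, while SAS applies the Lilliefors correction for a mean and sd estimated from the data. The contributed nortest package gives the corrected test; a sketch (w is the vector from the post, nortest assumed installed):

ks.test(w, "pnorm", mean(w), sd(w))  # p-value not valid: parameters estimated from w
nortest::lillie.test(w)              # Lilliefors-corrected KS, comparable to SAS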
2011 Oct 06
2
KS test and theoretical distribution
> x <- runif(100)
> y <- runif(100)
> ks.test(x,y)
Two-sample Kolmogorov-Smirnov test
data: x and y
D = 0.11, p-value = 0.5806
alternative hypothesis: two-sided
OK, I expected that, but:
> ks.test(runif(100), "runif")
One-sample Kolmogorov-Smirnov test
data: runif(100)
D = 0.9106, p-value < 2.2e-16
alternative hypothesis: two-sided
How
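The culprit is the string: ks.test wants the name of a cumulative distribution function, and "runif" is the random number generator, not the CDF "punif". Called on the sorted data, runif just returns 100 fresh random numbers, which ks.test then treats as CDF values, hence the absurd D. A sketch of the fix:

set.seed(1)
x <- runif(100)
ks.test(x, "punif")    # correct: punif is the uniform CDF
# ks.test(x, "runif")  # wrong: evaluates runif(x), i.e. fresh random numbers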
2001 Jun 26
1
compiling R-1.3.0 under Tru64 Unix
Dear all, I get the same problem in compiling R-1.3.0 on Tru64 Unix (OSF 5.0). Here is the final output of ./configure:
R is now configured for alphaev67-dec-osf5.0
Source directory: .
Installation directory: /usr/local
C compiler: gcc -mieee -g -O2
C++ compiler: c++ -g -O2
FORTRAN compiler: f77 -fpe3 -g
X11 support:
2017 Nov 15
0
ks.test() with 2 samples vs. 1 sample and a distr. function
In the first example you are performing a one-sample test against a continuous cumulative distribution (in this case a normal distribution). In the second case you are performing a two-sample test. You drew your values for x non-randomly by specifying fixed intervals along a normal distribution, but ks.test() just sees that you have provided two samples, not one sample and values along a
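To make that concrete, a sketch (the fixed-interval reference below is a hypothetical reconstruction of what the original poster likely computed):

xi <- c(9.41, 9.92, 11.55, 11.6, 11.73, 12, 12.06, 13.3)
ref <- qnorm(ppoints(8), 11, 1)  # a quantile grid, not a random sample
ks.test(xi, ref)                 # treated as a two-sample problem regardless
ks.test(xi, pnorm, 11, 1)        # the intended one-sample comparison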
2008 Feb 14
1
ks.test help
I am trying to do a ks.test in R 2.6.2 (running on Mac OS X 10.4.11). The help page specifies that the y argument can be a character string naming the distribution I want. I am doing this on the residuals of a regression model, but I continue to get an error. This is some of the code I have tried:
> ks.test(res,"Norm")
Error in get(y, mode =
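The string has to name an actual CDF function, spelled exactly as R knows it; there is no function called "Norm", hence the get() error. A sketch of the working call (res is the residual vector from the post):

# ks.test(res, "Norm")  # fails: get("Norm", mode = "function") finds nothing
ks.test(res, "pnorm", mean = mean(res), sd = sd(res))  # name the CDF, supply parameters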
2003 Apr 22
4
fisher exact vs. simulated chi-square
Dear All, I have a problem understanding the difference between the outcome of a Fisher exact test and a chi-square test (with simulated p-value). For some sample data (see below), the Fisher test reports p=.02337. The normal chi-square test complains that the "approximation may be incorrect", because there is a column with cells with very small values. I therefore tried the chi-square with
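A sketch of the comparison on a hypothetical sparse table (the poster's data is not shown): the two tests answer the question differently, so moderate disagreement between the exact and the Monte Carlo p-value is normal.

tab <- matrix(c(8, 2, 1, 5,
                3, 9, 0, 2), nrow = 2, byrow = TRUE)  # hypothetical sparse table
fisher.test(tab)                                      # exact, conditions on the margins
chisq.test(tab, simulate.p.value = TRUE, B = 10000)   # Monte Carlo chi-square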