Displaying 9 results from an estimated 9 matches for "gatemaz".
2007 May 25
1
normality tests [Broadcast]
...sing a test statistic is generally not a good
> > idea.
> >
> > -----Original Message-----
> > From: r-help-bounces at stat.math.ethz.ch
> > [mailto:r-help-bounces at stat.math.ethz.ch] On Behalf Of Liaw, Andy
> > Sent: Friday, May 25, 2007 12:04 PM
> > To: gatemaze at gmail.com; Frank E Harrell Jr
> > Cc: r-help
> > Subject: Re: [R] normality tests [Broadcast]
> >
> > From: gatemaze at gmail.com
> > >
> > > On 25/05/07, Frank E Harrell Jr <f.harrell at vanderbilt.edu> wrote:
> > > > gatemaze at gmai...
2008 Feb 19
4
fitted values are different from manually calculating
Hello,
on a simple linear model, the values produced by the fitted(model) function
differ from those I calculate manually on a calculator. Does anyone have a clue,
or any insight into how the fitted function computes its values? Thank you.
--
-- Yianni
[[alternative HTML version deleted]]
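A minimal sketch of the usual culprit (the data here are made up, not the poster's): fitted() multiplies the model matrix by the coefficients at full precision, so hand calculations that start from rounded, printed coefficients drift away from it.

```r
x <- c(1, 2, 3, 4, 5)
y <- c(2.1, 3.9, 6.2, 8.1, 9.8)
model <- lm(y ~ x)

# fitted() is the model matrix times the full-precision coefficients
manual <- as.numeric(model.matrix(model) %*% coef(model))
all.equal(as.numeric(fitted(model)), manual)  # TRUE
```

Recomputing `manual` with coefficients rounded to a few decimals reproduces the kind of discrepancy described above.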
2007 May 18
1
partial correlation significance
Hi,
among the many (5) methods I found on the list for computing partial
correlation, the two I looked at give different t-values. Does anyone have
a clue why that is? The source code is below. Thanks.
pcor3 <- function (x, test = T, p = 0.05) {
  nvar <- ncol(x)
  ndata <- nrow(x)
  conc <- solve(cor(x))
  resid.sd <- 1/sqrt(diag(conc))
  pcc <-
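For comparison, one standard route to partial correlations and their t statistics goes through the inverse correlation matrix, much as the snippet starts to do. This is a hedged reconstruction of that route, not the poster's full pcor3:

```r
pcor_t <- function(x) {
  nvar  <- ncol(x)
  ndata <- nrow(x)
  conc  <- solve(cor(x))  # concentration (inverse correlation) matrix
  # partial correlation of each pair, controlling for all other variables
  pcc <- -conc / sqrt(outer(diag(conc), diag(conc)))
  diag(pcc) <- 1
  df   <- ndata - nvar    # residual degrees of freedom
  tval <- pcc * sqrt(df / (1 - pcc^2))
  diag(tval) <- NA
  list(pcc = pcc, t = tval, df = df)
}

set.seed(1)
m <- matrix(rnorm(300), ncol = 3)  # toy data: 100 rows, 3 variables
res <- pcor_t(m)
```

Differing t-values between implementations often trace back to the degrees of freedom: n - nvar here, versus n - 2 in code that ignores the partialled-out variables.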
2007 Apr 30
3
general question about use of list
Hi,
is this list only for R issues, or does it have a broader scope covering
questions and discussion about statistics? Is there any other mailing list or
forum for that? For example, I have a question about variance. It is
defined as:
variance = sum(sq(Xi-mean)) / (N-1)
and I never understood why it is not defined as
variance = sum(absolute(Xi-mean)) / (N-1)
I read somewhere that this cannot
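The two candidate definitions can be computed side by side (toy data assumed). The absolute-deviation version is essentially a mean absolute deviation; one reason the squared form is preferred is that variances of independent variables add, a property the absolute form lacks.

```r
x <- c(2, 4, 4, 4, 5, 5, 7, 9)
n <- length(x)

var_manual <- sum((x - mean(x))^2) / (n - 1)   # the standard definition
mad_manual <- sum(abs(x - mean(x))) / (n - 1)  # the proposed alternative

all.equal(var_manual, var(x))  # matches R's var(): 32/7
```

The two summaries are on different scales (squared units versus original units), which is why they are not interchangeable.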
2007 Apr 24
1
help interpreting the output of functions - any sources of information
Hi,
I am looking for documentation, reference guides, etc. that explain the
output of functions... For example using cor.test(...., method="pearson")
with Pearson's corr coeff the output is:
Pearson's product-moment correlation
data: a and b
t = 0.2878, df = 14, p-value = 0.7777
alternative hypothesis: true correlation is not equal to 0
95 percent confidence interval:
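For this particular output, the printed t statistic can be reproduced from the correlation coefficient via t = r * sqrt(n - 2) / sqrt(1 - r^2), with df = n - 2. The r below is back-calculated from the printed t and df, so it is an assumption rather than the poster's data:

```r
r <- 0.0767  # implied by t = 0.2878 with df = 14 (assumed, not from the post)
n <- 16      # df = n - 2 = 14

t_stat <- r * sqrt(n - 2) / sqrt(1 - r^2)
p_val  <- 2 * pt(-abs(t_stat), df = n - 2)  # two-sided p-value

round(t_stat, 4)  # ~ 0.2878
```

Seeing how each printed line falls out of these formulas is often the quickest way to decode an htest printout.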
2007 May 09
1
Error in plot.new() : figure margins too large
Yes, I have already had a look at previous posts, but nothing there is really
helpful to me.
The code is:
postscript(filename, horizontal=FALSE, onefile=FALSE, paper="special",
bg="white", family="ComputerModern", pointsize=10);
par(mar=c(5, 4, 0, 0) + 0.1);
plot(x.nor, y.nor, xlim=c(3,6), ylim=c(20,90), pch=normal.mark);
gives error
Error in plot.new() : figure margins too
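plot.new() raises this error when the margins requested via par(mar=) need more room than the device region provides. A hedged sketch of one fix (the filename and the width/height are assumptions, not from the original post; with paper="special" the device size is taken from the width and height arguments):

```r
# Hypothetical filename and size; give the device explicit dimensions
postscript("plot.ps", horizontal = FALSE, onefile = FALSE,
           paper = "special", width = 6, height = 4,
           bg = "white", pointsize = 10)
par(mar = c(5, 4, 0, 0) + 0.1)  # margins now fit inside 6 x 4 inches
plot(1:10, 1:10)
dev.off()
```

The alternative is to shrink mar; either way the margins plus plot region must fit inside the device.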
2007 May 10
1
t value two.sided and one.sided
Hi,
on a
> summary(lm(y~x))
are the computed t-values two-sided or one-sided? Looking at some
tables, they seem to be two-sided. Is it possible to have them
one-sided? If this makes sense...
Thanks.
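summary(lm()) reports two-sided p-values; the t statistic itself carries no direction. For a one-sided alternative in an assumed direction (H1: beta > 0 here, with made-up data), the two-sided p-value is simply halved when the estimate has the hypothesized sign:

```r
set.seed(1)
x <- 1:20
y <- 0.5 * x + rnorm(20)  # toy data with a positive slope
fit <- summary(lm(y ~ x))

t_x   <- fit$coefficients["x", "t value"]
p_two <- fit$coefficients["x", "Pr(>|t|)"]          # two-sided, as printed
p_one <- pt(t_x, df = fit$df[2], lower.tail = FALSE) # one-sided, H1: beta > 0

all.equal(p_one, p_two / 2)  # TRUE when t_x > 0
```

If the estimate points against the hypothesized direction, the one-sided p-value is 1 - p_two/2 instead.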
2007 May 25
3
normality tests
Hi all,
apologies for seeking advice on a general stats question. I've run
normality tests using 8 different methods:
- Lilliefors
- Shapiro-Wilk
- Robust Jarque Bera
- Jarque Bera
- Anderson-Darling
- Pearson chi-square
- Cramer-von Mises
- Shapiro-Francia
All show that the null hypothesis that the data come from a normal
distro cannot be rejected. Great. However, I don't think it looks
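A graphical check is the usual complement to such a battery of formal tests; a normal Q-Q plot often shows what a single test statistic hides. A sketch with made-up data standing in for the poster's:

```r
set.seed(42)
x <- rnorm(100)  # assumed stand-in for the poster's data

qqnorm(x)        # points near the line suggest approximate normality
qqline(x)

shapiro.test(x)$p.value  # one formal test, for comparison with the plot
```

When the plot and the tests disagree, the plot usually tells you *how* the data depart from normality (skew, heavy tails, outliers), which the test statistic alone cannot.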
2007 May 22
0
partial correlation function
Hi,
after reading the archives I found some methods... I adopted and
modified one of them into the following. I think it is correct after
checking and comparing the results with other software... but if
someone could have a look and spot any mistakes, I would be
grateful. Thanks.
pcor3 <- function (x, test = T, p = 0.05, alternative="two.sided") {
  nvar <- ncol(x)
  ndata