Displaying 20 results from an estimated 20000 matches similar to: "how to obtain p values from an ANOVA result"
2005 Oct 20
3
different F test in drop1 and anova
Hi,
I was wondering why anova() and drop1() give different tail
probabilities for F tests.
I guess overdispersion is calculated differently in the following
example, but why?
Thanks for any advice,
Tom
For example:
> x<-c(2,3,4,5,6)
> y<-c(0,1,0,0,1)
> b1<-glm(y~x,binomial)
> b2<-glm(y~1,binomial)
> drop1(b1,test="F")
Single term deletions
Model:
y ~
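A plausible explanation, worth checking against the current help pages: drop1.glm scales its F statistic by the deviance-based dispersion estimate (residual deviance over residual df), while anova.glm takes its dispersion from summary.glm, which is fixed at 1 for a binomial family. A minimal sketch for inspecting the two estimates, reconstructed from the code above:

x <- c(2, 3, 4, 5, 6)
y <- c(0, 1, 0, 0, 1)
b1 <- glm(y ~ x, family = binomial)
drop1(b1, test = "F")            # F scaled by deviance/df.residual
anova(b1, test = "F")            # F scaled by the summary.glm dispersion
deviance(b1) / df.residual(b1)   # deviance-based dispersion estimate
summary(b1)$dispersion           # fixed at 1 for the binomial family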
2005 Feb 02
1
anova.glm (PR#7624)
There may be a bug in the anova.glm function.
deathstar[32] R
R : Copyright 2004, The R Foundation for Statistical Computing
Version 2.0.1 (2004-11-15), ISBN 3-900051-07-0
2008 Jan 15
1
Anova for stratified Cox regression
Dear List,
I have tried a stratified Cox regression; it works fine, except for
the "Anova" tests.
Here are the commands (they should work out of the box):
library(survival)
d = colon[colon$etype==2, ]
m = coxph(Surv(time, status) ~ strata(sex) + rx, data=d)
summary(m)
# Printout ok
anova(m, test='Chisq')
This is the output of the anova command:
> Analysis of Deviance Table
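If anova() misbehaves on a single stratified fit, the likelihood-ratio test for rx can be computed from the fit itself. A hedged sketch using the same colon data as above (loglik[1] is the null partial log-likelihood, loglik[2] the fitted one):

library(survival)
d <- colon[colon$etype == 2, ]
m <- coxph(Surv(time, status) ~ strata(sex) + rx, data = d)
lr <- 2 * diff(m$loglik)   # LR statistic for the rx term
pchisq(lr, df = length(coef(m)), lower.tail = FALSE)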
2011 Mar 10
1
ANOVA for stratified cox regression
This is a follow-up to a query posted by Matthias Gondan regarding some
problems that emerge when running anova analyses for Cox models:
Matthias Gondan wrote:
> Dear List,
> I have tried a stratified Cox Regression, it is working fine, except for
> the "Anova"-Tests:
> Here the commands (should work out of the box):
2008 Jan 08
1
Problem in anova with coxph object
Dear R users,
I noticed a problem with the anova command when it is applied to
a single coxph object and there are missing observations in
the data:
This example code was run on R-2.6.1:
> library(survival)
> data(colon)
> colondeath = colon[colon$etype==2, ]
> m = coxph(Surv(time, status) ~ rx + sex + age + perfor, data=colondeath)
> m
Call:
coxph(formula = Surv(time, status) ~ rx +
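Since anova() on a single coxph object refits a sequence of smaller models, rows dropped for missing values in one covariate can silently change the sample between refits. A hedged workaround sketch, restricting to complete cases first so every refit sees identical data (variable list taken from the call above):

library(survival)
cd <- colon[colon$etype == 2, ]
keep <- complete.cases(cd[, c("time", "status", "rx", "sex", "age", "perfor")])
m <- coxph(Surv(time, status) ~ rx + sex + age + perfor, data = cd[keep, ])
anova(m, test = "Chisq")   # all nested refits now use the same rows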
2002 Nov 15
1
anova.glm gets test="Chisq" wrong (PR#2294)
Full_Name: Robert King
Version: 1.5.0
OS: windows
Submission from: (NULL) (134.148.4.19)
Also occurs in 1.6.0 on linux
anova.glm(fitted.object,test="Chisq") is giving strange answers in this
situation
> resptime
sex task time
1 m s 210
2 m s 300
3 m s 420
4 f s 250
5 f s 310
6 f s 390
7 m c 310
8 m c 400
9 m c 600
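The snippet cuts off before the model call; a hypothetical reconstruction using only the nine rows visible above (the original listing is truncated, and the family is an assumption), to reproduce the kind of call at issue:

resptime <- data.frame(
  sex  = c("m", "m", "m", "f", "f", "f", "m", "m", "m"),
  task = c("s", "s", "s", "s", "s", "s", "c", "c", "c"),
  time = c(210, 300, 420, 250, 310, 390, 310, 400, 600)
)
fit <- glm(time ~ sex + task, data = resptime, family = gaussian)
anova(fit, test = "Chisq")   # the call the report describes as misbehaving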
2012 Jun 04
1
Chi square value of anova(binomialglmnull, binomglmmod, test="Chisq")
Hi all,
I have done a backward stepwise selection on a full binomial GLM where the
response variable is gender.
At the end of the selection I have found one model with only one explanatory
variable (cohort, factor variable with 10 levels).
I want to test the significance of the variable "cohort", which, I believe,
is the same as testing the significance of the selected model:
>
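A hedged sketch of the comparison being described: a likelihood-ratio test of the null model against the cohort-only model (object and variable names are illustrative, not from the thread):

set.seed(1)
dat <- data.frame(
  gender = factor(sample(c("f", "m"), 200, replace = TRUE)),
  cohort = factor(sample(paste0("c", 1:10), 200, replace = TRUE))
)
null_mod   <- glm(gender ~ 1,      family = binomial, data = dat)
cohort_mod <- glm(gender ~ cohort, family = binomial, data = dat)
anova(null_mod, cohort_mod, test = "Chisq")  # LR test on 9 df for the
                                             # 10-level factor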
2002 Mar 21
1
Underdispersion with anova testing methods
Using anova of a glm with test = "Chisq", I get this:
Analysis of Deviance Table
Model: poisson, link: log
Response: Days
Terms added sequentially (first to last)
         Df Deviance Resid. Df Resid. Dev  P(>|Chi|)
NULL                       373     370.56
Block     3    71.05       370     299.51  2.543e-15
Variety   1    94.04       369
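When the data are under- (or over-) dispersed, a common route is a quasipoisson fit with an F test, which scales by the estimated dispersion instead of assuming it is 1. A hedged sketch on simulated stand-in data (the post's data frame is not shown):

set.seed(42)
dat <- data.frame(
  Block   = factor(rep(1:4, each = 25)),
  Variety = factor(rep(c("A", "B"), 50)),
  Days    = rpois(100, 5)
)
fit <- glm(Days ~ Block + Variety, family = quasipoisson, data = dat)
anova(fit, test = "F")   # F test uses the estimated dispersion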
2012 May 08
2
mgcv: inclusion of random intercept in model - based on p-value of smooth or anova?
Dear useRs,
I am using mgcv version 1.7-16. When I create a model with a few
non-linear terms and a random intercept for (in my case) country using
s(Country,bs="re"), the representative line in my model (i.e.
approximate significance of smooth terms) for the random intercept
reads:
              edf  Ref.df      F  p-value
s(Country) 36.127  58.551  0.644
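One way to decide on the random intercept is to compare ML fits with and without it, rather than reading a single approximate p-value. A hedged sketch on simulated data (the original model is not shown in full):

library(mgcv)
set.seed(7)
dat <- data.frame(Country = factor(rep(1:20, each = 10)), x = runif(200))
dat$y <- sin(2 * pi * dat$x) + rnorm(20)[dat$Country] + rnorm(200, 0, 0.5)
m0 <- gam(y ~ s(x),                         data = dat, method = "ML")
m1 <- gam(y ~ s(x) + s(Country, bs = "re"), data = dat, method = "ML")
anova(m0, m1, test = "F")   # compare fits with/without the intercept
AIC(m0, m1)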
2002 Sep 12
1
dropterm, binomial.glm, F-test
Hi there -
I am using R1.5.1 on WinNT and the latest MASS (Venables and Ripley) library.
Running the following code:
>minimod<-glm(miniSF~gtbt*f.batch+log(mxjd),data=gtbt,family="binomial")
>summary(minimod,cor=F)
Coefficients:
             Estimate Std. Error z value Pr(>|z|)
(Intercept)   0.91561    0.32655   2.804 0.005049 **
gtbtgt        0.47171
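For a binomial fit the dispersion is fixed at 1, so likelihood-ratio (Chisq) tests are the natural choice in dropterm(); the F test presumes an estimated dispersion (MASS warns about this, if memory serves, unless a quasibinomial family is used). A hedged sketch on simulated stand-in data, since the gtbt data frame is not available:

library(MASS)
set.seed(3)
d <- data.frame(y = rbinom(100, 1, 0.5), a = runif(100),
                b = factor(rep(1:2, 50)))
fit <- glm(y ~ a * b, data = d, family = binomial)
dropterm(fit, test = "Chisq")  # LR tests; suited to fixed dispersion
dropterm(fit, test = "F")      # F tests presume an estimated dispersion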
2003 May 08
1
A problem in a glm model
Hello all,
I have the following glm model:
f1 <- as.formula(paste("factor(y.fondi)~",
"flgsess + segmeta2 + udm + zona.geo + ultimo.prod.",
"+flg.a2 + flg.d.na2 + flg.v2 + flg.cc2",
" +(flg.a1 + flg.d.na1 + flg.v1 + flg.cc1)^2",
" + flg.a2:flg.d.na2 + flg.a2:flg.v2 +
2006 Nov 13
3
Profile confidence intervals and LR chi-square test
System: R 2.3.1 on Windows XP machine.
I am building a logistic regression model for a sample of 100 cases in
dataframe "d", in which there are 3 binary covariates: x1, x2 and x3.
----------------
> summary(d)
 y      x1     x2     x3
 0:54   0:50   0:64   0:78
 1:46   1:50   1:36   1:22
> fit <- glm(y ~ x1 + x2 + x3, data=d, family=binomial(link=logit))
>
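Profile confidence intervals and LR chi-square tests are both likelihood-based, so they tend to agree with each other (and can disagree with Wald z tests). A hedged sketch pairing the two, on simulated data shaped like the summary above:

library(MASS)   # profile-likelihood confint method for glm fits
set.seed(5)
d <- data.frame(
  y  = factor(rbinom(100, 1, 0.46)),
  x1 = factor(rbinom(100, 1, 0.50)),
  x2 = factor(rbinom(100, 1, 0.36)),
  x3 = factor(rbinom(100, 1, 0.22))
)
fit <- glm(y ~ x1 + x2 + x3, data = d, family = binomial(link = logit))
confint(fit)                # profile-likelihood intervals
drop1(fit, test = "Chisq")  # LR chi-square test per covariate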
2010 Jan 19
1
splitting a factor in an analysis of deviance table (negative binomial model)
Dear useRs,
I have 2 factors (for the sake of explanation, A and B) with 4 levels each. I've already fitted a negative binomial generalized linear model to my data, and now I need to split the factors into two distinct analysis of deviance tables:
- A within B1, A within B2, A within B3 and A within B4
- B within A1, B within A2, B within A3 and B within A4
Here is code that illustrates
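A hedged sketch of one way to get the "A within B" splits from a negative binomial fit, using a nested term for the joint test and per-level refits for the individual tables (simulated data; MASS::glm.nb assumed):

library(MASS)
set.seed(9)
dat <- expand.grid(A = factor(1:4), B = factor(1:4), rep = 1:10)
dat$y <- rnbinom(nrow(dat), mu = 5, size = 2)
fit <- glm.nb(y ~ B + B:A, data = dat)
anova(fit)      # sequential table: B, then A-within-B jointly (12 df)
fit_b1 <- glm.nb(y ~ A, data = subset(dat, B == "1"))
anova(fit_b1)   # A within B1; repeat for B2-B4, and swap roles for B within A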
2012 Jul 02
2
degree of freedom GLM
Hi,
I have a problem with the df.
I read in a big csv file.
Tabelle <- read.csv("C:\\Users\\Public\\Documents\\Bachelorarbeit\\eingabe8_durchnummeriert.csv" , header = T , sep=";")
then I try this:
> ygamma <- glm(Tabelle$sb_ek_ber ~1+ Tabelle$FAHRL_C + Tabelle$NUTZKREIS + Tabelle$schw_drittel_c , family = Gamma)
> anova(ygamma, test="Chisq")
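One thing worth fixing before chasing the df: the Tabelle$ terms bypass the data argument. The same fit written with data= (reusing the path and column names from the post) keeps the formula, the residual df accounting, and later predict() calls cleaner:

Tabelle <- read.csv("C:\\Users\\Public\\Documents\\Bachelorarbeit\\eingabe8_durchnummeriert.csv",
                    header = TRUE, sep = ";")
ygamma <- glm(sb_ek_ber ~ FAHRL_C + NUTZKREIS + schw_drittel_c,
              family = Gamma, data = Tabelle)
anova(ygamma, test = "Chisq")
df.residual(ygamma)   # residual degrees of freedom of the fit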
2011 Oct 06
1
anova.rq {quantreg} - Why does a different level of nesting change the P values?!
Hello dear R help members.
I am trying to understand anova.rq, and I am finding something which I
cannot explain (is it a bug?!):
The example is for when we have 3 nested models. I run the anova once on
two of the models, and again on all three. I expect that the p-value
for the comparison of model 1 and model 2 would remain the same, whether or
not I add a third model to be compared
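A hedged reconstruction of the comparison being described, on simulated data (the original models are not shown in the snippet):

library(quantreg)
set.seed(11)
dat <- data.frame(x1 = rnorm(100), x2 = rnorm(100), x3 = rnorm(100))
dat$y <- 1 + dat$x1 + 0.5 * dat$x2 + rnorm(100)
f1 <- rq(y ~ x1,           tau = 0.5, data = dat)
f2 <- rq(y ~ x1 + x2,      tau = 0.5, data = dat)
f3 <- rq(y ~ x1 + x2 + x3, tau = 0.5, data = dat)
anova(f1, f2)       # p-value for f1 vs f2 alone
anova(f1, f2, f3)   # the same contrast inside a three-model sequence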
2010 Mar 31
2
interpretation of p values for highly correlated logistic analysis
Dear list,
I want to perform a logistic regression analysis with multiple
categorical predictors (i.e., a logit) on some data where there is a
very definite relationship between one predictor and the
response (dependent) variable. The problem I have is that in such a
case the p value becomes very large (while I, as a naive newbie, would
expect it to approach 0).
I'll illustrate my problem
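A hedged sketch of the usual culprit here: with (quasi-)complete separation the Wald z tests collapse (huge standard errors, p near 1), while a likelihood-ratio test still registers the effect (toy data):

d <- data.frame(
  x = factor(rep(c("a", "b"), each = 10)),
  y = c(rep(0, 10), rep(1, 10))       # x predicts y perfectly
)
fit <- glm(y ~ x, data = d, family = binomial)  # warns: fitted probs 0 or 1
summary(fit)$coefficients             # huge SE, Wald p close to 1
drop1(fit, test = "Chisq")            # LR test: strong evidence for x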
2012 Jul 14
1
GAM Chi-Square Difference Test
We are relatively new users of GAM in mgcv (Wood), and wonder if anyone
can advise us on a problem we are encountering as we analyze many short time
series datasets. For each dataset, we have four models, each with an intercept,
predictor x (trend), z (treatment), and int (the interaction between x and z).
Our models are
Model 1: gama1.1 <- gam(y~x+z+int, family=quasipoisson) ##no smooths
Model
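With a quasipoisson family there is no true likelihood, so a chi-square difference test is not well defined; F tests via anova() are the usual route. A hedged sketch on simulated data (model names echo the post; the interaction is built as x:z):

library(mgcv)
set.seed(13)
dat <- data.frame(x = 1:50, z = rep(0:1, 25))
dat$y <- rpois(50, exp(0.02 * dat$x + 0.3 * dat$z))
gama1.1 <- gam(y ~ x + z + x:z,             family = quasipoisson, data = dat)
gama1.2 <- gam(y ~ s(x) + z + s(x, by = z), family = quasipoisson, data = dat)
anova(gama1.1, gama1.2, test = "F")  # quasi-likelihood: F, not chi-square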
2011 Mar 16
1
Standardized Pearson residuals (and score tests)
Hi Peter and others,
If it helps, I wrote a small function glm.scoretest() for the statmod
package on CRAN to compute score tests from glm fits. The score test for
adding a covariate, or any set of covariates, can be extracted very neatly
from the standard glm output, although you probably already know that.
Regards
Gordon
---------------------------------------------
Professor Gordon K
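A hedged usage sketch for statmod::glm.scoretest on toy data (the signature, as I recall it, is glm.scoretest(fit, x2, dispersion = NULL), where x2 holds the candidate covariate values):

library(statmod)
set.seed(17)
x1 <- rnorm(100); x2 <- rnorm(100)
y <- rbinom(100, 1, plogis(0.5 * x1 + 0.5 * x2))
fit <- glm(y ~ x1, family = binomial)
z <- glm.scoretest(fit, x2)   # score z-statistic for adding x2 to the fit
2 * pnorm(-abs(z))            # two-sided p-value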
2011 Aug 17
2
interpreting interactions in a model
Hi,
I've got this model
> model<-glm(prevalence~agesex+agesex:month,binomial)
and the output of anova is like that
> anova(model,test="Chisq")
              Df Deviance Resid. Df Resid. Dev  P(>|Chi|)
NULL                            524     206.97
agesex         2   9.9165       522     197.05   0.007025 **
agesex:month   9  18.0899
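In prevalence ~ agesex + agesex:month, the interaction term (with no month main effect) fits separate month contrasts within each agesex level, which is what the 9-df line tests jointly. A hedged sketch with simulated data shaped to match the table above (3 agesex levels, month as a 4-level factor):

set.seed(19)
d <- data.frame(
  agesex = factor(sample(c("adF", "adM", "juv"), 525, replace = TRUE)),
  month  = factor(sample(paste0("m", 1:4), 525, replace = TRUE))
)
d$prevalence <- rbinom(525, 1, 0.15)
model <- glm(prevalence ~ agesex + agesex:month, family = binomial, data = d)
anova(model, test = "Chisq")   # agesex: 2 df; agesex:month: 9 df
summary(model)$coefficients    # month contrasts within each agesex level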
2010 Apr 01
2
About logistic regression
Hi,
I have a dichotomous variable (Q1) whose answers are Yes or
No.
Also I have 2 categorical explanatory variables (V1 and V2)
with two levels each.
I used logistic regression to determine whether there is an
effect of V1, V2 or an interaction between them.
I used both R and SAS, just to cross-check. It happens
that there is disagreement about the effect of the
explanatory variables
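When R and SAS disagree on a model like this, the usual suspects are the contrast coding (R defaults to treatment contrasts; SAS references the last level) and the test type (Wald z in summary() versus Type III-style tests). A hedged sketch of the two summaries worth aligning first, on toy data:

set.seed(23)
d <- data.frame(
  Q1 = factor(sample(c("No", "Yes"), 120, replace = TRUE)),
  V1 = factor(sample(c("a", "b"),    120, replace = TRUE)),
  V2 = factor(sample(c("c", "d"),    120, replace = TRUE))
)
fit <- glm(Q1 ~ V1 * V2, data = d, family = binomial)
summary(fit)                # Wald z tests under treatment contrasts
drop1(fit, test = "Chisq")  # LR test of the interaction, the term both
                            # systems should agree on once coding matches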