Displaying 20 results from an estimated 1000 matches similar to: "Standardized Pearson residuals"
2011 Mar 16
1
Standardized Pearson residuals (and score tests)
Hi Peter and others,
If it helps, I wrote a small function glm.scoretest() for the statmod
package on CRAN to compute score tests from glm fits. The score test for
adding a covariate, or any set of covariates, can be extracted very neatly
from the standard glm output, although you probably already know that.
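A minimal sketch of calling glm.scoretest(), with simulated Poisson data standing in for a real fit (the variable names here are illustrative only):

library(statmod)

## toy Poisson data, purely illustrative
set.seed(1)
x1 <- rnorm(50)
x2 <- rnorm(50)
y  <- rpois(50, exp(0.5 + 0.3 * x1))

fit <- glm(y ~ x1, family = poisson)
## score test for adding x2 to the fitted model; the statistic is
## approximately standard normal under the null hypothesis
z <- glm.scoretest(fit, x2)
2 * pnorm(-abs(z))   # two-sided p-value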
Regards
Gordon
---------------------------------------------
Professor Gordon K
2005 Dec 06
1
standardized residuals (rstandard & plot.lm) (PR#8367)
Full_Name: Heather Turner
Version: 2.2.0
OS: Windows XP
Submission from: (NULL) (137.205.240.44)
Standardized residuals as calculated by rstandard.lm, rstandard.glm and plot.lm
are Inf/NaN rather than zero when the un-standardized residuals are zero. This
causes plot.lm to break when calculating 'ylim' for any of the plots of
standardized residuals. Example:
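The example itself is not reproduced in this excerpt; one hypothetical situation in which a zero residual pairs with a non-finite standardized residual is a fit containing a leverage-one point:

## hypothetical data: the single observation at level "b" has leverage 1,
## so its residual is zero and 1 - hat is zero as well
x <- factor(c("a", "a", "a", "b"))
y <- c(1, 2, 3, 4)
fit <- lm(y ~ x)
residuals(fit)   # fourth residual is (numerically) zero
rstandard(fit)   # fourth value comes out NaN (0/0) rather than zero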
2004 Jan 20
2
rstandard.glm() in base/R/lm.influence.R
I contacted John Fox about this first, because parts of the file are
attributed to him. He says that he didn't write rstandard.glm(), and
suggests asking r-devel.
As it stands, rstandard.glm() has summary(model)$dispersion outside the
sqrt(), while in rstandard.lm(), the sd is already sqrt()ed. This seems to
follow stdres() in VR/MASS/R/stdres.R.
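A sketch of the computation at issue, with the dispersion estimate inside the sqrt() (simulated data of my own, not the package source):

## simulated overdispersed counts so that the dispersion estimate is not 1
set.seed(1)
treat  <- gl(2, 20)
counts <- rnbinom(40, mu = rep(c(3, 6), each = 20), size = 2)
fit <- glm(counts ~ treat, family = quasipoisson)

h   <- hatvalues(fit)
phi <- summary(fit)$dispersion
## standardized deviance residuals: divide by sqrt(dispersion * (1 - hat)),
## i.e. the dispersion estimate belongs inside the square root
residuals(fit, type = "deviance") / sqrt(phi * (1 - h))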
Of course for the c("poisson",
2010 Nov 10
1
standardized/studentized residuals with loess
Hi all,
I'm trying to apply loess regression to my data and then use the fitted
model to get the standardized/studentized residuals. I understood that for
linear regression (lm) there are functions to do that:
fit1 = lm(y~x)
stdres.fit1 = rstandard(fit1)
studres.fit1 = rstudent(fit1)
I was wondering if there is an equally simple way to get
the standardized/studentized residuals for a
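There is no rstandard()/rstudent() method for loess fits; one rough possibility, sketched below under my own assumptions (it ignores the pointwise leverage), is to divide the residuals by the residual standard error stored in the loess object:

## illustrative data; x and y here are simulated, not the poster's
set.seed(1)
x <- runif(100, 0, 10)
y <- sin(x) + rnorm(100, sd = 0.3)

fit2 <- loess(y ~ x)
## rough standardization: residuals divided by the estimated residual
## standard error (component 's' of the loess object)
std.approx <- residuals(fit2) / fit2$s
head(std.approx)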
2006 Jan 10
2
standardized residuals (rstandard & plot.lm) (PR#8468)
This bug is not quite fixed - the example from my original report now works
using R-2.2.1, but
plot(Uniform, 6)
does not. The bug is due to
if (show[6]) {
ymx <- max(cook, na.rm = TRUE) * 1.025
g <- hatval/(1 - hatval) # Potential division by zero here #
plot(g, cook, xlim = c(0, max(g)), ylim = c(0, ymx),
main = main, xlab =
2003 Jan 21
2
books on categorical data analyses
Dear All,
We are about to purchase the second edition of Agresti's "Categorical Data
Analysis" (my old copy of the first ed. of that wonderful book is falling
apart). I would appreciate suggestions about other comparable books which, if
possible, have examples using R/S code (instead of SAS).
Thanks,
Ramón
--
Ramón Díaz-Uriarte
Bioinformatics Unit
Centro Nacional de
2012 Feb 09
1
passing an extra argument to an S3 generic
I'm trying to write some functions extending influence measures to
multivariate linear models and also
allow subsets of size m>=1 to be considered for deletion diagnostics.
I'd like these to work roughly parallel
to those functions for the univariate lm where only single case deletion
(m=1) diagnostics are considered.
Corresponding to stats::hatvalues.lm, the S3 method for class
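A minimal sketch of the general S3 pattern being discussed; the method name and the extra subset-size argument m are assumptions of mine, not the poster's code:

## a method for class "mlm" can accept extra arguments (here a hypothetical
## subset size m) as long as it also keeps '...' so the generic still matches
hatvalues.mlm <- function(model, m = 1, ...) {
    ## placeholder body: ordinary single-case hat values from the model
    ## matrix; a real implementation would handle m > 1 subsets
    hat(model.matrix(model), intercept = FALSE)
}

## calls go through the stats::hatvalues generic, which passes m along:
## hatvalues(fit.mlm, m = 2)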
2007 Oct 29
3
Strange results with anova.glm()
Hi,
I have been struggling with this problem for some time now. Neither the
internet nor books have been able to help me.
## I have factorial design with counts (fruits) as response variable.
> str(stubb)
'data.frame': 334 obs. of 5 variables:
$ id : int 6 23 24 25 26 27 28 29 31 34 ...
$ infl.treat : Factor w/ 2 levels "0","1": 2 2 2 2 1 1 1 2 1 1 ...
$ def.treat :
2003 Apr 07
1
filtering ts with arima
Hi,
I have the following code from Splus that I'd like to migrate to R. So far,
the only problem is the arima.filt function. This function allows me to
filter an existing time-series through a previously estimated arima model,
and obtain the residuals for further use. Here's the Splus code:
# x is the estimation time series, new.infl is a timeseries that contains
new information
# a.mle
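One way to get this in R, sketched here with simulated series standing in for the poster's data, is to refit the new series with arima() while holding every parameter fixed at its previously estimated value:

set.seed(1)
x        <- arima.sim(list(ar = 0.6, ma = 0.3), n = 200)   # estimation series
new.infl <- arima.sim(list(ar = 0.6, ma = 0.3), n = 50)    # new information

fit <- arima(x, order = c(1, 0, 1))
## "filter" the new series through the estimated model: with all parameters
## fixed, arima() only runs the filtering step and returns the innovations
filt <- arima(new.infl, order = c(1, 0, 1),
              fixed = coef(fit), transform.pars = FALSE)
residuals(filt)   # residuals for further use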
2005 Apr 13
2
multinom and contrasts
Hi,
I found that using different contrasts (e.g.
contr.helmert vs. contr.treatment) will generate
different fitted probabilities from multinomial
logistic regression using multinom(); while the fitted
probabilities from binary logistic regression seem to
be the same. Why is that? and for multinomial logisitc
regression, what contrast should be used? I guess it's
helmert?
here is an example
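The example itself is not reproduced in this excerpt; a sketch of the kind of comparison at issue, using the MASS housing data in place of the poster's, is:

library(nnet)
library(MASS)   # for the housing data

## fit the same multinomial logit model under two contrast codings
options(contrasts = c("contr.treatment", "contr.poly"))
fit.t <- multinom(Sat ~ Infl + Type + Cont, weights = Freq,
                  data = housing, trace = FALSE)
options(contrasts = c("contr.helmert", "contr.poly"))
fit.h <- multinom(Sat ~ Infl + Type + Cont, weights = Freq,
                  data = housing, trace = FALSE)

## in exact arithmetic the fitted probabilities are invariant to a full-rank
## contrast coding; any difference reflects the numerical optimization
max(abs(fitted(fit.t) - fitted(fit.h)))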
2004 Jan 08
3
Strange parametrization in polr
In Venables \& Ripley 3rd edition (p. 231) the proportional odds model
is described as:
logit(p<=k) = zeta_k + eta
but polr apparently thinks there is a minus in front of eta,
as is apparent below.
Is this a bug or a feature I have overlooked?
Here is the naked code for reproduction, below the results.
---------------------------------------------------------------------------
version
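For reference, a sketch of the sign convention documented for MASS::polr, using the housing data rather than the poster's code:

library(MASS)
fit <- polr(Sat ~ Infl + Type + Cont, weights = Freq, data = housing)

## polr parameterizes the model as  logit P(Y <= k) = zeta_k - eta,  so the
## linear predictor enters with a minus sign relative to the zeta_k + eta
## form quoted from Venables & Ripley
fit$zeta    # the zeta_k cutpoints
coef(fit)   # the coefficients making up eta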
2003 May 05
3
polr in MASS
Hi, I am trying to test the proportional-odds model using the "polr" function in the MASS library with the dataset of "housing" contained in the MASS book ("Sat" (factor: low, medium, high) is the dependent variable, "Infl" (low, medium, high), "Type" (tower, apartment, atrium, terrace) and "Cont" (low, high) are the predictor variables
2008 Sep 30
2
weird behavior of drop1() for polr models (MASS)
I would like to do a SS type III analysis on a proportional odds logistic
regression model. I use drop1(), but dropterm() shows the same behaviour. It
works as expected for regular main-effects models; however, when the model
includes an interaction effect it seems to have problems with matching the
parameters to the predictor terms. An example:
library("MASS");
options(contrasts =
2012 Apr 30
5
Different varable lengths
Hi!
I'm trying to run lm() on three objects. My problem is that R protests
and says that the variable lengths differ for one of the objects
(Sweden.GDP.gap). But I have double-checked that the number of observations
is the same. All three objects should contain 9 observations, but R only
accepts 9 observations for two of the objects; the third must have 10! Very
confusing because there
2004 Feb 24
1
rstandard does not produce standardized residuals
Dear all,
the application of the function rstandard() in the base package
to a glm object does not produce residuals standardized to
have variance one:
the reason is that the deviance residuals are divided
by the dispersion estimate and not by the
square root of the estimate for the dispersion.
Should the function not be changed to produce residuals
with a variance about 1?
R 1.8.1 on
2009 Dec 10
2
Problem with coeftest using Newey West estimator
Hi,
I want to calculate the t- and p-values for a linear model using the Newey-West estimator.
I tried this code and it usually worked just fine:
> oberlm <- lm(DYH ~ BIP + Infl + EOil, data=HU_H)
> coeftest(oberlm, NeweyWest(oberlm, lag=2))
t test of coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.1509950 0.0743832 2.0300 0.179486
BIP
2008 Jan 05
1
Likelihood ratio test for proportional odds logistic regression
Hi,
I want to do a global likelihood ratio test for the proportional odds
logistic regression model and am unsure how to go about it. I am using
the polr() function in library(MASS).
1. Is the p-value from the likelihood ratio test obtained by
anova(fit1,fit2), where fit1 is the polr model with only the intercept
and fit2 is the full polr model (refer to example below)? So in the
case of the
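A sketch of that comparison using the MASS housing data (the poster's own example is not shown in this excerpt):

library(MASS)

fit1 <- polr(Sat ~ 1, weights = Freq, data = housing)   # intercept-only model
fit2 <- polr(Sat ~ Infl + Type + Cont, weights = Freq, data = housing)

## likelihood ratio test of the full model against the intercept-only model
anova(fit1, fit2)

## the same statistic by hand: difference in deviances against a chi-squared
pchisq(deviance(fit1) - deviance(fit2),
       df = length(coef(fit2)), lower.tail = FALSE)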
2012 Apr 24
1
nobs.glm
Hi all,
The nobs method for the MASS polr class takes the weights into account,
but the nobs method for glm does not. I wonder what the rationale behind
this design of nobs.glm is. Thanks in advance. Best Regards.
> library(MASS)
> house.plr <- polr(Sat ~ Infl + Type + Cont, weights = Freq, data = housing)
> house.logit <- glm(I(Sat=='High') ~ Infl + Type + Cont, binomial,weights
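The glm call is cut off above; a sketch completing the comparison, assuming weights = Freq and data = housing as in the polr call:

library(MASS)

house.plr   <- polr(Sat ~ Infl + Type + Cont, weights = Freq, data = housing)
house.logit <- glm(I(Sat == 'High') ~ Infl + Type + Cont, binomial,
                   weights = Freq, data = housing)   # arguments assumed

nobs(house.plr)     # polr's method reflects the weights (the total Freq)
nobs(house.logit)   # glm's method counts the cases with non-zero weight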
2010 May 26
2
extracat , JGR, iWidgets install problems
[Environment: Win XP, R 2.10.1]
I'm trying to install the packages JGR and iWidgets required by the
extracat package to make the interactive plots
in the package work. I've tried various things, but nothing seems to
work. Here is my most recent attempt,
followed by my sessionInfo().
Does anyone have any suggestions how to make this work?
>
> library(extracat)
Loading
2011 Apr 29
1
logistic regression with glm: cooks distance and dfbetas are different compared to SPSS output
Hi there,
My problem is that I'm not able to reproduce the SPSS residual
statistics (dfbeta and Cook's distance) for a simple binary logistic
regression model obtained in R via the glm function.
I tried the following:
fit <- glm(y ~ x1 + x2 + x3, data, family=binomial)
cooks.distance(fit)
dfbetas(fit)
When I compare the returned values with the values that I get in SPSS,
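For reference, a sketch of what cooks.distance() computes for a glm, with simulated data and names of my own rather than the poster's: an approximation built from the IWLS hat values and Pearson residuals, which need not coincide with SPSS's calculation.

set.seed(1)
d <- data.frame(x1 = rnorm(200), x2 = rnorm(200), x3 = rnorm(200))
d$y <- rbinom(200, 1, plogis(0.5 * d$x1 - 0.3 * d$x2))

fit <- glm(y ~ x1 + x2 + x3, data = d, family = binomial)

h   <- hatvalues(fit)
rp  <- residuals(fit, type = "pearson")
phi <- summary(fit)$dispersion   # 1 for the binomial family
p   <- length(coef(fit))

## approximation used by R's cooks.distance() for glm objects;
## compare with cooks.distance(fit)
manual <- (rp^2 * h) / (phi * (1 - h)^2 * p)
range(manual - cooks.distance(fit))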