similar to: rstandard.glm() in base/R/lm.influence.R

Displaying 20 results from an estimated 1000 matches similar to: "rstandard.glm() in base/R/lm.influence.R"

2011 Mar 14
3
Standardized Pearson residuals
Is there any reason that rstandard.glm doesn't have a "pearson" option? And if not, can it be added? Background: I'm currently teaching an undergrad/grad-service course from Agresti's "Introduction to Categorical Data Analysis (2nd edn)" and deviance residuals are not used in the text. For now I'll just provide the students with a simple function to use, but I
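[Editorial sketch, not from the thread: newer versions of R do accept rstandard(fit, type = "pearson"), but the quantity can also be computed by hand, assuming the usual definition r_i / sqrt(dispersion * (1 - h_i)). The data below are invented for illustration.]
set.seed(1)
x   <- rnorm(100)
y   <- rbinom(100, 1, plogis(0.5 * x))
fit <- glm(y ~ x, family = binomial)
h   <- hatvalues(fit)
phi <- summary(fit)$dispersion                                   # 1 for binomial unless a quasi- family is used
rsp <- residuals(fit, type = "pearson") / sqrt(phi * (1 - h))    # standardized Pearson residuals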
2005 Dec 06
1
standardized residuals (rstandard & plot.lm) (PR#8367)
Full_Name: Heather Turner Version: 2.2.0 OS: Windows XP Submission from: (NULL) (137.205.240.44) Standardized residuals as calculated by rstandard.lm, rstandard.glm and plot.lm are Inf/NaN rather than zero when the un-standardized residuals are zero. This causes plot.lm to break when calculating 'ylim' for any of the plots of standardized residuals. Example:
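[Editorial sketch for context, not the original bug example: rstandard.lm() effectively computes the ratio below, so whenever a residual and its (1 - leverage) are both zero the result is 0/0, which is one way the Inf/NaN described above can arise.]
fit <- lm(dist ~ speed, data = cars)
h   <- hatvalues(fit)
all.equal(rstandard(fit),
          residuals(fit) / (summary(fit)$sigma * sqrt(1 - h)))   # TRUE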
2010 Nov 10
1
standardized/studentized residuals with loess
Hi all, I'm trying to apply loess regression to my data and then use the fitted model to get the standardized/studentized residuals. I understood that for linear regression (lm) there are functions to do that: fit1 = lm(y~x) stdres.fit1 = rstandard(fit1) studres.fit1 = rstudent(fit1) I was wondering if there is an equally simple way to get the standardized/studentized residuals for a
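[Editorial sketch of one rough option, assuming it is acceptable to ignore pointwise leverage: a loess fit stores its residual standard error in the fit's s component, so the residuals can at least be scaled by that. The data are made up; there is no exact rstandard() analogue here.]
set.seed(1)
x  <- seq(0, 10, length.out = 100)
y  <- sin(x) + rnorm(100, sd = 0.3)
lo <- loess(y ~ x)
std_approx <- residuals(lo) / lo$s   # crude standardization by the overall residual SE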
2004 Nov 19
2
glm with Newton Raphson
Hi, Does anyone know if there is a function to find the maximum likelihood estimates of a glm using Newton-Raphson methodology instead of using IWLS. Thanks Valeska Andreozzi -------------------------------------------------------- Department of Epidemiology and Quantitative Methods FIOCRUZ - National School of Public Health Tel: (55) 21 2598 2872 Rio de Janeiro - Brazil
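[Editorial sketch: glm() itself has no Newton-Raphson option (it uses IWLS), but for a logistic regression the Newton-Raphson iteration is short to write by hand, and for the canonical logit link the two algorithms coincide. The function name newton_logit and the simulated data are assumptions made for illustration.]
newton_logit <- function(X, y, tol = 1e-8, maxit = 25) {
  beta <- rep(0, ncol(X))
  for (it in seq_len(maxit)) {
    mu   <- plogis(drop(X %*% beta))
    grad <- crossprod(X, y - mu)                # score vector
    info <- crossprod(X, X * (mu * (1 - mu)))   # information matrix (observed = expected here)
    step <- solve(info, grad)
    beta <- beta + drop(step)
    if (max(abs(step)) < tol) break
  }
  beta
}
set.seed(1)
x <- rnorm(200)
y <- rbinom(200, 1, plogis(-0.5 + x))
cbind(newton = newton_logit(cbind(1, x), y),
      glm    = coef(glm(y ~ x, family = binomial)))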
2004 Feb 24
1
rstandard does not produce standardized residuals
Dear all, the application of the function rstandard() in the base package to a glm object does not produce residuals standardized to have variance one: the reason is that the deviance residuals are divided by the dispersion estimate and not by the square root of the estimate for the dispersion. Should the function not be changed to produce residuals with a variance about 1? R 1.8.1 on
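[Editorial sketch of the scaling in question, shown on a gaussian-family glm where the dispersion actually has to be estimated: dividing the deviance residuals by the square root of dispersion * (1 - h) is what gives residuals with variance close to one.]
fit <- glm(dist ~ speed, data = cars)    # gaussian glm, dispersion estimated
phi <- summary(fit)$dispersion
rd  <- residuals(fit, type = "deviance")
rd / sqrt(phi * (1 - hatvalues(fit)))    # compare with rstandard(fit) in current R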
2006 Jan 10
2
standardized residuals (rstandard & plot.lm) (PR#8468)
This bug is not quite fixed - the example from my original report now works using R-2.2.1, but plot(Uniform, 6) does not. The bug is due to if (show[6]) { ymx <- max(cook, na.rm = TRUE) * 1.025 g <- hatval/(1 - hatval) # Potential division by zero here # plot(g, cook, xlim = c(0, max(g)), ylim = c(0, ymx), main = main, xlab =
2011 Feb 09
5
Removing Outliers Function
I am working on a function that will remove outliers for regression analysis. I am stating that a data point is an outlier if its studentized residual is above 3 or below -3. The code below is what I have thus far for the function x = c(1:20) y = c(1,3,4,2,5,6,18,8,10,8,11,13,14,14,15,85,17,19,19,20) data1 = data.frame(x,y) rm.outliers =
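[Editorial sketch of one way such a function could look; the name rm.outliers and the cutoff of 3 follow the post, while the interface (return a cleaned and a removed data frame) is an assumption.]
rm.outliers <- function(formula, data, cutoff = 3) {
  fit  <- lm(formula, data = data)
  keep <- abs(rstudent(fit)) < cutoff
  list(clean   = data[keep, , drop = FALSE],    # rows retained for re-fitting
       removed = data[!keep, , drop = FALSE])   # flagged outliers
}
x <- 1:20
y <- c(1,3,4,2,5,6,18,8,10,8,11,13,14,14,15,85,17,19,19,20)
rm.outliers(y ~ x, data.frame(x, y))$removed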
2006 Aug 31
1
NaN when using dffits, stemming from lm.influence call
Hi all I'm getting a NaN returned on using dffits, as explained below. To me, there seems no obvious (or non-obvious, for that matter) reason why a NaN appears. Before I start digging further, can anyone see why dffits might be failing? Is there a problem with the data? Consider: # Load data dep <-
2013 Jun 10
1
padding specific missing values with NA to allow cbind
Dear list Getting very frustrated with this simple-looking problem > m1 <- lm(x~y, data=mydata) > outliers <- abs(stdres(m1))>2 > plot(x~y, data=mydata) I would like to plot a simple x,y scatter plot with labels giving custom information displayed for the outliers only, i.e. I would like to define a column mydata$labels for the mydata dataframe so that the command >
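[Editorial sketch of one way to get there, keeping MASS::stdres() as in the post; non-outliers get an empty label so text() draws nothing for them. The data are invented.]
library(MASS)
set.seed(1)
mydata <- data.frame(x = rnorm(30), y = rnorm(30))
m1 <- lm(x ~ y, data = mydata)
outliers <- abs(stdres(m1)) > 2
mydata$labels <- ifelse(outliers, rownames(mydata), "")   # "" = no label for non-outliers
plot(x ~ y, data = mydata)
text(mydata$y, mydata$x, labels = mydata$labels, pos = 3)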
2011 Mar 08
2
consulta (query)
Hello, I'm new to R but I find it really interesting. I have a couple of questions... 1. I need to create a new data set. 2. I need to know how sample, subset and rbind are written. I would appreciate your answers. Best regards, Valeska Yaitul.
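[Editorial sketch of the three operations asked about, with made-up data; the object names are arbitrary.]
d <- data.frame(id = 1:10, value = rnorm(10))   # a new data frame
d_sample <- d[sample(nrow(d), 5), ]             # 5 randomly chosen rows
d_subset <- subset(d, value > 0)                # rows meeting a condition
d_both   <- rbind(d_sample, d_subset)           # stack two data frames with the same columns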
2013 Oct 15
1
Q-Q plot scaling in plot.lm(); bug or thinko?
I've been looking fairly carefully at the Q-Q plots produced by plot.lm() and am having difficulty understanding why plot.lm() is doing what it's doing, specifically scaling the standardized residuals by the prior weights. Can anyone explain this to me ... ? Multiplying by the weights seems to give the wrong plot, at least for binomial
2007 Oct 29
3
Strange results with anova.glm()
Hi, I have been struggling with this problem for some time now. The internet and books haven't been able to help me. ## I have a factorial design with counts (fruits) as the response variable. > str(stubb) 'data.frame': 334 obs. of 5 variables: $ id : int 6 23 24 25 26 27 28 29 31 34 ... $ infl.treat : Factor w/ 2 levels "0","1": 2 2 2 2 1 1 1 2 1 1 ... $ def.treat :
2013 Feb 15
2
Making the plot window wider and using the predict function
Hello, I am new to R and have a couple of questions. My data set contains the variables "Bwt" and "Hwt", which are bodyweight and heartweight, respectively, of a group of cats. With the following code, I am making two plots, both to be viewed in the same plot window in R: library(MASS) maleData <- subset(cats, Sex == "M") linreg0 <- lm(maleData$Hwt ~
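[Editorial sketch of the two pieces being asked about, using the cats data from MASS mentioned in the post: par(mfrow = ...) to show two plots side by side, and predict() with a newdata data frame. The choice of second plot and the value Bwt = 3.5 are assumptions.]
library(MASS)
maleData <- subset(cats, Sex == "M")
linreg0  <- lm(Hwt ~ Bwt, data = maleData)
op <- par(mfrow = c(1, 2))                        # one row, two plot panels
plot(Hwt ~ Bwt, data = maleData)
abline(linreg0)
plot(linreg0, which = 1)                          # residuals vs fitted
par(op)
predict(linreg0, newdata = data.frame(Bwt = 3.5)) # predicted heart weight at Bwt = 3.5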
2010 Feb 21
1
tests for measures of influence in regression
influence.measures gives several measures of influence for each observation (Cook's distance, etc.) and actually flags observations that it determines are influential by any of the measures. Looks good! But how does it discriminate between the influential and non-influential observations for each of the measures? For example, does it do a Bonferroni-corrected t-test on the residuals identified by
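[Editorial note, as far as I can tell from the help page rather than the thread: the flags come from heuristic cutoffs documented in ?influence.measures, not from formal tests. A quick way to see which statistics are checked and which rows are flagged:]
fit <- lm(dist ~ speed, data = cars)
im  <- influence.measures(fit)
colnames(im$is.inf)   # the statistics influence.measures() checks
summary(im)           # only the observations flagged as influential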
2018 Feb 23
2
How to save the residuals of an lm object greater or less than a certain value to an R object?
Dear list members, I want to save residuals above or below a certain value to an R object. I have performed a multiple linear regression, and now I want to find out which cases have a residual above +2.5 or below -2.5. Below I provide the R commands I have used. Reg <- lm(a~b+c+d+e+f) # perform multiple regression with a as the dependent variable. Residuals <- residuals(Reg) # store
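[Editorial sketch of the subsetting step, with invented data and variable names matching the post; the +/- 2.5 cutoff is the one mentioned above.]
set.seed(1)
dat <- data.frame(a = rnorm(50), b = rnorm(50), c = rnorm(50),
                  d = rnorm(50), e = rnorm(50), f = rnorm(50))
Reg <- lm(a ~ b + c + d + e + f, data = dat)
res <- residuals(Reg)
big <- res[res > 2.5 | res < -2.5]   # residuals outside +/- 2.5, kept as a named vector
big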
2011 Feb 11
1
censReg or tobit: testing for assumptions in R?
Hello! I'm thinking of applying a censored regression model to cross-sectional data, using either the tobit (package survival) or the censReg function (package censReg). The dependent variable is left and right-censored. My hopefully not too silly question is this: I understand that heteroskedasticity and nonnormal errors are even more serious problems in a censored regression than in an
2011 Aug 10
1
studentized and standarized residuals
Hi, I must be doing something silly here, because I can't get the studentised and standardised residuals from the R output of a linear model to agree with what I think they should be from the equations. Thanks in advance, Jennifer x = seq(1,10) y = x + rnorm(10) mod = lm(y~x) rstandard(mod) residuals(mod)/(summary(mod)$sigma) rstudent(mod)
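[Editorial sketch of the by-hand versions that do agree with rstandard() and rstudent(): both divide by sqrt(1 - h), which the attempt above omits, and rstudent() additionally uses the leave-one-out estimate of sigma.]
x   <- seq(1, 10)
y   <- x + rnorm(10)
mod <- lm(y ~ x)
h   <- hatvalues(mod)
all.equal(rstandard(mod),
          residuals(mod) / (summary(mod)$sigma * sqrt(1 - h)))
all.equal(rstudent(mod),
          residuals(mod) / (lm.influence(mod)$sigma * sqrt(1 - h)))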
2012 May 03
1
NA's when subset in a dataframe
Dear community, I'm having this silly problem. I have a linear model. After fitting it, I wanted to know which data points had studentized residuals larger than 3, so I tried this: d1 <- cooks.distance(lmmodel) r <- sqrt(abs(rstandard(lmmodel))) rstu <- abs(rstudent(lmmodel)) a <- cbind(mydata, d1, r, rstu) alargerthan3 <- a[rstu > 3, ] And suddenly a[rstu > 3, ] has
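[Editorial guess at what is happening, shown on toy values rather than the poster's data: subsetting with a logical vector that contains NA keeps an all-NA row for each NA, whereas which() drops them.]
rstu <- c(0.5, NA, 3.4, 1.1)          # imagine an NA among the studentized residuals
a    <- data.frame(id = 1:4, rstu = rstu)
a[rstu > 3, ]                         # includes an all-NA row for the NA entry
a[which(rstu > 3), ]                  # only the row that genuinely exceeds 3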
2010 Nov 17
1
how exactly does 'identify' work?
Hi all, ######################################### test=data.frame(x=1:26,y=-23.5+0.45*(1:26)+rnorm(26)) rownames(test)=LETTERS[1:26] attach(test) #test test.lm=lm(y~x) plot(test.lm,2) identify(test.lm$res,,row.names(test)) # not working plot(x,y) identify(x,y,row.names(test)) # works fine identify(y,,row.names(test)) # works fine identify(x,,row.names(test)) # not working identify(y,,y) # works
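[Editorial explanation sketch, not from the thread: identify() with a single vector goes through xy.coords(), which treats the values as y plotted against the index 1:n, so it only lines up with the screen when the plotted x really is 1:n. Here x is 1:26, which is why identify(y, ...) happens to work while identify(x, ...) does not. The interactive calls are commented out since they need mouse clicks.]
test <- data.frame(x = 1:26, y = -23.5 + 0.45 * (1:26) + rnorm(26))
rownames(test) <- LETTERS[1:26]
plot(test$x, test$y)
## identify(test$y, labels = rownames(test))  # points assumed at (1:26, y): matches the plot
## identify(test$x, labels = rownames(test))  # points assumed at (1:26, x): does not match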
2011 Apr 29
1
logistic regression with glm: cooks distance and dfbetas are different compared to SPSS output
Hi there, I have the problem that I'm not able to reproduce the SPSS residual statistics (dfbeta and Cook's distance) with a simple binary logistic regression model obtained in R via the glm function. I tried the following: fit <- glm(y ~ x1 + x2 + x3, data, family=binomial) cooks.distance(fit) dfbetas(fit) When I compare the returned values with the values that I get in SPSS,