similar to: logistic regression with glm: cooks distance and dfbetas are different compared to SPSS output

Displaying 20 results from an estimated 2000 matches similar to: "logistic regression with glm: cooks distance and dfbetas are different compared to SPSS output"

2009 Nov 13
1
dfbetas vs dfbeta
Hi, I've looked around but can't find a clear answer on the difference between these two. Any help? Thanks!
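The short answer, illustrated with a minimal sketch on made-up data (none of the names below come from the original thread): dfbeta() returns the raw change in each coefficient when an observation is deleted, while dfbetas() returns the same change scaled by a standard error for that coefficient, so the two differ only by standardization.

  set.seed(1)
  d <- data.frame(x = rnorm(30))
  d$y <- 1 + 2 * d$x + rnorm(30)
  fit <- lm(y ~ x, data = d)
  head(dfbeta(fit))   # unscaled change in each coefficient per deleted case
  head(dfbetas(fit))  # the same changes, standardized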
2003 Jun 12
1
What PRECISELY is the dfbetas() or lm.influence()$coef ?
Hello. I want to get the proper influence function for the glm coefficients in R. This is supposed to be inv(information)*(y-yhat)*x. So I am wondering what the exact mathematical formula is for the output that the functions dfbeta() or lm.influence()$coefficients return for a glm model. I am confused because:
1. Their columns don't sum to zero, as influences should.
2. They
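One way to pin the definition down empirically, sketched below on simulated data (not the original poster's model): compare lm.influence()$coefficients with an explicit leave-one-out refit. For a glm these values are, as far as the help page describes, one-step approximations to the change in the coefficients when case i is dropped, so the brute-force refit agrees only approximately and sign conventions may differ.

  set.seed(2)
  d <- data.frame(x = rnorm(50))
  d$y <- rbinom(50, 1, plogis(d$x))
  fit <- glm(y ~ x, family = binomial, data = d)
  approx_dfbeta <- lm.influence(fit)$coefficients          # what R reports
  exact_dfbeta  <- t(sapply(seq_len(nrow(d)), function(i)
    coef(fit) - coef(update(fit, data = d[-i, ]))))        # true case deletion
  head(round(cbind(approx_dfbeta, exact_dfbeta), 4))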
2003 Jul 12
1
Problem with library "car"
I am using the Unix version of R (version 1.7.0), installed via fink on a G4 Macintosh. I recently upgraded from version 1.6.0 and found that the "car" library now has a problem:

  ---Begin transcript---
  > library(car)
  Attaching package 'car':
  The following object(s) are masked from package:base :
   dfbeta dfbeta.lm dfbetas dfbetas.lm hatvalues hatvalues.lm
2009 Jan 14
1
dfbetas without intercept
Hello I am running a regression without the intercept, and want to compute dfbetas. How do I do this? The dfbetas function only works when the intercept is included in the model. Regards K
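Whether dfbetas() really refuses intercept-free fits may depend on the R version (in current R it appears to handle them), but the quantity can always be computed directly from leave-one-out refits. A sketch with invented data:

  set.seed(3)
  d <- data.frame(x1 = rnorm(40), x2 = rnorm(40))
  d$y <- 2 * d$x1 - d$x2 + rnorm(40)
  fit <- lm(y ~ x1 + x2 - 1, data = d)          # no intercept
  manual_dfbetas <- t(sapply(seq_len(nrow(d)), function(i) {
    fit_i <- lm(y ~ x1 + x2 - 1, data = d[-i, ])
    (coef(fit) - coef(fit_i)) / sqrt(diag(vcov(fit_i)))    # one common scaling
  }))
  head(manual_dfbetas)
  head(dfbetas(fit))   # for comparison; scalings differ slightly by convention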
2010 Feb 21
1
tests for measures of influence in regression
influence.measures gives several measures of influence for each observation (Cook's distance, etc.) and actually flags observations that it determines are influential by any of the measures. Looks good! But how does it discriminate between the influential and non-influential observations for each of the measures? For example, does it do a Bonferroni-corrected t on the residuals identified by
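The flags can at least be inspected directly rather than guessed at. As far as I recall from ?influence.measures, the criteria are conventional rule-of-thumb cutoffs rather than formal tests (roughly: |dfbetas| > 1, |dffit| > 3*sqrt(k/(n-k)), |1 - covratio| > 3k/(n-k), Cook's distance above the median of F(k, n-k), and hat values > 3k/n); the help page is the authoritative source. A toy example:

  set.seed(4)
  d <- data.frame(x = c(rnorm(25), 8))        # one deliberately extreme x value
  d$y <- 1 + d$x + c(rnorm(25), 6)
  fit <- lm(y ~ x, data = d)
  im  <- influence.measures(fit)
  summary(im)                        # prints only the flagged observations
  which(apply(im$is.inf, 1, any))    # rows flagged by at least one measure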
2013 May 01
1
Trouble with methods() after loading gdata package.
Greetings to r-help land. I've run into some program crashes and I've traced them back to methods() behavior after the package gdata is loaded. I provide now a minimal reproducible example. This seems buggish to me. How about you?

  dat <- data.frame(x = rnorm(100), y = rnorm(100))
  lm1 <- lm(y ~ x, data = dat)
  methods(class = "lm")   ## OK so far
  library(gdata)
2001 Apr 28
9
two new packages
I've prepared preliminary versions of two packages that I plan eventually to contribute to CRAN: car (for "Companion to Applied Regression") is a package that provides a variety of functions in support of linear and generalized linear models, including regression diagnostics (e.g., studentized residuals, hat-values, Cook's distances, dfbeta, dfbetas, added-variable plots,
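For readers coming to this thread now, the diagnostics listed are easiest to see in a short usage sketch. The function names below are the present-day ones (several of these generics have since moved into base R's stats package, and the added-variable plot function in current car is avPlots()); Duncan is an example data set distributed with car/carData.

  library(car)
  fit <- lm(prestige ~ income + education, data = Duncan)
  rstudent(fit)         # studentized residuals
  hatvalues(fit)        # hat-values
  cooks.distance(fit)   # Cook's distances
  head(dfbeta(fit))     # raw coefficient deletion changes
  head(dfbetas(fit))    # the same, standardized
  avPlots(fit)          # added-variable plots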
2004 Mar 23
1
influence.measures, cooks.distance, and glm
Dear list, I've noticed that influence.measures and cooks.distance give different results for non-gaussian GLMs. For example, using R-1.9.0 alpha (2004-03-17) under Windows:

  > ## Dobson (1990) Page 93: Randomized Controlled Trial :
  > counts <- c(18,17,15,20,10,20,25,13,12)
  > outcome <- gl(3,1,9)
  > treatment <- gl(3,3)
  > glm.D93 <- glm(counts ~ outcome +
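A sketch that completes the truncated example so the two quantities can be compared directly. The part after the "+" and the family are not visible above, so they are filled in here from the well-known version of this Dobson example on the ?glm help page (a treatment term and a Poisson family); that is an assumption, not a quote from the original message.

  counts    <- c(18, 17, 15, 20, 10, 20, 25, 13, 12)
  outcome   <- gl(3, 1, 9)
  treatment <- gl(3, 3)
  glm.D93   <- glm(counts ~ outcome + treatment, family = poisson())
  cbind(cooks.distance = cooks.distance(glm.D93),
        infl.measures  = influence.measures(glm.D93)$infmat[, "cook.d"])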
2006 Oct 24
1
Cook's Distance in GLM (PR#9316)
Hi Community, I'm trying to reconcile Cook's distances computed in glm. The following snippet of code shows that the Cook's distance contours on the plot of Residuals v Leverage do not seem to be the same as the values produced by cooks.distance() or in the Cook's distance against observation number plot.

  counts <- c(18,17,15,20,10,20,25,13,12)
  outcome <- gl(3,1,9)
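To see the discrepancy the report describes without reading values off a plot, the sketch below (again assuming the Dobson model from ?glm, with a treatment term and Poisson family) draws both plots next to the numeric values:

  counts    <- c(18, 17, 15, 20, 10, 20, 25, 13, 12)
  outcome   <- gl(3, 1, 9)
  treatment <- gl(3, 3)
  fit <- glm(counts ~ outcome + treatment, family = poisson())
  round(cooks.distance(fit), 3)   # the values behind plot(fit, which = 4)
  op <- par(mfrow = c(1, 2))
  plot(fit, which = 4)            # Cook's distance vs observation number
  plot(fit, which = 5)            # residuals vs leverage, with Cook's D contours
  par(op)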
2010 Nov 13
1
Define a glm object with user-defined coefficients (logistic regression, family="binomial")
Hi there, I just don't find the solution to the following problem. :( Suppose I have a data frame with two predictor variables (x1, x2) and one dependent binary variable (y). How is it possible to define a glm object (family="binomial") with a user-defined logistic function like p(y) = exp(a + c1*x1 + c2*x2) / (1 + exp(a + c1*x1 + c2*x2)), where c1, c2 are coefficients which I define? So I would like to do no
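If the goal is only predicted probabilities from fixed, user-chosen coefficients, no glm object is needed at all; and if a genuine glm object is wanted (say, to reuse predict()), one common trick is to put the fixed linear predictor in as an offset and estimate nothing. A sketch with invented coefficients a, c1, c2 (these names are purely illustrative):

  a <- -0.5; c1 <- 1.2; c2 <- -0.8
  d <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
  p <- plogis(a + c1 * d$x1 + c2 * d$x2)     # inverse logit; no model object needed

  d$y  <- rbinom(100, 1, p)                  # some response is needed to call glm()
  d$lp <- a + c1 * d$x1 + c2 * d$x2
  fit0 <- glm(y ~ 0 + offset(lp), family = binomial, data = d)
  head(predict(fit0, type = "response"))     # same values as head(p)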
2005 Feb 11
1
cook's distance in weighted regression
I have a puzzle as to how R is computing Cook's distance in weighted linear regression. In this case Cook's distance should be given not, as in the OLS case, by

  h_ii * r_i^2 / (1 - h_ii)^2, divided by k*s^2        (1)

(where r_i is the plain unadjusted residual, k is the number of parameters in the model, etc.), but rather by

  w_ii * h_ii * r_i^2 / (1 - h_ii)^2, divided by k*s^2,
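One way to settle which convention R actually uses is to compute both candidate formulas by hand and line them up against cooks.distance(); the sketch below uses made-up heteroscedastic data and is only a comparison device, not a statement of the answer.

  set.seed(5)
  n <- 30
  d <- data.frame(x = rnorm(n))
  d$y <- 1 + 2 * d$x + rnorm(n, sd = 1 / sqrt(1:n))
  w   <- 1:n                                  # case weights
  fit <- lm(y ~ x, data = d, weights = w)
  h   <- hatvalues(fit)
  r   <- residuals(fit)                       # plain (unweighted) residuals
  k   <- length(coef(fit))
  s2  <- summary(fit)$sigma^2
  D_ols      <- h * r^2     / ((1 - h)^2 * k * s2)   # formula (1)
  D_weighted <- w * h * r^2 / ((1 - h)^2 * k * s2)   # weighted version
  cbind(D_ols, D_weighted, R = cooks.distance(fit))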
2017 Apr 04
0
Some "lm" methods give wrong results when applied to "mlm" objects
I had a look at some influence measures, and it seems to me that currently several methods handle multiple lm (mlm) objects wrongly in R. In some cases there are separate "mlm" methods, but usually "mlm" objects are handled by the same methods as univariate "lm" objects, and in some cases this fails. There are two general patterns of problems in influence measures:
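A minimal probe (simulated data, wrapped in try() so it runs to completion regardless of which calls fail) for seeing how the univariate accessors currently behave on an "mlm" fit:

  set.seed(6)
  d <- data.frame(x = rnorm(20))
  d$y1 <-  d$x + rnorm(20)
  d$y2 <- -d$x + rnorm(20)
  mfit <- lm(cbind(y1, y2) ~ x, data = d)
  class(mfit)                                    # "mlm" "lm"
  str(try(hatvalues(mfit),      silent = TRUE))
  str(try(rstandard(mfit),      silent = TRUE))
  str(try(cooks.distance(mfit), silent = TRUE))
  str(try(dfbetas(mfit),        silent = TRUE))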
2013 Mar 12
1
Cook's distance
Dear useRs, I have some trouble with the calculation of Cook's distance in R. The formula for Cook's distance can be found, for example, here: http://en.wikipedia.org/wiki/Cook%27s_distance I tried to apply it in R:

  > y <- (1:400)^2
  > x <- 1:100
  > lm(y~x) -> linmod # just for the sake of a simple example
  >
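A worked version of the comparison (note that the snippet above pairs a length-400 y with a length-100 x, which lm() will reject because the variable lengths differ, so matching lengths are used here). The manual formula below is the standard one, D_i = e_i^2 h_ii / (p s^2 (1 - h_ii)^2), and for an unweighted lm it reproduces cooks.distance():

  x <- 1:100
  y <- (1:100)^2
  linmod <- lm(y ~ x)
  h  <- hatvalues(linmod)
  e  <- residuals(linmod)
  p  <- length(coef(linmod))
  s2 <- summary(linmod)$sigma^2
  D_manual <- e^2 * h / (p * s2 * (1 - h)^2)
  all.equal(D_manual, cooks.distance(linmod))   # TRUE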
2008 May 07
1
coxph - weights- robust SE
Hi, I am using coxph with weights to represent the sampling fraction of subjects. Our simulation results show that the robust SE of beta systematically under-estimates the empirical SD of beta. Does anyone know how the robust SEs are estimated in coxph when using weights? Is there an analytical formula for the “weighted” robust SE? Any help is appreciated! Thanks so much in advance Willy
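For reference, the robust variance in coxph is a sandwich-type estimate built from the dfbeta residuals, and with case weights the weighted dfbeta residuals enter that calculation; the exact formula is documented in the survival package. A sketch on simulated data (all names below are invented) showing where the reported robust SE can be reconstructed from, at least approximately:

  library(survival)
  set.seed(7)
  n <- 200
  d <- data.frame(x = rnorm(n))
  d$time   <- rexp(n, rate = exp(0.5 * d$x))
  d$status <- rbinom(n, 1, 0.8)
  d$w      <- runif(n, 0.5, 2)                  # stand-in for sampling weights
  fit <- coxph(Surv(time, status) ~ x, data = d, weights = w, robust = TRUE)
  summary(fit)$coefficients                     # includes the "robust se" column
  db <- resid(fit, type = "dfbeta", weighted = TRUE)
  sqrt(sum(db^2))                               # sandwich-style SE from dfbeta residuals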
2009 Oct 31
2
Logistic and Linear Regression Libraries
Hi all, I'm trying to discover the options available to me for logistic and linear regression. I'm doing some tests on a dataset and want to see how different flavours of the algorithms cope. So far for logistic regression I've tried glm(MASS) and lrm (Design) and found there is a big difference. Is there a list anywhere detailing the options available which details the specific
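One quick diagnostic before hunting for a comparison list: fit the same logistic model both ways and compare the coefficients, which shows whether the "big difference" is in the fit itself or only in the accompanying output. (Incidentally, glm() lives in base R's stats package rather than MASS, and lrm() now lives in rms, the successor to the Design package.) A sketch on simulated data:

  library(rms)        # provides lrm()
  set.seed(8)
  d <- data.frame(x1 = rnorm(200), x2 = rnorm(200))
  d$y <- rbinom(200, 1, plogis(0.3 + 0.7 * d$x1 - 0.5 * d$x2))
  fit_glm <- glm(y ~ x1 + x2, family = binomial, data = d)
  fit_lrm <- lrm(y ~ x1 + x2, data = d)
  cbind(glm = coef(fit_glm), lrm = coef(fit_lrm))   # should agree closely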
2008 May 09
0
Incorrect fix for PR#9316: Cook's Distance & plot.lm
Bug PR#9316 noted an inconsistency between the Cook's distance contours on plot.lm(x, which = 5) and the values given by cooks.distance(x) -- as shown in plot.lm(x, which = 4) -- for glms: http://bugs.r-project.org/cgi-bin/R/Analyses-fixed?id=9316;user=guest;selectid=9316 The suggested fix was to modify the contour levels by a dispersion factor, implemented as follows: dispersion <-
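The truncated dispersion code is not reproduced here, but the effect it is meant to correct can be seen by refitting the Dobson counts with a quasi-Poisson family, where the estimated dispersion is not fixed at 1 and therefore actually enters cooks.distance(); this sketch again assumes the ?glm version of the model:

  counts    <- c(18, 17, 15, 20, 10, 20, 25, 13, 12)
  outcome   <- gl(3, 1, 9)
  treatment <- gl(3, 3)
  fit <- glm(counts ~ outcome + treatment, family = quasipoisson())
  summary(fit)$dispersion          # estimated dispersion (not fixed at 1)
  round(cooks.distance(fit), 3)    # uses that dispersion by default
  plot(fit, which = 5)             # contours drawn by plot.lm, for comparison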
2009 Oct 10
2
easy way to find all extractor functions and the datatypes of what they return
Am I asking for too much: for any object that a stat proc returns (y <- lm(y ~ x), etc.), is there a super-convenient function like give_all_extractors(y) that lists all extractor functions, the datatype returned, and a text descriptor field ("pairwisepval", "lsmean", etc.)? That would just be so convenient. What are my options for querying an object so that I can
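give_all_extractors() is the poster's wished-for name, not a real function, but most of what it would report can be assembled from tools that do exist; a sketch using a built-in data set:

  fit <- lm(dist ~ speed, data = cars)   # cars ships with base R
  methods(class = class(fit)[1])         # every S3 method defined for "lm"
  str(fit, max.level = 1)                # the object's components and their types
  names(summary(fit))                    # what summary() makes available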
2009 Sep 08
4
Count number of different patterns (Polytomous variable)
Hi there, does anyone know a method to calculate the number of different patterns in a given data frame? The variables are of polytomous type and not binary (for the latter I found a package called "countpattern" which unfortunately only works for binary variables).

  V1 V2 V3
   0  3  1
   1  2  0
   1  2  0

So, in this case, I would like to get "2" as output. Thanks
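A base-R sketch for this, using the three-row example above; unique() and duplicated() don't care whether the columns are binary or polytomous:

  dat <- data.frame(V1 = c(0, 1, 1),
                    V2 = c(3, 2, 2),
                    V3 = c(1, 0, 0))
  nrow(unique(dat))                         # 2 distinct row patterns
  table(do.call(paste, c(dat, sep = "-")))  # counts per pattern, if those are wanted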