Displaying 20 results from an estimated 2249 matches for "influencent".
2003 Sep 11
1
discrepancy between R and Splus lm.influence() functions for family=Gamma(link=identity)
Hello,
I am looking for an explanation and/or fix for a discrepancy in the behaviour of the R lm.influence() function [ version R 1.5.0 (2002-04-29) ] and the same function in Splus [ Splus version 5.1 release 1, running on SGI IRIX 6.2]. The discrepancy is of concern because I am migrating some Splus scripts to R and need to ensure consistency of results.
Specifically, when I fit a glm()
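The kind of comparison described can be sketched on made-up data (not the poster's): fit a Gamma GLM with identity link in R, look at what lm.influence() returns, and compare the same quantities element by element with the S-PLUS output.
set.seed(1)
x <- runif(50, 1, 10)
y <- rgamma(50, shape = 5, rate = 5 / (2 + 3 * x))   # mean 2 + 3*x, all positive
fit <- glm(y ~ x, family = Gamma(link = "identity"))

infl <- lm.influence(fit)    # hat values, coefficient changes, sigma, wt.res
str(infl)
range(infl$hat)              # quantities to compare against the S-PLUS run
head(infl$coefficients)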
2016 Apr 09
2
R.squared in summary.lm with weights
>>>>> Murray Efford <murray.efford at otago.ac.nz>
>>>>> on Fri, 8 Apr 2016 18:45:33 +0000 writes:
> Thanks for these perfectly consistent replies - I didn't
> understand the purpose of m = sum(w * f/sum(w)) and saw it
> merely as a weighted average of the fitted values. My
> ultimate concern is how to compute an appropriate
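For reference, m = sum(w * f/sum(w)) is the weighted mean of the fitted values that summary.lm() uses when an intercept is present; a minimal sketch on made-up data reproduces the reported R^2 from it.
set.seed(2)
d <- data.frame(x = 1:20, y = 1:20 + rnorm(20), w = runif(20, 0.5, 2))
fit <- lm(y ~ x, data = d, weights = w)

f <- fitted(fit); r <- residuals(fit); w <- weights(fit)
m   <- sum(w * f / sum(w))      # weighted mean of the fitted values
mss <- sum(w * (f - m)^2)       # "model" sum of squares
rss <- sum(w * r^2)             # residual sum of squares
c(by_hand = mss / (mss + rss), reported = summary(fit)$r.squared)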
2003 Jun 12
1
What PRECISELY is the dfbetas() or lm.influence()$coef ?
Hello. I want to get the proper influence function for the glm
coefficients in R. This is supposed to be inv(information)*(y-yhat)*x. So
I am wondering what the exact mathematical formula is for the output that
the functions
dfbeta() or lm.influence()$coefficients
return for a glm model. I am confused because:
1. Their columns don't sum to zero as influences should.
2. They
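One way to see what is going on, sketched on the Dobson data from ?glm rather than the poster's model: dfbeta() / lm.influence()$coefficients are (one-step) case-deletion changes in the coefficients, not the score-based influence contributions inv(information) * x * (y - yhat), so their columns need not sum to zero, whereas the score-based quantities do.
counts    <- c(18, 17, 15, 20, 10, 20, 25, 13, 12)
outcome   <- gl(3, 1, 9)
treatment <- gl(3, 3)
fit <- glm(counts ~ outcome + treatment, family = poisson())

db <- dfbeta(fit)        # same matrix as lm.influence(fit)$coefficients
colSums(db)              # generally not zero: these are deletion diagnostics

X    <- model.matrix(fit)
mu   <- fitted(fit)
info <- t(X) %*% (X * mu)                     # Fisher information, Poisson/log link
IF   <- t(solve(info, t(X * (counts - mu))))  # inv(info) %*% x_i (y_i - mu_i), one row per case
colSums(IF)                                   # ~ 0, since the score is zero at the MLE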
2016 Apr 10
2
R.squared in summary.lm with weights
> On Apr 10, 2016, at 3:11 AM, Murray Efford <murray.efford at otago.ac.nz> wrote:
>
> Martin -
> Thanks, but although hatvalues() is useful for calculating PRESS, I can't find anything directly relevant to my question in the influence help pages. After some burrowing in the literature I'm doubting there is an answer out there (PRESS R^2 is always presented in a fairly
2016 Apr 10
0
R.squared in summary.lm with weights
Martin -
Thanks, but although hatvalues() is useful for calculating PRESS, I can't find anything directly relevant to my question in the influence help pages. After some burrowing in the literature I'm doubting there is an answer out there (PRESS R^2 is always presented in a fairly ad hoc way).
This is a new topic, as you say, and perhaps better handled on a statistics list.
Murray Efford
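For what it's worth, the PRESS calculation referred to above needs only the hat values; a minimal unweighted sketch on a built-in data set, with one ad hoc "PRESS R^2" alongside it:
fit   <- lm(dist ~ speed, data = cars)
h     <- hatvalues(fit)
press <- sum((residuals(fit) / (1 - h))^2)                     # leave-one-out PRESS
press_r2 <- 1 - press / sum((cars$dist - mean(cars$dist))^2)   # one ad hoc definition
c(PRESS = press, PRESS_R2 = press_r2)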
2011 Jan 27
1
Minor typo in influence.measures.Rd ?
Dear list,
There is, I believe, a minor typo in the example section of
influence.measures.Rd. In the final example the word `does` appears
where I suspect `dose` is required:
I couldn't remember exactly what format patches should be in, so here is
one in the format that diff would produce:
Index: devel/src/library/stats/man/influence.measures.Rd
2011 Mar 07
0
Difference between the S-plus influence and R empinf functions
Hello everyone !
I am currently trying to convert a program from S-plus to R, and I am
having some trouble with the S-plus function called "influence(data,
statistic,...)".
This function aims to "calculate empirical influence values and related
quantities",
and is part of the Resample library, which I cannot find for R.
However, two similar functions are available in R:
- the
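One of the candidates usually pointed to here is empinf() in the boot package; a hedged sketch of the call, using the city data and the weighted ratio statistic from the boot documentation:
library(boot)
data(city, package = "boot")                          # u = 1920 population, x = 1930 population
ratio <- function(d, w) sum(d$x * w) / sum(d$u * w)   # statistic written in weighted form
L <- empinf(data = city, statistic = ratio, stype = "w", type = "inf")
L                                                     # empirical influence values, one per city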
2006 Aug 31
1
NaN when using dffits, stemming from lm.influence call
Hi all
I'm getting a NaN returned on using dffits, as explained
below. To me, there seems to be no obvious (or, for that matter,
non-obvious) reason why a NaN appears.
Before I start digging further, can anyone see why dffits
might be failing? Is there a problem with the data?
Consider:
# Load data
dep <-
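One cause that is easy to check before digging further (sketched on toy data, not the poster's): an observation with hat value 1 is fitted exactly, and since dffits_i = rstudent_i * sqrt(h_i / (1 - h_i)), that case turns into 0/0.
x <- c(1, 2, 3, 4, 5)
f <- factor(c("a", "a", "a", "a", "b"))   # level "b" occurs once, so that case
y <- c(1.1, 2.0, 2.9, 4.2, 7.0)           # is fitted exactly and has hat value 1
fit <- lm(y ~ x + f)

h <- hatvalues(fit)
cbind(hat = h, dffits = dffits(fit))      # the h = 1 row is the problem case
which(h > 1 - 1e-8)                       # a quick check worth running on any model
which(!is.finite(dffits(fit)))            # which cases actually came out NaN/Inf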
2012 Feb 09
1
passing an extra argument to an S3 generic
I'm trying to write some functions extending influence measures to
multivariate linear models and also to
allow subsets of size m >= 1 to be considered for deletion diagnostics.
I'd like these to work roughly in parallel
with the functions for the univariate lm, where only single-case deletion
(m = 1) diagnostics are considered.
Corresponding to stats::hatvalues.lm, the S3 method for class
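The S3 mechanics being asked about can be sketched in isolation: because the stats generic is hatvalues(model, ...), a method for class "mlm" can accept an extra subset-size argument m through the dots. The body below is only an illustration (subset leverage matrices X_I (X'X)^{-1} t(X_I)), not the multivariate diagnostics being developed.
hatvalues.mlm <- function(model, m = 1, ...) {
    X <- model.matrix(model)
    XtXinv <- chol2inv(qr.R(qr(X)))            # (X'X)^{-1} via the QR decomposition
    if (m == 1)
        return(diag(X %*% XtXinv %*% t(X)))    # ordinary leverages h_ii
    combn(nrow(X), m, simplify = FALSE,
          FUN = function(idx) X[idx, , drop = FALSE] %*% XtXinv %*%
                              t(X[idx, , drop = FALSE]))
}

fit <- lm(cbind(mpg, disp) ~ wt + hp, data = mtcars)   # an "mlm" fit
head(hatvalues(fit, m = 1))     # dispatches to hatvalues.mlm; m arrives via ...
length(hatvalues(fit, m = 2))   # one 2 x 2 leverage matrix per pair of cases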
2010 Feb 21
1
tests for measures of influence in regression
influence.measures gives several measures of influence for each
observation (Cook's Distance, etc) and actually flags observations
that it determines are influential by any of the measures. Looks
good! But how does it discriminate between the influential and
non-influential observations for each of the measures? For instance, does it do a
Bonferroni-corrected t-test on the residuals identified by
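The flags and the cutoffs behind them are visible in the returned object; they are the simple rules of thumb documented in ?influence.measures (for example hat > 3k/n and |dffits| > 3*sqrt(k/(n-k)), with k the number of coefficients), not formal Bonferroni-corrected tests. A sketch on a built-in data set:
fit  <- lm(dist ~ speed, data = cars)
infl <- influence.measures(fit)

str(infl$is.inf)                    # logical matrix: one column per measure
summary(infl)                       # prints only the "potentially influential" cases
which(apply(infl$is.inf, 1, any))   # row numbers flagged by at least one measure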
2010 Aug 10
1
influence measures for multivariate linear models
Barrett & Ling, JASA, 1992, v.87(417), pp184-191 define general classes
of influence measures for multivariate
regression models, including analogs of Cook's D, Andrews & Pregibon
COVRATIO, etc. As in univariate
response models, these are based on leverage and on residuals from
omitting one (or more) observations at
a time and refitting, although, in the univariate case, the
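A brute-force sketch (not Barrett & Ling's formulas) shows the raw ingredients for a multivariate lm: the leverages, and the change in the whole coefficient matrix when each case is omitted and the model refitted.
fit <- lm(cbind(mpg, disp) ~ wt + hp, data = mtcars)     # toy mlm
h   <- stats::hat(model.matrix(fit), intercept = FALSE)  # leverages h_ii

delta <- lapply(seq_len(nrow(mtcars)), function(i)
    coef(fit) - coef(update(fit, data = mtcars[-i, ])))  # coefficient-matrix change per case
size <- sapply(delta, function(d) sum(d^2))              # crude scalar summary (squared Frobenius norm)
head(cbind(hat = h, coef.change = size))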
2016 Apr 10
0
R.squared in summary.lm with weights
> On Apr 10, 2016, at 9:38 AM, David Winsemius <dwinsemius at comcast.net> wrote:
>
>>
>> On Apr 10, 2016, at 3:11 AM, Murray Efford <murray.efford at otago.ac.nz> wrote:
>>
>> Martin -
>> Thanks, but although hatvalues() is useful for calculating PRESS, I can't find anything directly relevant to my question in the influence help pages. After
2010 Sep 14
0
influence measures for multivariate linear models
I'm following up on a question I posted 8/10/2010, but my newsreader
has lost this thread.
> Barrett & Ling, JASA, 1992, v.87(417), pp184-191 define general
> classes of influence measures for multivariate
> regression models, including analogs of Cook's D, Andrews & Pregibon
> COVRATIO, etc. As in univariate
> response models, these are based on leverage and
2004 Mar 23
1
influence.measures, cooks.distance, and glm
Dear list,
I've noticed that influence.measures and cooks.distance give different
results for non-Gaussian GLMs. For example, using R-1.9.0 alpha
(2004-03-17) under Windows:
> ## Dobson (1990) Page 93: Randomized Controlled Trial :
> counts <- c(18,17,15,20,10,20,25,13,12)
> outcome <- gl(3,1,9)
> treatment <- gl(3,3)
> glm.D93 <- glm(counts ~ outcome +
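Assuming the fit is the standard Dobson example from ?glm that the transcript is quoting (the snippet above is cut off), a side-by-side comparison of the two computations is straightforward to repeat:
counts    <- c(18, 17, 15, 20, 10, 20, 25, 13, 12)
outcome   <- gl(3, 1, 9)
treatment <- gl(3, 3)
glm.D93 <- glm(counts ~ outcome + treatment, family = poisson())

cbind(cooks.distance  = cooks.distance(glm.D93),
      infl.measures   = influence.measures(glm.D93)$infmat[, "cook.d"])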
2002 Mar 05
3
enhanced Question to stand. Beta
Hello everybody,
a question that connects to the question of Frederik Karlsons about 'how
to stand. betas':
With the standardized betas I can compare the influence of the different
explanatory variables. What do I do with the betas of factors? I can't use
the solution of John Fox, because there is no SD of a factor. How can I
compare the influence of the factor with the influence of the numeric
2007 Dec 06
1
lm.influence under R2.6.1
Greetings!
Recently, when I try to use lm.influence, I get the following error:
Error in .Fortran("lminfl", model$qr$qr, n, n, k, as.integer(do.coef), :
Fortran symbol name "lminfl" not in DLL for package "base"
This occurs on both Linux and Windows platforms (details below).
Searching the mailing lists and other sources indicates that the Fortran code
for
2006 Jan 18
1
Influence measure + lme ?
Hi all,
Does lme have a function to compute Cook's distance or influence
measures like lm does? I can't find them. Thanks.
Yen Lin
2012 Feb 15
1
influence.measures()
Hi dear all,
I'm wondering: is influence.measures(model), which is documented for linear
models, valid for generalized linear models such as logistic
regression models?
That is,
if I fit a model like
model <- glm(y ~ X1 + X2, family = binomial)
and then apply the function influence.measures(model), I will get the
results of the influence measures.
These results are valid for
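As a sketch on simulated data (not an answer on validity): the call itself goes through for a binomial GLM, and ?influence.measures documents the caveats, since the diagnostics are one-step deletion approximations with lm-style cutoffs.
set.seed(4)
X1 <- rnorm(100); X2 <- rnorm(100)
y  <- rbinom(100, 1, plogis(-0.5 + X1 - 0.8 * X2))
model <- glm(y ~ X1 + X2, family = binomial)

infl <- influence.measures(model)
summary(infl)          # cases flagged by the lm-style rules of thumb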
2006 Jul 19
0
connection network - influence of site
Dear R users,
I'm trying to construct a distance matrix based on an nb object
created by spdep, where sites must have a larger influence in one
direction than in the other.
Here is an example to better illustrate what I need:
Let's say I have the following Gabriel connection network:
library(spdep)
library(ade4)
data(oribatid)   # oribatid mite data shipped with ade4
nbgab <- graph2nb(gabrielneigh(as.matrix(oribatid$xy)))   # Gabriel graph as an nb object
2012 Jan 29
0
Using influence plots and obtaining id numbers
I am a novice R user, and I am having difficulty understanding R's influence
plots.
I am trying to remove outliers from a particular variable, "sib." I am able
to generate influence plots and further outlier information such as below
(which is a shortened example). For my analyses, I end up excluding the
points R refers to, 7, 18, 26, and 105. However, my question is, how can I
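Two ways to get those id numbers programmatically, sketched on a built-in data set rather than the "sib" data; the second assumes the plot came from car::influencePlot(), which returns a data frame whose row names are the flagged observations.
fit  <- lm(dist ~ speed, data = cars)
infl <- influence.measures(fit)
ids  <- which(apply(infl$is.inf, 1, any))              # rows flagged by any measure
ids
clean_fit <- if (length(ids)) update(fit, data = cars[-ids, ]) else fit

# assuming the plot was car::influencePlot():
# library(car); flagged <- influencePlot(fit); rownames(flagged)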