Displaying 20 results from an estimated 2000 matches similar to: "passing an extra argument to an S3 generic"
2010 Aug 10
1
influence measures for multivariate linear models
Barrett & Ling, JASA, 1992, v. 87(417), pp. 184-191 define general classes of influence
measures for multivariate regression models, including analogs of Cook's D, Andrews &
Pregibon COVRATIO, etc. As in univariate response models, these are based on leverage
and residuals obtained by omitting one (or more) observations at a time and refitting,
although, in the univariate case, the
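For reference, the univariate versions of these diagnostics are already available in base R's stats package; a minimal sketch on simulated data (not the multivariate extension the post asks about):

# univariate leave-one-out influence measures on made-up data
set.seed(1)
d <- data.frame(x1 = rnorm(30), x2 = rnorm(30))
d$y <- 1 + 2 * d$x1 - d$x2 + rnorm(30)
fit <- lm(y ~ x1 + x2, data = d)
cooks.distance(fit)               # Cook's D
covratio(fit)                     # COVRATIO
hatvalues(fit)                    # leverages
summary(influence.measures(fit))  # flags potentially influential cases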
2010 Sep 14
0
influence measures for multivariate linear models
I'm following up on a question I posted 8/10/2010, but my newsreader
has lost this thread.
> Barrett & Ling, JASA, 1992, v.87(417), pp184-191 define general
> classes of influence measures for multivariate
> regression models, including analogs of Cook's D, Andrews & Pregibon
> COVRATIO, etc. As in univariate
> response models, these are based on leverage and
2013 Jan 29
3
how to suppress the intercept in an lm()-like formula method?
I'm trying to write a formula method for canonical correlation analysis that could be
called similarly to lm() for a multivariate response:
cancor(cbind(y1,y2,y3) ~ x1+x2+x3+x4, data=, ...)
or perhaps more naturally,
cancor(cbind(y1,y2,y3) ~ cbind(x1,x2,x3,x4), data=, ...)
I've adapted the code from lm() to my case, but in this situation it doesn't make sense to
include an
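The excerpt is cut off here. One way to handle the intercept question, sketched on made-up data (cancor_formula is a hypothetical wrapper name, not the poster's code), is to build the model matrix as lm() would and then drop the "(Intercept)" column:

# sketch: extract Y and X matrices from a two-sided formula, suppressing the intercept
cancor_formula <- function(formula, data) {
  mf <- model.frame(formula, data)
  Y  <- model.response(mf)                                # the cbind(...) response block
  tt <- delete.response(terms(formula, data = data))
  X  <- model.matrix(tt, mf)
  X  <- X[, colnames(X) != "(Intercept)", drop = FALSE]   # suppress the intercept column
  list(Y = Y, X = X)
}
d <- data.frame(y1 = rnorm(20), y2 = rnorm(20),
                x1 = rnorm(20), x2 = rnorm(20), x3 = rnorm(20))
str(cancor_formula(cbind(y1, y2) ~ x1 + x2 + x3, data = d))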
2008 Mar 08
5
Non-visible functions are asterisked
Dear R-Helpers,
I suspect I'm about to ask a FAQ, but I haven't been able to find an
answer in the FAQ, AItR or an R Site Search. When I look at the methods
of summary (below) it says, "Non-visible functions are asterisked". I
looked at the help file for summary.princomp, which did not comment on
it being non-visible. I ran its help file example, which printed visible
output. I
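A short sketch of how such asterisked (non-exported) methods can still be inspected, using only standard tools:

methods(summary)                    # asterisked entries live in a package namespace
getAnywhere("summary.princomp")     # shows the definition and which namespace holds it
getS3method("summary", "princomp")  # retrieves the method directly
stats:::summary.princomp            # namespace operator, for interactive inspection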
2008 Oct 19
2
definition of "dffits"
Hi! R-users.
I am just wondering what the definition of "dffits" in R is.
Let me show you a simple example.
function() {
  library(MASS)
  xx <- c(1, 2, 3, 4, 5)
  yy <- c(1, 3, 4, 2, 4)
  data1 <- data.frame(x = xx, y = yy)
  lm.out <- lm(y ~ ., data = data1, x = TRUE)
  lev1 <- lm.influence(lm.out)$hat
  sig1 <-
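The snippet is cut off above. For reference, a hedged check of the usual textbook definition, DFFITS_i = t_i * sqrt(h_i / (1 - h_i)) with t_i the externally studentized residual and h_i the leverage, against R's dffits():

xx <- c(1, 2, 3, 4, 5)
yy <- c(1, 3, 4, 2, 4)
fit <- lm(yy ~ xx)
h  <- hatvalues(fit)               # leverages
ti <- rstudent(fit)                # externally studentized residuals
manual <- ti * sqrt(h / (1 - h))
all.equal(unname(manual), unname(dffits(fit)))  # should be TRUE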
2009 Mar 05
1
hatvalues?
I am struggling a bit with this function 'hatvalues'. I would like a little more understanding than taking the black box and using the values. I looked at the Fortran source and it is quite opaque to me. So I am asking for some help in understanding the theory. First, I take the simplest case of a single variant. For this I turn to John Fox's book, "Applied Regression Analysis
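For the simplest (unweighted) case, hatvalues() is just the diagonal of X (X'X)^(-1) X'; a small self-contained check:

set.seed(42)
x <- rnorm(20)
y <- 1 + 2 * x + rnorm(20)
fit <- lm(y ~ x)
X <- model.matrix(fit)                     # column of ones plus x
H <- X %*% solve(crossprod(X)) %*% t(X)    # the hat matrix
all.equal(unname(diag(H)), unname(hatvalues(fit)))  # should be TRUE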
2009 Nov 08
2
influence.measures(stats): hatvalues(model, ...)
Hello:
I am trying to understand the method 'hatvalues(...)', which returns something similar to the diagonals of the plain vanilla hat matrix [X(X'X)^(-1)X'], but not quite.
A Fortran programmer I am not, but tracing through the code it looks like perhaps some sort of correction based on the notion of 'leave-one-out' variance is being applied.
Whatever the
2007 Oct 29
3
Strange results with anova.glm()
Hi,
I have been struggling with this problem for some time now. The internet and books
haven't been able to help me.
## I have a factorial design with counts (fruits) as the response variable.
> str(stubb)
'data.frame': 334 obs. of 5 variables:
$ id : int 6 23 24 25 26 27 28 29 31 34 ...
$ infl.treat : Factor w/ 2 levels "0","1": 2 2 2 2 1 1 1 2 1 1 ...
$ def.treat :
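The data and str() output are cut off above. A hedged, self-contained sketch of the usual approach for a factorial count response (the variable names below merely echo the excerpt and are otherwise made up):

set.seed(1)
toy <- expand.grid(infl.treat = factor(0:1), def.treat = factor(0:1))[rep(1:4, each = 20), ]
toy$fruits <- rpois(nrow(toy), lambda = 5)
fit <- glm(fruits ~ infl.treat * def.treat, family = poisson, data = toy)
anova(fit, test = "Chisq")   # sequential likelihood-ratio tests
drop1(fit, test = "Chisq")   # each term dropped from the full model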
2003 Apr 07
1
filtering ts with arima
Hi,
I have the following code from Splus that I'd like to migrate to R. So far,
the only problem is the arima.filt function. This function allows me to
filter an existing time-series through a previously estimated arima model,
and obtain the residuals for further use. Here's the Splus code:
# x is the estimation time series, new.infl is a timeseries that contains
new information
# a.mle
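The code is cut off above, and base R has no arima.filt(). A commonly suggested workaround (a sketch under that assumption, reusing the a.mle name from the excerpt) is to call arima() on the new series with all coefficients fixed at the previously estimated values and take the residuals:

set.seed(1)
x        <- arima.sim(model = list(ar = 0.6, ma = 0.3), n = 200)  # estimation series
new.infl <- arima.sim(model = list(ar = 0.6, ma = 0.3), n = 50)   # new information

a.mle <- arima(x, order = c(1, 0, 1))         # estimate once
filt  <- arima(new.infl, order = c(1, 0, 1),  # no refitting: every parameter is fixed
               fixed = coef(a.mle), transform.pars = FALSE)
resid.new <- residuals(filt)                  # one-step-ahead residuals for further use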
2005 Apr 13
2
multinom and contrasts
Hi,
I found that using different contrasts (e.g. contr.helmert vs. contr.treatment) will
generate different fitted probabilities from multinomial logistic regression using
multinom(), while the fitted probabilities from binary logistic regression seem to be
the same. Why is that? And for multinomial logistic regression, what contrast should
be used? I guess it's helmert?
Here is an example
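The poster's example is cut off in this excerpt. A separate, hedged sketch of the comparison on made-up data (any difference in fitted probabilities should come from convergence rather than from the coding):

library(nnet)
set.seed(1)
d <- data.frame(g = factor(sample(c("a", "b", "c"), 200, replace = TRUE)),
                y = factor(sample(c("u", "v", "w"), 200, replace = TRUE)))
fit1 <- multinom(y ~ g, data = d, contrasts = list(g = "contr.treatment"),
                 maxit = 500, trace = FALSE)
fit2 <- multinom(y ~ g, data = d, contrasts = list(g = "contr.helmert"),
                 maxit = 500, trace = FALSE)
coef(fit1); coef(fit2)                 # coefficients differ (different parameterization)
max(abs(fitted(fit1) - fitted(fit2)))  # fitted probabilities should agree closely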
2004 Jan 08
3
Strange parametrization in polr
In Venables & Ripley, 3rd edition (p. 231), the proportional odds model
is described as:
logit(p<=k) = zeta_k + eta
but polr apparently thinks there is a minus in front of eta,
as is apparent below.
Is this a bug or a feature I have overlooked?
Here is the bare code for reproduction; the results follow below.
------------------------------------------------------------------------
version
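The reproduction output is truncated above. A hedged check of the sign convention on the MASS housing data, consistent with ?polr (which states logit P(Y <= k | x) = zeta_k - eta):

library(MASS)
fit  <- polr(Sat ~ Infl + Type + Cont, weights = Freq, data = housing)
X    <- model.matrix(~ Infl + Type + Cont, data = housing)[, -1]
eta  <- drop(X %*% coef(fit))                    # linear predictor x'beta
zeta <- fit$zeta                                 # the cutpoints
manual <- plogis(outer(-eta, zeta, "+"))         # P(Y <= k) with the minus convention
cum    <- t(apply(fitted(fit), 1, cumsum))       # cumulative probabilities from the fit
all.equal(unname(manual), unname(cum[, 1:2]))    # TRUE, so it is zeta_k - eta, not +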
2011 Mar 14
3
Standardized Pearson residuals
Is there any reason that rstandard.glm doesn't have a "pearson" option?
And if not, can it be added?
Background: I'm currently teaching an undergrad/grad-service course from
Agresti's "Introduction to Categorical Data Analysis (2nd edn)" and
deviance residuals are not used in the text. For now I'll just provide
the students with a simple function to use, but I
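One possible such helper, sketched here (current R's rstandard() for glm objects also accepts type = "pearson"): the standardized Pearson residual is the Pearson residual divided by sqrt(dispersion * (1 - leverage)).

rstandard_pearson <- function(fit) {
  phi <- summary(fit)$dispersion          # 1 for binomial and Poisson families
  residuals(fit, type = "pearson") / sqrt(phi * (1 - hatvalues(fit)))
}
# quick check on a toy logistic fit
set.seed(1)
d   <- data.frame(x = rnorm(50))
d$y <- rbinom(50, 1, plogis(d$x))
fit <- glm(y ~ x, family = binomial, data = d)
all.equal(rstandard_pearson(fit), rstandard(fit, type = "pearson"))  # should be TRUE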
2003 May 05
3
polr in MASS
Hi, I am trying to test the proportional-odds model using the "polr" function in the MASS library with the "housing" dataset from the MASS book ("Sat" (factor: low, medium, high) is the dependent variable; "Infl" (low, medium, high), "Type" (tower, apartment, atrium, terrace) and "Cont" (low, high) are the predictor variables
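The excerpt is cut off here. The corresponding fit, plus one informal check of the proportional-odds assumption against an unconstrained multinomial model (along the lines of the MASS book), would look roughly like this:

library(MASS)
library(nnet)
house.plr  <- polr(Sat ~ Infl + Type + Cont, weights = Freq, data = housing)
house.mult <- multinom(Sat ~ Infl + Type + Cont, weights = Freq,
                       data = housing, trace = FALSE)
dev.diff <- deviance(house.plr) - deviance(house.mult)  # cost of the PO restriction
df.diff  <- length(coef(house.mult)) - house.plr$edf    # parameters saved by it
pchisq(dev.diff, df.diff, lower.tail = FALSE)           # approximate LR p-value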
2008 Sep 30
2
weird behavior of drop1() for polr models (MASS)
I would like to do an SS type III analysis on a proportional-odds logistic regression
model. I use drop1(), but dropterm() shows the same behaviour. It works as expected
for regular main-effects models; however, when the model includes an interaction
effect it seems to have problems with matching the parameters to the predictor terms.
An example:
library("MASS");
options(contrasts =
2008 Nov 20
2
Identify command in R
Hi all,
In using the identify command, I get the following message
> plot(hatvalues(scireg3))
> abline(h=.0154,lty=2) # plots a reference line at (k + 1)/n
> identify(1:1165, hatvalues(scireg3),row.names(sciach))
Error in xy.coords(x, y) : 'x' and 'y' lengths differ
which doesn't allow me to see the observation number when I scroll over
with the mouse. What
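The usual cause of that error (a hedged guess from the message alone) is that hatvalues() returns one value per case actually used in the fit, which is shorter than the raw data when rows are dropped, e.g. because of NAs. A self-contained sketch where all three arguments have matching lengths:

set.seed(1)
d <- data.frame(x = rnorm(50), y = rnorm(50))
fit <- lm(y ~ x, data = d)
h <- hatvalues(fit)                        # one value per case used in the fit
plot(h, ylab = "hat values")
abline(h = 2 * mean(h), lty = 2)           # twice the average leverage, i.e. 2(k + 1)/n
if (interactive())
  identify(seq_along(h), h, labels = names(h))  # x, y and labels all the same length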
2007 Oct 19
2
In a SLR, Why Does the Hat Matrix Depend on the Weights?
I understand that the hat matrix is a function of the predictor variable
alone. So, in the following example why do the values on the diagonal of the
hat matrix change when I go from an unweighted fit to a weighted fit? Is the
function hatvalues giving me something other than what I think it is?
library(ISwR)
data(thuesen)
attach(thuesen)
fit <- lm(short.velocity ~ blood.glucose)
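The excerpt stops before the weighted fit, but the short answer can be sketched: for weighted least squares the hat matrix is W^(1/2) X (X'WX)^(-1) X' W^(1/2), so its diagonal does depend on the weights. A self-contained check on simulated data (so it runs without the ISwR package):

set.seed(1)
x <- rnorm(20)
y <- 1 + 2 * x + rnorm(20)
w <- runif(20, 0.5, 2)
fitw <- lm(y ~ x, weights = w)
X  <- model.matrix(fitw)
D  <- diag(sqrt(w))
Hw <- D %*% X %*% solve(t(X) %*% diag(w) %*% X) %*% t(X) %*% D
all.equal(unname(diag(Hw)), unname(hatvalues(fitw)))  # TRUE: leverages change with w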
2005 Dec 06
1
standardized residuals (rstandard & plot.lm) (PR#8367)
Full_Name: Heather Turner
Version: 2.2.0
OS: Windows XP
Submission from: (NULL) (137.205.240.44)
Standardized residuals as calculated by rstandard.lm, rstandard.glm and plot.lm
are Inf/NaN rather than zero when the un-standardized residuals are zero. This
causes plot.lm to break when calculating 'ylim' for any of the plots of
standardized residuals. Example:
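The reporter's example is cut off here. A minimal reproduction of the reported behaviour, assuming any case that is fitted exactly with leverage one (here a factor level observed only once):

d <- data.frame(y = c(1.1, 1.9, 3.2, 10),
                f = factor(c("a", "a", "a", "b")))
fit <- lm(y ~ f, data = d)
residuals(fit)[4]   # exactly zero: the single "b" case is fitted perfectly
hatvalues(fit)[4]   # exactly one
rstandard(fit)[4]   # 0 / 0, i.e. NaN rather than zero, as reported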
2013 Mar 12
1
Cook's distance
Dear useRs,
I have some trouble with the calculation of Cook's distance in R.
The formula for Cook's distance can be found for example here:
http://en.wikipedia.org/wiki/Cook%27s_distance
I tried to apply it in R:
> y <- (1:100)^2
> x <- 1:100
> linmod <- lm(y ~ x)   # just for the sake of a simple example
>
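The excerpt stops before the actual computation. For reference, a hedged check of the Wikipedia formula against cooks.distance() on a small self-contained example (not the poster's data): D_i = r_i^2 / p * h_i / (1 - h_i), with r_i the internally standardized residual, h_i the leverage and p the number of estimated coefficients.

x <- 1:100
y <- (1:100)^2 + rnorm(100, sd = 50)
linmod <- lm(y ~ x)
h <- hatvalues(linmod)
r <- rstandard(linmod)            # internally standardized residuals
p <- length(coef(linmod))         # includes the intercept
manual <- r^2 / p * h / (1 - h)
all.equal(unname(manual), unname(cooks.distance(linmod)))  # should be TRUE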
2012 Apr 30
5
Different varable lengths
Hi!
I'm trying to do an lm() test on three objects. My problem is that R protests
and says that the variable lengths differ for one of the objects (Sweden.GDP.gap).
But I have double-checked that the number of observations is the same. All three
objects should contain 9 observations, but R only accepts 9 observations in two of
the objects. The third must have 10! Very confusing because there
2009 Dec 10
2
Problem with coeftest using Newey West estimator
Hi,
I want to calculate the t- and p-values for a linear model using the Newey-West estimator.
I tried this code, and it usually works just fine:
> oberlm <- lm(DYH ~ BIP + Infl + EOil, data=HU_H)
> coeftest(oberlm, NeweyWest(oberlm, lag=2))
t test of coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.1509950 0.0743832 2.0300 0.179486
BIP
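The output is truncated above. A self-contained version of the same coeftest() + NeweyWest() pattern on simulated data (the packages are lmtest and sandwich; the variable names merely echo the excerpt):

library(lmtest)
library(sandwich)
set.seed(1)
n <- 80
sim <- data.frame(BIP = rnorm(n), Infl = rnorm(n), EOil = rnorm(n))
sim$DYH <- 0.2 + 0.5 * sim$BIP + arima.sim(list(ar = 0.5), n)  # autocorrelated errors
oberlm <- lm(DYH ~ BIP + Infl + EOil, data = sim)
coeftest(oberlm, vcov. = NeweyWest(oberlm, lag = 2))  # HAC t tests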