Displaying 20 results from an estimated 2000 matches similar to: "add1.lm and add1.glm not handling weights and offsets properly (PR#8049)"
2005 Aug 05
0
(PR#8049) add1.lm and add1.glm not handling weights and offsets properly
David,
Thanks.
The reason add1.lm (and drop1.lm) do not support offsets is that lm did
not when they were written, and the person who added offsets to lm did not
change them. (I do wish they had not added an offset arg and just used the
formula as in S's glm.) That is easy to add.
For the other point, some care is needed if 'x' is supplied and the upper
scope reduces the number
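For reference, a minimal sketch of the formula-based way of supplying an offset, so that add1()/drop1() carry it along (the data frame d and the variables y, x, z, n here are hypothetical):
fit <- glm(y ~ x + offset(log(n)), family = poisson, data = d)
add1(fit, scope = ~ . + z, test = "Chisq")  # offset lives in the formula, so the enlarged model keeps it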
2006 Mar 16
2
Difference between weights options in lm, glm and gls.
Dear R-List users,
Can anyone explain exactly the difference between the weights options in lm, glm
and gls?
I tried the following code, but the results are different.
> lm1
Call:
lm(formula = y ~ x)
Coefficients:
(Intercept) x
0.1183 7.3075
> lm2
Call:
lm(formula = y ~ x, weights = W)
Coefficients:
(Intercept) x
0.04193 7.30660
> lm3
Call:
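For comparison, a small sketch of how the same inverse-variance weighting can be expressed in lm and in nlme::gls (assuming W and the other variables live in a hypothetical data frame d):
library(nlme)
d$invW <- 1 / d$W                                            # variance covariate
fit.lm  <- lm(y ~ x, data = d, weights = W)                  # lm: Var(e_i) proportional to 1/W_i
fit.gls <- gls(y ~ x, data = d, weights = varFixed(~ invW))  # gls: Var(e_i) proportional to invW_i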
2008 Nov 19
1
F-Tests in generalized linear mixed models (GLMM)
Hi!
I would like to perform an F-test over more than one variable within a
generalized linear mixed model with a Gamma distribution
and log link function. For this purpose, I use the package mgcv.
Similar tests may be done using the function "anova", as for example in
the case of a normally
distributed response. However, if I do so, I get the error message
"error in eval(expr, envir, enclos) :
2003 Jul 30
2
Comparing two regression slopes
Hello,
I've written a simple (although probably overly roundabout) function to
test whether two regression slope coefficients from two linear models on
independent data sets are significantly different. I'm a bit concerned,
because when I test it on simulated data with different sample sizes and
variances, the function seems to be extremely sensitive to both of these. I am
wondering if
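For reference, a minimal sketch of the usual large-sample test for equality of two slopes from independent fits (d1 and d2 are hypothetical data frames):
fit1 <- lm(y ~ x, data = d1)
fit2 <- lm(y ~ x, data = d2)
s1 <- coef(summary(fit1))["x", ]
s2 <- coef(summary(fit2))["x", ]
z  <- (s1["Estimate"] - s2["Estimate"]) / sqrt(s1["Std. Error"]^2 + s2["Std. Error"]^2)
2 * pnorm(-abs(z))  # approximate two-sided p-value for equal slopes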
2004 Aug 19
1
The 'test.terms' argument in 'regTermTest' in package 'survey'
This is a question regarding the 'regTermTest' function in the 'survey' package. Imagine Z as a three level factor variable, and code ZB and ZC as the two corresponding dummy variables. X is a continuous variable. In a 'glm' of Y on Z and X, say, how do the two test specifications
test.terms = c("ZB:X","ZC:X") # and
test.terms = ~ ZB:X + ZC:X
in
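For context, a hedged sketch of the two specifications in a model that uses the poster's dummy coding (Y, X, the 0/1 dummies ZB and ZC, and the data frame d are hypothetical):
library(survey)
m <- glm(Y ~ (ZB + ZC) * X, family = binomial, data = d)
regTermTest(m, ~ ZB:X + ZC:X)      # test.terms as a one-sided formula
regTermTest(m, c("ZB:X", "ZC:X"))  # test.terms as a character vector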
2008 May 08
2
poisson regression with robust error variance ('eyestudy
Ted Harding said:
> I can get the estimated RRs from
> RRs <- exp(summary(GLM)$coef[,1])
> but do not see how to implement confidence intervals based
> on "robust error variances" using the output in GLM.
Thanks for the link to the data. Here's my best guess. If you use
the following approach, with the HC0 type of robust standard errors in
the
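A minimal sketch of the HC0 approach described in the thread, using the sandwich package (GLM is assumed to be the fitted Poisson glm from the original post):
library(sandwich)
rob.se <- sqrt(diag(vcovHC(GLM, type = "HC0")))  # robust (sandwich) standard errors
est <- coef(GLM)
exp(cbind(RR = est,
          lower = est - qnorm(0.975) * rob.se,
          upper = est + qnorm(0.975) * rob.se))  # RRs with Wald-type robust CIs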
2008 Jun 09
1
Cross-validation in R
Folks: I am having a problem with cv.glm and would appreciate someone
shedding some light here. It seems obvious, but I cannot get it. I did read
the manual, but could not get more insight. This is a database containing
3363 records, and I am trying a cross-validation to understand the process.
When using cv.glm, code below, I get a mean of perr1 of 0.2336 and an SD of
0.000139. When using a
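For orientation, a small sketch of a cv.glm call (outcome, x1, x2, and the data frame d are hypothetical):
library(boot)
fit  <- glm(outcome ~ x1 + x2, family = binomial, data = d)
cv10 <- cv.glm(d, fit, K = 10)  # 10-fold cross-validation
cv10$delta                      # raw and adjusted estimates of prediction error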
2009 Feb 16
1
Overdispersion with binomial distribution
I am attempting to run a glm with a binomial model to analyze proportion
data.
I have been following Crawley's book closely and am wondering if there is
an accepted standard for how much is too much overdispersion? (e.g. change
in AIC has an accepted standard of 2).
In the example, he fits several models, binomial and quasibinomial and then
accepts the quasibinomial.
The output for residual
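For reference, a hedged sketch of the usual check, comparing the residual deviance to its degrees of freedom and refitting as quasibinomial (success, failure, trt, and d are hypothetical):
m1 <- glm(cbind(success, failure) ~ trt, family = binomial, data = d)
deviance(m1) / df.residual(m1)            # a ratio well above 1 suggests overdispersion
m2 <- update(m1, family = quasibinomial)
summary(m2)$dispersion                    # estimated dispersion parameter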
2005 Aug 12
1
as.formula and lme ( Fixed effects: Error in as.vector(x, "list") : cannot coerce to vector)
This is a continuation of an issue raised on the list a long time ago (I
couldn't find a solution to it on the web):
--------------------------------------------------------------------------
> Using a formula converted with as.formula with lme leads
> to an error message. Same works ok with lm, and with
> lme and a fixed formula.
>
> # demonstrates problems with lme and
2007 Feb 14
1
how to report logistic regression results
Dear all,
I am comparing logistic regression models to evaluate if one predictor
explains additional variance that is not yet explained by another predictor.
As far as I understand, Baron and Li describe how to do this, but my question
now is: how do I report this in an article? Can anyone recommend a
particular article that shows a concrete example of how the results from the
following simple
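The comparison itself is typically a likelihood-ratio test of nested fits; a minimal sketch (outcome, pred1, pred2, and d are hypothetical):
m1 <- glm(outcome ~ pred1, family = binomial, data = d)
m2 <- glm(outcome ~ pred1 + pred2, family = binomial, data = d)
anova(m1, m2, test = "Chisq")  # does pred2 explain additional variance beyond pred1?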
2010 Jun 03
1
compare results of glms
Dear list!
I have run several glm analyses to estimate a mean rate of dung decay
for independent trials. I would like to compare these results
statistically, but can't find any solution. The glm calls are:
dung.glm1 <- glm(STATE ~ DAYS, data = o_cov, family = binomial(link = "logit"))
dung.glm2 <- glm(STATE ~ DAYS, data = o_cov_T12, family = binomial(link = "logit"))
as
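One hedged way to compare the decay rates is to fit both trials together and test a DAYS-by-trial interaction; a sketch assuming the two data frames have the same columns:
o_cov$trial     <- "T1"   # hypothetical trial labels
o_cov_T12$trial <- "T12"
dung.all <- rbind(o_cov, o_cov_T12)
m0 <- glm(STATE ~ DAYS, family = binomial, data = dung.all)
m1 <- glm(STATE ~ DAYS * trial, family = binomial, data = dung.all)
anova(m0, m1, test = "Chisq")  # do the decay curves differ between trials?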
2003 Feb 01
0
AIC.default (PR#2518)
There is a bug in AIC.default and AIC.lm, as illustrated below.
(I've only checked this under 1.6.1, and can't easily check if it has
already been reported since the site is down.)
> lm1 <- lm(y ~ x, list(x=1:10, y=jitter(1:10)))
> lm2 <- lm(y ~ x, list(x=1:10, y=jitter(1:10)))
> AIC(lm1, lm2)
df AIC
lm1 3 -18.662493
lm2 3 -7.265906
> AIC(lm1, lm2, k = 2)
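A hand computation can be used as a cross-check, since AIC = -2*logLik + k*df; a minimal sketch:
ll <- logLik(lm1)
-2 * as.numeric(ll) + 2 * attr(ll, "df")  # should agree with the k = 2 value reported for lm1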
2001 Feb 23
1
as.formula and lme ( Fixed effects: Error in as.vector(x, "list") : cannot coerce to vector)
Using a formula converted with as.formula with lme leads
to an error message. Same works ok with lm, and with
lme and a fixed formula.
# demonstrates problems with lme and as.formula
library(nlme)
demo <- data.frame(x = 1:20, y = (1:20) + rnorm(20), subj = as.factor(rep(1:2, 10)))
demo.lm1 <- lme(y ~ x, data = demo, random = ~ 1 | subj)
print(summary(demo.lm1))
newframe <- data.frame(x = 1:5, subj = rep(1, 5))
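One workaround that is sometimes suggested (a sketch, not a guaranteed fix) is to build the call so the formula object is fully evaluated before lme() sees it, e.g. via do.call():
f <- as.formula("y ~ x")
demo.lm2 <- do.call(lme, list(fixed = f, data = demo, random = ~ 1 | subj))
print(summary(demo.lm2))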
2004 May 07
1
contrasts in a type III anova
Hello,
I use a type III ANOVA ("car" package) to analyse an unbalanced design. I
have two factors and I would like to assess the effect of the interaction. I read
that the result could be strongly influenced by the contrasts. I am really not an
expert and I am not sure I fully understand what that means...
Consequently, I failed to use the fit.contrast function properly (gregmisc
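For reference, a minimal sketch of the usual recipe (sum-to-zero contrasts plus car::Anova; the factors A and B and the data frame d are hypothetical):
library(car)
options(contrasts = c("contr.sum", "contr.poly"))  # type III tests need sum-to-zero contrasts
m <- lm(y ~ A * B, data = d)
Anova(m, type = "III")                             # type III tests, including the interaction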
2002 Apr 30
1
MemoryProblem in R-1.4.1
Hi all,
In a simulation context, I'm applying a function of mine, "myfun" say, to a
list of glm objects, "list.glm":
>length(list.glm) #number of samples simulated
[1] 1000
>class(list.glm[[324]]) #any component of the list
[1] "glm" "lm"
>length(list.glm[[290]]$y) #sample size
[1] 1000
Because length(list.glm) and the sample size are rather large,
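One hedged way to reduce the memory footprint is to drop the components of each glm fit that carry copies of the data before storing the list (a sketch; which components are safe to drop depends on what "myfun" needs):
slim <- function(fit) {
  fit$y <- fit$model <- fit$data <- NULL  # remove stored response, model frame and data
  fit
}
list.glm <- lapply(list.glm, slim)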
2002 May 16
1
glm(y ~ -1 + c, "binomial") question
This is a question about removing the intercept in a binomial
glm() model with categorical predictors. V&R (3rd Ed. Ch7) and
Chambers & Hastie (1993) were very helpful but I wasn't sure I
got all the answers.
In a simplistic example suppose I want to explore how disability
(3 levels, profound, severe, and mild) affects the dichotomized
outcome. The glm1 model (see below) is
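For orientation, a hedged sketch of the two parameterisations (outcome, disability, and d are hypothetical):
glm0 <- glm(outcome ~ disability, family = binomial, data = d)      # intercept = baseline level
glm1 <- glm(outcome ~ disability - 1, family = binomial, data = d)  # one log-odds per level
# glm1's coefficients are the log-odds of each disability level directly rather than
# contrasts against a baseline; with a single factor the fitted values are identical.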
2011 Jul 28
0
R: Re: Problem with anova.lmRob() "robust" package
I'm sorry, maybe the question was badly posed.
Ista has described my problem well.
Thanks
Massimo
>----Original message----
>From: izahn at psych.rochester.edu
>Date: 28/07/2011 17.52
>To: "David Winsemius"<dwinsemius at comcast.net>
>Cc: "m.fenati at libero.it"<m.fenati at libero.it>, <r-help at r-project.org>
>Subject: Re: [R]
2011 Sep 21
1
Problem with predict and lines in plotting binomial glm
Dear R-helpers
I have found quite a lot of tips on how to work with glm through this mailing list, but still have a problem that I can't solve.
I have a data set in which the x-variable is count data and the y-variable is proportion data, and I want to know what the relationship between the variables is.
The data was
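A minimal sketch of the usual predict()/lines() pattern for a binomial glm on proportion data (x, successes, failures, and d are hypothetical):
m  <- glm(cbind(successes, failures) ~ x, family = binomial, data = d)
nd <- data.frame(x = seq(min(d$x), max(d$x), length.out = 100))  # ordered grid so lines() draws a smooth curve
nd$p <- predict(m, newdata = nd, type = "response")
plot(d$x, with(d, successes / (successes + failures)), xlab = "x", ylab = "proportion")
lines(nd$x, nd$p)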
2012 Jan 13
2
Help needed in interpreting linear models
Dear members of the R-help list,
I have sent the email below to the R-SIG-ME list to ask for help in
interpreting some R output of fitted linear models.
Unfortunately, I haven't yet received any answers. As I am not sure whether my
email was sent successfully to the mailing list, I
am asking for help here:
Dear members of the R-SIG-ME list,
I am new to linear models and struggling with
2010 Apr 08
2
Overfitting/Calibration plots (Statistics question)
This isn't a question about R, but I'm hoping someone will be willing
to help. I've been looking at calibration plots in multiple regression
(plotting observed response Y on the vertical axis versus predicted
response [Y hat] on the horizontal axis).
According to Frank Harrell's "Regression Modeling Strategies" book
(pp. 61-63), when making such a plot on new data
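For completeness, a hedged sketch of an overfitting-corrected calibration plot using the rms package, which accompanies Harrell's book (the model and data frame d are hypothetical):
library(rms)
m   <- ols(y ~ x1 + x2, data = d, x = TRUE, y = TRUE)  # keep x and y for resampling
cal <- calibrate(m, method = "boot", B = 200)          # bootstrap bias-corrected calibration
plot(cal)                                              # observed vs predicted with overfitting correction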