Displaying 20 results from an estimated 5000 matches similar to: "test for existance of a method for given class"
2005 Feb 25 (1) -- anova grouping of factors in lme4 / lmer
Hi. I'm using lmer() from the lme4 package (version 0.8-3) and I can't get
anova() to group variables properly. I'm fitting the mixed model
Response ~ Weight + Experimenter + (1|SUBJECT.NAME) + (1|Date.StudyDay)
where Weight is numeric and Experimenter is a factor, i.e.,
> str(data.df)
`data.frame': 4266 obs. of 5 variables:
$ SUBJECT.NAME : Factor w/ 2133 levels
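A minimal sketch of the call described above, using current lme4 syntax; data.df is simulated here with the same column names as in the post, purely so the example runs:

library(lme4)
set.seed(1)
data.df <- data.frame(
  Response      = rnorm(200),
  Weight        = runif(200, 15, 40),
  Experimenter  = factor(sample(c("A", "B", "C"), 200, replace = TRUE)),
  SUBJECT.NAME  = factor(sample(paste0("M", 1:50), 200, replace = TRUE)),
  Date.StudyDay = factor(sample(1:20, 200, replace = TRUE))
)
fit <- lmer(Response ~ Weight + Experimenter +
              (1 | SUBJECT.NAME) + (1 | Date.StudyDay), data = data.df)
anova(fit)   # anova table by term; with current lme4, Experimenter is one multi-df row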
2005 Apr 18 (2) -- refitting lm() with same x, different y
Dear All,
Is there a fast way of refitting lm() when the design matrix stays constant
but the response is different? For example,
y1 ~ X
y2 ~ X
y3 ~ X
...etc.
where y1 is the first instance of the response vector. Calling lm() every
time seems rather wasteful, since the QR decomposition of X needs to be
calculated only once. It would be nice if qr() were called only once and
then the same
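Two standard ways to reuse the decomposition (a sketch of my own, not from the thread; x, y1, y2, y3 are illustrative): give lm() a matrix response, or build the model matrix once and call the low-level fitter.

set.seed(1)
x  <- rnorm(50)
y1 <- 1 + 2 * x + rnorm(50)
y2 <- 3 - x + rnorm(50)
y3 <- rnorm(50)

# 1. lm() accepts a matrix response and factors the design matrix only once
fits <- lm(cbind(y1, y2, y3) ~ x)
coef(fits)                           # one column of coefficients per response

# 2. Or build the model matrix once and reuse it with the low-level fitter
X <- model.matrix(~ x)
lm.fit(X, cbind(y1, y2, y3))$coefficients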
2007 Aug 02 (1) -- simulate() and glm fits
Dear All,
I have been trying to simulate data from a fitted glm using the simulate()
function (version details at the bottom). This works for lm() fits and
even for lmer() fits (in lme4). However, for glm() fits its output does
not make sense to me -- am I missing something or is this a bug?
Consider the following count data, modelled as gaussian, poisson and
binomial responses:
counts
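A hedged sketch of the kind of check being described, with toy count data of my own rather than the poster's 'counts' vector:

set.seed(1)
counts <- rpois(20, lambda = 6)
f.lm  <- lm(counts ~ 1)
f.poi <- glm(counts ~ 1, family = poisson)
simulate(f.lm,  nsim = 3)   # Gaussian draws around the fitted mean
simulate(f.poi, nsim = 3)   # in current R these are Poisson draws with mean fitted(f.poi)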
2005 May 17 (1) -- setting value arg of pdSymm() in nlme
Dear All,
I wish to model random effects that have known between-group covariance
structure using the lme() function from library nlme. However, I have yet
to get even a simple example to work. No doubt this is because I am
confusing my syntax, but I would appreciate any guidance as to how. I have
studied Pinheiro & Bates carefully (though it's always possible I've
missed
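A very rough sketch under my own assumptions (not a tested answer to the question): pdSymm() accepts an initial matrix through its 'value' argument, and the resulting object can be passed as the 'random' argument of lme(). V, grp, y, x and d below are hypothetical.

library(nlme)
V  <- diag(2)       # assumed known/starting 2 x 2 covariance matrix
pd <- pdSymm(V)     # a pdSymm object initialised from that matrix
# fit <- lme(y ~ x, random = list(grp = pd), data = d)   # hypothetical fit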
2005 Oct 20 (0) -- survreg anova: problem with indirect invocation
Dear R help,
I've encountered a problem with survreg's anova(). I am currently
writing general code to fit a variety of models using different fitting
functions. Here's a simple example of what I'm trying to do:
---begin code---
# general function to analyse data
analyse.data <- function(formula, FUN, data, ...)
{
    fit <- FUN(formula, data = data, ...)
    anova(fit)
}
---end code---
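A hedged usage illustration (not from the post): calling the wrapper with survreg() from the survival package on its built-in ovarian data; this is exactly the kind of indirect invocation that triggered the anova() problem for the poster.

library(survival)
analyse.data(Surv(futime, fustat) ~ age + ecog.ps, FUN = survreg, data = ovarian)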
2011 Aug 23 (1) -- Implementing a "plugin" paradigm with R methods
Dear list,
I was wondering how to best implement some sort of a "plugin" paradigm
using R methods and the dispatcher:
Say we have a function/method ('foo') that does something useful, but
that should be open for extension in ONE specific area by OTHERS using
my package. Of course they could go ahead and write a whole new 'foo'
method including the features they'd
2001 Apr 23 (1) -- several bugs (PR#918)
1. as.numeric behaves differently in R than in S and I think this
shows a bug in how S3
2016 Jul 26 (3) -- Exporting data in Excel format
Hello.
In my case I have not been able to sort out the Java problems needed to use
XLConnect, which on paper looks to me like the best option. I suspect it is
something about the system architecture, or about how Java, R and RStudio
interact.
So instead I use
library(openxlsx)
write.xlsx(datos, file = "EDA1.xlsx") # where datos is the object I want to save.
It requires installing RTools, depending on the type and
2005 Apr 21 (2) -- apply vs sapply vs loop - lm() call appl(y)ied on array
Christoph --
There was just a thread on this earlier this week. You can search in the
archives for the title: "refitting lm() with same x, different y".
(Actually, it doesn't turn up in the R site search yet, at least for me.
But if you just go to the archive of recent messages, available through
CRAN, you can search on refitting and find it. The original post was from
William
2002 Feb 28 (1) -- get deviance from glm() for given parameter values
Dear all,
I would like to get glm() to return its results (at least the deviance) for
some given parameter values (i.e. without actually fitting the model). I
tried to set `maxit = 0', but this does not work, e.g.:
> glm(y ~ x, start = c(1, 1), maxit = 0)
Error in glm.control(...) : maximum number of iterations must be > 0
Any idea?
Thanks in advance.
Emmanuel Paradis
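A hedged sketch of one workaround (my own, not necessarily what the thread settled on): compute the deviance directly from the family object at user-supplied coefficients, without running any IRLS iterations. x, y and beta0 are illustrative.

set.seed(1)
x     <- rnorm(30)
y     <- rpois(30, exp(0.5 + 0.3 * x))
beta0 <- c(1, 1)                       # the "given parameter values"

fam <- poisson()
X   <- cbind(1, x)                     # design matrix with intercept
mu  <- fam$linkinv(drop(X %*% beta0))  # fitted means at beta0
sum(fam$dev.resids(y, mu, rep(1, length(y))))   # deviance at beta0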
2008 Mar 30 (1) -- package.skeleton.S4
Hi, devel list.
I am adapting package.skeleton() to S4 classes and methods.
I would have been very proud to post a new working function on this list.
Unfortunately, I have not managed to solve all the problems, mainly:
- sys.source() does not compile a file containing setClass()
- dumpMethod() does not exist yet
In the following code, these two problems are flagged by a line
#################
Still
2008 Aug 21 (1) -- Interpreting Logistic Regression
Hi!
This is Madhavi from Mumbai, India. Incidentally, this is my first post.
I am working on a credit scoring model and, using R, I have run a logistic regression. I have received the following output.
I have two questions:
(a) What is the significance of "family = binomial(link = logit)"? Why do I have to specify binomial? Is it because my dependent variable takes only the two values 0 and 1?
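A minimal sketch of what the family argument does, with made-up data: the response takes only the values 0 and 1, so binomial(link = logit) models the probability of a 1 on the log-odds scale.

set.seed(1)
score   <- rnorm(100)
default <- rbinom(100, size = 1, prob = plogis(-1 + 0.8 * score))
fit <- glm(default ~ score, family = binomial(link = logit))
summary(fit)      # coefficients are on the log-odds (logit) scale
exp(coef(fit))    # odds ratios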
2008 Jun 24 (2) -- logistic regression
Hi everyone,
I'm sorry if this turns out to be more a statistical question than one
specifically about R - but would greatly appreciate your advice anyway.
I've been using a logistic regression model to look at the relationship
between a binary outcome (say, the odds of picking n white balls from a bag
containing m balls in total) and a variety of other binary parameters:
2006 Apr 23 (1) -- lme: null deviance, deviance due to the random effects, residual deviance
A perhaps trivial and naive question:
In the case of an lm or glm fit, it is quite informative (to me) to have
a look at the null deviance and the residual deviance of a model. This
is generally provided by the print method or the summary, e.g.:
Null Deviance: 658.8
Residual Deviance: 507.3
and (a bit simple-mindedly) I like to think that the proportion of
deviance 'explained' by the
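A hedged sketch of that quantity for a glm fit (toy data of my own, not the poster's model):

counts <- c(18, 17, 15, 20, 10, 20, 25, 13, 12)
treat  <- gl(3, 3)
fit <- glm(counts ~ treat, family = poisson)
fit$null.deviance                       # Null Deviance
fit$deviance                            # Residual Deviance
1 - fit$deviance / fit$null.deviance    # share of deviance 'explained'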
2011 Nov 10 (1) -- Sum of the deviance explained by each term in a gam model does not equal the deviance explained by the full model
Dear R users,
I have read in different places about your methods of extracting the
variance explained by each predictor. My question is: using the method you
suggested, the sum of the deviance explained by all terms is not equal to
the deviance explained by the full model. Could you tell me what causes
this problem?
> set.seed(0)
> n<-400
> x1 <- runif(n, 0, 1)
> ## to see problem
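A hedged sketch of one common way to attribute deviance to individual smooth terms (my own illustration, not the method referred to above): refit the model without each term and compare deviances. Because smooth terms are generally not orthogonal, these per-term figures need not add up to the deviance explained by the full model.

library(mgcv)
set.seed(0)
n  <- 400
x1 <- runif(n); x2 <- runif(n)
y  <- 2 * sin(2 * pi * x1) + x2^2 + rnorm(n)

full  <- gam(y ~ s(x1) + s(x2))
null  <- gam(y ~ 1)
no.x1 <- gam(y ~ s(x2))
no.x2 <- gam(y ~ s(x1))

summary(full)$dev.expl                               # deviance explained by the full model
(deviance(no.x1) - deviance(full)) / deviance(null)  # share attributed to s(x1)
(deviance(no.x2) - deviance(full)) / deviance(null)  # share attributed to s(x2)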
2010 Jun 02 (1) -- Problems using gamlss to model zero-inflated and overdispersed count data: "the global deviance is increasing"
Dear all,
I am using gamlss (package gamlss version 4.0-0, R version 2.10.1, Windows XP Service Pack 3 on an HP EliteBook) to relate bird counts to habitat variables. However, most models fail because "the global deviance is increasing" and I am not sure what causes this behaviour. The dataset consists of counts of birds (ducks) and 5 habitat variables measured in the field (n = 182). The dependent
2007 Oct 08 (2) -- variance explained by each term in a GAM
Hello fellow R users,
I do apologize if this is a basic question. I'm doing some GAMs using the mgcv package, and I am wondering what is the most appropriate way to determine how much of the variability in the dependent variable is explained by each term in the model. The information provided by summary.gam() relates to the significance of each term (F, p-value) and to the
2006 Nov 13 (3) -- Profile confidence intervals and LR chi-square test
System: R 2.3.1 on Windows XP machine.
I am building a logistic regression model for a sample of 100 cases in
dataframe "d", in which there are 3 binary covariates: x1, x2 and x3.
----------------
> summary(d)
   y       x1      x2      x3
 0:54    0:50    0:64    0:78
 1:46    1:50    1:36    1:22
> fit <- glm(y ~ x1 + x2 + x3, data=d, family=binomial(link=logit))
>
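A hedged sketch (data simulated here, since the poster's data frame 'd' is not available): profile-likelihood confidence intervals via confint() and the likelihood-ratio chi-square test for a single covariate via anova().

library(MASS)   # provides the profile()/confint() methods for glm in older R versions
set.seed(1)
d <- data.frame(x1 = factor(rbinom(100, 1, 0.50)),
                x2 = factor(rbinom(100, 1, 0.36)),
                x3 = factor(rbinom(100, 1, 0.22)))
d$y <- factor(rbinom(100, 1, plogis(-0.5 + 0.8 * (d$x1 == "1"))))

fit  <- glm(y ~ x1 + x2 + x3, data = d, family = binomial(link = logit))
confint(fit)                       # profile-likelihood CIs
fit0 <- update(fit, . ~ . - x1)    # model without x1
anova(fit0, fit, test = "Chisq")   # LR chi-square test for x1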
2005 Oct 20 (3) -- different F test in drop1 and anova
Hi,
I was wondering why anova() and drop1() give different tail
probabilities for F tests.
I guess overdispersion is calculated differently in the following
example, but why?
Thanks for any advice,
Tom
For example:
> x<-c(2,3,4,5,6)
> y<-c(0,1,0,0,1)
> b1<-glm(y~x,binomial)
> b2<-glm(y~1,binomial)
> drop1(b1,test="F")
Single term deletions
Model:
y ~
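For comparison (a sketch of my own, not part of the original post): the explicit model-comparison calls on the same two fits, which can be set against the drop1() output when checking how each routine computes its F statistic.

x  <- c(2, 3, 4, 5, 6)
y  <- c(0, 1, 0, 0, 1)
b1 <- glm(y ~ x, family = binomial)
b2 <- glm(y ~ 1, family = binomial)
drop1(b1, test = "F")        # F test from single-term deletion
anova(b2, b1, test = "F")    # F test from an explicit two-model comparison
anova(b1, test = "F")        # sequential anova table on the full fit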
2007 Nov 13 (2) -- question about glm behavior
Hello,
I was trying a glm fit (as shown below) and I got a warning and a fitted
residual deviance larger than the null deviance. Is this the expected
behaviour of glm? I would expect that, even though the warning might be
warranted, I should not get a worse fit with an additional covariate in the
model. Could anyone tell me what I'm missing?
I get the same results in both R 2.5.1 on Windows