similar to: AIC.default (PR#2518)

Displaying 20 results from an estimated 1000 matches similar to: "AIC.default (PR#2518)"

2001 Oct 18
0
uniform generator (default)
Receiving digests. > RNGkind(NULL) [1] "Marsaglia-Multicarry" "Kinderman-Ramage" I would appreciate it if anybody has any comments on the following. Please do not comment on the R functions themselves, since they merely mimic a (bivariate simplification of a) C routine called from S. In particular, I would like to know if anything is available with regard to the
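For reference, a minimal sketch of querying and changing the uniform generator with RNGkind(); the settings shown are assumptions, not the poster's session (recent R versions also report a third element for the sample() kind):
RNGkind(NULL)                     # report current settings without changing them
RNGkind("Marsaglia-Multicarry")   # switch the uniform generator
set.seed(1); runif(3)
RNGkind("default")                # restore the default uniform generator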
2003 Jul 30
2
Comparing two regression slopes
Hello, I've written a simple (although probably overly roundabout) function to test whether two regression slope coefficients from two linear models on independent data sets are significantly different. I'm a bit concerned, because when I test it on simulated data with different sample sizes and variances, the function seems to be extremely sensitive to both of these. I am wondering if
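One common way to set this up (a sketch with simulated data; the poster's own function is not shown in the thread) is a large-sample z test built from the two separate fits:
set.seed(1)
d1 <- data.frame(x = rnorm(30)); d1$y <- 2.0 * d1$x + rnorm(30)
d2 <- data.frame(x = rnorm(60)); d2$y <- 1.5 * d2$x + rnorm(60, sd = 2)
fit1 <- lm(y ~ x, data = d1)
fit2 <- lm(y ~ x, data = d2)
b1 <- coef(fit1)["x"]; se1 <- coef(summary(fit1))["x", "Std. Error"]
b2 <- coef(fit2)["x"]; se2 <- coef(summary(fit2))["x", "Std. Error"]
z <- (b1 - b2) / sqrt(se1^2 + se2^2)   # approximate z statistic for equal slopes
2 * pnorm(-abs(z))                     # two-sided p-value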
2005 Aug 12
1
as.formula and lme ( Fixed effects: Error in as.vector(x, "list") : cannot coerce to vector)
This is a continuation of an issue raised on the list a long time ago (I couldn't find a solution to it on the web): -------------------------------------------------------------------------- > Using a formula converted with as.formula with lme leads > to an error message. Same works ok with lm, and with > lme and a fixed formula. > > # demonstrates problems with lme and
2001 Feb 23
1
as.formula and lme ( Fixed effects: Error in as.vector(x, "list") : cannot coerce to vector)
Using a formula converted with as.formula with lme leads to an error message. Same works ok with lm, and with lme and a fixed formula.
# demonstrates problems with lme and as.formula
demo <- data.frame(x = 1:20, y = (1:20) + rnorm(20), subj = as.factor(rep(1:2, 10)))
demo.lm1 <- lme(y ~ x, data = demo, random = ~ 1 | subj)
print(summary(demo.lm1))
newframe <- data.frame(x = 1:5, subj = rep(1, 5))
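A sketch of a workaround often suggested for this kind of non-standard-evaluation problem (not verified against the original report): build the call with do.call() so that the actual formula object, not the name holding it, is stored in the fitted object.
library(nlme)
demo <- data.frame(x = 1:20, y = (1:20) + rnorm(20), subj = as.factor(rep(1:2, 10)))
f <- as.formula("y ~ x")
## passing the symbol f can break summary()/predict(); do.call() substitutes the
## evaluated formula into the stored call instead (quote() keeps the data name clean)
fit <- do.call(lme, list(fixed = f, data = quote(demo), random = ~ 1 | subj))
summary(fit)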
2012 Jan 13
2
Help needed in interpreting linear models
Dear members of the R-help list, I have sent the email below to the R-SIG-ME list to ask for help in interpreting some R output of fitted linear models. Unfortunately, I haven't yet received any answers. As I am not sure whether my email was sent successfully to the mailing list, I am asking for help here: Dear members of the R-SIG-ME list, I am new to linear models and struggling with
2010 Apr 08
2
Overfitting/Calibration plots (Statistics question)
This isn't a question about R, but I'm hoping someone will be willing to help. I've been looking at calibration plots in multiple regression (plotting observed response Y on the vertical axis versus predicted response [Y hat] on the horizontal axis). According to Frank Harrell's "Regression Modeling Strategies" book (pp. 61-63), when making such a plot on new data
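A minimal sketch (simulated data, not Harrell's example) of that kind of calibration plot on new data:
set.seed(1)
train <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
train$y <- 1 + 2 * train$x1 - train$x2 + rnorm(100)
test <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
test$y <- 1 + 2 * test$x1 - test$x2 + rnorm(100)
fit <- lm(y ~ x1 + x2, data = train)
pred <- predict(fit, newdata = test)
plot(pred, test$y, xlab = "Predicted", ylab = "Observed")
abline(0, 1, lty = 2)                     # ideal calibration line
lines(lowess(pred, test$y), col = "red")  # observed calibration curve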
2006 Mar 10
1
add trend line to each group of data in: xyplot(y1+y2 ~ x | grp...
Although this should be trivial, I'm having a spot of trouble. I want to make a lattice plot of the format y1+y2 ~ x | grp but then fit a lm to each y variable and add an abline of those models in different colors. If the xyplot followed y~x|grp I would write a panel function as below, but I'm unsure of how to do that with y1 and y2 without reshaping the data beforehand. Thoughts
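One commonly suggested approach (a sketch with made-up data, untested against the poster's real data): keep the y1 + y2 ~ x | grp formula and supply panel.superpose with a panel.groups function, so each y variable gets its own points and lm line in its own colour.
library(lattice)
set.seed(1)
dat <- data.frame(x = rep(1:20, 2), grp = rep(c("A", "B"), each = 20))
dat$y1 <- 0.5 * dat$x + rnorm(40)
dat$y2 <- 1.5 * dat$x + rnorm(40)
xyplot(y1 + y2 ~ x | grp, data = dat,
       panel = panel.superpose,
       panel.groups = function(x, y, ...) {
           panel.xyplot(x, y, ...)
           panel.lmline(x, y, ...)   # lm fit for this y variable, in the group colour
       })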
2007 Aug 06
1
test the significances of two regression lines
R-help, I'm trying to test the significance of two regression lines, i.e. the significance of the slopes from two samples originated from the same population. Is it correct if I fit a linear model for each sample and then test the slope significance with 'anova'. Something like this: lm1 <- lm(Y~ a1 + b1*X) # sample 1 lm2 <- lm(Y~ a2 + b2*X) # sample 2 anova(lm1, lm2)
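For comparison, a sketch of the more standard setup (simulated data, not the poster's): lm() formulas cannot name individual coefficients such as a1 or b1, so the usual route is to stack the two samples, code the sample as a factor, and test the interaction.
set.seed(1)
d1 <- data.frame(X = rnorm(30), grp = "s1"); d1$Y <- 1 + 2.0 * d1$X + rnorm(30)
d2 <- data.frame(X = rnorm(30), grp = "s2"); d2$Y <- 1 + 1.4 * d2$X + rnorm(30)
both <- rbind(d1, d2)
fit0 <- lm(Y ~ X + grp, data = both)   # common slope
fit1 <- lm(Y ~ X * grp, data = both)   # separate slopes
anova(fit0, fit1)                      # F test: do the slopes differ?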
2002 Dec 20
1
Printing correlation matrices (lm/glm)
Hi Folks, I'm analysing some data which, in its simplest aspect, has 3 factors A, B, C each at 2 levels. If I do lm1 <- lm(y ~ A*B) say, and then summary(lm1, corr=T) I get the correlation matrix of the estimated coefficients with numerical values for the correlations (3 coeffs in this case). Likewise with 'glm' instead of 'lm'. However, if I do lm2 <- lm(y ~
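A small sketch (made-up data) showing that the full numeric matrix can also be pulled out of the summary object directly, independent of how print() chooses to display it:
set.seed(1)
dat <- data.frame(A = gl(2, 4), B = gl(2, 2, 8), y = rnorm(8))
lm1 <- lm(y ~ A * B, data = dat)
s <- summary(lm1, correlation = TRUE)
print(s$correlation, digits = 3)   # correlation matrix of the coefficient estimates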
2012 Jul 06
2
Anova Type II and Contrasts
The study design of the data I have to analyse is simple. There is 1 control group (CTRL) and 2 different treatment groups (TREAT_1 and TREAT_2). The data also includes 2 covariates COV1 and COV2. I have been asked to check if there is a linear or quadratic treatment effect in the data. I created a dummy data set to explain my situation: df1 <- data.frame( Observation =
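A sketch under assumed names (the thread's dummy data are truncated above): with the treatment coded as an ordered factor, the .L and .Q contrasts give the linear and quadratic components, and car::Anova() gives Type II tests; the car package is an assumed add-on here.
library(car)
set.seed(1)
df1 <- data.frame(
    Treatment = ordered(rep(c("CTRL", "TREAT_1", "TREAT_2"), each = 20)),
    COV1 = rnorm(60), COV2 = rnorm(60))
df1$Response <- 1 + 0.5 * as.numeric(df1$Treatment) + 0.3 * df1$COV1 + rnorm(60)
fit <- lm(Response ~ Treatment + COV1 + COV2, data = df1)
Anova(fit, type = "II")   # Type II sums of squares for each term
summary(fit)              # Treatment.L and Treatment.Q rows: linear and quadratic contrasts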
2003 Oct 28
1
error message in simulation
Dear R-users, I am a dentist (so forgive me if my question looks stupid) and came across a problem when I did simulations to compare a few single level and two level regressions. The simulations were interrupted and an error message came out like 'Error in MEestimate(lmeSt, grps) : Singularity in backsolve at level 0, block 1'. My colleague suggested that this might be due to my codes
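Not a fix for the singularity itself, but a sketch (assumed simulation structure, not the poster's code) of how such loops are usually protected so one failing fit does not stop the whole run:
library(nlme)
results <- vector("list", 100)
for (i in seq_len(100)) {
    dat <- data.frame(subj = factor(rep(1:10, each = 5)), x = rnorm(50))
    dat$y <- rnorm(50)                     # little subject effect: fits can go singular
    results[[i]] <- tryCatch(
        lme(y ~ x, random = ~ 1 | subj, data = dat),
        error = function(e) e              # store the error object and keep going
    )
}
sum(!vapply(results, inherits, logical(1), what = "error"))   # number of successful fits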
2005 Nov 17
1
anova.gls from nlme on multiple arguments within a function fails
Dear All -- I am trying to use, within some table-producing code, an anova comparison of two gls fitted objects contained in a list of such objects, obtained using the nlme function gls. The anova procedure fails to locate the second of the objects. The following code, borrowed from the help page of anova.gls, exemplifies: --------------- start example code --------------- library(nlme) ##
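For context, a sketch in the spirit of the example on the anova.gls help page; this comparison works at top level, and the thread is about the same call failing when the two fits are pulled out of a list inside a function:
library(nlme)
fm1 <- gls(follicles ~ sin(2*pi*Time) + cos(2*pi*Time), data = Ovary,
           correlation = corAR1(form = ~ 1 | Mare))
fm2 <- update(fm1, weights = varPower())
anova(fm1, fm2)   # likelihood-based comparison of the two gls fits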
2005 Aug 04
0
add1.lm and add1.glm not handling weights and offsets properly (PR#8049)
I am using R 2.1.1 under Mac OS 10.3.9. Two related problems (see notes 1. and 2. below) are illustrated by results of the following:
y <- rnorm(10)
x <- z <- 1:10
is.na(x[9]) <- TRUE
lm0 <- lm(y ~ 1)
lm1 <- lm(y ~ 1, weights = rep(1, 10))
add1(lm0, scope = ~ x) ## works ok
add1(lm1, scope = ~ x) ## error
lm2 <- lm(y ~ 1, offset = 1:10)
add1(lm0, scope = ~ z) ##
2008 Nov 24
3
Is this correct?
I have to answer the following question for a homework assignment. A researcher was interested in whether people taking part in sports at university made more money after graduating, taking into account the students' GPA. They sampled 200 alumni from a large university. The variables are: income (income 10 years after graduating), sports (1 if they did sports, 0 if they did not), and GPA (the
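A sketch with hypothetical data (the assignment's actual data are not shown) of the model the question points to:
set.seed(1)
alumni <- data.frame(sports = rbinom(200, 1, 0.5), GPA = round(runif(200, 2, 4), 2))
alumni$income <- 30000 + 5000 * alumni$sports + 8000 * alumni$GPA + rnorm(200, sd = 5000)
fit <- lm(income ~ sports + GPA, data = alumni)
summary(fit)   # the sports coefficient is the GPA-adjusted difference in income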
2014 Jun 13
3
p values with LMER
Hello Manuel, I have tried to do it but I get Error: unexpected string constant in: "anova(a,as,test=Chisq"). I have no idea why... I find it hard to believe that pvals.fnc is no longer available. Is it really impossible to get hold of it? Regards, Miguel -------------------------------------------- On Fri, 13/6/14, Manuel Azcárate <mazcarategarcia en gmail.com> wrote:
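The reported error is a syntax problem: test=Chisq" has a stray quote and would need to be a complete quoted string, test = "Chisq". A sketch with assumed models (a and as appear to be lmer fits in the thread; the lme4 example data stand in here, and for merMod objects anova() gives the chi-square comparison without a test argument):
library(lme4)
a  <- lmer(Reaction ~ Days + (1 | Subject), data = sleepstudy, REML = FALSE)
as <- lmer(Reaction ~ 1    + (1 | Subject), data = sleepstudy, REML = FALSE)
anova(a, as)   # likelihood-ratio (chi-square) comparison of the nested fits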
2005 Aug 05
0
(PR#8049) add1.lm and add1.glm not handling weights and
David, Thanks. The reason add1.lm (and drop1.lm) do not support offsets is that lm did not when they were written, and the person who added offsets to lm did not change them. (I do wish they had not added an offset arg and just used the formula as in S's glm.) That is easy to add. For the other point, some care is needed if 'x' is supplied and the upper scope reduces the number
2008 Jun 04
1
Comparing two regression lines
Dear R users, Suppose I have two different response variables y1, y2 that I regress separately on the same explanatory variable, x; sample sizes are n1=n2. Is it legitimate to compare the regression slopes (equal variances assumed) by using lm(y~x*FACTOR), where FACTOR gets "y1" if y1 is the response, and "y2" if y2 is the response? The problem I see here is that the
2004 Aug 26
1
Why terms are dropping out of an lm() model
Hi all! I'm fairly new to R and not too experienced with regression. Because of one or both of those traits, I'm not seeing why some terms are being dropped from my model when doing a regression using lm(). I am trying to do a regression on some experimental data d, which has two numeric predictors, p1 and p2, and one numeric response, r. The aim is to compare polynomial models in p1
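The usual cause of this symptom (an assumption here, since the actual model formula is cut off above) is that ^ is a formula operator, so p1^2 collapses back to p1 and the quadratic term silently vanishes; perfectly collinear (aliased) terms are dropped similarly. A sketch:
set.seed(1)
d <- data.frame(p1 = rnorm(50), p2 = rnorm(50))
d$r <- 1 + d$p1 + 0.5 * d$p1^2 + d$p2 + rnorm(50)
coef(lm(r ~ p1 + p1^2 + p2, data = d))      # p1^2 is just p1: the term disappears
coef(lm(r ~ p1 + I(p1^2) + p2, data = d))   # I() protects the arithmetic
coef(lm(r ~ poly(p1, 2) + p2, data = d))    # orthogonal polynomial alternative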
2010 Jul 07
3
Large discrepancies in the same object being saved to .RData
Hi developers, After some investigation I have found there can be large discrepancies in the same object being saved as an external "xx.RData" file. The immediate repercussion of this is the possible increased size of your .RData workspace for no apparent reason. The function and its three scenarios below highlight these discrepancies. Note that the object being returned is exactly
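One common cause of this (an assumption; the poster's function is not shown in full) is a formula created inside a function: the fitted object keeps the formula's environment, and everything in that environment gets saved along with it. A sketch:
f <- function() {
    big <- rnorm(1e6)                       # large object local to the function
    d <- data.frame(x = 1:10, y = rnorm(10))
    lm(y ~ x, data = d)                     # the terms/formula keep f()'s environment
}
fit1 <- f()
fit2 <- lm(y ~ x, data = data.frame(x = 1:10, y = rnorm(10)))
length(serialize(fit1, NULL))   # large: 'big' rides along via the captured environment
length(serialize(fit2, NULL))   # small: the global environment is not serialized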
2007 Jul 17
1
Speed up computing: looping through data?
Dear all, Please excuse my ignorance, but I am having difficulty with this, and am unable to find help on the website/Google. I have a set of explanatory variables from which I am trying to obtain a parsimonious model. For example, if I have 10 variables, a-j, I am initially looking at the linear relationships amongst them:
my.lm1 <- lm(a ~ b+c+d+e+f+g+h+i+j, data=my.data)
summary(my.lm1)
my.lm2
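A sketch of one way to avoid writing each reduced model by hand (made-up data; letters a to j as in the post): build the formulas programmatically, or let step() do an AIC-based search.
set.seed(1)
my.data <- as.data.frame(matrix(rnorm(100 * 10), ncol = 10,
                                dimnames = list(NULL, letters[1:10])))
vars <- letters[2:10]                              # b..j as candidate predictors of a
fits <- lapply(vars, function(v) {
    lm(reformulate(setdiff(vars, v), response = "a"), data = my.data)
})                                                 # drop one predictor at a time
names(fits) <- paste0("without_", vars)
sapply(fits, AIC)
step(lm(a ~ ., data = my.data), trace = FALSE)     # stepwise AIC search over all terms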