Displaying 20 results from an estimated 30000 matches similar to: "Linear model - coefficients"
2013 Feb 05
1
Error en fix.by(by.x, x) : 'by' must specify valid column(s)
Folks, I am a beginner in R; I am trying to fit a logit model for school,
and when I run the following lines I get the error:
Error en fix.by(by.x, x) : 'by' must specify valid column(s)
require(sqldf)
require(xtable)
source(file="c:/Users/Usuario/Desktop/R/libreria/.R")
k<-qnorm(0.05/2, mean=0, sd=1, lower.tail = FALSE, log.p = FALSE)
A1 <- factor(c(1,2,3,4)) # factor1 with 4
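The lines shown stop before the call that actually fails, but this particular error is raised by merge(): the column(s) named in 'by' (or by.x/by.y) are not present in both data frames. A minimal illustration with made-up data, not the poster's:

# 'by' must name columns that exist in BOTH data frames; otherwise
# merge() stops with "'by' must specify valid column(s)".
df1 <- data.frame(id = 1:3, score = c(10, 20, 30))
df2 <- data.frame(ID = 1:3, group = c("a", "b", "a"))
# merge(df1, df2, by = "id")                 # fails: 'id' is not a column of df2
merge(df1, df2, by.x = "id", by.y = "ID")    # works: name the key column in each frame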
2008 Nov 10
1
question about contrast in R for multi-factor linear regression models?
Hi all,
I am using "lm" to fit some anova factor models with interactions.
The default setting for my unordered factors is "treatment". I
understand the resultant "lm" coefficients for a single factor, but when
it comes to the interaction term, I get confused.
> options()$contrasts
unordered ordered
"contr.treatment"
2004 Sep 07
1
Contrast matrices for nested factors
Hi, I'd like to know if it's possible to specify different
contrast matrices in lm() for a factor that is nested within another one. This
is useful when we have a model where the nested factor has a different
number of levels, depending on the main factor.
Let me illustrate with an example to make it clearer. Consider
the following data set:
set.seed(1)
y <-
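The preview is cut off at the data setup, but for the general mechanics: lm() accepts a per-factor list through its 'contrasts' argument, and the nesting itself goes in the formula. A rough sketch (fully crossed toy data, so it does not reproduce the unequal-levels situation):

# Rough sketch: different contrast matrices per factor via lm(..., contrasts = ...).
# In y ~ A / B (i.e. A + A:B), B's coding inside the nested term follows the
# contrasts supplied for B.
set.seed(1)
A <- factor(rep(c("a1", "a2"), each = 6))
B <- factor(rep(c("b1", "b2", "b3"), 4))
y <- rnorm(12)
fit <- lm(y ~ A / B, contrasts = list(A = contr.sum, B = contr.helmert))
coef(fit)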
2011 May 14
1
Summary.Formula: prmsd and test statistic
Hello,
I'm a new user of R, so apologies if this is a basic question, but after scouring the web for information on summary.formula, I am still searching for an answer.
I made a function to analyze my data - I have a categorical variable and three continuous variables. I am analyzing my continuous variables on the basis of my categorical variable.
radioanal <- function(a)
{
#Educational
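The function itself is cut off above. For what it's worth, a heavily hedged sketch of the Hmisc summary.formula call this question seems to be about (argument names should be checked against ?summary.formula and ?print.summary.formula.reverse): with method = "reverse" the continuous variables are summarised within the levels of the categorical variable, test = TRUE requests test statistics, and prmsd = TRUE in print() adds mean and SD.

# Hedged sketch, assuming the Hmisc API; 'grp', 'x1'..'x3' and 'd' are made up.
library(Hmisc)
set.seed(1)
d <- data.frame(grp = factor(sample(c("low", "high"), 60, replace = TRUE)),
                x1 = rnorm(60), x2 = rnorm(60), x3 = rnorm(60))
s <- summary(grp ~ x1 + x2 + x3, data = d, method = "reverse", test = TRUE)
print(s, prmsd = TRUE)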
2009 Sep 15
1
coefficients of aov results have fewer elements than expected?
Hi,
I ran the following commands. 'A' has 3 levels and 'B' has 4 levels.
Shouldn't there be 3 + 4 = 7 coefficients in total (A1, A2, A3, B1, B2, B3,
B4)?
> a=3
> b=4
> n=1000
> A = rep(sapply(1:a,function(x){rep(x,n)}),b)
> B = as.vector(sapply(sapply(1:b, function(x){rep(x,n)}), function(x){rep(x,a)}))
> Y = A + B + rnorm(a*b*n)
>
> fr =
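With the default treatment contrasts each factor loses one level to the intercept, so a 3-level A and a 4-level B give 1 + 2 + 3 = 6 coefficients, not 7. A compact sketch (one observation per cell, not the poster's n = 1000):

# With treatment contrasts the first level of each factor is absorbed into
# the intercept, so only A2, A3, B2, B3, B4 appear as coefficients.
set.seed(1)
A <- factor(rep(1:3, times = 4))
B <- factor(rep(1:4, each = 3))
Y <- as.numeric(A) + as.numeric(B) + rnorm(12)
coef(aov(Y ~ A + B))   # (Intercept), A2, A3, B2, B3, B4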
2019 Aug 31
2
inconsistent handling of factor, character, and logical predictors in lm()
Dear Abby,
> On Aug 30, 2019, at 8:20 PM, Abby Spurdle <spurdle.a at gmail.com> wrote:
>
>> I think that it would be better to handle factors, character predictors, and logical predictors consistently.
>
> "logical predictors" can be regarded as categorical or continuous (i.e. 0 or 1).
> And the model matrix should be the same, either way.
I think that
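A quick check of the point about logical predictors: in the ordinary two-value case the model matrix carries a single 0/1 column whichever way the variable is treated, and only the column name changes.

# A logical predictor yields one indicator column, the same values you get
# from as.numeric() or from a two-level factor under treatment contrasts.
z <- c(TRUE, FALSE, TRUE, TRUE)
model.matrix(~ z)               # column "zTRUE"
model.matrix(~ factor(z))       # column "factor(z)TRUE"
model.matrix(~ as.numeric(z))   # column "as.numeric(z)", same 1/0 values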
2012 May 11
1
set specific contrasts using lapply
I have the following data set
> data
A B X1 X2 Y
1 A1 B1 1.1 2.9 1.2
2 A1 B2 1.0 3.2 2.3
3 A2 B1 1.0 3.3 1.6
4 A2 B2 0.5 2.6 3.1
> sapply(data, class)
A B X1 X2 Y
"factor" "factor" "numeric" "numeric" "numeric"
I'd like to set a specific type of contrast for all the categorical factors
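The question is cut off here, but one way to do what the subject line asks is to loop over the factor columns with lapply and assign the desired coding (sum-to-zero is used below purely as an example) to each of them in place:

# Sketch: assign, e.g., sum-to-zero contrasts to every factor column of 'data'.
is_fac <- sapply(data, is.factor)
data[is_fac] <- lapply(data[is_fac], function(f) {
  contrasts(f) <- contr.sum(nlevels(f))
  f
})
lapply(data[is_fac], contrasts)   # check the contrast matrices now attached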
2006 Aug 09
3
categorical data
Dear List,
I need a grouped list with two sorts of categorical data. I have a
data.frame like this.
year cat. b c
1 2006 a1 125 212
2 2006 a2 256 212
3 2005 a1 14 12
4 2004 a3 565 123
5 2004 a2 156 789
6 2005 a1 1 456
7 2003 a2 786 123
8 2003 a1 421 569
9 2002 a2 425 245
I need a list with the sum of b and c for every year and every cat (a1,
a2 or a3) within that year. I had used the tapply
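tapply works, but aggregate() gets both sums in one call. A sketch, assuming the data frame above is called dat and its columns are named year, cat., b and c as printed:

# Sum b and c within every year x category combination.
aggregate(cbind(b, c) ~ year + cat., data = dat, FUN = sum)

# The tapply equivalent, one column at a time (NA for combinations that
# do not occur):
tapply(dat$b, list(dat$year, dat$cat.), sum)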
2003 Jul 28
1
linear model coefficients
Hi,
I wonder if it is possible to keep R from setting one level of a
factor to zero in a model fit.
More precisely, I want to fit a two-way unbalanced linear model: o ~ 0 +
x + y
x is a factor with 10 levels, y is a factor with 9 levels. In order to
get a unique solution I set the intercept to 0 and impose that sum(y) = 0, i.e.
res <- lm(o ~ 0 + x + y,
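The call is cut off above; the usual way to impose the stated constraints is to drop the intercept (so x keeps all 10 of its levels) and give y sum-to-zero contrasts, e.g.:

# Sketch of the usual call on the poster's variables: 10 coefficients for x
# plus 8 contrasts for y; the 9th y effect is minus the sum of the other 8.
res <- lm(o ~ 0 + x + y, contrasts = list(y = "contr.sum"))
coef(res)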
2003 May 19
1
multcomp and glm
I have run the following logistic regression model:
options(contrasts=c("contr.treatment", "contr.poly"))
m <- glm(wolf.cross ~ null.cross + feature, family = "binomial")
where:
wolf.cross = likelihood of wolves crossing a linear feature
null.cross = proportion of random paths that crossed a linear feature
feature = CATEGORY of linear feature with 5 levels:
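The message is truncated here, but since it is about multcomp: the usual follow-up on such a glm is glht() with an mcp() specification on the 5-level factor (a sketch, assuming the multcomp API and the fitted model m from above):

# All pairwise comparisons between the levels of 'feature', on the
# log-odds scale, with multiplicity adjustment.
library(multcomp)
cmp <- glht(m, linfct = mcp(feature = "Tukey"))
summary(cmp)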
2009 Sep 17
2
What does model.matrix() return?
Hi,
I don't understand the meaning of the following lines returned by
model.matrix(). Can somebody help me understand them? What can they be
used for?
attr(,"assign")
[1] 0 1 2 2
attr(,"contrasts")
attr(,"contrasts")$A
[1] "contr.treatment"
attr(,"contrasts")$B
[1] "contr.treatment"
Regards,
Peng
> a=2
> b=3
> n=4
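The setup code is cut off, but the two attributes can be read as follows: "assign" maps each column of the model matrix back to a term of the formula (0 = intercept), and "contrasts" records which coding was used for each factor. A small reconstruction consistent with the output shown (A with 2 levels, B with 3):

# attr(,"assign") of 0 1 2 2 means: intercept, one column from A, two from B.
A <- factor(rep(1:2, each = 6))
B <- factor(rep(1:3, times = 4))
mm <- model.matrix(~ A + B)
attr(mm, "assign")      # 0 1 2 2
attr(mm, "contrasts")   # contr.treatment for both A and B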
2012 Nov 22
1
prediction problem
Hello,
I am using the mda package, and in particular the fda routine, to
classify/predict the color of a set of 20 samples for which I don't
know the color.
I performed a flexible discriminant analysis (FDA) using a set of 147
samples for which I know all the information.
My script and data are attached.
A total of 23 predictors were considered. 20 of the predictors are
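The actual script is in an attachment that is not shown, so only the general workflow can be sketched here (object and column names below are made up): fit fda() on the 147 labelled samples, then call predict() on the 20 unlabelled ones.

# General mda::fda workflow sketch; 'train' holds the 147 labelled samples
# with a 'color' column, 'unknown' holds the 20 samples to classify.
library(mda)
fit  <- fda(color ~ ., data = train)
pred <- predict(fit, newdata = unknown)   # predicted color for each sample
table(pred)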
2007 Oct 09
2
fit.contrast and interaction terms
Dear R-users,
I want to fit a linear model with Y as response variable and X a categorical variable (with 4 categories), with the aim of comparing the basal category of X (category = 1) with category 4. Unfortunately, there is another categorical variable with 2 categories which interacts with X and which I have to include, so my model is "reg3: Y = x*x3". Using fit.contrast to make the
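The message is cut off before the fit.contrast call, but a useful cross-check is available from the default treatment coding itself: assuming x's levels are labelled 1 to 4, the comparison of category 4 with the basal category 1 at the reference level of x3 is simply the x4 coefficient, and adding the x4:x3 interaction coefficient gives the same comparison at the other level of x3.

# Sketch using the poster's variables; coefficient names assume x has
# levels 1..4 and default treatment contrasts.
reg3 <- lm(Y ~ x * x3)
summary(reg3)$coefficients["x4", ]   # category 4 vs 1 at x3's reference level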
2011 Apr 12
2
Testing equality of coefficients in coxph model
Dear all,
I'm running a coxph model of the form:
coxph(Surv(Start, End, Death.ID) ~ x1 + x2 + a1 + a2 + a3)
Within this model, I would like to compare the influence of x1 and x2 on the
hazard rate.
Specifically I am interested in testing whether the estimated coefficient
for x1 is equal (or not) to the estimated coefficient for x2.
I was thinking of using a Chow-test for this but the Chow
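A standard alternative to a Chow-type test here is a Wald test of the single linear hypothesis beta_x1 = beta_x2, which needs only coef() and vcov() from the fitted model. A sketch (assuming each of the five predictors contributes exactly one coefficient, in the order written):

# Wald test of H0: beta_x1 = beta_x2 in the coxph fit.
library(survival)
fit <- coxph(Surv(Start, End, Death.ID) ~ x1 + x2 + a1 + a2 + a3)
L   <- c(1, -1, 0, 0, 0)                 # contrast picking out x1 - x2
est <- sum(L * coef(fit))
se  <- drop(sqrt(t(L) %*% vcov(fit) %*% L))
2 * pnorm(abs(est / se), lower.tail = FALSE)   # two-sided p-value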
2010 Feb 18
3
parsing strings between [ ] in columns
Dear all,
I have a data.frame with a column like the x shown below
myDF <- data.frame(x = c("[[1, 0, 0], [0, 1]]",
                         "[[1, 1, 0], [0, 1]]", "[[1, 0, 0], [1, 1]]",
                         "[[0, 0, 1], [0, 1]]"))
> myDF
x
1 [[1, 0, 0], [0, 1]]
2 [[1, 1, 0], [0, 1]]
3 [[1, 0, 0], [1, 1]]
4 [[0, 0, 1], [0, 1]]
As you can see my x column is composed of
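The message breaks off here, but if the goal is to get the numbers out of each bracketed string, gregexpr() plus regmatches() does it in two lines (a sketch on the myDF defined above):

# Extract all digit runs from each string; returns one numeric vector per row.
vals <- regmatches(as.character(myDF$x), gregexpr("[0-9]+", as.character(myDF$x)))
lapply(vals, as.numeric)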
2008 Oct 15
1
Parameter estimates from an ANCOVA
Hi all,
This is probably going to come off as unnecessary (and show my ignorance)
but I am trying to understand the parameter estimates I am getting from R
when doing an ANCOVA. Basically, I am accustomed to the estimate for the
categorical variable being equivalent to the respective cell means minus the
grand mean. I know this is the case in JMP - all other estimates from these data
match the
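For the coding difference being described: R's default contr.treatment reports differences from a reference level, whereas JMP's default effect coding corresponds to contr.sum, whose estimates are deviations from the (unweighted) grand mean. A hedged sketch with made-up names (y, covariate, group):

# Refit with sum-to-zero coding to get JMP-style "cell mean minus grand mean"
# estimates for the categorical term.
fit_trt <- lm(y ~ covariate + group)                              # R default
fit_sum <- lm(y ~ covariate + group,
              contrasts = list(group = contr.sum))                # effect coding
coef(fit_trt)
coef(fit_sum)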
2005 Apr 23
2
ANOVA with both discrete and continuous variables
Hi all,
I have a dataset with 2 independent variables: one (x1)
is continuous, the other (x2) is a categorical
variable with 2 levels. The dependent variable (y) is
continuous. When I ran the linear regression y ~ x1*x2, I
found that the p value for the continuous independent
variable x1 changed when different contrasts were used
(helmert vs. treatment), while the p values for the
categorical x2 and
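The likely reason, sketched below with the poster's variable names (x2 assumed to be a two-level factor): once the x1:x2 interaction is in the model, the "x1" coefficient is the slope of x1 at x2's reference level under treatment coding but the average slope under Helmert (or sum) coding, so its estimate, standard error and p value depend on the coding; the x2 and interaction tests are unaffected by this rescaling.

# Compare the x1 row of the coefficient table under the two codings.
fit_treat <- lm(y ~ x1 * x2, contrasts = list(x2 = contr.treatment))
fit_helm  <- lm(y ~ x1 * x2, contrasts = list(x2 = contr.helmert))
summary(fit_treat)$coefficients["x1", ]
summary(fit_helm)$coefficients["x1", ]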
2003 Nov 16
1
SE of ANOVA (aov) with repeated measures and a between-subject factor
Hello!
I have data with the following design:
NSubj subjects were measured at baseline (visit 1) and at 3
subsequent time points (visit 2, visit 3, visit 4).
Each subject either received the treatment or did not.
The most interesting question is whether there is a
treatment difference between the results of visit 4
and baseline. (The other time points are also of
interest.) The level of significance is alpha = 0.0179
(because of an
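The message is truncated, but the usual aov layout for this design puts the within-subject factor in an Error() term; a hedged sketch with made-up names (subject, visit, treatment, response, dat):

# Repeated-measures ANOVA: 'treatment' is between subjects, 'visit' within.
fit <- aov(response ~ treatment * visit + Error(subject / visit), data = dat)
summary(fit)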
2009 Oct 06
1
linear model with coefficient constraints
I would like to perform a regression like the one below:
lm(x ~ 0 + a1 + a2 + a3 + b1 + b2 + b3 + c1 + c2 + c3, data=data)
However, the data has the property that a1+a2+a3 = A, b1+b2+b3 = B, and
c1+c2+c3 = C, where A, B, and C are positive constants. So there are two
extra degrees of freedom, and R handles this by producing NA for two of the
coefficients. Instead, I would prefer to remove the
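Because a1+a2+a3, b1+b2+b3 and c1+c2+c3 are each constant, the nine columns span only seven dimensions, which is why lm() returns NA for two coefficients. alias() shows exactly which columns were found to be dependent, and dropping one column from two of the three groups gives a full-rank fit; a sketch:

# Inspect the linear dependencies, then refit without b3 and c3.
fit <- lm(x ~ 0 + a1 + a2 + a3 + b1 + b2 + b3 + c1 + c2 + c3, data = data)
alias(fit)
fit2 <- lm(x ~ 0 + a1 + a2 + a3 + b1 + b2 + c1 + c2, data = data)
coef(fit2)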
2010 Sep 29
1
Understanding linear contrasts in Anova using R
#I am trying to understand how R fits models for contrasts in a
#simple one-way anova. This is an example; I am not stupid enough to want
#to simultaneously apply all of these contrasts to real data. With a few
#exceptions, the tests that I would compute by hand (or by other software)
#will give the same t or F statistics. It is the contrast estimates
#that R produces that I can't seem to
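The usual explanation of that discrepancy: for a custom contrast matrix, the coefficients R reports come from the inverse of the full coding matrix (intercept column plus contrasts), not from the contrast weights applied to the cell means, so the estimates generally differ from hand calculations by a scaling even when the t statistics agree. A small sketch:

# Custom contrasts in a balanced one-way anova: coef(fit) matches
# solve(cbind(1, cc)) %*% cellmeans, not crossprod(cc, cellmeans).
set.seed(1)
g  <- factor(rep(1:3, each = 10))
y  <- rnorm(30, mean = c(0, 1, 3)[g])
cc <- cbind(c1 = c(1, -1, 0), c2 = c(1, 1, -2))
contrasts(g) <- cc
fit <- lm(y ~ g)
m   <- tapply(y, g, mean)
coef(fit)
solve(cbind(1, cc)) %*% m   # rows 2-3 reproduce the reported contrast estimates
crossprod(cc, m)            # the hand-computed contrast values, which differ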