Displaying 20 results from an estimated 37 matches for "mylme".
2004 Mar 08
2
getting the std errors in the lm function
Hello,
I have a simple question for you:
running:
mylm <- lm(y ~ x)
summary(mylm)
I get the following results:
******************************************************
Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept)   16.54087    0.19952   82.91   <2e-16 ***
x[1:19]       -2.32337    0.04251  -54.66   <2e-16 ***
******************************************************
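A minimal sketch of pulling those standard errors out programmatically rather than reading them off the printout (x and y here are made-up stand-ins for the poster's data):

set.seed(1)                                # hypothetical data
x <- 1:19
y <- 16.5 - 2.3 * x + rnorm(19, sd = 0.2)
mylm <- lm(y ~ x)
se <- coef(summary(mylm))[, "Std. Error"]  # the "Std. Error" column of the table above
se["(Intercept)"]                          # std. error of the intercept
se["x"]                                    # std. error of the slope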
2010 Sep 14
1
NA confusion (length question)
Hi folks,
I am running a very simple regression using
mylm <- lm(mass ~ tarsus, na.action=na.exclude)
I would like to use the residuals from this analysis for further
regression, but I'm running into a snag when I try
cbind(mylm$residuals, mydata) # where mydata is the original data set
The error tells me that it cannot use cbind because the length of
mylm$residuals is
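The usual fix: with na.action = na.exclude, the extractor residuals(mylm) is padded with NA back to the length of the original data, while the mylm$residuals component is not. A minimal sketch with made-up mass/tarsus values:

mydata <- data.frame(mass   = c(10, 12, NA, 15, 14),   # hypothetical data
                     tarsus = c(20, 22, 23, NA, 25))
mylm <- lm(mass ~ tarsus, data = mydata, na.action = na.exclude)
length(mylm$residuals)          # only the complete cases -- hence the cbind error
length(residuals(mylm))         # padded with NA to nrow(mydata)
cbind(residuals(mylm), mydata)  # now the lengths agree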
2003 Oct 11
1
Subclassing lm
I'm trying to subclass the "lm" class to produce a "mylm" class whose
instances behave like lm objects (are accepted by methods like summary.lm)
but have additional data or slots of my own design.
For starters:
setClass("mylm", "lm")
produces the somewhat cryptic:
Warning message:
Old-style (``S3'') class "mylm" supplied as a
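The warning arises because setClass() is being handed an S3 class. A minimal sketch of two common alternatives, using a stand-in lm fit on the built-in cars data: prepend an S3 class, or register "lm" with setOldClass() before defining an S4 extension.

fit <- lm(dist ~ speed, data = cars)   # stand-in for the poster's model
# S3 route: prepend the new class; summary() still falls through to summary.lm,
# and extra data can travel along as attributes.
mylm <- structure(fit, class = c("mylm", class(fit)), note = "my extra slot")
summary(mylm)
attr(mylm, "note")
# S4 route (not shown in full): call setOldClass("lm") first, then
# setClass("mylm", contains = "lm", ...) to avoid the warning above.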
2012 Sep 06
0
lme( y ~ ns(x, df=splineDF)) error
...rror in eval(expr, envir, enclos) : object 'splineDF' not found"
#
# First make sure splineDF does not exist in .GlobalEnv, else we would be in WhichApproach==2.
if(exists("splineDF", where=".GlobalEnv")) remove(list="splineDF", pos=".GlobalEnv")
mylme<-lme(fixed= y ~ ns(x, df=splineDF)
, random= ~ 1 | ID
, correlation = corAR1()
, data=longdat
)
} else if(WhichApproach==2) {
# returns: "Error in model.frame.default(formula = ~ID + y + x + splineDF, data = list( :
# variable lengths differ (found for 'splineDF')"...
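One common workaround (a sketch only, with a made-up longdat standing in for the poster's data) is to splice the numeric value of splineDF into the formula before calling lme(), so the model frame never has to look the variable up:

library(nlme)
library(splines)
set.seed(1)   # hypothetical longdat with columns ID, x, y
longdat <- data.frame(ID = rep(1:10, each = 8), x = rep(1:8, times = 10))
longdat$y <- sin(longdat$x / 3) + rep(rnorm(10, sd = 0.3), each = 8) +
             rnorm(nrow(longdat), sd = 0.2)
splineDF <- 3
# Substitute the value of splineDF into the formula itself.
fixedForm <- eval(bquote(y ~ ns(x, df = .(splineDF))))
mylme <- lme(fixed = fixedForm, random = ~ 1 | ID,
             correlation = corAR1(), data = longdat)
summary(mylme)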
2012 Sep 26
0
lme(y ~ ns(x, df=splineDF)) error
...rror in eval(expr, envir, enclos) : object 'splineDF' not found"
#
# First make sure splineDF does not exist in .GlobalEnv, else we would be in WhichApproach==2.
if(exists("splineDF", where=".GlobalEnv")) remove(list="splineDF", pos=".GlobalEnv")
mylme<-lme(fixed= y ~ ns(x, df=splineDF)
, random= ~ 1 | ID
, correlation = corAR1()
, data=longdat
)
} else if(WhichApproach==2) {
# returns: "Error in model.frame.default(formula = ~ID + y + x + splineDF, data = list( :
# variable lengths differ (found for 'splineDF'...
2011 May 20
2
extraction of mean square value from ANOVA
Hello,
I am randomly generating values and then using an ANOVA table to find the
mean square value. I would like to form a loop that extracts the mean square
value from ANOVA in each iteration. Below is an example of what I am doing.
a<-rnorm(10)
b<-factor(c(1,1,2,2,3,3,4,4,5,5))
c<-factor(c(1,2,1,2,1,2,1,2,1,2))
mylm<-lm(a~b+c)
anova(mylm)
Since I would like to use a loop to
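Extracting the mean squares inside a loop: anova() returns a data frame whose "Mean Sq" column can be indexed by name. A sketch built on the example above:

nsim <- 100
ms <- matrix(NA_real_, nrow = nsim, ncol = 3,
             dimnames = list(NULL, c("b", "c", "Residuals")))
for (i in seq_len(nsim)) {
  a <- rnorm(10)
  b <- factor(c(1,1,2,2,3,3,4,4,5,5))
  c <- factor(c(1,2,1,2,1,2,1,2,1,2))
  mylm <- lm(a ~ b + c)
  ms[i, ] <- anova(mylm)[["Mean Sq"]]   # mean squares for b, c and the residuals
}
head(ms)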
2009 Mar 31
1
using "substitute" inside a legend
Hello list,
I have a linear regression:
mylm = lm(y~x-1)
I've been reading old mail postings as well as the plotmath demo and I came
up with a way to print an equation resulting from a linear regression:
model = substitute(list("y" == slope %*% "x", R^2 == rsq),
                   list(slope = round(mylm$coefficients[[1]], 2),
                        rsq = round(summary(mylm)$adj.r.squared, 2)))
I have four models and I
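One way to put that expression into a legend (a sketch, with made-up x and y) is to coerce the substitute() result with as.expression():

set.seed(1)
x <- 1:20
y <- 3.2 * x + rnorm(20)
mylm <- lm(y ~ x - 1)
model <- substitute(list("y" == slope %*% "x", R^2 == rsq),
                    list(slope = round(mylm$coefficients[[1]], 2),
                         rsq = round(summary(mylm)$adj.r.squared, 2)))
plot(x, y)
abline(mylm)
legend("topleft", legend = as.expression(model), bty = "n")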
2009 May 14
1
automated polynomial regression
Dear all -
We perform some measurements with a machine that needs to be
recalibrated. We get the best calibration with polynomial regression.
The data might look like follows:
> true_y <- c(1:50)*.8
> # the real values
> m_y <- c((1:21)*1.1, 21.1, 22.2, 23.3, c(25:50)*.9)/0.3 - 5.2
> # the measured data
> x <- c(1:50)
> # and the x-axes
>
> # Now I do the following:
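One way to automate the degree choice (a sketch using the true_y, m_y defined above and selecting by AIC; the cut-off at degree 6 is an arbitrary assumption) is:

fits <- lapply(1:6, function(d) lm(true_y ~ poly(m_y, d)))
best <- fits[[which.min(sapply(fits, AIC))]]   # lowest-AIC polynomial
summary(best)
# Calibrate a new measurement by predicting the true value from it:
predict(best, newdata = data.frame(m_y = 50))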
2009 Jan 24
2
how to prevent duplications of data within a loop
Hi All,
I had posted a question on a similar topic, but I think it was not
focused. I am posting a modification that I think better accomplishes
this.
I hope this is ok, and I apologize if it is not. :)
I am looping through variables and running several regressions. I have
reason to believe that the data is being duplicated because I have
been
monitoring the memory use on unix.
How can I avoid
2007 Apr 06
2
lm() intercept at the end, rather than at the beginning
Hi,
I wonder if someone has already figured out a way of making
summary(mylm) # where mylm is an object of the class lm()
to print the "(Intercept)" at the last line, rather than the first
line of the output. I don't know about, say, biostatistics, but in
economics the intercept is usually the least interesting of the
parameters of a regression model. That's why, say, Stata
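summary.lm() itself seems to have no option for this, but the coefficient table can be reordered and reprinted; a minimal sketch, with a stand-in model on the built-in cars data:

mylm <- lm(dist ~ speed, data = cars)   # hypothetical example model
ctab <- coef(summary(mylm))
ord <- c(setdiff(rownames(ctab), "(Intercept)"), "(Intercept)")  # intercept last
printCoefmat(ctab[ord, , drop = FALSE])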
2006 Apr 23
1
lme: null deviance, deviance due to the random effects, residual deviance
...and $deviance elements as in glm objects...
I tried to find out an answer on R-help & Pinheiro & Bates (2000).
Partial success only:
- null deviance: Response: possibly yes: see
http://tolstoy.newcastle.edu.au/R/help/05/12/17796.html (Spencer
Graves). The (null?) deviance is -2*logLik(mylme), but a personal trial
with some glm objects did not lead to the same numbers as the ones given
by the print.glm method...
- the deviance due to the random effect(s). I was supposing that the
coefficients given by ranef(mylme) may be an entry... but beyond this, I
guess those coefficients...
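For the -2*logLik(mylme) piece, a small sketch with a toy lme fit standing in for mylme (method = "ML" so the two log-likelihoods are comparable across fixed effects):

library(nlme)
mylme  <- lme(distance ~ age, random = ~ 1 | Subject, data = Orthodont, method = "ML")
mynull <- lme(distance ~ 1,   random = ~ 1 | Subject, data = Orthodont, method = "ML")
-2 * as.numeric(logLik(mylme))    # "deviance" of the fitted model
-2 * as.numeric(logLik(mynull))   # same for the intercept-only model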
2006 Jan 16
4
Standardized beta-coefficients in regression
Hello list,
I am used to giving a lot of attention to the standardized regression
coefficients, which SPSS lists automatically.
Is there an alternative to running the last two lines in the following example to
get all the information?
ctl <- c(4.17,5.58,5.18,6.11,4.50,4.61,5.17,4.53,5.33,5.14)
trt <- c(4.81,4.17,4.41,3.59,5.87,3.83,6.03,4.89,4.32,4.69)
summary( lm(ctl ~ trt) )
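A shorter route (a sketch, using the usual definition beta_std = b * sd(x)/sd(y)) is either to fit on standardized variables or to rescale the ordinary slope:

ctl <- c(4.17,5.58,5.18,6.11,4.50,4.61,5.17,4.53,5.33,5.14)
trt <- c(4.81,4.17,4.41,3.59,5.87,3.83,6.03,4.89,4.32,4.69)
# 1) the slope from standardized variables is the standardized beta
coef(lm(scale(ctl) ~ scale(trt)))[2]
# 2) or rescale the ordinary slope after the fact
fit <- lm(ctl ~ trt)
coef(fit)["trt"] * sd(trt) / sd(ctl)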
2005 Jun 16
1
regressing each column of a matrix on all other columns
DeaR list
I would like to predict the values of each column of a matrix A by
regressing it on all other columns of the same matrix A. I do this with
a for loop:
A <- B <- matrix(round(runif(10*3,1,10),0),10)
A
for (i in 1:length(A[1, ]))
  B[, i] <- as.matrix(predict(lm(A[, i] ~ A[, -i])))
B
It works fine, but I need it to be faster. I've looked at *apply but
just can't
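A sketch of the same computation without the explicit for loop, using sapply() and lm.fit() on the design matrix directly (which also skips the formula overhead):

set.seed(1)
A <- matrix(round(runif(10 * 3, 1, 10), 0), 10)
B <- sapply(seq_len(ncol(A)), function(i)
  lm.fit(x = cbind(1, A[, -i]), y = A[, i])$fitted.values)
B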
2007 Oct 30
1
Some matrix and sandwich questions
Dear R-help,
I have a four-part question about regression, matrices, and sandwich package.
1) In the sandwich package, I would like to better understand the
meat() function.
From the bread() documentation, for a simple OLS regression, bread() returns
(1/n * X'X)^(-1)
That is, for a simple regression (per the documentation on bread()):
MyLM <- lm(y ~ x)
bread(MyLM)
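A sketch checking those relationships numerically (hypothetical data; requires the sandwich package):

library(sandwich)
set.seed(1)
x <- rnorm(100)
y <- 1 + 2 * x + rnorm(100)
MyLM <- lm(y ~ x)
X <- model.matrix(MyLM)
n <- nrow(X)
# bread() should match (1/n * X'X)^(-1):
all.equal(bread(MyLM), solve(crossprod(X) / n), check.attributes = FALSE)
# and the HC0 sandwich estimator is 1/n * bread %*% meat %*% bread:
all.equal(vcovHC(MyLM, type = "HC0"),
          1 / n * bread(MyLM) %*% meat(MyLM) %*% bread(MyLM),
          check.attributes = FALSE)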
2006 Apr 25
5
Heteroskedasticity in Tobit models
Hello,
I've had no luck finding an R package that has the ability to estimate a
Tobit model allowing for heteroskedasticity (multiplicative, for example).
Am I missing something in survReg? Is there another package that I'm
unaware of? Is there an add-on package that will test for
heteroskedasticity?
Thanks for your help.
Cheers,
Alan Spearot
--
Alan Spearot
Department of Economics
2012 May 11
0
contrasts with an imbalance in a factor
...0L))
## this is what I really want to be able to do, but without contrast
## complaining about the imbalance in the number of rows
cat("### With scanner\n")
fixedFormula=as.formula("fmri ~ group * task + scanner")
randomFormula = as.formula("random = ~ 1 | subject")
mylme = lme(fixed=fixedFormula, random=randomFormula, data=my.model)
## now look at the risky choices (40outcome and 80outcome) versus the
## safe choices (20outcome)
con=contrast(
mylme,
a=list(group="Group1", task=c("40outcome", "80outcome"), scanner=levels(my.model$s...
2007 Nov 20
1
plotting confidence intervals of regression line
Hello,
I am trying to generate a confidence interval (90 or 95%) of a regression
line. This is primarily just for illustration on a scatter plot (i.e. I am
trying to make this
http://www.ast.cam.ac.uk/~rgm/scratch/statsbook/graphics/anima4.gif).
I have been trying to use the predict.lm function, with interval set as
"confidence", but this still seems to be giving me a prediction
2004 Mar 02
2
Some timings for 64 bit Opteron (ATLAS, GOTO, std)
Hi Martin,
When I attended the LinuxWorld Expo in NYC back in January, I chatted with
some folks at the AMD booth, as well as guys from Penguin Computing (where
we bought our Opteron box). I was told that the Opteron has a somewhat
strange setup in which the memory is controlled by one CPU. The net effect of
this is that when both CPUs are running, one might only be running at
around 90%
2012 Apr 18
0
Error in eval when using contrast and nlme
...will be " , inNumberOfOutputBriks, " stats briks\n")
outStats <- vector(mode="numeric", length=inNumberOfOutputBriks)
##cat (inData, "\n")
if ( ! all(inData == 0 ) ) {
##try(
##if( inherits(
print(inModelFormula)
print(inRandomFormula)
mylme <- lme(fixed=inModelFormula, random=inRandomFormula, data = inModel)
print(mylme)
##,
## silent=FALSE),
##"try-error") ) {
##temp <- 0
## cat (paste("Error on slice", inZ, "\n"))
##} else {
temp <- as.vector(unlist...
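For the commented-out try() scaffolding above, a tryCatch() sketch that keeps one failing fit from aborting the whole run (the function and data names here are generic stand-ins for the poster's inModelFormula/inRandomFormula objects):

library(nlme)
fitOneVoxel <- function(modelFormula, randomFormula, dat, nOut) {
  mylme <- tryCatch(lme(fixed = modelFormula, random = randomFormula, data = dat),
                    error = function(e) NULL)
  if (is.null(mylme)) return(numeric(nOut))   # fit failed: return zeros, keep going
  as.vector(fixef(mylme))
}
# toy call, with nlme's Orthodont data standing in for one voxel's data
fitOneVoxel(distance ~ age, ~ 1 | Subject, Orthodont, nOut = 2)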
2009 Sep 20
3
plotting least-squares regression against x-axis
Hi,
I want to plot the residuals of a least-squares regression.
plot(lm(y~x), which=1)
does this, but it plots the y-axis of my data on the x-axis of the
residuals plot. That is, it plots the residual for each y-value in the
data. Can I instead use the x-axis of my data as the x-axis of the
residuals plot, showing the residual for a given x?
Thanks!
Jason Priem
University of North
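A sketch of plotting the residuals against the predictor itself rather than against the fitted values (made-up x and y):

set.seed(1)
x <- runif(40, 0, 10)
y <- 1 + 2 * x + rnorm(40)
mylm <- lm(y ~ x)
plot(x, resid(mylm), xlab = "x", ylab = "Residuals")
abline(h = 0, lty = 2)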