similar to: lm() intercept at the end, rather than at the beginning

Displaying 20 results from an estimated 10000 matches similar to: "lm() intercept at the end, rather than at the beginning"

2004 Mar 08
2
getting the std errors in the lm function
Hello, I have a simple question for you: running
    mylm <- lm(y ~ x)
    summary(mylm)
I get the following results:
    Coefficients:
                 Estimate Std. Error t value Pr(>|t|)
    (Intercept)  16.54087    0.19952   82.91   <2e-16 ***
    x[1:19]      -2.32337    0.04251  -54.66   <2e-16 ***
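A minimal sketch of pulling those standard errors out programmatically, assuming a fit like the one in the excerpt:
    mylm <- lm(y ~ x)                            # fit as in the post
    se <- coef(summary(mylm))[, "Std. Error"]    # named vector of standard errors
    se["(Intercept)"]                            # standard error of the intercept alone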
2003 Oct 11
1
Subclassing lm
I'm trying to subclass the "lm" class to produce a "mylm" class whose instances behave like lm objects (are accepted by methods like summary.lm) but have additional data or slots of my own design. For starters: setClass("mylm", "lm") produces the somewhat cryptic: Warning message: Old-style (``S3'') class "mylm" supplied as a
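A minimal S3-style sketch that sidesteps the setClass() warning (an alternative route, not the thread's own answer); the extra field name is an assumption:
    mylm <- lm(y ~ x)
    class(mylm) <- c("mylm", class(mylm))   # "mylm" objects still dispatch to lm methods
    mylm$extra <- list(note = "anything you like")   # additional data carried along
    summary(mylm)                           # falls through to summary.lm()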
2005 Oct 20
5
splitting an integer
Hi there, From the vector X of integers, X = c(11999, 122000, 81997) I would like to make these two vectors: Z = c(1999, 2000, 1997) Y = c(1, 12, 8) That is, each entry of Z receives the last four digits of the corresponding entry of X, and Y receives the rest. Any suggestions? Thanks in advance, Dimitri
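A minimal sketch using modulo and integer division, assuming the last four digits are always the part wanted in Z:
    X <- c(11999, 122000, 81997)
    Z <- X %% 10000    # last four digits: 1999 2000 1997
    Y <- X %/% 10000   # the rest:            1   12    8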
2005 Oct 25
2
Inf in regressions
Hi, Suppose I wish to run lm( y ~ x + z + log(w) ) where w assumes non-negative values. A problem arises when w=0, as log(0) = -Inf, and R doesn't accept that (as it "accepts" NA). Is there a way to tell R to do with -Inf the same it does with NA, i.e., to ignore it? (Otherwise I have to do something like w[w==0] <- NA which doesn't hurt, but might be a bit
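A minimal sketch of one workaround, recoding non-finite values to NA before fitting (variable names as in the excerpt):
    logw <- log(w)
    logw[!is.finite(logw)] <- NA            # treat -Inf like a missing value
    mylm <- lm(y ~ x + z + logw, na.action = na.exclude)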
2007 Oct 30
1
Some matrix and sandwich questions
Dear R-help, I have a four-part question about regression, matrices, and the sandwich package. 1) In the sandwich package, I would like to better understand the meat() function. From the bread() documentation, for a simple OLS regression, bread() returns (1/n * X'X)^(-1). That is, for a simple regression: MyLM <- lm(y ~ x) bread(MyLM)
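A minimal numerical check of that claim, with simulated data (not from the original post):
    library(sandwich)
    set.seed(1)
    x <- rnorm(100); y <- 1 + 2 * x + rnorm(100)
    MyLM <- lm(y ~ x)
    X <- model.matrix(MyLM)
    all.equal(bread(MyLM), solve(crossprod(X) / nrow(X)))   # TRUE up to rounding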
2010 Sep 14
1
NA confusion (length question)
Hi folks, I am running a very simple regression using mylm <- lm(mass ~ tarsus, na.action=na.exclude) I would like to use the residuals from this analysis for further regressions, but I'm running into a snag when I try cbind(mylm$residuals, mydata) # where mydata is the original data set The error tells me that it cannot use cbind because the length of mylm$residuals is
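A minimal sketch of why na.exclude matters here, assuming mass and tarsus live in mydata: the residuals() extractor (unlike mylm$residuals) pads the missing cases back in.
    mylm <- lm(mass ~ tarsus, data = mydata, na.action = na.exclude)
    res  <- residuals(mylm)        # NA-padded, so length matches nrow(mydata)
    newdata <- cbind(mydata, res)  # now cbind() lines up row for row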
2006 Mar 03
3
memory once again
Dear all, A few weeks ago, I asked this list why small Stata files became huge R files. Thomas Lumley said it was because "Stata uses single-precision floating point by default and can use 1-byte and 2-byte integers. R uses double precision floating point and four-byte integers." And it seemed I couldn't do anything about it. Is it true? I mean, isn't there a (more or less
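A minimal illustration of the storage difference described in the quote (integer vs. double vectors in R):
    object.size(integer(1e6))   # roughly 4 MB: 4-byte integers
    object.size(double(1e6))    # roughly 8 MB: 8-byte doubles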
2009 Mar 31
1
using "substitute" inside a legend
Hello list, I have a linear regression: mylm = lm(y~x-1) I've been reading old mail postings as well as the plotmath demo, and I came up with a way to print an equation resulting from a linear regression:
    model = substitute(list("y" == slope %*% "x", R^2 == rsq),
                       list(slope = round(mylm$coefficients[[1]], 2),
                            rsq = round(summary(mylm)$adj.r.squared, 2)))
I have four models and I
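A minimal self-contained sketch of placing such an expression in a legend (simulated data, not the poster's; the legend position is an assumption):
    set.seed(1)
    x <- 1:20; y <- 3 * x + rnorm(20)
    mylm <- lm(y ~ x - 1)
    model <- substitute(list("y" == slope %*% "x", R^2 == rsq),
                        list(slope = round(coef(mylm)[[1]], 2),
                             rsq   = round(summary(mylm)$adj.r.squared, 2)))
    plot(x, y)
    legend("topleft", legend = as.expression(model), bty = "n")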
2005 Jun 24
2
Gini with frequencies
Hi there, I am trying to compute Gini coefficients for vectors containing income classes. The data I possess look like this: yit <- c(135, 164, 234, 369) piit <- c(367, 884, 341, 74) where yit is the vector of income classes, and piit is the vector of associated frequencies. (This data is from Rustichini, Ichino and Checchi (Journal of Public Economics, 1999).) In the ineq package, Gini( )
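A minimal sketch of one workaround if the Gini() call does not accept frequencies directly: expand the grouped data with rep() (this assumes the frequencies are integer counts):
    library(ineq)
    yit  <- c(135, 164, 234, 369)   # income classes
    piit <- c(367, 884, 341, 74)    # frequencies
    Gini(rep(yit, times = piit))    # Gini of the expanded distribution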
2009 May 14
1
automated polynomial regression
Dear all - We perform some measurements with a machine that needs to be recalibrated. The best calibration we get is with polynomial regression. The data might look as follows:
    true_y <- c(1:50)*.8                                          # the real values
    m_y <- c((1:21)*1.1, 21.1, 22.2, 23.3, c(25:50)*.9)/0.3-5.2   # the measured data
    x <- c(1:50)                                                  # and the x-axis
Now I do the following:
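A minimal sketch of automating the fit with poly(); the degree (3 here) is an assumption, not taken from the thread:
    fit <- lm(true_y ~ poly(m_y, 3))     # calibration curve: true value as a function of the reading
    calibrated <- predict(fit)           # or predict(fit, newdata = data.frame(m_y = new_readings))
    ord <- order(m_y)
    plot(m_y, true_y); lines(m_y[ord], calibrated[ord])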
2011 May 20
2
extraction of mean square value from ANOVA
Hello, I am randomly generating values and then using an ANOVA table to find the mean square value. I would like to form a loop that extracts the mean square value from ANOVA in each iteration. Below is an example of what I am doing:
    a <- rnorm(10)
    b <- factor(c(1,1,2,2,3,3,4,4,5,5))
    c <- factor(c(1,2,1,2,1,2,1,2,1,2))
    mylm <- lm(a ~ b + c)
    anova(mylm)
Since I would like to use a loop to
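A minimal sketch of pulling the value out inside a loop, using the excerpt's own objects (anova() returns a data frame, so it can be indexed by row and column name):
    tab <- anova(mylm)
    tab[["Mean Sq"]]        # all mean squares (b, c, Residuals)
    tab["b", "Mean Sq"]     # just the mean square for factor b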
2006 Jan 26
1
efficiency with "%*%"
Hi, x and y are (numeric) vectors. I wonder if one of the following is more efficient than the other: x%*%y or sum(x*y) ? Thanks, Dimitri Szerman
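A minimal timing sketch (results will vary by machine and BLAS); crossprod() is a third option worth trying:
    x <- rnorm(1e6); y <- rnorm(1e6)
    system.time(for (i in 1:200) d1 <- x %*% y)
    system.time(for (i in 1:200) d2 <- sum(x * y))
    system.time(for (i in 1:200) d3 <- crossprod(x, y))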
2007 Apr 05
2
creating a data frame from a list
Dear all, A few months ago, I asked for your help on the following problem: I have a list with three (named) numeric vectors:
    > lst = list(a=c(A=1,B=8) , b=c(A=2,B=3,C=0), c=c(B=2,D=0) )
    > lst
    $a
    A B
    1 8
    $b
    A B C
    2 3 0
    $c
    B D
    2 0
Now, I'd love to use this list to create the following data frame:
    > dtf = data.frame(a=c(A=1,B=8,C=NA,D=NA),
    +
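A minimal sketch of one way to build that data frame: index each vector by the union of all names, so absent names become NA:
    lst <- list(a = c(A = 1, B = 8), b = c(A = 2, B = 3, C = 0), c = c(B = 2, D = 0))
    nms <- sort(unique(unlist(lapply(lst, names))))               # "A" "B" "C" "D"
    dtf <- as.data.frame(lapply(lst, function(v) v[nms]), row.names = nms)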
2009 Jan 24
2
how to prevent duplications of data within a loop
Hi All, I had posted a question on a similar topic, but I think it was not focused. I am posting a modification that I think better accomplishes this. I hope this is ok, and I apologize if it is not. :) I am looping through variables and running several regressions. I have reason to believe that the data is being duplicated because I have been monitoring the memory use on unix. How can I avoid
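A hedged sketch of one way to keep each fit small inside such a loop: drop the embedded copies of the data and keep only what later steps need (vars, y, and mydata are placeholder names, not taken from the post):
    results <- vector("list", length(vars))
    names(results) <- vars
    for (v in vars) {
      fit <- lm(reformulate(v, response = "y"), data = mydata,
                model = FALSE, x = FALSE, y = FALSE)   # don't store copies of the data in the fit
      results[[v]] <- coef(summary(fit))
      rm(fit)
    }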
2006 Apr 24
1
omitting coefficients in summary.lm()
Hi, I'm running a regression using lm(), in which one of the right-hand side variables is a factor with many levels (say, 80). I am not interested in the estimates of the resulting dummies, but I have to include them in my regression equation. So, I don't want the estimates associated with these dummies to be printed by summary.lm( ). Is there an easy way to do this? Thank you, Dimitri
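A minimal sketch of printing only the coefficients of interest by subsetting the coefficient matrix; f stands in for the 80-level factor and the other names are assumptions:
    fit <- lm(y ~ x + f, data = mydata)
    cf  <- coef(summary(fit))
    print(cf[!grepl("^f", rownames(cf)), ])   # drop the rows for the factor dummies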
2006 Apr 29
1
splitting and saving a large dataframe
Hi, I searched for this in the mailing list, but found no results. I have a large dataframe ( dim(mydata) = 1297059 16, object.size(mydata) = 145280576 ), and I want to perform some calculations which can be done by a factor's levels, say, mydata$myfactor. So what I want is to split this dataframe into its nlevels(mydata$myfactor) = 80 levels. But I must do this efficiently, that is, I
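A minimal sketch using split(), saving each level's piece to its own file (the file-naming scheme is an assumption):
    pieces <- split(mydata, mydata$myfactor)   # one data frame per factor level
    for (lev in names(pieces)) {
      piece <- pieces[[lev]]
      save(piece, file = paste0("mydata_", lev, ".RData"))
    }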
2006 Jul 05
1
creating a data frame from a list
Dear all, I have a list with three (named) numeric vectors:
    > lst = list(a=c(A=1,B=8) , b=c(A=2,B=3,C=0), c=c(B=2,D=0) )
    > lst
    $a
    A B
    1 8
    $b
    A B C
    2 3 0
    $c
    B D
    2 0
Now, I'd love to use this list to create the following data frame:
    > dtf = data.frame(a=c(A=1,B=8,C=NA,D=NA),
    +                  b=c(A=2,B=3,C=0,D=NA),
    +                  c=c(A=NA,B=2,C=NA,D=0) )
    > dtf
    a b
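Essentially the same idea as the sketch under the 2007 thread of this title above, written as a match()-based variant:
    nms <- sort(unique(unlist(lapply(lst, names))))
    dtf <- as.data.frame(vapply(lst, function(v) v[match(nms, names(v))],
                                numeric(length(nms))),
                         row.names = nms)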
2006 Jul 12
1
help in vectorization
Hi, I have two data frames. One is like
    > dtf = data.frame(y=c(rep(2002,4), rep(2003,5)),
    +                  m=c(9:12, 1:5),
    +                  def=c(.74,.75,.76,.78,.80,.82,.85,.85,.87))
and the other
    dtf2 = data.frame(y=rep( c(2002,2003),20),
                      m=c(trunc(runif(20,1,5)),trunc(runif(20,9,12))),
                      inc=rnorm(40,mean=300,sd=150) )
What I want is to divide
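The question is cut off above; assuming the goal is to divide inc by the def value for the matching year and month, a minimal merge()-based sketch:
    m <- merge(dtf2, dtf, by = c("y", "m"))   # attach the right deflator to every income row
    m$real_inc <- m$inc / m$def               # one vectorised division, no loop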
2005 Jun 16
1
regressing each column of a matrix on all other columns
Dear list, I would like to predict the values of each column of a matrix A by regressing it on all other columns of the same matrix A. I do this with a for loop:
    A <- B <- matrix(round(runif(10*3,1,10),0),10)
    A
    for (i in 1:length(A[1,])) B[,i] <- as.matrix(predict(lm( A[,i] ~ A[,-i] )))
    B
It works fine, but I need it to be faster. I've looked at *apply but just can't
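A minimal sketch of the same computation without lm()'s formula and model-frame overhead, using lm.fit() inside vapply():
    A <- matrix(round(runif(10 * 3, 1, 10)), 10)
    B <- vapply(seq_len(ncol(A)),
                function(i) lm.fit(cbind(1, A[, -i]), A[, i])$fitted.values,
                numeric(nrow(A)))                # intercept plus all other columns as regressors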
2005 Jun 09
1
getting more than the coefficients
Hi there, I am trying to export a regression output to LaTeX. I am using the xtable function in the xtable library. Doing myfit <- lm(myformula, mydata) print.xtable(xtable(myfit), file="myfile") only returns the estimated coefficients and the corresponding standard errors, t-statistics and p-values. But I wish to get a bit more, say, the number of observations used in the
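A minimal sketch of appending extra summaries after the xtable output yourself; the file name and the choice of statistics are assumptions, and packages such as texreg or stargazer produce fuller tables out of the box:
    library(xtable)
    myfit <- lm(myformula, data = mydata)            # as in the post
    print(xtable(myfit), file = "myfile.tex")
    s <- summary(myfit)
    cat(sprintf("n = %d, adjusted R-squared = %.3f\n",
                as.integer(nobs(myfit)), s$adj.r.squared),
        file = "myfile.tex", append = TRUE)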