Displaying 9 results from an estimated 9 matches for "y_hat".
2007 Oct 17
1
y_hat
Hello,
suppose one has the following values
x1 <- rnorm(10,5,1)
x2 <- rgamma(10,5,1)
y <- rnorm(10,4,1)
mydat <- data.frame(y,x1,x2)
then one can use glm like
mod <- glm(y~x1+x2, data=mydat, family=gaussian)
But how could I estimate y_hat?
Thanks a lot!
Sam
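A minimal sketch of one way to get y_hat from the model above (assuming the glm call runs as posted): the fitted values are already stored in the fit.

fitted(mod)                     # fitted values y_hat for the data used in the fit
predict(mod, newdata = mydat)   # the same values; newdata could be another data frame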
2009 Feb 19
1
matrix computation???
Hello
Can anyone tell me what I am doing wrong below? My Y and y_hat are the same.
A <- scale(stackloss)
n1 <- dim(A)[1]; n2 <- dim(A)[2]
X <- svd(A)
Y <- matrix(A[, "stack.loss"], nrow = n1)
Y
y_hat <- matrix((X$u %*% t(X$u)) %*% Y, nrow = n1, byrow = TRUE)
y_hat
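A sketch of why the two agree (assuming the goal is to regress stack.loss on the other scaled columns): stack.loss is itself a column of A, so Y already lies in the column space of X$u, and the projection X$u %*% t(X$u) %*% Y returns Y exactly. Projecting onto the predictor columns only would give a genuine fit:

A <- scale(stackloss)
Y <- A[, "stack.loss", drop = FALSE]
Xp <- A[, colnames(A) != "stack.loss"]     # predictor columns only
sv <- svd(Xp)
y_hat <- sv$u %*% t(sv$u) %*% Y            # projection onto the predictor space
cbind(Y, y_hat)                            # y_hat now differs from Y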
2023 Jan 26
1
Failing to install the rgl package
Hi,
I am trying to execute the seven lines of code below to plot a graph, but I
am failing, as the messages below show. Where am I going wrong?
install.packages("rgl")
library(rgl)
y_hat = X%*%B_hat
open3d(windowRect = c(100,100,900,900),family = "serif")
color = rainbow(length(y_hat))[rank(y_hat)]
plot3d(educ,exper,wage,col = color,type = "s",size = 0.5,xlim =
c(0,20),ylim = c(0,60),zlim = c(-10,70),box = FALSE,axes = TRUE)
planes3d(B_hat[2],B_hat[3],-1,B_hat[1...
2010 Jun 23
1
Estimate of variance and prediction for multiple linear regression
...=rnorm(10,mean=5)
x1=rnorm(10,mean=2)
x2=rnorm(10)
lin=lm(y~x1+x2)
summary(lin)
## In the summary, 'Residual standard error: 1.017 on 7 degrees of freedom',
is 1.017 the estimate of the constant variance?
Q2:
beta0=lin$coefficients[1]
beta1=lin$coefficients[2]
beta2=lin$coefficients[3]
y_hat=beta0+beta1*x1+beta2*x2
## Is there any built-in function in R to obtain y_hat directly?
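A sketch of the built-in accessors (assuming the lm fit 'lin' above):

fitted(lin)    # same values as beta0 + beta1*x1 + beta2*x2
predict(lin)   # identical for the original data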
Q3:
If I want to apply this regression result to another dataset, that is, to new
x1 and x2, is the built-in function from Q2 still usable?
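For a new dataset, a sketch using predict() with a newdata argument (new_x1 and new_x2 are placeholders for the new predictor values):

new_x1 <- rnorm(5, mean = 2)
new_x2 <- rnorm(5)
predict(lin, newdata = data.frame(x1 = new_x1, x2 = new_x2))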
Thank you in advance!
2006 Jul 11
1
test regression against given slope for reduced major axis regression (RMA)
...hlf, p. 465/471
n <- length(x)
mydf <- n-2
## least square fit:
x2 <- (x-mean(x))^2
y2 <- (y-mean(y))^2
## regression (pedestrian solution):
xy <- (x-mean(x))*(y-mean(y))
slope1 <- sum(xy)/sum(x2)
intercept_a <- mean(y) - slope1 * mean(x)
## model data y_hat:
y_hat <- intercept_a + slope1 * x
## least squares of model data:
y_hat2 <- (y - y_hat)^2
s2yx <- sum(y_hat2) / (n-2)
sb <- sqrt(s2yx/sum(x2))
ts <- (slope1 - slope_2) / sb
pvalue <- 2 * pt(abs(ts), mydf, lower.tail = FALSE)
## 0.95 for one-tailed 0.975 for two-tai...
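For comparison, a sketch of the same least-squares slope test done with lm() (assuming x, y and the hypothesised slope slope_2 from earlier in the post):

fit <- lm(y ~ x)
slope1 <- coef(fit)[2]
sb <- summary(fit)$coefficients[2, "Std. Error"]
ts <- (slope1 - slope_2) / sb
pvalue <- 2 * pt(abs(ts), df = length(x) - 2, lower.tail = FALSE)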
2010 Mar 25
1
Manually calculate SVM
...'ve gotten used
to using the svm function in the e1071 package. It works great.
Now, I want to do/learn some more interesting stuff. (Perhaps my own
kernel and/or scoring system). So I want to better understand
1) how calculation of the kernel happens.
2) how to calculate the predicted value (y_hat) given a list of support
vectors and coefficients.
I've seen all the formulas and many of the books. I get most of it
conceptually. Where I'm having trouble is making the leap from concept
to actual use. Ideally, I'd love to code some of the basic stuff in R
or Perl from scratch. It...
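A sketch of computing y_hat by hand for a two-class radial-kernel fit from e1071, using the fitted object's support vectors (m$SV), coefficients (m$coefs), gamma and rho; the iris subset is only a placeholder, and scale = FALSE keeps the arithmetic simple:

library(e1071)
d <- droplevels(iris[iris$Species != "setosa", ])
m <- svm(Species ~ ., data = d, kernel = "radial", scale = FALSE)

x_new <- as.numeric(d[1, 1:4])                 # one observation to score
sq_dist <- rowSums(sweep(m$SV, 2, x_new)^2)    # squared distances to each support vector
K <- exp(-m$gamma * sq_dist)                   # radial kernel values
dec <- sum(m$coefs * K) - m$rho                # manual decision value
dec
attr(predict(m, d[1, ], decision.values = TRUE), "decision.values")

The sign of the manual decision value may be flipped relative to e1071's, depending on which class is treated as the first level.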
2005 Dec 05
1
Help
...I apologize if it is too simple a question for all. I have a multivariate
dataset with 7 independent variables and 1 dependent variable. There are 248
data points. I want to do an out-of-sample forecast, first using 156 points,
so I will have to start from the 157th point and calculate the 157th y_hat
value, continuing this way up to the 248th data point. Can anyone tell me how
I can do this with a for loop? Thanks a lot in advance.
Thanks & Regards,
SUMANTA BASAK.
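A sketch of one way to do the rolling forecast with a for loop (assuming a data frame mydat with the response y and predictors x1 to x7 in 248 rows; all names are placeholders):

y_hat <- rep(NA, 248)
for (i in 157:248) {
  fit <- lm(y ~ ., data = mydat[1:(i - 1), ])                    # fit on all earlier points
  y_hat[i] <- predict(fit, newdata = mydat[i, , drop = FALSE])   # forecast the i-th point
}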
2005 Jul 20
1
predict.lm - standard error of predicted means?
...1 2
0.2708064 0.7254615
predict.lm(model,newdata=data.frame(x=c(10,20)),se.fit=T,interval="prediction")$se.fit
1 2
0.2708064 0.7254615
I was surprised to find that the standard errors returned were in fact the
standard errors of the sampling distribution of Y_hat:
sqrt(MSE * (1/n + (x - x_bar)^2 / SS_x)),
not the standard errors of Y_new (predicted value):
sqrt(MSE * (1 + 1/n + (x - x_bar)^2 / SS_x)).
Is there a reason this quantity is called the "standard error of predicted
means" if it doesn't relate to the prediction distribution?
Turning to Neter e...
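A sketch of recovering the prediction-scale standard error from the pieces predict.lm already returns (toy data here, since the original model is not shown in full):

set.seed(1)
x <- 1:15; y <- 2 + 0.5 * x + rnorm(15)
model <- lm(y ~ x)
p <- predict(model, newdata = data.frame(x = c(10, 20)), se.fit = TRUE)
p$se.fit                                   # SE of the estimated mean response
sqrt(p$se.fit^2 + p$residual.scale^2)      # SE appropriate for a new observation Y_new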
2006 Jan 10
2
Obtaining the adjusted r-square given the regression coefficients
Hi people,
I want to obtain the adjusted r-square given a set of coefficients (without the intercept), and I don't know if there is a function that does it. Does such a function exist?
I know that if you fit a linear regression, you enter the dataset and "summary" gives the adjusted r-square. But that value is calculated using the coefficients that R obtained, and I want to use other coefficients.
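A sketch of computing the adjusted r-square by hand for user-supplied coefficients (y is the response, X a predictor matrix, b the coefficient vector; names are placeholders, and the usual n - p - 1 denominator for a model with an intercept is assumed, so the convention may need adjusting for a true no-intercept model):

adj_r2 <- function(y, X, b) {
  y_hat <- as.vector(X %*% b)                 # fitted values from the supplied coefficients
  rss <- sum((y - y_hat)^2)
  tss <- sum((y - mean(y))^2)
  n <- length(y); p <- length(b)
  1 - (rss / (n - p - 1)) / (tss / (n - 1))   # adjusted r-square
}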