similar to: Ridge Regression variable selection

Displaying 20 results from an estimated 400 matches similar to: "Ridge Regression variable selection"

2013 Apr 27
1
Selecting ridge regression coefficients for minimum GCV
Hi all, I have run a ridge regression as follows: reg=lm.ridge(final$l~final$lag1+final$lag2+final$g+final$u, lambda=seq(0,10,0.01)) Then I enter select(reg) and it returns: modified HKB estimator is 19.3409; modified L-W estimator is 36.18617; smallest value of GCV at 10. I think it means that it is advisable to
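
A minimal sketch of this workflow, assuming a data frame `final` with columns l, lag1, lag2, g and u as in the post; since the smallest GCV falls at the boundary value 10 of the grid seq(0,10,0.01), the lambda grid should be widened:

    library(MASS)

    ## Refit over a wider grid and read off the GCV-minimizing lambda
    ## directly from the fit object.
    reg <- lm.ridge(l ~ lag1 + lag2 + g + u, data = final,
                    lambda = seq(0, 50, 0.01))
    select(reg)                      # prints the HKB, L-W and GCV choices
    reg$lambda[which.min(reg$GCV)]   # lambda with smallest GCV
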
2009 Jun 04
0
help needed with ridge regression and choice of lambda with lm.ridge!!!
Hi, I'm a beginner in the field. I have to perform ridge regression with lm.ridge for many datasets, and I want to do it automatically. How can I choose lambda automatically? As said, right now I'm using the lm.ridge function from MASS, which I found quite simple and fast, and I've seen that among the returned values there are the HKB estimate of the ridge constant and the L-W
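
A hedged sketch of an automatic choice: lm.ridge already returns the HKB and L-W constants and one GCV value per lambda, so each dataset can be handled by a small helper (the data-frame layout with response column y is a hypothetical assumption):

    library(MASS)

    ## Pick lambda by minimum GCV; kHKB and kLW come back for free.
    auto.lambda <- function(dat, lambdas = seq(0, 50, 0.1)) {
      fit <- lm.ridge(y ~ ., data = dat, lambda = lambdas)
      c(GCV = lambdas[which.min(fit$GCV)], HKB = fit$kHKB, LW = fit$kLW)
    }
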
2011 Dec 05
1
finding interpolated values along an empirical parametric curve
Given the following data, I am plotting log.det ~ norm.beta, where the points depend on a parameter, lambda (but there is no functional form). I want to find the (x,y) positions along this curve corresponding to two special values of lambda: lambda.HKB <- 0.004275357 and lambda.LW <- 0.03229531, and draw reference lines at ~ -45 degrees (or normal to the curve) through these points. How can I do
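
One possible sketch: treat the curve as two functions of the parameter and use approx() to interpolate each coordinate at the special lambdas (vectors lambda, norm.beta and log.det of equal length are assumed from the post):

    ## Interpolate (x, y) on the parametric curve at a given lambda.
    xy.at <- function(l0) c(x = approx(lambda, norm.beta, xout = l0)$y,
                            y = approx(lambda, log.det,   xout = l0)$y)
    pt.HKB <- xy.at(0.004275357)
    pt.LW  <- xy.at(0.03229531)
    points(rbind(pt.HKB, pt.LW), pch = 19)   # mark the two points
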
2013 Apr 26
1
Regression coefficients
Hi all, I have run a ridge regression as follows: reg=lm.ridge(final$l~final$lag1+final$lag2+final$g+final$u, lambda=seq(0,10,0.01)) Then I enter select(reg) and it returns: modified HKB estimator is 19.3409; modified L-W estimator is 36.18617; smallest value of GCV at 10. I think it means that it is
2011 Aug 23
1
obtaining p-values for lm.ridge() coefficients (package 'MASS')
Dear all, I'm familiarising myself with ridge regressions in R and the following is bugging me: How does one get p-values for the coefficients obtained from MASS::lm.ridge() output (for a given lambda)? Consider the example below (adapted from PRA [1]): > require(MASS) > data(longley) > gr <- lm.ridge(Employed ~ ., longley, lambda = seq(0, 0.1, 0.001)) > plot(gr) > select(gr)
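
lm.ridge supplies no standard errors, so there is no built-in p-value; one heavily hedged workaround is a nonparametric bootstrap at a fixed lambda (the lambda below is a placeholder, not the value select() would actually report for these data):

    library(MASS)
    data(longley)

    set.seed(1)
    lam <- 0.05                      # hypothetical; use the select() choice
    boot.coef <- replicate(1000, {
      idx <- sample(nrow(longley), replace = TRUE)
      coef(lm.ridge(Employed ~ ., longley[idx, ], lambda = lam))
    })
    ## crude two-sided "p-values" from sign crossings of the resamples
    apply(boot.coef, 1, function(b) 2 * min(mean(b > 0), mean(b < 0)))
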
2009 Aug 21
1
applying summary() to an object created with ols()
Hello R-list, I am trying to calculate a ridge regression using first the *lm.ridge()* function from the MASS package and then applying the obtained Hoerl-Kennard-Baldwin (HKB) estimator as a penalty scalar to the *ols()* function provided by Frank Harrell in his Design package. It looks like this: > rrk1 <- lm.ridge(lnbcpc ~ lntex + lnbeerp + lnwinep + lntemp + pop, subset(aa,
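
A sketch of the described workflow using the rms package (the successor to Design); the data frame aa and variable names are taken from the post, so treat them as placeholders:

    library(MASS)
    library(rms)    # successor to the Design package

    rr  <- lm.ridge(lnbcpc ~ lntex + lnbeerp + lnwinep + lntemp + pop,
                    data = aa, lambda = seq(0, 10, 0.01))
    k   <- rr$kHKB                  # HKB ridge constant
    fit <- ols(lnbcpc ~ lntex + lnbeerp + lnwinep + lntemp + pop,
               data = aa, penalty = k)
    fit                             # note: summary() in rms needs datadist()
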
2013 Apr 30
0
Ridge regression
Hi all, I have run a ridge regression on a data set 'final' as follows: reg=lm.ridge(final$l~final$lag1+final$lag2+final$g+final$u, lambda=seq(0,10,0.01)) Then I enter : select(reg) and it returns: modified HKB estimator is 19.3409 modified L-W estimator is 36.18617 smallest value of GCV at 10 I think it
2012 Nov 27
4
Order function
I have a set of data with 2 columns: time, size. There are 20 sets of data. The data is looking at whether the size of a seed affects the time it takes to germinate. How do I then create a numerical variable called 'order' with values 1 to 20 in order to plot a graph of order against time?
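
A minimal sketch, assuming a data frame seeds with the two columns described:

    ## Assign ranks 1..20 by increasing seed size, then plot against time.
    seeds$order <- rank(seeds$size, ties.method = "first")
    plot(order ~ time, data = seeds)
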
2009 Mar 17
1
Likelihood of a ridge regression (lm.ridge)?
Dear all, I want to get the likelihood (or AIC or BIC) of a ridge regression model using lm.ridge from the MASS library. Yet, I can't really find it. As lm.ridge does not return a standard fit object, it doesn't work with functions like e.g. BIC (nlme package). Is there a way around it? I would calculate it myself, but I'm not sure how to do that for a ridge regression. Thank you in
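
lm.ridge indeed returns no log-likelihood; a hedged sketch is to rebuild a Gaussian log-likelihood by hand, with effective degrees of freedom tr[X(X'X + lambda*I)^(-1)X'] (the lambda here applies to the standardized predictors and need not match lm.ridge's internal scaling exactly):

    library(MASS)

    ridge.ic <- function(formula, data, lambda) {
      X <- scale(model.matrix(formula, data)[, -1])  # standardized predictors
      y <- model.response(model.frame(formula, data))
      n <- nrow(X)
      H <- X %*% solve(crossprod(X) + lambda * diag(ncol(X)), t(X))
      rss  <- sum(((diag(n) - H) %*% (y - mean(y)))^2)
      edf  <- sum(diag(H)) + 1                       # +1 for the intercept
      logL <- -n / 2 * (log(2 * pi * rss / n) + 1)
      c(logLik = logL, AIC = -2 * logL + 2 * edf,
        BIC = -2 * logL + log(n) * edf)
    }
    ridge.ic(Employed ~ ., longley, lambda = 0.05)   # lambda is hypothetical
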
2007 Apr 12
1
Question on ridge regression with R
Hi, I am working on a project about hospital efficiency. Due to the high multicollinearity of the data, I want to fit the model using ridge regression. However, I believe that the data from large hospitals (indicated by the number of patients they treat per year) are more accurate than those from small hospitals, and I want to put more weight on them. How do I do this with lm.ridge? I know I just need
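
lm.ridge does not advertise a weights argument, but a weighted ridge fit can be sketched by hand from the closed form beta = (X'WX + lambda*I)^(-1) X'Wy; X, y and w (for example, w proportional to annual patient counts) are hypothetical inputs:

    ## Weighted ridge: scale the rows of X and y by sqrt(w), then solve.
    wridge <- function(X, y, w, lambda) {
      Xw <- sqrt(w) * X              # row-wise scaling, length(w) == nrow(X)
      yw <- sqrt(w) * y
      solve(crossprod(Xw) + lambda * diag(ncol(X)), crossprod(Xw, yw))
    }
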
2009 Aug 01
2
Cox ridge regression
Hello, I have questions regarding penalized Cox regression using the survival package (functions coxph() and ridge()). I am using R 2.8.0 on Ubuntu Linux and survival package version 2.35-4. Question 1. Consider the following example from help(ridge): > fit1 <- coxph(Surv(futime, fustat) ~ rx + ridge(age, ecog.ps, theta=1), ovarian) As I understand, this builds a model in which `rx' is
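
For reference, the quoted example made self-contained (the ovarian data ship with the survival package); rx enters unpenalized while age and ecog.ps share the ridge penalty theta:

    library(survival)

    fit1 <- coxph(Surv(futime, fustat) ~ rx + ridge(age, ecog.ps, theta = 1),
                  data = ovarian)
    summary(fit1)
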
2005 Aug 24
1
lm.ridge
Hello, I posted this mail a few days ago but did it wrong; I hope it is right now. I have the following doubts related to lm.ridge from the MASS package, illustrated with the Longley example. First: I think the coefficients from lm(Employed~., data=longley) should equal the coefficients from lm.ridge(Employed~., data=longley, lambda=0). Why does this not happen?
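
They do agree at lambda = 0 once lm.ridge's internal standardization is undone, which is what the coef() method does; the raw $coef component stays on the scaled basis, hence the apparent mismatch. A quick check:

    library(MASS)

    coef(lm(Employed ~ ., data = longley))
    coef(lm.ridge(Employed ~ ., data = longley, lambda = 0))  # same values
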
2007 Apr 17
1
value of complexity parameter in ridge regression
Hi, what is the optimal range in which to search for a value of lambda when doing ridge regression? Can/should lambda be greater than 1? I have conflicting (or what appears conflicting to me) sources that use lambda >= 0 without any upper limit, but that makes the search space infinite, right? So perhaps my question is: is there an upper limit to lambda? Does the value of lambda convey
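
There is no theoretical upper bound (any lambda >= 0 is valid); a common practical sketch is a finite log-spaced grid, widened whenever the optimum lands on an edge (the data frame dat with response y is hypothetical):

    library(MASS)

    grid <- c(0, 10^seq(-4, 4, length.out = 200))   # finite log-spaced search
    fit  <- lm.ridge(y ~ ., data = dat, lambda = grid)
    best <- grid[which.min(fit$GCV)]
    if (best == max(grid)) warning("optimum on the boundary; extend the grid")
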
2008 May 07
1
use of sequence on ridge regression
Dear R users, I have a question about the use of the lambda sequence option in ridge regression. I'm trying to understand the use of this option when variables are highly linearly correlated. I'm running a model in which the variables HtShoes and Ht have high VIF values. My program is written below, but I'm not sure about the correct way of using the sequence option: library(faraway) data(seatpos)
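
A sketch of the usual pattern on these data: fit across the lambda sequence, inspect the coefficient traces, and let select() report the HKB, L-W and GCV choices (hipcenter is the response in seatpos, with the collinear HtShoes/Ht among the predictors):

    library(MASS)
    library(faraway)
    data(seatpos)

    fit <- lm.ridge(hipcenter ~ ., data = seatpos, lambda = seq(0, 50, 0.1))
    matplot(fit$lambda, t(fit$coef), type = "l",
            xlab = "lambda", ylab = "standardized coefficients")
    select(fit)
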
2012 Jul 06
4
Poisson Ridge Regression
Dear everyone, I'm dealing with a problem related to Poisson ridge regression. Can anyone help me by telling me whether any changes to the source code of "glm.fit" might help? -- Regards Umesh Khatri
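
Rather than patching glm.fit, one hedged alternative is glmnet, where alpha = 0 gives a pure ridge penalty for the Poisson family (x is a hypothetical predictor matrix, y a count response):

    library(glmnet)

    fit <- glmnet(x, y, family = "poisson", alpha = 0)
    coef(fit, s = 0.1)              # coefficients at a chosen lambda
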
2010 Dec 09
1
survival: ridge log-likelihood workaround
Dear all, I need to calculate a likelihood ratio test for ridge regression. In February I reported a bug where coxph returns the unpenalized log-likelihood for the final beta estimates of a ridge coxph regression. In high-dimensional settings ridge regression models usually fail for lower values of lambda. As a result, in such settings the ridge regressions have higher values of lambda (e.g.
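
A hedged reconstruction sketch: survival's ridge() uses the penalty theta/2 * sum(beta^2), so the penalized log-likelihood can be recovered from the reported unpenalized one (scale = FALSE keeps the penalty on the reported coefficient scale; the unpenalized rx term is excluded):

    library(survival)

    theta <- 1
    fit <- coxph(Surv(futime, fustat) ~ rx +
                   ridge(age, ecog.ps, theta = theta, scale = FALSE),
                 data = ovarian)
    pen.loglik <- fit$loglik[2] - theta / 2 * sum(coef(fit)[-1]^2)
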
2010 Feb 16
1
survival - ratio likelihood for ridge coxph()
It seems to me that R returns the unpenalized log-likelihood for the likelihood ratio test when a ridge regression Cox proportional hazards model is fitted. Is this as expected? In the example below, if I am not mistaken, fit$loglik[2] is the unpenalized log-likelihood for the final estimates of coefficients. I would expect to get the penalized log-likelihood. I would like to check if this is as expected.
2009 Dec 02
1
Ridge regression
Dear list, I have a couple of questions concerning ridge regression. I am using the lm.ridge(...) function in order to fit a model to my microarray data, thus *model=lm.ridge(...)*. I retrieve some coefficients and some scales for each gene. First of all, I would like to ask: the real coefficients of the model are not contained in the first component of the output but in the result of coef(model),
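
A sketch of the relationship, assuming `model` was fit with a single lambda as in the post: the $coef component is on the internally standardized scale, while coef(model) returns original-scale coefficients plus an intercept:

    library(MASS)

    beta.std  <- drop(model$coef)    # standardized scale
    beta.orig <- drop(coef(model))   # original scale, with intercept
    all.equal(unname(beta.std / model$scales), unname(beta.orig[-1]))
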
2000 Mar 28
2
Logistic ridge regression ...
Hi, I have some data (a very large amount) with a (0,1) response, where I want to minimise the errors in the betas rather than SS or deviance. So can anyone point me to a ridge regression function, or an equivalent, for such a logistic regression case? John
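
A hedged pointer for the logistic case: glmnet with alpha = 0 fits ridge-penalized logistic regression (x is a hypothetical predictor matrix, y the 0/1 response):

    library(glmnet)

    cvfit <- cv.glmnet(x, y, family = "binomial", alpha = 0)
    coef(cvfit, s = "lambda.min")
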
2005 Feb 16
2
R: ridge regression
Hi all, a technical question for the bright statisticians: my question involves ridge regression. Definitions: n = sample size of a data set; X is the matrix of data with, say, p variables; Y is the response vector. Z(i,j) = (X(i,j) - xbar(j)) / [(n-1)^0.5 * std(x(j))] and Y_new(i) = (Y(i) - ybar) / [(n-1)^0.5 * std(Y)] (note that I have scaled the Y vector as well). k is
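
A sketch of the standardization as defined above, assuming X is an n x p matrix and Y a length-n response vector; with this scaling Z'Z is the correlation matrix of X:

    n     <- nrow(X)
    Z     <- scale(X) / sqrt(n - 1)   # (x - xbar) / [sqrt(n-1) * sd(x)]
    Y.new <- scale(Y) / sqrt(n - 1)
    crossprod(Z)                      # equals cor(X)
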