Displaying 20 results from an estimated 3000 matches similar to: "value of complexity parameter in ridge regression"
2008 May 06
1
mgcv::gam shrinkage of smooths
In Dr. Wood's book on GAM, he suggests in section 4.1.6 that it might be
useful to shrink a single smooth by adding S=S+epsilon*I to the penalty
matrix S. The context was the need to be able to shrink the term to zero if
appropriate. I'd like to do this in order to shrink the coefficients towards
zero (irrespective of the penalty for "wiggliness") - but not necessarily
all the
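A minimal sketch of this idea using mgcv's built-in shrinkage bases ("ts" and "cs"), which add exactly this kind of epsilon*I ridge to the smoothing penalty so a whole term can shrink to zero; the data here are simulated via gamSim():
library(mgcv)
set.seed(1)
dat <- gamSim(1, n = 200, verbose = FALSE)   # simulated example data
# bs = "ts" is a thin-plate spline whose penalty is augmented so the
# whole term, not just its wiggly part, can be shrunk toward zero
fit <- gam(y ~ s(x0, bs = "ts") + s(x1, bs = "ts") +
               s(x2, bs = "ts") + s(x3, bs = "ts"), data = dat)
summary(fit)   # terms with no real effect get edf near zero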
2009 Mar 17
1
Likelihood of a ridge regression (lm.ridge)?
Dear all,
I want to get the likelihood (or AIC or BIC) of a ridge regression model
using lm.ridge from the MASS library, yet I can't really find it. As
lm.ridge does not return a standard fit object, it doesn't work with
functions such as BIC() from the nlme package. Is there a way around it? I would
calculate it myself, but I'm not sure how to do that for a ridge regression.
Thank you in
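A hedged sketch of one common workaround: compute the residual sum of squares and the effective degrees of freedom tr(X(X'X + lambda*I)^-1 X') by hand, then plug them into the Gaussian AIC/BIC formulas (the centering/scaling below only approximates lm.ridge's internals):
X <- scale(as.matrix(longley[, -7]))   # predictors (Employed is column 7)
y <- longley$Employed - mean(longley$Employed)
lam <- 1                               # an illustrative lambda
H <- X %*% solve(crossprod(X) + lam * diag(ncol(X))) %*% t(X)
edf <- sum(diag(H))                    # effective degrees of freedom
rss <- sum((y - H %*% y)^2)
n <- nrow(X)
n * log(rss / n) + 2 * edf             # AIC (up to a constant)
n * log(rss / n) + log(n) * edf        # BIC (up to a constant)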
2009 Aug 01
2
Cox ridge regression
Hello,
I have questions regarding penalized Cox regression using the survival
package (functions coxph() and ridge()). I am using R 2.8.0 on Ubuntu
Linux and survival package version 2.35-4.
Question 1. Consider the following example from help(ridge):
> fit1 <- coxph(Surv(futime, fustat) ~ rx + ridge(age, ecog.ps, theta=1), ovarian)
As I understand, this builds a model in which `rx' is
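A brief sketch of running that example and inspecting the fit; only the variables inside ridge() receive the L2 penalty, while rx enters as an ordinary unpenalized covariate:
library(survival)
fit1 <- coxph(Surv(futime, fustat) ~ rx + ridge(age, ecog.ps, theta = 1),
              data = ovarian)
fit1          # one unpenalized coefficient (rx), two penalized ones
fit1$loglik   # partial log-likelihood at the null and final estimates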
2007 Apr 12
1
Question on ridge regression with R
Hi,
I am working on a project about hospital efficiency. Due to the high
multicollinearity of the data, I want to fit the model using ridge
regression. However, I believe that the data from large hospitals (indicated
by the number of patients they treat a year) are more accurate than those from
small hospitals, and I want to put more weight on them. How do I do this
with lm.ridge?
I know I just need
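lm.ridge() has no weights argument, so one hedged workaround is to fold the weights into the data: scale row i of X and y by sqrt(w_i) before the ridge fit. A sketch (w is a hypothetical vector of per-hospital weights; X is assumed centered, with no intercept column):
weighted_ridge <- function(X, y, w, lambda) {
  Xw <- sqrt(w) * X   # multiplies row i by sqrt(w[i])
  yw <- sqrt(w) * y
  solve(crossprod(Xw) + lambda * diag(ncol(X)), crossprod(Xw, yw))
}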
2005 Aug 24
1
lm.ridge
Hello, I posted this mail a few days ago but did it wrong; I hope it
is right now:
I have the following doubts related to lm.ridge, from the MASS package,
shown here using the Longley example:
First: I think the coefficients from lm(Employed~.,data=longley) should
equal the coefficients from lm.ridge(Employed~.,data=longley, lambda=0).
Why does this not happen?
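It does happen, but only through the coef() accessor: lm.ridge() centers and scales the predictors internally, so its $coef component lives on that transformed scale, while coef() converts back to the original scale. A quick check:
library(MASS)
coef(lm(Employed ~ ., data = longley))
coef(lm.ridge(Employed ~ ., data = longley, lambda = 0))  # should match
# the $coef component, by contrast, is for the scaled predictors and differs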
2013 Apr 27
1
Selecting ridge regression coefficients for minimum GCV
Hi all,
I have run a ridge regression as follows:
reg <- lm.ridge(l ~ lag1 + lag2 + g + u, data = final,
                lambda = seq(0, 10, 0.01))
Then I enter select(reg) and it returns:
modified HKB estimator is 19.3409
modified L-W estimator is 36.18617
smallest value of GCV at 10
I think it means that it is advisable to
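Since the smallest GCV value sits at the boundary of the grid (lambda = 10), one common reading is that the grid should be extended and the minimizing lambda extracted directly. A sketch, with variable names following the post:
reg <- lm.ridge(l ~ lag1 + lag2 + g + u, data = final,
                lambda = seq(0, 50, 0.01))   # wider grid than before
reg$lambda[which.min(reg$GCV)]               # lambda minimizing GCV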
2008 May 07
1
use of sequence on ridge regression
Dear R users, I have a question about the use of a lambda sequence in
ridge regression. I'm trying to understand the use of this option when
variables are highly linearly correlated. I'm running a model where the
variables HtShoes and Ht have high VIF values. My program is written
below, but I'm not sure about the correct way of using the sequence
option:
library(faraway)
data(seatpos)
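One plausible completion of this setup: hand lm.ridge() a grid of lambda values, plot the coefficient paths, and let select() summarize the HKB, L-W and GCV choices (hipcenter is the usual response for seatpos):
library(MASS)
fit <- lm.ridge(hipcenter ~ ., data = seatpos, lambda = seq(0, 100, 0.5))
matplot(fit$lambda, t(fit$coef), type = "l",
        xlab = "lambda", ylab = "standardized coefficients")
select(fit)   # HKB, L-W and smallest-GCV suggestions for lambda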
2012 Jul 06
4
Poisson Ridge Regression
Dear everyone
I'm dealing with a problem related to Poisson ridge regression. Can
anyone help me in this regard by telling me whether any changes in the
source code of "glm.fit" may help?
--
Regards
Umesh Khatri
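Rather than patching glm.fit, an L2-penalized Poisson fit is available off the shelf in glmnet (alpha = 0 selects the ridge penalty). A sketch with simulated data:
library(glmnet)
set.seed(1)
X <- matrix(rnorm(200 * 5), 200, 5)
y <- rpois(200, exp(drop(X %*% c(0.5, -0.3, 0, 0.2, 0))))
fit <- cv.glmnet(X, y, family = "poisson", alpha = 0)  # CV over lambda
coef(fit, s = "lambda.min")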
2010 Dec 09
1
survival: ridge log-likelihood workaround
Dear all,
I need to calculate a likelihood ratio test for ridge regression. In February I reported a bug where coxph returns the unpenalized log-likelihood for the final beta estimates of a ridge coxph regression. In high-dimensional settings ridge regression models usually fail for lower values of lambda. As a result, in such settings the ridge regressions have higher values of lambda (e.g.
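A hedged workaround until the bug is fixed: rebuild the penalized partial log-likelihood by subtracting the ridge penalty from the value coxph() reports, assuming the theta/2 * sum(beta^2) penalty form of survival::ridge(); scale = FALSE keeps the penalty on the raw coefficients so the reconstruction lines up:
library(survival)
theta <- 1
fit <- coxph(Surv(futime, fustat) ~
               ridge(age, ecog.ps, theta = theta, scale = FALSE),
             data = ovarian)
# penalized partial log-likelihood at the final estimates
fit$loglik[2] - theta / 2 * sum(coef(fit)^2)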
2012 Dec 27
1
Ridge Regression variable selection
Unlike L1 (lasso) regression or elastic net (mixture of L1 and L2), L2 norm
regression (ridge regression) does not select variables. Selection of
variables would not work properly, and it's unclear why you would want to
omit "apparently" weak variables anyway.
Frank
maths123 wrote
> I have a .txt file containing a dataset with 500 samples. There are 10
> variables.
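A small illustration of the point, using glmnet: the lasso (alpha = 1) sets some coefficients exactly to zero, while ridge (alpha = 0) shrinks them but keeps every variable in the model. Simulated data for illustration:
library(glmnet)
set.seed(1)
X <- matrix(rnorm(500 * 10), 500, 10)
y <- drop(X %*% c(2, -1, rep(0, 8))) + rnorm(500)
b_lasso <- as.matrix(coef(glmnet(X, y, alpha = 1, lambda = 0.5)))
b_ridge <- as.matrix(coef(glmnet(X, y, alpha = 0, lambda = 0.5)))
sum(b_lasso == 0)   # several exact zeros: variables dropped
sum(b_ridge == 0)   # none: ridge shrinks but never selects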
2010 Feb 16
1
survival - ratio likelihood for ridge coxph()
It seems to me that R returns the unpenalized log-likelihood for the likelihood ratio test when a ridge Cox proportional hazards model is fitted. Is this as expected?
In the example below, if I am not mistaken, fit$loglik[2] is the unpenalized log-likelihood for the final estimates of the coefficients. I would expect the penalized log-likelihood. I would like to check whether this is as expected.
2009 Dec 02
1
Ridge regression
Dear list,
I have a couple of questions concerning ridge regression. I am using the
lm.ridge(...) function in order to fit a model to my microarray data.
Thus *model=lm.ridge(...)*
I retrieve some coefficients and some scales for each gene. First of all, I
would like to ask: the real coefficients of the model are not included in
the first component of the output but in the result of coef(model),
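Indeed, $coef is on the internally centered and scaled predictor scale; coef(model) back-transforms and adds the intercept. A sketch of the relation, using longley in place of the microarray data:
library(MASS)
model <- lm.ridge(Employed ~ ., data = longley, lambda = 1)
b <- model$coef / model$scales       # slopes on the original scale
b0 <- model$ym - sum(b * model$xm)   # recovered intercept
cbind(coef(model), c(b0, b))         # the two columns agree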
2009 Aug 19
1
ridge regression
Dear all,
I considered an ordinary ridge regression problem. I followed three
different approaches:
1. estimate beta without any standardization
2. estimate standardized beta (standardizing X and y) and then convert
back
3. estimate beta using lm.ridge() function
X<-matrix(c(1,2,9,3,2,4,7,2,3,5,9,1),4,3)
y<-as.matrix(c(2,3,4,5))
n<-nrow(X)
p<-ncol(X)
#Without standardization
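A sketch of the three approaches, continuing from the X and y above (lambda is a hypothetical penalty value):
lambda <- 1
# 1. ridge estimate on the raw data (no intercept, no scaling)
b_raw <- solve(crossprod(X) + lambda * diag(p), crossprod(X, y))
# 2. standardize, estimate, then convert the slopes back
Xs <- scale(X)
ys <- y - mean(y)
b_std <- solve(crossprod(Xs) + lambda * diag(p), crossprod(Xs, ys))
b_back <- b_std / attr(Xs, "scaled:scale")
# 3. lm.ridge() for comparison; note it scales with an n (not n-1)
#    denominator, a common source of small discrepancies with step 2
library(MASS)
lm.ridge(drop(y) ~ X, lambda = lambda)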
2005 Feb 16
2
R: ridge regression
hi all
a technical question for those bright statisticians.
my question involves ridge regression.
definition:
n=sample size of a data set
X is the matrix of data with, say, p variables
Y is the response vector
Z(i,j) = ( X(i,j) - xbar(j) ) / [ (n-1)^0.5 * std(X(j)) ]
Y_new(i) = ( Y(i) - ybar ) / [ (n-1)^0.5 * std(Y) ] (note that I have
scaled the Y vector as well)
k is
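A sketch of that standardization in R; dividing by (n-1)^0.5 * std gives each column of Z unit length:
standardize <- function(M) {
  n <- nrow(M)
  ctr <- sweep(M, 2, colMeans(M))   # subtract column means
  sweep(ctr, 2, sqrt(n - 1) * apply(M, 2, sd), "/")
}
# each column of standardize(X) then has sum of squares equal to 1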
2000 Mar 28
2
Logistic ridge regression ...
Hi
I have some data (v. large amount) with a (0,1) response where I want to
minimise the errors in the betas rather than SS or deviance.
So can anyone point me to a ridge regression function or equivalent for
such a logistic regression case?
John
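A present-day sketch of one option: glmnet fits an L2-penalized (ridge) logistic regression when alpha = 0; the data here are simulated:
library(glmnet)
set.seed(1)
X <- matrix(rnorm(1000 * 10), 1000, 10)
y <- rbinom(1000, 1, plogis(drop(X %*% rnorm(10))))
fit <- cv.glmnet(X, y, family = "binomial", alpha = 0)
coef(fit, s = "lambda.1se")   # ridge-shrunk logistic coefficients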
2009 Aug 19
1
Ridge regression [Repost]
Dear all,
For an ordinary ridge regression problem, I followed three different
approaches:
1. estimate beta without any standardization
2. estimate standardized beta (standardizing X and y) and then convert
back
3. estimate beta using lm.ridge() function
X<-matrix(c(1,2,9,3,2,4,7,2,3,5,9,1),4,3)
y<-as.matrix(c(2,3,4,5))
n<-nrow(X)
p<-ncol(X)
#Without standardization
2011 Aug 23
1
obtaining p-values for lm.ridge() coefficients (package 'MASS')
Dear all
I'm familiarising myself with Ridge Regressions in R and the following
is bugging me: How does one get p-values for the coefficients obtained
from MASS::lm.ridge() output (for a given lambda)? Consider the
example below (adapted from PRA [1]):
> require(MASS)
> data(longley)
> gr <- lm.ridge(Employed ~ .,longley,lambda = seq(0,0.1,0.001))
> plot(gr)
> select(gr)
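lm.ridge() reports no p-values, partly because the ridge estimator is biased; a hedged sketch of the usual approximation, based on Var(b) = sigma^2 * W X'X W with W = (X'X + lambda*I)^-1, on centered and scaled data:
lam <- 0.05
X <- scale(as.matrix(longley[, -7]))   # predictors (Employed is column 7)
y <- longley$Employed - mean(longley$Employed)
W <- solve(crossprod(X) + lam * diag(ncol(X)))
b <- W %*% crossprod(X, y)             # ridge coefficients (scaled data)
H <- X %*% W %*% t(X)
sigma2 <- sum((y - H %*% y)^2) / (nrow(X) - sum(diag(H)))
se <- sqrt(sigma2 * diag(W %*% crossprod(X) %*% W))
2 * pnorm(-abs(b / se))                # approximate two-sided p-values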
2013 Mar 31
1
Rock Ridge for core/fs/iso9660
Hi,
I now have a retriever of Rock Ridge names from ISO directory
records and any Continuation Areas they may have.
Further, I have a detector for SUSP and Rock Ridge signatures.
Both have been tested in libisofs by comparing their results with
the Rock Ridge info as perceived by the library.
50 ISO images were tested, some bugs repaired, and now they are in sync.
(The macro case
2017 May 04
4
lm() gives different results to lm.ridge() and SPSS
Hello,
I hope I am posting to the right place. I was advised to try this list by Ben Bolker (https://twitter.com/bolkerb/status/859909918446497795). I also posted this question to StackOverflow (http://stackoverflow.com/questions/43771269/lm-gives-different-results-from-lm-ridgelambda-0). I am a relative newcomer to R, but I wrote my first program in 1975 and have been paid to program in about
2017 May 04
2
lm() gives different results to lm.ridge() and SPSS
Hi Simon,
Yes, if I use coefficients() I get the same results for lm() and lm.ridge(). So that's consistent, at least.
Interestingly, the "wrong" number I get from lm.ridge()$coef agrees with the value from SPSS to 5dp, which is an interesting coincidence if these numbers have no particular external meaning in lm.ridge().
Kind regards,
Nick
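A sketch that makes the coincidence less mysterious: $coef holds the slopes for the predictors as scaled internally by lm.ridge(), i.e., standardized coefficients, which is plausibly what SPSS prints in its standardized "Beta" column:
library(MASS)
gr <- lm.ridge(Employed ~ ., longley, lambda = 0)
gr$coef               # slopes for the internally scaled predictors
coef(gr)              # original-scale coefficients; these match lm()
gr$coef / gr$scales   # back-transformed slopes = coef(gr) minus intercept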