Displaying 20 results from an estimated 5000 matches similar to: "ridge regression"
2010 Dec 09 (1) survival: ridge log-likelihood workaround
Dear all,
I need to calculate a likelihood ratio test for ridge regression. In February I reported a bug where coxph returns the unpenalized log-likelihood for the final beta estimates of a ridge coxph regression. In high-dimensional settings, ridge regression models usually fail for lower values of lambda. As a result, in such settings the ridge regressions end up with higher values of lambda (e.g.
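A minimal sketch of the kind of workaround the subject line suggests, assuming survival's ridge(theta) adds a quadratic penalty of theta/2 * sum(beta^2) over the penalised terms (the coefficient names below are assumptions; check names(coef(fit)) on your own fit):

library(survival)
fit <- coxph(Surv(futime, fustat) ~ rx + ridge(age, ecog.ps, theta = 1),
             data = ovarian)
theta <- 1
## names of the penalised coefficients are assumed here; verify with names(coef(fit))
pen <- coef(fit)[c("ridge(age)", "ridge(ecog.ps)")]
## subtract the assumed quadratic penalty theta/2 * sum(beta^2) by hand
penalised.loglik <- fit$loglik[2] - theta / 2 * sum(pen^2)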
2017 May 04 (4) lm() gives different results to lm.ridge() and SPSS
Hello,
I hope I am posting to the right place. I was advised to try this list by Ben Bolker (https://twitter.com/bolkerb/status/859909918446497795). I also posted this question to StackOverflow (http://stackoverflow.com/questions/43771269/lm-gives-different-results-from-lm-ridgelambda-0). I am a relative newcomer to R, but I wrote my first program in 1975 and have been paid to program in about
2010 Feb 16 (1) survival - ratio likelihood for ridge coxph()
It seems to me that R returns the unpenalized log-likelihood for the likelihood ratio test when a ridge Cox proportional hazards model is fitted. Is this as expected?
In the example below, if I am not mistaken, fit$loglik[2] is the unpenalized log-likelihood for the final coefficient estimates. I would expect to get the penalized log-likelihood, and would like to check whether this behaviour is intended.
2017 May 04 (2) lm() gives different results to lm.ridge() and SPSS
Hi Simon,
Yes, if I use coefficients() I get the same results for lm() and lm.ridge(). So that's consistent, at least.
Interestingly, the "wrong" number I get from lm.ridge()$coef agrees with the value from SPSS to 5 dp, which is an interesting coincidence if these numbers have no particular external meaning in lm.ridge().
Kind regards,
Nick
2002 Aug 04 (5) Pseudo R^2 for logit - really naive question
I am using GLM to calculate logit models based on cross-sectional data. I am now down to the hard work of making the results intelligible to very average readers. Is there any way to calculate a pseudo-analogue of the R^2 in standard linear regression, for use as a purely descriptive statistic of goodness of fit? Most of the readers of my report will be vaguely familiar and more comfortable with
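One common descriptive choice is McFadden's pseudo-R^2, 1 - logLik(model) / logLik(null model). A minimal sketch with simulated data (variable names are illustrative, not from the thread):

set.seed(1)
dat <- data.frame(x = rnorm(300))
dat$y <- rbinom(300, 1, plogis(0.8 * dat$x))
fit  <- glm(y ~ x, family = binomial, data = dat)
null <- update(fit, . ~ 1)                      # intercept-only model
## McFadden's pseudo-R^2
1 - as.numeric(logLik(fit)) / as.numeric(logLik(null))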
2017 May 05 (1) lm() gives different results to lm.ridge() and SPSS
Thanks, I was getting around to trying this, but got sidetracked by actual work...
Your analysis reproduces the SPSS unscaled estimates. It still remains to figure out how Nick got
> coefficients(lm(ZDEPRESSION ~ ZMEAN_PA * ZDIVERSITY_PA, data=s1))
  (Intercept)     ZMEAN_PA  ZDIVERSITY_PA  ZMEAN_PA:ZDIVERSITY_PA
   0.07342198  -0.39650356
2017 May 05 (1) lm() gives different results to lm.ridge() and SPSS
Hi John,
Thanks for the comment... but that appears to mean that SPSS has a big problem. I have always been told that the only way to include an interaction term in an SPSS regression is to compute the product variable by hand. But then it seems to be impossible to stop SPSS from re-standardizing the variable that corresponds to the interaction term. Am I missing something? Is there a way to perform the
2017 May 05 (6) lm() gives different results to lm.ridge() and SPSS
Hi,
Here is (I hope) all the relevant output from R.
> mean(s1$ZDEPRESSION, na.rm=T)
[1] -1.041546e-16
> mean(s1$ZDIVERSITY_PA, na.rm=T)
[1] -9.660583e-16
> mean(s1$ZMEAN_PA, na.rm=T)
[1] -5.430282e-15
> lm.ridge(ZDEPRESSION ~ ZMEAN_PA * ZDIVERSITY_PA, data=s1)$coef
   ZMEAN_PA  ZDIVERSITY_PA  ZMEAN_PA:ZDIVERSITY_PA
 -0.3962254     -0.3636026
2003 Sep 14 (3) Re: Logistic Regression
Christoph Lehman had problems with separated data in two-class logistic regression.
One useful little trick is to penalize the logistic regression using a quadratic penalty on the coefficients.
I am sure there are functions in the R contributed libraries to do this; otherwise it is easy to achieve via IRLS
using ridge regressions. Then even though the data are separated, the penalized
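A minimal self-contained sketch of that trick: logistic regression with a quadratic (ridge) penalty, fitted by IRLS. The lambda value and toy data are illustrative, and for brevity this version penalises the intercept as well:

ridge.logit <- function(X, y, lambda = 1, iter = 25) {
  beta <- rep(0, ncol(X))
  for (i in seq_len(iter)) {
    eta <- drop(X %*% beta)
    p   <- 1 / (1 + exp(-eta))
    w   <- p * (1 - p)
    z   <- eta + (y - p) / w              # IRLS working response
    ## ridge-penalised weighted least squares step:
    ## beta <- (X'WX + lambda I)^{-1} X'Wz
    beta <- drop(solve(crossprod(X, w * X) + lambda * diag(ncol(X)),
                       crossprod(X, w * z)))
  }
  beta
}
## perfectly separated toy data: unpenalised glm() diverges, this does not
X <- cbind(1, c(-2, -1, 1, 2))
y <- c(0, 0, 1, 1)
ridge.logit(X, y, lambda = 0.5)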
2002 Aug 05 (3) Formatting POSIXt values in plot axis labels
Hello.
I have an XYY series that I would like to graph with matplot() or some
other single function that will do the trick.
The X in question is a vector of POSIXt values obtained from strptime().
Is it possible to tell matplot() how to handle POSIXt x values?
I have examined the examples at
http://lark.cc.ukans.edu/~pauljohn/R/statsRus.html#5.22, but would prefer not to have to overlay the
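One hedged possibility (not necessarily what the original poster settled on): plot against the numeric time values, suppress the default x axis, and draw a date-time axis with axis.POSIXct(). The data below are simulated for illustration:

tt <- seq(as.POSIXct("2002-08-05 00:00"), by = "hour", length.out = 48)
Y  <- cbind(sin(seq_along(tt) / 5), cos(seq_along(tt) / 5))  # two Y series
matplot(as.numeric(tt), Y, type = "l", xaxt = "n",
        xlab = "time", ylab = "value")
axis.POSIXct(1, x = tt, format = "%d %b %H:%M")   # date-time tick labels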
2011 Aug 23 (1) obtaining p-values for lm.ridge() coefficients (package 'MASS')
Dear all
I'm familiarising myself with Ridge Regressions in R and the following
is bugging me: How does one get p-values for the coefficients obtained
from MASS::lm.ridge() output (for a given lambda)? Consider the
example below (adapted from PRA [1]):
> require(MASS)
> data(longley)
> gr <- lm.ridge(Employed ~ .,longley,lambda = seq(0,0.1,0.001))
> plot(gr)
> select(gr)
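lm.ridge() itself reports no standard errors. One hedged sketch, treating lambda as fixed and using the textbook approximation Var(b) = s^2 (X'X + lambda I)^{-1} X'X (X'X + lambda I)^{-1} on scaled data; p-values obtained this way ignore how lambda was chosen and should be read as descriptive only:

ridge.ztable <- function(X, y, lambda) {
  X   <- scale(X); y <- y - mean(y)
  XtX <- crossprod(X)
  A   <- solve(XtX + lambda * diag(ncol(X)))
  b   <- drop(A %*% crossprod(X, y))
  s2  <- sum((y - X %*% b)^2) / (nrow(X) - ncol(X))
  V   <- s2 * A %*% XtX %*% A    # approximate covariance, lambda held fixed
  se  <- sqrt(diag(V))
  cbind(beta = b, se = se, p = 2 * pnorm(-abs(b / se)))
}
data(longley)
ridge.ztable(as.matrix(longley[, -7]), longley$Employed, lambda = 0.05)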
2017 May 04 (0) lm() gives different results to lm.ridge() and SPSS
Hi Nick,
I think that the problem here is your use of $coef to extract the coefficients of the ridge regression. The help for lm.ridge states that coef is a "matrix of coefficients, one row for each value of lambda. Note that these are not on the original scale and are for use by the coef method."
I ran a small test with simulated data, code is copied below, and indeed the output from
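A small self-contained version of that test (simulated data; with lambda = 0 the coef() method should match lm(), while $coef stays on the internal scaled basis):

library(MASS)
set.seed(1)
d <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
d$y <- 1 + 2 * d$x1 - 0.5 * d$x2 + rnorm(100)
fit.lm <- lm(y ~ x1 + x2, data = d)
fit.rr <- lm.ridge(y ~ x1 + x2, data = d, lambda = 0)
coef(fit.lm)   # original scale
coef(fit.rr)   # the coef() method back-transforms: matches lm() at lambda = 0
fit.rr$coef    # internal scaled coefficients: not comparable to lm()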
2007 Apr 12 (1) Question on ridge regression with R
Hi,
I am working on a project about hospital efficiency. Due to the high multicollinearity of the data, I want to fit the model using ridge regression. However, I believe that the data from large hospitals (indicated by the number of patients they treat a year) are more accurate than those from small hospitals, and I want to put more weight on them. How do I do this with lm.ridge?
I know I just need
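As far as I know lm.ridge() takes no weights argument, so one hedged workaround is to fold case weights into the data by scaling each row of X and y by sqrt(w) before the ridge fit. The toy data and weights below are illustrative, and note this skips the internal standardisation lm.ridge() would do:

set.seed(2)
X <- matrix(rnorm(40), 20, 2)
colnames(X) <- c("beds", "staff")       # hypothetical predictors
y <- drop(X %*% c(1, -1)) + rnorm(20)
w <- runif(20, 1, 5)                    # e.g. proportional to patients per year
lambda <- 1
Xw <- sqrt(w) * X                       # scales row i of X by sqrt(w[i])
yw <- sqrt(w) * y
## weighted ridge solution: (X'WX + lambda I)^{-1} X'Wy
beta <- solve(crossprod(Xw) + lambda * diag(ncol(Xw)), crossprod(Xw, yw))
beta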
2017 May 05 (0) lm() gives different results to lm.ridge() and SPSS
I asked you before, but in case you missed it: Are you looking at the right place in SPSS output?
The UNstandardized coefficients should be comparable to R, i.e. the "B" column, not "Beta".
-pd
> On 5 May 2017, at 01:58 , Nick Brown <nick.brown at free.fr> wrote:
>
> Hi Simon,
>
> Yes, if I use coefficients() I get the same results for lm() and
2009 Aug 01 (2) Cox ridge regression
Hello,
I have questions regarding penalized Cox regression using the survival package (functions coxph() and ridge()). I am using R 2.8.0 on Ubuntu Linux and survival package version 2.35-4.
Question 1. Consider the following example from help(ridge):
> fit1 <- coxph(Surv(futime, fustat) ~ rx + ridge(age, ecog.ps, theta=1), ovarian)
As I understand, this builds a model in which `rx' is
2009 Aug 19 (1) Ridge regression [Repost]
Dear all,
For an ordinary ridge regression problem, I followed three different
approaches:
1. estimate beta without any standardization
2. estimate standardized beta (standardizing X and y) and then convert back
3. estimate beta using the lm.ridge() function
X<-matrix(c(1,2,9,3,2,4,7,2,3,5,9,1),4,3)
y<-as.matrix(c(2,3,4,5))
n<-nrow(X)
p<-ncol(X)
#Without standardization
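The excerpt cuts off here; a hedged completion of the three-way comparison for one fixed lambda, continuing the X, y, n, p defined above, might look like the following. Note lm.ridge() standardises internally (dividing by n rather than n - 1), so approach 3 may differ slightly from approach 2, and approach 1 generally differs from both:

lambda <- 1
## 1. no standardisation
b1 <- solve(t(X) %*% X + lambda * diag(p), t(X) %*% y)
## 2. standardise X and y, estimate, then convert back
Xs <- scale(X); ys <- scale(y)
bs <- solve(t(Xs) %*% Xs + lambda * diag(p), t(Xs) %*% ys)
b2 <- drop(bs) * sd(drop(y)) / apply(X, 2, sd)
## 3. lm.ridge() (its internal scaling uses n, not n - 1)
library(MASS)
b3 <- coef(lm.ridge(y ~ X, lambda = lambda))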
2003 May 08 (2) natural splines
Apologies if this is too obscure for R-help.
In package splines, ns(x,,knots,intercept=TRUE) produces an n by K+2
matrix N, the values of K+2 basis functions for the natural splines with K
(internal) knots, evaluated at x. It does this by first generating an
n by K+4 matrix B of unconstrained splines, then postmultiplying B by
H, a K+4 by K+2 representation of the nullspace of C (2 by K+4),
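The excerpt breaks off, but the stated dimensions are easy to check numerically; a minimal sketch with K = 2 internal knots, where ns() should return K + 2 = 4 columns:

library(splines)
x <- seq(0, 1, length.out = 50)                     # n = 50
N <- ns(x, knots = c(0.3, 0.6), intercept = TRUE)   # K = 2 internal knots
dim(N)   # 50 x 4, i.e. n by K + 2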
2017 May 05 (0) lm() gives different results to lm.ridge() and SPSS
I had no problems running regression models in SPSS and R that yielded the same results for these data.
The difference you are observing is from fitting different models. In R, you fitted:
res <- lm(DEPRESSION ~ ZMEAN_PA * ZDIVERSITY_PA, data=dat)
summary(res)
The interaction term is the product of ZMEAN_PA and ZDIVERSITY_PA. This is not a standardized variable itself and not the same as
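A hedged illustration of that point with simulated data (the variable names follow the thread; everything else is made up): the product of two z-scored variables is not itself z-scored, so to mimic software that standardises the product term, standardise it explicitly:

set.seed(3)
dat <- data.frame(ZMEAN_PA = rnorm(200), ZDIVERSITY_PA = rnorm(200))
dat$ZDEPRESSION <- -0.4 * dat$ZMEAN_PA - 0.4 * dat$ZDIVERSITY_PA + rnorm(200)
sd(dat$ZMEAN_PA * dat$ZDIVERSITY_PA)     # generally not 1
dat$ZINT <- drop(scale(dat$ZMEAN_PA * dat$ZDIVERSITY_PA))
res <- lm(ZDEPRESSION ~ ZMEAN_PA + ZDIVERSITY_PA + ZINT, data = dat)
summary(res)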
2017 May 05 (0) lm() gives different results to lm.ridge() and SPSS
Dear Nick,
On 2017-05-05, 9:40 AM, "R-devel on behalf of Nick Brown"
<r-devel-bounces at r-project.org on behalf of nick.brown at free.fr> wrote:
>> I conjecture that something in the vicinity of
>>   res <- lm(DEPRESSION ~ scale(ZMEAN_PA) + scale(ZDIVERSITY_PA) +
>>             scale(ZMEAN_PA * ZDIVERSITY_PA), data=dat)
>>   summary(res)
>> would reproduce the
2004 Feb 01 (5) Stepwise regression and PLS
Dear all,
I am a newcomer to R. I intend to use R to do stepwise regression and PLS with a data set (a 55x20 matrix, with one dependent and 19 independent variables). Based on the same data set, I have done the same work using SPSS and SAS. However, the results obtained with R differ considerably from those from SPSS or SAS.
In the case of stepwise regression, SPSS gave a model with 4 independent
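One hedged note with a sketch: R's step() selects by AIC, while SPSS and SAS stepwise procedures typically use F-to-enter/F-to-remove thresholds, and that difference alone can produce different selected models on the same data. The data below are simulated to match the described 55x20 shape; names are illustrative:

set.seed(4)
dat <- as.data.frame(matrix(rnorm(55 * 20), 55, 20))   # 55 x 20 toy data
names(dat) <- c("y", paste0("x", 1:19))                # one response, 19 predictors
full <- lm(y ~ ., data = dat)
sel  <- step(full, direction = "both", trace = 0)      # AIC-based, unlike SPSS
formula(sel)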