similar to: Different results of coefficients by packages penalized and glmnet

Displaying 20 results from an estimated 100 matches similar to: "Different results of coefficients by packages penalized and glmnet"

2007 Nov 30
1
Puzzling message: "no man files in this package"
Dear R developers, When building/checking my package (in R 2.6.1 under Windows) I run into some messages that I do not completely understand and that do not give me precise enough leads to pinpoint where the error in my package is. I would be very grateful for any suggestions. Did anyone else encounter the same problem before? When building or installing the package, I get the message (no error
2017 Jun 20
5
fitting cosine curve
Hi R users, I have a question about fitting a cosine curve. I don't know how to set the approximate starting values. Besides, does the method work for a sine curve as well? Thanks. Part of the dataset follows: y=c(16.82, 16.72, 16.63, 16.47, 16.84, 16.25, 16.15, 16.83, 17.41, 17.67, 17.62, 17.81, 17.91, 17.85, 17.70, 17.67, 17.45, 17.58, 16.99, 17.10) t=c(7, 37, 58, 79, 96,
2017 Jun 21
1
fitting cosine curve
Using a more stable nonlinear modeling tool will also help, but the key is to get the periodicity right. y=c(16.82, 16.72, 16.63, 16.47, 16.84, 16.25, 16.15, 16.83, 17.41, 17.67, 17.62, 17.81, 17.91, 17.85, 17.70, 17.67, 17.45, 17.58, 16.99, 17.10) t=c(7, 37, 58, 79, 96, 110, 114, 127, 146, 156, 161, 169, 176, 182, 190, 197, 209, 218, 232, 240) lidata <- data.frame(y=y, t=t) #I use the
2017 Jun 20
0
fitting cosine curve
Hi lily, You can get fairly good starting values just by eyeballing the curves: plot(y) lines(supsmu(1:20,y)) lines(0.6*cos((1:20)/3+0.6*pi)+17.2) Jim On Wed, Jun 21, 2017 at 9:17 AM, lily li <chocold12 at gmail.com> wrote: > Hi R users, > > I have a question about fitting a cosine curve. I don't know how to set the > approximate starting values. Besides, does the method
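A minimal nls() sketch along these lines, assuming t is a day-of-year and the cycle is annual (period 365 is my assumption; the thread does not state it); the starting values are eyeballed as Jim suggests, with the amplitude taken as half the range and the offset as the mean:

y <- c(16.82, 16.72, 16.63, 16.47, 16.84, 16.25, 16.15, 16.83, 17.41, 17.67,
       17.62, 17.81, 17.91, 17.85, 17.70, 17.67, 17.45, 17.58, 16.99, 17.10)
t <- c(7, 37, 58, 79, 96, 110, 114, 127, 146, 156, 161, 169, 176, 182, 190,
       197, 209, 218, 232, 240)
# fit y = A*cos(2*pi*(t - phi)/365) + C; phi = 180 puts the peak near the observed maximum
fit <- nls(y ~ A * cos(2 * pi * (t - phi) / 365) + C,
           start = list(A = diff(range(y)) / 2, phi = 180, C = mean(y)))
summary(fit)  # the same form covers a sine curve, since sin is just a shifted cos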
2012 May 05
0
penalized quantile regression (rq.fit.lasso)
Dear all: I have a question about how to get the optimal estimate of coefficients using penalized quantile regression (the LASSO penalty in quantile regression defined in Koenker 2005). In R, I found that both rq(y ~ x, method="lasso", lambda = 30) and rq.fit.lasso(x, y, tau = 0.5, lambda = 1, beta = .9995, eps = 1e-06) can give the estimates. But I didn't find a way using either of
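A hedged sketch of the quantreg call on toy data (the original x and y are not shown in the post); lambda would normally be chosen by comparing fits over a grid, since as far as I know quantreg has no built-in cross-validation for the lasso path:

library(quantreg)
set.seed(1)
x <- matrix(rnorm(100 * 5), 100, 5)          # illustrative data only
y <- x[, 1] - 2 * x[, 2] + rnorm(100)
fit <- rq(y ~ x, tau = 0.5, method = "lasso", lambda = 30)
coef(fit)  # penalized median-regression coefficients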
2005 Jan 19
1
recursive penalized regression
Hi, Few days ago I posted a question to r-sig-finance, which I thought would be an easy one. To my surprise I have received no replies, which makes me think that it is either harder than I thought, or that it makes no sense. I am reposting the message (with some modifications) on the R-help in a hope to get some leads, suggestions for alternatives, etc. My apologies to those who had seen this on
2010 Aug 03
1
Penalized Gamma GLM
Hi, I couldn't find a package to fit a penalized (lasso/ridge) Gamma regression model. Does anybody know of one? Thanks in advance, Lars.
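One option today (it postdates this 2010 post): glmnet from version 4.0 on accepts any stats::family object, so a lasso or ridge Gamma regression can be fit directly. A sketch on simulated data:

library(glmnet)
set.seed(1)
x <- matrix(rnorm(200 * 10), 200, 10)
y <- rgamma(200, shape = 2, rate = 2 / exp(x[, 1]))  # positive response, mean exp(x1)
fit <- glmnet(x, y, family = Gamma(link = "log"), alpha = 1)  # alpha = 0 gives ridge
coef(fit, s = 0.1)  # coefficients at a chosen penalty level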
2009 Aug 03
1
penalized logistic regression
Hi R users, Is there any package for penalized logistic regression with more than two response classes? I read the manual for stepPlr, but it seems to handle only the binary case. Thank you, Annie
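glmnet covers this case via family = "multinomial"; a minimal sketch on simulated data, with cv.glmnet choosing the penalty:

library(glmnet)
set.seed(1)
x <- matrix(rnorm(300 * 4), 300, 4)
y <- factor(sample(c("a", "b", "c"), 300, replace = TRUE))  # 3 response classes
cvfit <- cv.glmnet(x, y, family = "multinomial")
coef(cvfit, s = "lambda.min")  # one coefficient vector per class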
2011 May 20
1
Contrasts in Penalized Package
Hi, The "penalized" documentation says that "Unordered factors are turned into as many dummy variables as the factor has levels". This is done by a function in the package called contr.none. I'm trying to figure out how exactly is a model matrix created with this contrast option when the user calls the function with a formula. I typed "library(penalized) ;
2007 Jun 10
0
penalized cox regression
Hi, What is the function to calculate a penalized Cox regression? frailtyPenal in the frailtypack R package imposes a maximum of 2 strata. I want to use a function that reduces all my variables without stratifying them in advance. I look forward to your reply. carol
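One route (not from this thread): glmnet fits a penalized Cox model with no stratification at all, shrinking every covariate; recent glmnet versions accept a survival::Surv response directly. A sketch on toy data:

library(glmnet)
library(survival)
set.seed(1)
x <- matrix(rnorm(100 * 6), 100, 6)
time <- rexp(100, rate = exp(x[, 1] / 2))   # toy survival times
status <- rbinom(100, 1, 0.8)               # 1 = event, 0 = censored
fit <- cv.glmnet(x, Surv(time, status), family = "cox")
coef(fit, s = "lambda.min")  # sparse coefficients; zeros are dropped variables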
2009 Sep 25
1
Penalized Logistic Regression - Query
Dear R users, Is there any package that I could use to perform penalized logistic regression (i.e. ridge/lasso regularization) that also allows an offset term in the model (i.e. a variable with a known coefficient of 1 rather than an estimated coefficient)? I couldn't find any package that allows offset terms. Any guidance will help. Many thanks! Axel.
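For the record, glmnet does take an offset argument (for both fitting and prediction), which is exactly a term with its coefficient fixed at 1. A minimal sketch on simulated data:

library(glmnet)
set.seed(1)
x <- matrix(rnorm(200 * 5), 200, 5)
off <- rnorm(200)                    # known-coefficient term
y <- rbinom(200, 1, plogis(x[, 1] + off))
fit <- glmnet(x, y, family = "binomial", offset = off)
# predictions must supply the offset again via newoffset
predict(fit, newx = x, newoffset = off, s = 0.05, type = "response")[1:5]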
2010 Feb 16
1
penalized package for ridge regression
Dear all, I am using the "penalized" package for ridge regression. I do not know how I can get the regression coefficients using that package. Please help me. Thanks -- Linda Garcia
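If I recall the penalized API correctly, the fitted object is S4 and coefficients() extracts the estimates, with which = "all" including the zeroed ones. A sketch on toy data:

library(penalized)
set.seed(1)
dat <- data.frame(y = rnorm(50), x1 = rnorm(50), x2 = rnorm(50))
fit <- penalized(dat$y, penalized = ~ x1 + x2, data = dat, lambda2 = 1)  # ridge
coefficients(fit, "all")  # intercept plus both penalized coefficients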
2010 Oct 25
0
penalized regression analysis
Hi All, I am using the package 'penalized' to perform a multiple regression on a dataset of 33 samples and 9 explanatory variables. The analysis appears to have performed as outlined, and I have ended up with 4 explanatory variables and their respective regression coefficients. What I am struggling to understand is where I get the variance-explained information from and how I
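The package does not report variance explained directly, as far as I know; a common workaround (my sketch, not from the thread) is to compute R^2 from the fitted values of the selected model, bearing in mind that it is optimistic after variable selection:

r2 <- function(y, yhat) 1 - sum((y - yhat)^2) / sum((y - mean(y))^2)
# with a penalized fit object `fit` and response vector `y` (hypothetical names):
# r2(y, fitted(fit))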
2012 Jul 09
0
firth's penalized likelihood bias reduction approach
hi all, I have a binary data set and am now confronted with a "separation" issue. I have two predictors, mood (neutral and sad) and game type (fair and non-fair). By "separation", I mean that in the non-fair game, whereas 20% (4/20) of sad-mood participants presented a positive response (coded as 1), none of the neutral-mood participants did so (0/20). Thus,
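Firth's approach is implemented in the logistf package and yields finite estimates under exactly this kind of separation. A sketch simulating the 2 x 2 design described above (the fair-game cells are filled in at random, since the post does not give them):

library(logistf)
set.seed(1)
d <- expand.grid(rep = 1:20, mood = c("neutral", "sad"), game = c("fair", "nonfair"))
# non-fair game: 4/20 positives for sad mood, 0/20 for neutral (the separated cell)
d$resp <- ifelse(d$game == "nonfair",
                 as.integer(d$mood == "sad" & d$rep <= 4),
                 rbinom(nrow(d), 1, 0.5))
fit <- logistf(resp ~ mood * game, data = d)
summary(fit)  # finite penalized-likelihood estimates despite the zero cell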
2003 Apr 20
1
survreg penalized likelihood?
What objective function is maximized by survreg with the default Weibull model? I'm getting finite parameters in a case that has the likelihood maximized at infinity, so it can't be a simple maximum likelihood. Consider the following: ############################# > set.seed(3) > Stress <- rep(1:3, each=3) > ch.life <- exp(9-3*Stress) > simLife <- rexp(9,
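A hedged completion of the truncated simulation (the rexp() rate is my guess at what the post intended: exponential lifetimes with mean equal to the characteristic life), plus the survreg call in question:

library(survival)
set.seed(3)
Stress <- rep(1:3, each = 3)
ch.life <- exp(9 - 3 * Stress)           # characteristic life per stress level
simLife <- rexp(9, rate = 1 / ch.life)   # assumed: mean lifetime = ch.life
fit <- survreg(Surv(simLife) ~ Stress)   # default dist = "weibull", all events
summary(fit)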
2009 Sep 26
2
Design Package - Penalized Logistic Reg. - Query
Dear R experts, The lrm function in the Design package can perform penalized (ridge) logistic regression. It is my understanding that the ridge solutions are not equivalent under scaling of the inputs, so one normally standardizes the inputs. Do you know whether input standardization is done internally in lrm, or whether I would have to do it prior to applying this function? Also, as I'm new to R (coming
2005 Aug 13
1
Penalized likelihood-ratio chi-squared statistic: L.R. model for Goodness of fit?
Dear R list, From the lrm() binary logistic model we derived the G2 value, i.e. the likelihood-ratio chi-squared statistic, given as "L.R. model" in the output of lrm(). How can this value be penalized for non-linearity (we used splines in the lrm function)? lrm.iRVI <- lrm(arson ~ rcs(iRVI,5), penalty=list(simple=10,nonlinear=100,nonlinear.interaction=4)) This didn't work
2010 Mar 09
1
penalized maximum likelihood estimation and logistf
Hi, I got two questions and would really appreciate any help from here. First, is penalized maximum likelihood estimation (Firth-type estimation) only suitable for a binary response (0/1 or TRUE/FALSE)? Can it be applied to multinomial logistic regression? If yes, what is the formula for LL and U(beta_i)? Can someone point me to the right reference? Second, when I used logistf on a dataset with
2009 Oct 30
0
different L2 regularization behavior between lrm, glmnet, and penalized? (original question)
Dear Robert, The differences have to do with different scaling defaults. lrm by default standardizes the covariates to unit sd before applying penalization. penalized by default does not do any standardization, but if asked it standardizes on the unit second central moment. In your example: x = c(-2, -2, -2, -2, -1, -1, -1, 2, 2, 2, 3, 3, 3, 3) z = c(0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1) You
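The two scalings named above differ only in the n-1 versus n divisor, which is easy to verify on the posted x:

x <- c(-2, -2, -2, -2, -1, -1, -1, 2, 2, 2, 3, 3, 3, 3)
sd(x)                          # unit-sd scaling: divisor n - 1 (lrm's default)
sqrt(mean((x - mean(x))^2))    # second central moment: divisor n (penalized)
# the same nominal lambda therefore implies slightly different effective penalties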
2009 Oct 14
1
different L2 regularization behavior between lrm, glmnet, and penalized?
The following R code using different packages gives the same results for a simple logistic regression without regularization, but different results with regularization. This may just be a matter of different scaling of the regularization parameters, but if anyone familiar with these packages has insight into why the results differ, I'd appreciate hearing about it. I'm new to