search for: penalizing

Displaying 20 results from an estimated 553 matches for "penalizing".

2007 Nov 30
1
Puzzling message: "no man files in this package"
Dear R developers, When building/checking my package (in R 2.6.1 under Windows) I run into some messages that I do not completely understand and that do not give me precise enough leads to pinpoint where the error in my package is. I would be very grateful for any suggestions. Did anyone else encounter the same problem before? When building or installing the package, I get the message (no error
2011 May 01
1
Different results of coefficients by packages penalized and glmnet
Dear R users: Recently, I have been learning to use penalized logistic regression. Two packages (penalized and glmnet) offer the lasso, so I wrote the code below. However, I got different coefficient estimates from the two. Can someone kindly explain? # lasso using penalized library(penalized) pena.fit2<-penalized(HRLNM,penalized=~CN+NoSus,lambda1=1,model="logistic",standardize=TRUE) pena.fit2
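A hedged sketch (mine, not from the thread) of one common source of such discrepancies: penalized() applies lambda1 to the unscaled log-likelihood, while glmnet() divides the log-likelihood by the sample size, so the two tuning parameters live on different scales. It assumes HRLNM, CN and NoSus are numeric variables in the workspace, as in the code above.

library(penalized)
library(glmnet)

n <- length(HRLNM)
X <- cbind(CN, NoSus)

# penalized minimizes  -loglik + lambda1 * sum(|beta|)
fit.pen <- penalized(HRLNM, penalized = ~ CN + NoSus,
                     lambda1 = 1, model = "logistic", standardize = TRUE)

# glmnet minimizes  -(1/n) loglik + lambda * sum(|beta|),
# so divide lambda1 by n before comparing; note the two packages also
# standardize differently, which changes the coefficients again
fit.net <- glmnet(X, HRLNM, family = "binomial",
                  standardize = TRUE, lambda = 1 / n)

coefficients(fit.pen, "all")
coef(fit.net)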
2009 Oct 14
1
different L2 regularization behavior between lrm, glmnet, and penalized?
The following R code using different packages gives the same results for a simple logistic regression without regularization, but different results with regularization. This may just be a matter of different scaling of the regularization parameters, but if anyone familiar with these packages has insight into why the results differ, I'd appreciate hearing about it. I'm new to
2011 May 20
1
Contrasts in Penalized Package
Hi, The "penalized" documentation says that "Unordered factors are turned into as many dummy variables as the factor has levels". This is done by a function in the package called contr.none. I'm trying to figure out how exactly is a model matrix created with this contrast option when the user calls the function with a formula. I typed "library(penalized) ;
2009 Oct 30
0
different L2 regularization behavior between lrm, glmnet, and penalized? (original question)
Dear Robert, The differences have to do with different scaling defaults. lrm by default standardizes the covariates to unit sd before applying penalization. penalized by default does not do any standardization, but if asked standardizes to unit second central moment. In your example: x = c(-2, -2, -2, -2, -1, -1, -1, 2, 2, 2, 3, 3, 3, 3) z = c(0, 0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 1) You
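To make the scaling point concrete, a minimal sketch of my own (not part of the original reply), using the x and z above; even with standardize = TRUE the two packages scale slightly differently (unit sd in lrm versus unit second central moment in penalized), so the penalties still need rescaling to agree exactly.

library(rms)          # lrm now lives in rms, the successor of Design
library(penalized)

x <- c(-2, -2, -2, -2, -1, -1, -1, 2, 2, 2, 3, 3, 3, 3)
z <- c( 0,  0,  0,  1,  0,  0,  1, 0, 1, 1, 0, 1, 1, 1)

# lrm standardizes internally before applying the quadratic penalty
fit.lrm <- lrm(z ~ x, penalty = 1)

# penalized() only standardizes when asked, and then to unit second
# central moment rather than unit sd
fit.pen <- penalized(z, penalized = ~ x, lambda2 = 1,
                     model = "logistic", standardize = TRUE)

coef(fit.lrm)
coefficients(fit.pen, "all")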
2005 Aug 13
1
Penalized likelihood-ratio chi-squared statistic: L.R. model for Goodness of fit?
Dear R list, From the lrm() binary logistic model we derived the G2 value, i.e. the likelihood-ratio chi-squared statistic reported as the model L.R. in the output of lrm(). How can this value be penalized for non-linearity (we used splines in the lrm function)? lrm.iRVI <- lrm(arson ~ rcs(iRVI,5), penalty=list(simple=10,nonlinear=100,nonlinear.interaction=4)) This didn’t work
2010 Oct 25
0
penalized regression analysis
Hi All, I am using the package 'penalized' to perform a multiple regression on a dataset of 33 samples and 9 explanatory variables. The analysis appears to have run as intended, and I have ended up with 4 explanatory variables and their respective regression coefficients. What I am struggling to understand is where I get the variance-explained information from and how I
2010 Aug 04
5
Question regarding significance of a covariate in a coxme survival model
Hi, I am running a Cox Mixed Effects Hazard model using the library coxme. I am trying to model time to onset (age_sym1) of thought problems (e.g. hearing voices) (sym1). As I have siblings in my dataset, I have decided to account for this by including a random effect for family (famid). My covariate of interest is Mother's diagnosis where a 0 is bipolar, 1 is control, and 2 is major
2011 Nov 03
0
L1 penalization for proportional odds logistic regression
Dear community, I am currently attempting to perform an (L1) penalized ordinal logistic regression with proportional odds. So far I have only found R packages that fit forward or backward continuation-ratio models with several penalizations. Does anyone have a clue which R package I could use? I am not even quite sure that penalized logistic regression with proportional odds has
2009 Sep 25
1
Penalized Logistic Regression - Query
Dear R users, Is there any package that I could use to perform penalized logistic regression (i.e. ridge/lasso regularization) that also allows an offset term in the model (i.e. a variable with a known coefficient of 1 rather than an estimated coefficient)? I couldn't find any package that would allow using offset terms. Any guidance will help. Many thanks! Axel.
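Not a definitive answer to the original question, but glmnet() does take an offset argument for exactly this known-coefficient-of-1 case; the data below are simulated placeholders.

library(glmnet)

set.seed(1)
X   <- matrix(rnorm(100 * 5), 100, 5)
off <- rnorm(100)                              # term with known coefficient 1
y   <- rbinom(100, 1, plogis(X[, 1] - X[, 2] + off))

fit <- glmnet(X, y, family = "binomial", alpha = 1, offset = off)   # alpha = 1: lasso

# the offset has to be supplied again at prediction time
predict(fit, newx = X, newoffset = off, s = 0.05, type = "response")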
2010 Feb 16
1
penalized package for ridge regression
Dear all, I am using "penalized" package for "Ridge" regression. I do not know how can I get regression coefficients using that package . Please help me. Thanks -- Linda Garcia [[alternative HTML version deleted]]
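A short sketch of how I would pull them out (the variable names are placeholders, not from the post): penalized() returns an S4 object, so use its coefficients()/coef() method rather than fit$coefficients.

library(penalized)

fit <- penalized(y, penalized = ~ x1 + x2 + x3, data = dat,
                 lambda2 = 10, model = "linear")   # lambda2 = ridge penalty

coefficients(fit, "all")    # all coefficients (with ridge none are exactly zero)
coef(fit, "nonzero")        # or only the non-zero ones, useful with a lasso fit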
2009 Sep 26
2
Design Package - Penalized Logistic Reg. - Query
Dear R experts, The lrm function in the Design package can perform penalized (ridge) logistic regression. It is my understanding that the ridge solutions are not equivalent under scaling of the inputs, so one normally standardizes the inputs. Do you know whether input standardization is done internally in lrm, or whether I would have to do it prior to applying this function? Also, as I'm new to R (coming
2010 Mar 09
1
penalized maximum likelihood estimation and logistf
Hi, I have two questions and would really appreciate any help. First, is penalized maximum likelihood estimation (Firth-type estimation) only suitable for a binary response (0/1 or TRUE/FALSE)? Can it be applied to multinomial logistic regression? If yes, what is the formula for LL and U(beta_i)? Can someone point me to the right reference? Second, when I used logistf on a dataset with
2007 May 10
2
Nonlinear constrains with optim
Dear All, I am dealing at the moment with optimization problems with nonlinear constraints. Regenoud is quite apt at solving that kind of problem, but the precision of the optimal values for the parameters is sometimes far from what I need. Optim seems to be more precise, but it can only accept box-constrained optimization problems. I read in the list archives that optim can also be used with
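A hedged sketch of the usual workaround (mine, not from the archives): fold the nonlinear constraint into the objective as a penalty term and let optim() solve the resulting unconstrained problem. The objective and constraint here are arbitrary placeholders.

objective  <- function(p) (p[1] - 1)^2 + (p[2] - 2)^2
constraint <- function(p) p[1]^2 + p[2]^2 - 1          # feasible when <= 0

penalized_obj <- function(p, rho = 1000) {
  objective(p) + rho * max(0, constraint(p))^2         # quadratic penalty
}

optim(c(0, 0), penalized_obj, method = "Nelder-Mead")$par
# increasing rho (or re-solving with rho * 10, starting from the last solution)
# enforces the constraint more tightly at the cost of a harder optimization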
2009 Aug 03
1
penalized logistic regression
Hi, R users, Is there any package for penalized logistic regression with more than two response classes? I read the manual for stepPlr, but it seems it's only for the binary case. Thank you, Annie
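Not certain this is what you need, but glmnet() handles more than two response classes via family = "multinomial"; the data below are simulated placeholders.

library(glmnet)

set.seed(1)
X <- matrix(rnorm(150 * 4), 150, 4)
y <- factor(sample(c("a", "b", "c"), 150, replace = TRUE))

# type.multinomial = "grouped" penalizes each variable's class coefficients jointly
fit <- cv.glmnet(X, y, family = "multinomial", type.multinomial = "grouped")
coef(fit, s = "lambda.min")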
2010 Aug 03
1
Penalized Gamma GLM
Hi, I couldn't find a package to fit a penalized (lasso/ridge) Gamma regression model. Does anybody know of one? Thanks in advance, Lars.
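A hedged pointer rather than a definite answer: recent glmnet versions (4.0 and later, if I remember correctly) accept any stats family object, which covers the Gamma case. The data below are simulated placeholders.

library(glmnet)

set.seed(1)
X <- matrix(rnorm(200 * 6), 200, 6)
y <- rgamma(200, shape = 2, rate = 2 * exp(-X[, 1]))    # mean = exp(X[, 1])

fit <- glmnet(X, y, family = Gamma(link = "log"), alpha = 0)   # alpha = 0: ridge
coef(fit, s = 0.1)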
2012 Jul 09
0
firth's penalized likelihood bias reduction approach
Hi all, I have a binary data set and am now confronted with a "separation" issue. I have two predictors, mood (neutral and sad) and game type (fair and non-fair). By "separation", I mean that in the non-fair game, whereas 20% (4/20) of sad-mood participants gave a positive response (coded as 1), none of the neutral-mood participants did so (0/20). Thus,
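A minimal sketch, assuming the 2 x 2 design described above (the variable names and the fabricated responses are mine, purely for illustration): logistf() fits Firth's penalized likelihood and keeps the estimates finite even though the neutral-mood / non-fair cell has no positive responses.

library(logistf)

dat <- expand.grid(id = 1:20,
                   mood = c("sad", "neutral"),
                   game = c("fair", "nonfair"))
dat$resp <- 0
idx <- which(dat$mood == "sad" & dat$game == "nonfair")
dat$resp[idx[1:4]] <- 1        # 4/20 positives for sad mood in the non-fair game

fit <- logistf(resp ~ mood * game, data = dat)
summary(fit)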
2003 Sep 14
3
Re: Logistic Regression
Christoph Lehman had problems with separated data in two-class logistic regression. One useful little trick is to penalize the logistic regression using a quadratic penalty on the coefficients. I am sure there are functions in the R contributed libraries to do this; otherwise it is easy to achieve via IRLS using ridge regressions. Then even though the data are separated, the penalized
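To make the IRLS-with-ridge suggestion concrete, here is a small self-contained sketch (mine, not from the original message); each IRLS step solves a ridge regression on the working response, and the quadratic penalty keeps the coefficients finite even when the classes are perfectly separated.

ridge_logistic <- function(X, y, lambda = 1, iters = 25) {
  X <- cbind(1, X)                          # add an intercept column
  p <- ncol(X)
  beta <- rep(0, p)
  D <- diag(c(0, rep(lambda, p - 1)))       # leave the intercept unpenalized
  for (i in seq_len(iters)) {
    eta <- drop(X %*% beta)
    mu  <- plogis(eta)
    w   <- mu * (1 - mu)                    # IRLS weights
    z   <- eta + (y - mu) / w               # working response
    beta <- solve(crossprod(X, w * X) + D, crossprod(X, w * z))
  }
  drop(beta)
}

# perfectly separated toy data: unpenalized ML diverges, this does not
x <- c(-2, -1, 1, 2)
y <- c( 0,  0, 1, 1)
ridge_logistic(cbind(x), y, lambda = 0.5)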
2006 Apr 13
3
Penalized Splines as BLUPs using lmer?
Dear R-list, I'm trying to use lmer from the lme4 package to fit a linear mixed model of the form Y = Xb + Zu + e, and I can't figure out how to control the covariance structure of u. I want u ~ N(0, sigma^2*I). More precisely, I'm trying to smooth a curve through data using the "Penalized Splines as BLUPs" method as described in Ruppert, Wand & Carroll (2003). So I have Z = [Z1
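Not an lmer answer, but a hedged sketch of the classical nlme workaround for the same book's recipe: pdIdent() gives the random spline coefficients exactly the N(0, sigma_u^2 * I) covariance you describe. The data, knot choice, and dummy grouping factor below are placeholders.

library(nlme)

set.seed(1)
x <- seq(0, 1, length = 200)
y <- sin(2 * pi * x) + rnorm(200, sd = 0.3)

K <- 20
knots <- quantile(unique(x), seq(0, 1, length = K + 2)[-c(1, K + 2)])
Z <- outer(x, knots, function(x, k) pmax(x - k, 0))   # truncated-line basis

grp <- factor(rep(1, length(y)))                      # single dummy group
fit <- lme(y ~ x, random = list(grp = pdIdent(~ Z - 1)))
fhat <- fitted(fit)                                   # the penalized-spline smooth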
2006 Feb 28
1
Collinearity in nls problem
Dear R-Help list, I have a nonlinear least squares problem which involves a changepoint; at the beginning, the outcome y is constant, and after a delay, t0, y follows a biexponential decay. I log-transform the data to stabilize the error variance. At time t < t0, my model is log(y_i) = log(exp(a0) + exp(b0)); at time t >= t0, the model is log(y_i) = log(exp(a0 - a1*(t_i - t0)) + exp(b0 - b1*(t_i -
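The post is cut off, so the exact parameterization after the changepoint is partly my guess, but a sketch of the model as I read it (with hypothetical start values and a hypothetical data frame dat) would be:

biexp_log <- function(t, a0, a1, b0, b1, t0) {
  td <- pmax(t - t0, 0)                 # 0 before the changepoint, t - t0 after
  log(exp(a0 - a1 * td) + exp(b0 - b1 * td))
}

# fit <- nls(log(y) ~ biexp_log(t, a0, a1, b0, b1, t0), data = dat,
#            start = list(a0 = 1, a1 = 0.5, b0 = 0, b1 = 0.05, t0 = 2))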