similar to: New glmnet package on CRAN

Displaying results from an estimated 4000 matches similar to: "New glmnet package on CRAN"

2010 Apr 04
0
Major glmnet upgrade on CRAN
glmnet_1.2 has been uploaded to CRAN. This is a major upgrade, with the following additional features: * Poisson family, with dense or sparse x * Cox proportional hazards family, for dense x * a wide range of cross-validation features. All models have several criteria for cross-validation. These include deviance, mean absolute error, misclassification error and "auc" for logistic or
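A minimal sketch of this kind of fit in R (the simulated count data and the choice of type.measure = "deviance" are illustrative assumptions, not taken from the announcement):

    library(glmnet)
    set.seed(1)
    n <- 100; p <- 20
    x <- matrix(rnorm(n * p), n, p)        # dense predictor matrix
    y <- rpois(n, lambda = exp(x[, 1]))    # simulated Poisson counts

    fit <- glmnet(x, y, family = "poisson")    # regularization path for a Poisson GLM
    cvfit <- cv.glmnet(x, y, family = "poisson", type.measure = "deviance")
    plot(cvfit)    # cross-validated deviance versus log(lambda)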
2010 Nov 04
0
glmnet_1.5 uploaded to CRAN
This is a new version of glmnet that incorporates some bug fixes and speedups. * a new convergence criterion which offers 10x or more speedups for saturated fits (mainly affects logistic, Poisson and Cox) * one can now predict directly from a cv.object - see the help files for cv.glmnet and predict.cv.glmnet * other new methods are deviance() for "glmnet" and coef() for
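For example, a cross-validated fit can be used for prediction roughly as follows (simulated Gaussian data; the variable names are illustrative):

    library(glmnet)
    set.seed(1)
    x <- matrix(rnorm(100 * 10), 100, 10)
    y <- rnorm(100)

    cvfit <- cv.glmnet(x, y)                           # cross-validated gaussian fit
    predict(cvfit, newx = x[1:5, ], s = "lambda.min")  # predict straight from the cv object
    coef(cvfit, s = "lambda.min")                      # coefficients at the CV-selected lambda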
2013 Mar 02
0
glmnet 1.9-3 uploaded to CRAN (with intercept option)
This update adds an intercept option (by popular request) - now one can fit a model without an intercept. Glmnet is a package that fits the regularization path for a number of generalized linear models, with "elastic net" regularization (tunable mixture of L1 and L2 penalties). Glmnet uses pathwise coordinate descent, and is very fast. The current list of models covered is:
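A small sketch of the new option, assuming simulated Gaussian data:

    library(glmnet)
    set.seed(1)
    x <- matrix(rnorm(100 * 10), 100, 10)
    y <- rnorm(100)

    fit0 <- glmnet(x, y, intercept = FALSE)                  # elastic net path with no intercept
    fit_en <- glmnet(x, y, alpha = 0.5, intercept = FALSE)   # alpha = 1 is the lasso, alpha = 0 is ridge
    coef(fit0)[1, ]    # the intercept row is identically zero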
2013 Apr 25
0
glmnet webinar Friday May 3 at 10am PDT
I will be giving a webinar on glmnet on Friday May 3, 2013 at 10am PDT (pacific daylight time) The one-hour webinar will consist of: - Intro to lasso and elastic net regularization, and coefficient paths - Why is glmnet so efficient and flexible - New features of the latest version of glmnet - Live glmnet demonstration - Question and Answer period To sign up for the webinar, please go to
2005 Nov 28
0
glmpath: L1 regularization path for glms
We have uploaded to CRAN the first version of glmpath, which fits the L1 regularization path for generalized linear models. The lars package fits the entire piecewise-linear L1 regularization path for the lasso. The coefficient paths for L1 regularized glms, however, are not piecewise linear. glmpath uses convex optimization - in particular, predictor-corrector methods - to fit the
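A minimal sketch of a glmpath fit, assuming the heart.data example set shipped with the package (argument details may differ in later versions):

    library(glmpath)
    data(heart.data)    # example data shipped with glmpath, with components x and y

    # exact L1 regularization path for a logistic regression
    fit <- glmpath(heart.data$x, heart.data$y, family = binomial)
    plot(fit)           # coefficient paths along the regularization path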
2013 Feb 10
0
glmnet_1.9-1 submitted to CRAN
This new version of glmnet has some bug fixes and some new features: * new arguments lower.limits=-Inf and upper.limits=Inf (defaults shown) for all the coefficients in glmnet. Users can provide limits on coefficients. See the documentation for glmnet. Typical usage: glmnet(x,y,lower=0). Here the argument is abbreviated, and by giving a single value, this uses the same value for all parameters.
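A sketch of the coefficient limits, written out with full argument names and simulated data (the data are an illustrative assumption):

    library(glmnet)
    set.seed(1)
    x <- matrix(rnorm(100 * 10), 100, 10)
    y <- rnorm(100)

    fit_pos <- glmnet(x, y, lower.limits = 0)    # all coefficients constrained to be non-negative
    # limits may also be given per coefficient, as a vector of length ncol(x)
    fit_box <- glmnet(x, y, lower.limits = -1, upper.limits = rep(1, ncol(x)))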
2003 Apr 30
0
Least Angle Regression packages for R
Least Angle Regression software: LARS "Least Angle Regression" ("LAR") is a new model selection algorithm; a useful and less greedy version of traditional forward selection methods. LAR is described in detail in a paper by Brad Efron, Trevor Hastie, Iain Johnstone and Rob Tibshirani, soon to appear in the Annals of Statistics. The paper, as well as R and Splus packages, are
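A short illustration with the released lars package (the simulated data and the plotted quantities are illustrative assumptions):

    library(lars)
    set.seed(1)
    x <- matrix(rnorm(100 * 10), 100, 10)
    y <- x[, 1] - 2 * x[, 2] + rnorm(100)

    fit <- lars(x, y, type = "lar")    # full least angle regression path; type = "lasso" gives the lasso path
    plot(fit)                          # coefficient paths
    coef(fit)                          # coefficients at each step of the path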
2010 Nov 19
0
glmnet_1.5.1 uploaded to CRAN
In glmnet_1.5 a poor default was set for the argument type, which caused the program to be very slow or even crash when nvar (p) is very large. The argument type (now called type.gaussian) has two options, "covariance" or "naive", and is used for the default family="gaussian" model (squared error loss). When type.gaussian="covariance", all inner-products
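A sketch of choosing the algorithm explicitly for a wide problem (the dimensions here are illustrative assumptions):

    library(glmnet)
    set.seed(1)
    n <- 200; p <- 5000
    x <- matrix(rnorm(n * p), n, p)    # wide problem: p much larger than n
    y <- rnorm(n)

    # "covariance" precomputes inner products, which pays off for modest p;
    # "naive" loops over observations and is the safer choice when p is very large
    fit <- glmnet(x, y, family = "gaussian", type.gaussian = "naive")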
2004 Jan 07
0
Statistical Learning and Datamining course based on R/Splus tools
Short course: Statistical Learning and Data Mining Trevor Hastie and Robert Tibshirani, Stanford University Sheraton Hotel Palo Alto, CA Feb 26-27, 2004 This two-day course gives a detailed overview of statistical models for data mining, inference and prediction. With the rapid developments in internet technology, genomics and other high-tech industries, we rely increasingly on data
2004 Jul 12
0
Statistical Learning and Data Mining Course
Short course: Statistical Learning and Data Mining Trevor Hastie and Robert Tibshirani, Stanford University Georgetown University Conference Center Washington DC September 20-21, 2004 This two-day course gives a detailed overview of statistical models for data mining, inference and prediction. With the rapid developments in internet technology, genomics and other high-tech industries, we
2005 Jan 04
0
Statistical Learning and Data Mining Course
Short course: Statistical Learning and Data Mining Trevor Hastie and Robert Tibshirani, Stanford University Sheraton Hotel, Palo Alto, California February 24 & 25, 2005 This two-day course gives a detailed overview of statistical models for data mining, inference and prediction. With the rapid developments in internet technology, genomics and other high-tech industries, we rely
2006 Mar 07
0
Statistical Learning and Datamining Course
Short course: Statistical Learning and Data Mining II: tools for tall and wide data Trevor Hastie and Robert Tibshirani, Stanford University Sheraton Hotel, Palo Alto, California, April 3-4, 2006. This two-day course gives a detailed overview of statistical models for data mining, inference and prediction. With the rapid developments in internet technology, genomics, financial
2006 Jan 14
0
Data Mining Course
Short course: Statistical Learning and Data Mining II: tools for tall and wide data Trevor Hastie and Robert Tibshirani, Stanford University Sheraton Hotel, Palo Alto, California, April 3-4, 2006. This two-day course gives a detailed overview of statistical models for data mining, inference and prediction. With the rapid developments in internet technology, genomics, financial
2007 May 17
0
New version 0.9-7 of lars package
I uploaded a new version of the lars package to CRAN, which incorporates some nontrivial changes. 1) lars now has normalize and intercept options, both defaulting to TRUE, which means the variables are scaled to have unit Euclidean norm, and an intercept is included in the model. Either or both can be set to FALSE. 2) lars has an additional type = "stepwise" option; now the list is
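A brief sketch of the new options, again with simulated data (names and data are illustrative assumptions):

    library(lars)
    set.seed(1)
    x <- matrix(rnorm(100 * 10), 100, 10)
    y <- x[, 1] + rnorm(100)

    fit_step <- lars(x, y, type = "stepwise", normalize = FALSE)   # forward stepwise, columns of x left unscaled
    fit_noint <- lars(x, y, type = "stepwise", intercept = FALSE)  # drop the intercept as well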