Similar to: "Version 0.93 of GAM package on CRAN"


2004 Aug 06
2
gam --- a new contributed package
I have contributed a "gam" library to CRAN, which implements Generalized Additive Models. The implementation closely follows the description in the GAM chapter (Chapter 7) of the "white" book "Statistical Models in S" (Chambers & Hastie (eds), 1992, Wadsworth), as well as the philosophy in "Generalized Additive Models" (Hastie & Tibshirani 1990,
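Not part of the announcement, but a minimal usage sketch, assuming a data frame df with response y and predictors x and z (all hypothetical names):
  library(gam)
  # smoothing-spline term s() and local-regression term lo(), as in the white book
  fit <- gam(y ~ s(x, 4) + lo(z, span = 0.5), family = gaussian, data = df)
  summary(fit)
  plot(fit, se = TRUE)   # partial-effect plots for each smooth term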
2005 Oct 12
0
step.gam- question
This is covered in the help file, but perhaps not clearly enough. The gam chapter in the "white book" has more details. step.gam moves around the terms in the scope argument in an ordered fashion. So if a scope element is ~ 1 + x + s(x,4) + s(x,8) and the formula at some stage is ~ x + .... then if direction="both", the routine checks both "1" and
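As a hedged illustration of the scope mechanism (df, y and x are made-up names; in recent releases of the gam package the function is spelled step.Gam):
  library(gam)
  fit <- gam(y ~ x, data = df)                     # hypothetical starting model
  # each scope element orders the candidate forms for one variable
  scope <- list("x" = ~ 1 + x + s(x, 4) + s(x, 8))
  step.gam(fit, scope = scope, direction = "both")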
2010 Apr 28
0
New package for ICA uploaded to CRAN
I have uploaded a new package to CRAN called ProDenICA. It fits ICA models directly via product-density estimation of the source densities. This package was promised on page 567 of the 2nd edition of our book 'Elements of Statistical Learning' (Hastie, Tibshirani and Friedman, 2009, Springer). Apologies that it is so late. The method fits each source density by a tilted Gaussian
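Not from the announcement: a rough sketch of a call on a toy two-source mixture. The argument names (k, whiten) and the returned unmixing matrix W are assumptions; check the package help pages for the actual interface.
  library(ProDenICA)
  set.seed(1)
  s <- cbind(runif(1000) - 0.5, rexp(1000) - 1)   # two non-Gaussian sources
  x <- s %*% matrix(c(1, 1, 1, -1), 2, 2)         # mix them
  fit <- ProDenICA(x, k = 2, whiten = TRUE)       # product-density ICA
  fit$W                                           # estimated unmixing matrix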
2006 Mar 07
0
Statistical Learning and Datamining Course
Short course: Statistical Learning and Data Mining II: tools for tall and wide data Trevor Hastie and Robert Tibshirani, Stanford University Sheraton Hotel, Palo Alto, California, April 3-4, 2006. This two-day course gives a detailed overview of statistical models for data mining, inference and prediction. With the rapid developments in internet technology, genomics, financial
2006 Jan 14
0
Data Mining Course
Short course: Statistical Learning and Data Mining II: tools for tall and wide data Trevor Hastie and Robert Tibshirani, Stanford University Sheraton Hotel, Palo Alto, California, April 3-4, 2006. This two-day course gives a detailed overview of statistical models for data mining, inference and prediction. With the rapid developments in internet technology, genomics, financial
2003 Sep 14
3
Re: Logistic Regression
Christoph Lehman had problems with separated data in two-class logistic regression. One useful little trick is to penalize the logistic regression using a quadratic penalty on the coefficients. I am sure there are functions in the R contributed libraries to do this; otherwise it is easy to achieve via IRLS using ridge regressions. Then even though the data are separated, the penalized
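The trick can be sketched directly as an IRLS loop with a ridge penalty at each weighted least squares step. This code is illustrative, not from the original thread; X is a model matrix, y a 0/1 response, and for simplicity any intercept column in X is penalized too.
  ridge.logistic <- function(X, y, lambda = 1, maxit = 25, tol = 1e-8) {
    # the quadratic penalty keeps coefficients finite even when the classes are separated
    p <- ncol(X); beta <- rep(0, p)
    for (it in seq_len(maxit)) {
      eta <- drop(X %*% beta)
      mu  <- 1 / (1 + exp(-eta))
      w   <- mu * (1 - mu)
      z   <- eta + (y - mu) / w                     # IRLS working response
      # penalized weighted least squares: a ridge regression at each iteration
      bnew <- drop(solve(crossprod(X, w * X) + lambda * diag(p), crossprod(X, w * z)))
      if (max(abs(bnew - beta)) < tol) { beta <- bnew; break }
      beta <- bnew
    }
    beta
  }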
2005 Dec 13
3
Age of an object?
It would be nice to have a date stamp on an object. In S/Splus this was always available, because objects were files. I have looked around, but I presume this information is not available.
2004 Jun 24
3
problem with model.matrix
This works:
> model.matrix(~I(pos>3), data=data.frame(pos=c(1:5)))
  (Intercept) I(pos > 3)TRUE
1           1              0
2           1              0
3           1              0
4           1              1
5           1              1
attr(,"assign")
[1] 0 1
attr(,"contrasts")
attr(,"contrasts")$"I(pos > 3)"
[1] "contr.treatment"
2004 Jan 07
0
Statistical Learning and Datamining course based on R/Splus tools
Short course: Statistical Learning and Data Mining Trevor Hastie and Robert Tibshirani, Stanford University Sheraton Hotel, Palo Alto, CA, Feb 26-27, 2004 This two-day course gives a detailed overview of statistical models for data mining, inference and prediction. With the rapid developments in internet technology, genomics and other high-tech industries, we rely increasingly on data
2004 Jul 12
0
Statistical Learning and Data Mining Course
Short course: Statistical Learning and Data Mining Trevor Hastie and Robert Tibshirani, Stanford University Georgetown University Conference Center Washington DC September 20-21, 2004 This two-day course gives a detailed overview of statistical models for data mining, inference and prediction. With the rapid developments in internet technology, genomics and other high-tech industries, we
2005 Jan 04
0
Statistical Learning and Data Mining Course
Short course: Statistical Learning and Data Mining Trevor Hastie and Robert Tibshirani, Stanford University Sheraton Hotel, Palo Alto, California February 24 & 25, 2005 This two-day course gives a detailed overview of statistical models for data mining, inference and prediction. With the rapid developments in internet technology, genomics and other high-tech industries, we rely
2010 Nov 04
0
glmnet_1.5 uploaded to CRAN
This is a new version of glmnet that incorporates some bug fixes and speedups. * a new convergence criterion which offers 10x or more speedups for saturated fits (mainly affects logistic, Poisson and Cox) * one can now predict directly from a cv.object - see the help files for cv.glmnet and predict.cv.glmnet * other new methods are deviance() for "glmnet" and coef() for
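A short sketch of the cv-object prediction mentioned above (the data here are invented, purely for illustration):
  library(glmnet)
  set.seed(1)
  x <- matrix(rnorm(100 * 20), 100, 20)
  y <- rbinom(100, 1, 0.5)
  cvfit <- cv.glmnet(x, y, family = "binomial")
  # predict and extract coefficients directly from the cv object
  predict(cvfit, newx = x[1:5, ], s = "lambda.min", type = "response")
  coef(cvfit, s = "lambda.1se")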
2003 Apr 30
0
Least Angle Regression packages for R
Least Angle Regression software: LARS "Least Angle Regression" ("LAR") is a new model selection algorithm; a useful and less greedy version of traditional forward selection methods. LAR is described in detail in a paper by Brad Efron, Trevor Hastie, Iain Johnstone and Rob Tibshirani, soon to appear in the Annals of Statistics. The paper, as well as R and Splus packages, are
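Once the package is installed, a minimal sketch looks like this (simulated data, illustrative only):
  library(lars)
  set.seed(1)
  x <- matrix(rnorm(50 * 10), 50, 10)
  y <- drop(x %*% c(3, 1.5, rep(0, 8))) + rnorm(50)
  fit <- lars(x, y, type = "lar")   # the least angle regression path
  plot(fit)                         # coefficient profiles along the path
  coef(fit)                         # coefficients at each step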
2008 Jun 02
0
New glmnet package on CRAN
glmnet is a package that fits the regularization path for linear, two- and multi-class logistic regression models with "elastic net" regularization (tunable mixture of L1 and L2 penalties). glmnet uses pathwise coordinate descent, and is very fast. Some of the features of glmnet: * by default it computes the path at 100 uniformly spaced (on the log scale) values of the
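A hedged usage sketch of the elastic-net mixture (alpha = 1 gives the lasso, alpha = 0 ridge; the data are made up):
  library(glmnet)
  set.seed(1)
  x <- matrix(rnorm(200 * 50), 200, 50)
  y <- rbinom(200, 1, 0.5)
  # alpha sets the L1/L2 mixture; the path is computed over a log-spaced lambda grid
  fit <- glmnet(x, y, family = "binomial", alpha = 0.5)
  plot(fit, xvar = "lambda")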
2010 Apr 04
0
Major glmnet upgrade on CRAN
glmnet_1.2 has been uploaded to CRAN. This is a major upgrade, with the following additional features: * poisson family, with dense or sparse x * Cox proportional hazards family, for dense x * wide range of cross-validation features. All models have several criteria for cross-validation. These include deviance, mean absolute error, misclassification error and "auc" for logistic or
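For the new families and cross-validation criteria listed above, a sketch along these lines should apply (simulated Poisson data, illustrative only):
  library(glmnet)
  set.seed(1)
  x <- matrix(rnorm(200 * 30), 200, 30)
  y <- rpois(200, exp(x[, 1] - x[, 2]))
  # poisson family with cross-validation; type.measure selects the CV criterion
  cvfit <- cv.glmnet(x, y, family = "poisson", type.measure = "deviance")
  plot(cvfit)            # CV curve with error bars over the lambda path
  cvfit$lambda.min       # lambda minimizing the chosen criterion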