2012 Mar 07
sparsenet: a new package for sparse model selection
We have put a new package sparsenet on CRAN.
Sparsenet fits regularization paths for sparse model selection via coordinate descent,
using a penalized least-squares framework and a non-convex penalty.
The package is based on our JASA paper
Rahul Mazumder, Jerome Friedman and Trevor Hastie: SparseNet: Coordinate Descent with Non-Convex Penalties (JASA 2011)
http://www.stanford.edu/~hastie/Papers/Sparsenet/jasa_MFH_final.pdf
We use Zhang's MC+ penalty to impose sparsity in model selection. This penalty
parametrizes a family ranging between...
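As a rough sketch of the intended usage (illustrative only: the data below are simulated, and the calls follow my reading of the package documentation, with sparsenet() fitting the grid of solutions and cv.sparsenet() selecting the tuning parameters by cross-validation):

library(sparsenet)
set.seed(1)
x <- matrix(rnorm(100 * 20), 100, 20)             # simulated predictors
y <- drop(x[, 1:3] %*% c(3, -2, 1)) + rnorm(100)  # sparse true signal plus noise
fit <- sparsenet(x, y)       # regularization paths over (lambda, gamma)
plot(fit)                    # coefficient paths, one panel per gamma
cvfit <- cv.sparsenet(x, y)  # cross-validation to pick lambda and gamma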
2013 Apr 02
softImpute_1.0 uploaded to CRAN
SoftImpute is a new package for matrix completion - i.e. for imputing missing values in matrices.
SoftImpute was written by Rahul Mazumder and me.
softImpute uses squared-error loss with nuclear-norm regularization - one can think of it as
the "lasso" for matrix approximation - to find a low-rank approximation to the observed entries in the matrix.
This low-rank approximation is then used to impute the missing entries.
sof...
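A minimal sketch of the workflow (simulated data; the function names follow my reading of the package documentation, with softImpute() fitting the regularized low-rank approximation and complete() filling in the missing entries):

library(softImpute)
set.seed(1)
xfull <- matrix(rnorm(50 * 20), 50, 20)            # a complete matrix, for illustration
xmiss <- xfull
xmiss[sample(length(xmiss), 200)] <- NA            # punch out 20% of the entries
fit <- softImpute(xmiss, rank.max = 5, lambda = 1) # nuclear-norm-regularized low-rank fit
ximp <- complete(xmiss, fit)                       # impute the missing entries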