similar to: Equivalent of S> assign( , ,frame=1)

Displaying 17 results from an estimated 5000 matches similar to: "Equivalent of S> assign( , ,frame=1)"

2007 Apr 05
1
Logistic/Cox regression: Parameter estimates directly from model matrix
Hi out there,

Is there a way to get the estimated coefficients in a logistic / Cox regression without having to specify a 'formula', but by only giving the model matrix? Example for Cox regression:

## predictors
n <- 50
q1 <- rnorm(n)
q2 <- rgamma(n, 2, 2)
Z <- cbind(q1, q2)
## response
ttf <- rexp(n)
tf <- round(runif(n))
## compute estimates
res <- coxph(Surv(ttf,
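One way to do this (a minimal sketch, not from the thread itself): coxph() accepts a matrix on the right-hand side of the formula, so the model matrix Z can be passed directly without writing out each column.

library(survival)
set.seed(1)
n  <- 50
q1 <- rnorm(n)
q2 <- rgamma(n, 2, 2)
Z  <- cbind(q1, q2)      # model matrix of predictors
ttf <- rexp(n)           # follow-up times
tf  <- round(runif(n))   # 0/1 event indicator
## a matrix on the right-hand side contributes one coefficient per column
res <- coxph(Surv(ttf, tf) ~ Z)
coef(res)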
2005 Dec 13
3
Age of an object?
It would be nice to have a date stamp on an object. In S/Splus this was always available, because objects were files. I have looked around, but I presume this information is not available.

--------------------------------------------------------------------
Trevor Hastie                        hastie at stanford.edu
Professor, Department of Statistics, Stanford University
Phone:
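One workaround (a hedged sketch, not from the thread; the helper name is made up): since R objects live in memory rather than in files, a creation time can be recorded explicitly as an attribute.

## hypothetical helper: stamp an object with its creation time
stamp <- function(x) {
  attr(x, "created") <- Sys.time()
  x
}
z <- stamp(rnorm(10))
attr(z, "created")   # date stamp recorded at creation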
2004 Jun 24
3
problem with model.matrix
This works:

> model.matrix(~I(pos>3),data=data.frame(pos=c(1:5)))
  (Intercept) I(pos > 3)TRUE
1           1              0
2           1              0
3           1              0
4           1              1
5           1              1
attr(,"assign")
[1] 0 1
attr(,"contrasts")
attr(,"contrasts")$"I(pos > 3)"
[1] "contr.treatment"
2004 Aug 06
2
gam --- a new contributed package
I have contributed a "gam" library to CRAN, which implements "Generalized Additive Models". This implementation follows closely the description in the GAM chapter 7 of the "white" book "Statistical Models in S" (Chambers & Hastie (eds), 1992, Wadsworth), as well as the philosophy in "Generalized Additive Models" (Hastie & Tibshirani 1990,
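For orientation (a minimal sketch with simulated data, not from the announcement), the package fits GAMs through a formula interface with s() for smoothing splines and lo() for loess terms.

library(gam)
set.seed(1)
d <- data.frame(x = runif(100), z = runif(100))
d$y <- sin(2 * pi * d$x) + d$z + rnorm(100, sd = 0.2)
## smoothing-spline term for x, local-regression term for z
fit <- gam(y ~ s(x, df = 4) + lo(z), data = d)
summary(fit)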
2002 Feb 20
1
plot.hclust: strange behaviour with "manufactured" hclust object
I've been trying to get plot.hclust to work with an hclust object I created and have not had much success. It seems that there is some "hidden" characteristic of an hclust object that I can't see. This is most easily seen in the following example, where plot.hclust works on one object, but when this object is "dumped" and then re-read, plot.hclust no longer works. Is
2002 Feb 21
0
plot.hclust: strange behaviour with "manufactured"
This worked for me with your example:

source("dumpdata.R")
storage.mode(x.hc$merge) <- "integer"
plot(x.hc)

(R-1.4.1 compiled from source on WinNT4.)

Andy

> -----Original Message-----
> From: Hugh Chipman [mailto:hachipma at icarus.math.uwaterloo.ca]
> Sent: Wednesday, February 20, 2002 5:32 PM
> To: andy_liaw at merck.com
> Cc: r-help at stat.math.ethz.ch
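The "hidden" characteristic is that plot.hclust expects the $merge component to have integer storage mode, and a dump()/source() round trip brings it back as double. A minimal sketch of a hand-built hclust object (the data are illustrative, not from the thread):

## three leaves, two merges: (1,2) first, then that cluster with 3
hc <- list(merge  = matrix(c(-1, 1, -2, -3), ncol = 2),
           height = c(1, 2),
           order  = c(1, 2, 3),
           labels = c("a", "b", "c"))
class(hc) <- "hclust"
storage.mode(hc$merge) <- "integer"  # without this, plot() fails
plot(hc)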
2005 Aug 09
8
Digest reading is tedious
Like many, I am sure, I get R-Help in digest form. It's easy enough to browse the subject lines, but then if an entry interests you, you have to embark on this tedious search or scroll to find it. It would be great to have a "clickable" digest, where the topics list is a set of pointers, and clicking on a topic takes you to that entry. I can think of at least one way to do this via
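One possible approach (a sketch only; not necessarily what the poster had in mind, and the helper name is made up): render the digest as HTML, with the topics list linking to per-message anchors.

## hypothetical sketch: build a clickable HTML index from subject lines
make_index <- function(subjects, file = "digest.html") {
  links   <- sprintf('<li><a href="#msg%d">%s</a></li>',
                     seq_along(subjects), subjects)
  anchors <- sprintf('<h3 id="msg%d">%s</h3>',
                     seq_along(subjects), subjects)
  writeLines(c("<ul>", links, "</ul>", anchors), file)
}
make_index(c("Age of an object?", "problem with model.matrix"))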
2004 Jan 07
0
Statistical Learning and Datamining course based on R/Splus tools
Short course: Statistical Learning and Data Mining Trevor Hastie and Robert Tibshirani, Stanford University Sheraton Hotel Palo Alto, CA Feb 26-27, 2004 This two-day course gives a detailed overview of statistical models for data mining, inference and prediction. With the rapid developments in internet technology, genomics and other high-tech industries, we rely increasingly on data
2004 Jul 12
0
Statistical Learning and Data Mining Course
Short course: Statistical Learning and Data Mining Trevor Hastie and Robert Tibshirani, Stanford University Georgetown University Conference Center Washington DC September 20-21, 2004 This two-day course gives a detailed overview of statistical models for data mining, inference and prediction. With the rapid developments in internet technology, genomics and other high-tech industries, we
2005 Jan 04
0
Statistical Learning and Data Mining Course
Short course: Statistical Learning and Data Mining Trevor Hastie and Robert Tibshirani, Stanford University Sheraton Hotel, Palo Alto, California February 24 & 25, 2005 This two-day course gives a detailed overview of statistical models for data mining, inference and prediction. With the rapid developments in internet technology, genomics and other high-tech industries, we rely
2006 Mar 07
0
Statistical Learning and Datamining Course
Short course: Statistical Learning and Data Mining II: tools for tall and wide data Trevor Hastie and Robert Tibshirani, Stanford University Sheraton Hotel, Palo Alto, California, April 3-4, 2006. This two-day course gives a detailed overview of statistical models for data mining, inference and prediction. With the rapid developments in internet technology, genomics, financial
2006 Jan 14
0
Data Mining Course
Short course: Statistical Learning and Data Mining II: tools for tall and wide data Trevor Hastie and Robert Tibshirani, Stanford University Sheraton Hotel, Palo Alto, California, April 3-4, 2006. This two-day course gives a detailed overview of statistical models for data mining, inference and prediction. With the rapid developments in internet technology, genomics, financial
2013 Dec 01
0
MOOC on Statistical Learning with R
Rob Tibshirani and I are offering a MOOC in January on Statistical Learning. This "massive open online course" is free, and is based entirely on our new book "An Introduction to Statistical Learning with Applications in R" (James, Witten, Hastie, Tibshirani 2013, Springer). http://www-bcf.usc.edu/~gareth/ISL/ The pdf of the book will also be free. The course, hosted on Open edX, consists of
2010 Nov 04
0
glmnet_1.5 uploaded to CRAN
This is a new version of glmnet that incorporates some bug fixes and speedups.

* a new convergence criterion which offers 10x or more speedups for saturated fits (mainly affects logistic, Poisson and Cox)
* one can now predict directly from a cv.glmnet object - see the help files for cv.glmnet and predict.cv.glmnet
* other new methods are deviance() for "glmnet" and coef() for
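For illustration (a minimal sketch with simulated data), predicting directly from the cross-validated fit:

library(glmnet)
set.seed(1)
x <- matrix(rnorm(100 * 20), 100, 20)
y <- rnorm(100)
cvfit <- cv.glmnet(x, y)
## predict straight from the cv object at the CV-chosen lambda
predict(cvfit, newx = x[1:5, ], s = "lambda.min")
coef(cvfit, s = "lambda.1se")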
2003 Apr 30
0
Least Angle Regression packages for R
Least Angle Regression software: LARS "Least Angle Regression" ("LAR") is a new model selection algorithm; a useful and less greedy version of traditional forward selection methods. LAR is described in detail in a paper by Brad Efron, Trevor Hastie, Iain Johnstone and Rob Tibshirani, soon to appear in the Annals of Statistics. The paper, as well as R and Splus packages, are
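A minimal usage sketch (simulated data; the lars() function takes the predictor matrix and response directly):

library(lars)
set.seed(1)
x <- matrix(rnorm(100 * 10), 100, 10)
y <- x[, 1] - 2 * x[, 2] + rnorm(100)
## type = "lar" traces the Least Angle Regression path
fit <- lars(x, y, type = "lar")
plot(fit)    # coefficient paths as variables enter
coef(fit)    # coefficients at each step of the path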
2008 Jun 02
0
New glmnet package on CRAN
glmnet is a package that fits the regularization path for linear, two- and multi-class logistic regression models with "elastic net" regularization (tunable mixture of L1 and L2 penalties). glmnet uses pathwise coordinate descent, and is very fast. Some of the features of glmnet: * by default it computes the path at 100 uniformly spaced (on the log scale) values of the
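A minimal sketch of an elastic-net fit (simulated data; alpha sets the L1/L2 mix):

library(glmnet)
set.seed(1)
x <- matrix(rnorm(100 * 20), 100, 20)
y <- rbinom(100, 1, 0.5)
## alpha = 1 is the lasso, alpha = 0 is ridge; 0.5 mixes the two
fit <- glmnet(x, y, family = "binomial", alpha = 0.5)
plot(fit)  # coefficients along the regularization path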
2011 Apr 20
0
glmnet_1.6 uploaded to CRAN
We have submitted glmnet_1.6 to CRAN. This version has an improved convergence criterion, and it also uses a variable screening algorithm that dramatically reduces the time to convergence (while still producing the exact solutions). The speedups in some cases are by a factor of 20 to 50, depending on the particular problem and loss function. See our paper