search for: shrinkage

Displaying 20 results from an estimated 103 matches for "shrinkage".

2005 Feb 15
1
shrinkage estimates in lme
Hello. Slope estimates in lme are shrinkage estimates, which pull the OLS slope estimates toward the population estimate; the degree of shrinkage depends on the group sample size and the distance between the group-based estimate and the overall population estimate. Although these shrinkage estimates are said to be more precise with respect to t...
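A minimal sketch of the shrinkage being described, using nlme's built-in Orthodont data (the dataset and variable names are illustrative stand-ins, not from the original post):

  library(nlme)

  ## separate OLS fit per subject: no pooling, no shrinkage
  ols <- lmList(distance ~ age | Subject, data = Orthodont)

  ## mixed model: subject-level slopes are shrunk toward the population slope
  mix <- lme(distance ~ age, random = ~ age | Subject, data = Orthodont)

  ## lme slopes sit between each OLS slope and the fixed effect
  cbind(ols = coef(ols)$age, lme = coef(mix)$age)
  fixef(mix)["age"]   # the population slope the group slopes are pulled toward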
2008 May 06
1
mgcv::gam shrinkage of smooths
...zero. I.e., my informal prior is to keep the contribution of a specific term small. 1) Is adding eps*I to the penalty matrix an effective way to achieve this goal? 2) How do I accomplish this in practice using mgcv::gam? Thanks.
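One built-in alternative to hand-editing the penalty matrix is mgcv's shrinkage smoothers (bs="ts" or bs="cs"), which also penalize the smooth's null space so that a whole term can be shrunk toward zero. A minimal sketch on mgcv's own simulated data:

  library(mgcv)
  set.seed(1)
  dat <- gamSim(1, n = 400)   # example data generator shipped with mgcv

  ## thin plate regression splines with shrinkage: whole terms can go to ~0
  fit <- gam(y ~ s(x0, bs = "ts") + s(x1, bs = "ts") + s(x2, bs = "ts"),
             data = dat)
  summary(fit)                # weakly supported terms show EDF near zero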
2005 Jun 01
0
determine the shrinkage threshold in PAMR?
1. According to the PAMR documentation, the shrinkage threshold is determined by cross-validation. Does this mean that the user need not tune any parameter? 2. I tried two applications using PAMR, and the results are very disappointing. Attached are the cross-validation results. You can see that the classification errors are relatively high (0.2 at the b...
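For context, pamr does run the cross-validation but leaves the final threshold choice to the user, read off the CV curve. A sketch with made-up data (pamr expects x as genes x samples):

  library(pamr)
  set.seed(1)
  x <- matrix(rnorm(1000 * 40), nrow = 1000)   # 1000 genes, 40 samples
  y <- factor(rep(c("A", "B"), each = 20))
  mydata <- list(x = x, y = y)

  fit   <- pamr.train(mydata)     # fits over a grid of thresholds
  cvfit <- pamr.cv(fit, mydata)   # CV error at each threshold
  pamr.plotcv(cvfit)              # the user still picks the threshold here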
2008 Jan 02
0
How to select a reasonable shrinkage coefficient in stepplr?
Dear R-users, I am using stepplr for L2-regularized logistic regression. Since the number of attributes is too large, I discarded interaction terms. Everything is fine, but the one problem I have faced is that I cannot choose a good shrinkage coefficient (lambda). If CV is the best way to estimate it, can you please explain in detail how to select lambda in stepplr using CV? Is any procedure other than CV available? Thanks.
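Not stepplr itself, but a closely related route: glmnet with alpha = 0 fits the same kind of L2-regularized logistic regression, and cv.glmnet automates the cross-validated choice of lambda. A sketch on simulated data:

  library(glmnet)
  set.seed(1)
  x <- matrix(rnorm(200 * 20), nrow = 200)
  y <- rbinom(200, 1, plogis(x[, 1] - x[, 2]))

  cvfit <- cv.glmnet(x, y, family = "binomial", alpha = 0)  # alpha = 0: pure L2
  cvfit$lambda.min                # lambda minimizing the CV deviance
  coef(cvfit, s = "lambda.min")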
2005 Apr 19
2
cross validation and parameter determination
Hi all, In Tibshirani's PNAS paper about nearest shrunken centroid analysis of microarrays (PNAS vol 99:6567), they used cross-validation to choose the amount of shrinkage used in the model, and then tested the performance of the model with the cross-validated shrinkage on a separate independent testing set. If I don't have the luxury of an independent testing set, can I just use the cross-validation performance as the performance estimate? In other words, can I...
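The usual caution is that CV error reused for tuning is optimistically biased as a performance estimate; nested cross-validation keeps the two roles separate. A base-R sketch of the structure only (the fitting and tuning calls are placeholders, not PAM itself):

  set.seed(1)
  n <- 100
  folds <- sample(rep(1:5, length.out = n))   # outer folds
  outer.err <- numeric(5)

  for (k in 1:5) {
    train.idx <- which(folds != k)
    test.idx  <- which(folds == k)
    ## inner CV on the training fold only, to pick the shrinkage threshold:
    ## best.thr     <- tune.by.inner.cv(data[train.idx, ])  # placeholder
    ## model        <- fit(data[train.idx, ], best.thr)     # placeholder
    ## outer.err[k] <- error(model, data[test.idx, ])       # placeholder
  }
  ## mean(outer.err) estimates performance; the tuning CV curve does not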
2010 Apr 13
0
extract Shrinkage intensity lambda and lambda.var
Does anyone know how to extract the shrinkage intensity lambda and lambda.var values after running cov.shrink(x)? Thanks, KZ
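In the corpcor package, cov.shrink returns the covariance matrix with the estimated intensities attached as attributes, so attr() should pull them out (attribute names as documented in corpcor; a sketch):

  library(corpcor)
  set.seed(1)
  x <- matrix(rnorm(50 * 10), nrow = 50)

  s <- cov.shrink(x)
  attr(s, "lambda")       # shrinkage intensity for the correlations
  attr(s, "lambda.var")   # shrinkage intensity for the variances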
2009 Aug 14
1
Permutation test and R2 problem
Hi, I have optimized the shrinkage parameter (by GCV) for ridge regression and got an r2 value of 70%. To check the sensitivity of the result, I did a permutation test: I permuted the response vector, reran the fit 1000 times, and drew the distribution. But now I get r2 values as high as 98%, and some of them more than 70%. Is it expected from such type...
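For reference, a permutation test for R2 refits the model on shuffled responses and compares the observed value against that null distribution; a base-R sketch with plain lm standing in for the ridge fit:

  set.seed(1)
  n <- 100
  x <- matrix(rnorm(n * 5), nrow = n)
  y <- x[, 1] + rnorm(n)
  dat <- data.frame(y, x)

  r2  <- function(d) summary(lm(y ~ ., data = d))$r.squared
  obs <- r2(dat)

  perm <- replicate(1000, {
    d <- dat
    d$y <- sample(d$y)    # break the x-y link, keep everything else fixed
    r2(d)
  })
  mean(perm >= obs)       # permutation p-value for the observed R2

If permuted R2 values often exceed the observed one, one thing to check is whether lambda is being retuned on every permuted response, which lets the fit chase noise.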
2012 Dec 08
0
Oracle Approximating Shrinkage in R?
Hi, Can anyone point me to an implementation in R of the oracle approximating shrinkage technique for covariance matrices? Rseek, Google, etc. aren't turning anything up for me. Thanks in advance, Matt Considine
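If no package turns up, the estimator is compact enough to hand-roll; a sketch of the OAS estimator as given in Chen, Wiesel, Eldar and Hero (2010), shrinking the ML sample covariance toward a scaled identity (the constants should be checked against the paper before serious use):

  ## OAS shrinkage of the sample covariance (formula per Chen et al. 2010)
  oas <- function(x) {
    n <- nrow(x); p <- ncol(x)
    S   <- cov(x) * (n - 1) / n       # ML (1/n) sample covariance
    tr  <- sum(diag(S))               # tr(S)
    tr2 <- sum(S * S)                 # tr(S %*% S), since S is symmetric
    num <- (1 - 2 / p) * tr2 + tr^2
    den <- (n + 1 - 2 / p) * (tr2 - tr^2 / p)
    rho <- min(1, num / den)          # oracle-approximating weight
    (1 - rho) * S + rho * (tr / p) * diag(p)
  }

  set.seed(1)
  x <- matrix(rnorm(20 * 10), nrow = 20)
  oas(x)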
2007 Jun 21
1
mgcv: lowest estimated degrees of freedom
...02, Ecol. Model. 157, p. 157-177). One criterion for deciding whether a term should be dropped from a model is whether the estimated degrees of freedom (EDF) for the term are close to their lower limit. What would be the minimum number of EDFs for a) univariate thin plate regression splines (TPRS) with shrinkage, i.e. s(..., bs="ts"), and b) bivariate tensor products of TPRS with shrinkage? Thanks for any help, Julian Burgos -- Julian M. Burgos Fisheries Acoustics Research Lab School of Aquatic and Fishery Science University of Washington 1122 NE Boat Street Seattle, WA 98105 Phone: 206-221-...
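For a quick empirical check of that lower limit, one can fit a shrinkage smooth to an irrelevant predictor and read the EDF off the summary; a sketch (simulated data, illustrative only):

  library(mgcv)
  set.seed(1)
  dat <- gamSim(1, n = 400)
  dat$junk <- rnorm(400)              # predictor unrelated to the response

  fit <- gam(y ~ s(x2, bs = "ts") + s(junk, bs = "ts"), data = dat)
  summary(fit)   # the s(junk) row should show EDF close to zero, not one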
2006 May 27
2
boosting - second posting
...ease,                         # 0: no monotone restrictions
    distribution="gaussian",     # bernoulli, adaboost, gaussian,
                                 # poisson, and coxph available
    n.trees=3000,                # number of trees
    shrinkage=0.005,             # shrinkage or learning rate,
                                 # 0.001 to 0.1 usually work
    interaction.depth=3,         # 1: additive model, 2: two-way interactions, etc.
    bag.fraction = 0.5,          # subsampling fraction, 0.5 is probably best
    ...
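Since shrinkage and n.trees trade off (smaller shrinkage needs more trees), the usual companion step after a fit like the one above is gbm.perf, which estimates how many of the boosted trees to keep; a self-contained sketch on simulated data:

  library(gbm)
  set.seed(1)
  n <- 1000
  x <- runif(n)
  d <- data.frame(x, y = sin(2 * pi * x) + rnorm(n, sd = 0.3))

  fit <- gbm(y ~ x, data = d, distribution = "gaussian",
             n.trees = 3000, shrinkage = 0.005,
             interaction.depth = 3, bag.fraction = 0.5)
  best.iter <- gbm.perf(fit, method = "OOB")   # trees actually worth using
  best.iter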
2006 Apr 11
2
variable selection when categorical variables are available
...as a set? For example, I have a four-level factor variable d, so the dummies are d1, d2, d3; as stepwise regression operates on d, adding or removing it, d1, d2, d3 are simultaneously added/removed. What is the concern with operating on the dummies individually? Model interpretability, or anything else? (It seems shrinkage methods can operate on them one by one.) Thanks mike
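If the goal is shrinkage that still treats a factor's dummies as one unit, the group lasso does exactly that; a sketch with the grpreg package (argument names from memory, worth checking against the package documentation):

  library(grpreg)
  set.seed(1)
  n <- 200
  d <- factor(sample(letters[1:4], n, replace = TRUE))
  z <- rnorm(n)
  X <- model.matrix(~ d + z)[, -1]   # columns: the three d-dummies, then z
  y <- (d == "b") + rnorm(n)
  group <- c(1, 1, 1, 2)             # the three dummies share one group

  fit <- grpreg(X, y, group = group, penalty = "grLasso")
  ## along the lambda path, the dummy group enters or leaves as a whole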
2014 Jul 02
0
How do I call a C++ function (for k-means) within R?
...ribution=as.character(distribution.call.name),
    n.trees=as.integer(n.trees),
    interaction.depth=as.integer(interaction.depth),
    n.minobsinnode=as.integer(n.minobsinnode),
    n.classes = as.integer(nClass),
    shrinkage=as.double(shrinkage),
    bag.fraction=as.double(bag.fraction),
    nTrain=as.integer(nTrain),
    fit.old=as.double(NA),
    n.cat.splits.old=as.integer(0),
    n.trees.old=as.integer(0),
    verbose=as...
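On the subject-line question itself, the lowest-friction route to calling C++ from R is usually Rcpp, which compiles and binds a C++ function in one call; a toy sketch (a distance helper, not k-means itself):

  library(Rcpp)

  cppFunction('
    // squared Euclidean distance between two numeric vectors
    double sqdist(NumericVector a, NumericVector b) {
      double s = 0.0;
      for (int i = 0; i < a.size(); i++) {
        double d = a[i] - b[i];
        s += d * d;
      }
      return s;
    }
  ')

  sqdist(c(0, 0), c(3, 4))   # 25: the basic building block of a k-means loop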
2002 Mar 01
2
step, leaps, lasso, LSE or what?
...ng overfitting. In Hastie, Tibshirani and Friedman "The Elements of Statistical Learning" chapter 3, they describe a number of procedures that seem better. The use of cross-validation in the training stage presumably helps guard against overfitting. They seem particularly favorable to shrinkage through ridge regressions, and to the "lasso". This may not be too surprising, given the authorship. Is the lasso "generally accepted" as being a pretty good approach? Has it proved its worth on a variety of problems? Or is it at the "interesting idea" stage? What, i...
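For readers wanting to try it today: the lasso with a cross-validated penalty is a few lines in the glmnet package (which postdates this 2002 post); a sketch on simulated data:

  library(glmnet)
  set.seed(1)
  x <- matrix(rnorm(100 * 20), nrow = 100)
  y <- x[, 1] - 2 * x[, 2] + rnorm(100)

  cvfit <- cv.glmnet(x, y)        # lasso (alpha = 1) with CV over lambda
  coef(cvfit, s = "lambda.min")   # note coefficients shrunk exactly to zero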
2013 Jun 24
2
Nomogram (rms) for model with shrunk coefficients
...ach(d)
ddist <- datadist(d)
options(datadist='ddist')
model <- lrm(y ~ x1 + x2, x=TRUE, y=TRUE, data=d)
plot(nomogram(model))   ## Nomogram is printed, as expected

## Now the model is internally validated, and regression coefficients are penalized
bootstrap <- validate(model, bw=FALSE, B=100)
shrinkage <- round(bootstrap[4, 5], 2)
final <- round(model$coef * shrinkage, 3)
final.lp <- cbind(model$x) %*% final[-1]
final["Intercept"] <- round(lrm.fit(y=d$y, offset=final.lp)$coef, 3)
final.lp <- final[1] + model$x %*% final[-1]
## The object 'final' now contains all model parameters, yet...
2009 Jun 17
1
gbm for cost-sensitive binary classification?
...ne has similar experience and can advise me how to implement cost-sensitive classification with gbm.

model.gbm <- gbm.fit(tr[, 1:DIM], tr.y, offset = NULL, misc = NULL,
    distribution = "bernoulli", w = tr.w, var.monotone = NULL,
    n.trees = NTREE, interaction.depth = TREEDEPTH, n.minobsinnode = 10,
    shrinkage = 0.05, bag.fraction = BAGRATIO, train.fraction = 1.0,
    keep.data = TRUE, verbose = TRUE, var.names = NULL, response.name = NULL)

or

model.gbm <- gbm(tr.y ~ ., distribution = "bernoulli",
    data = data.frame(cbind(tr[, 1:DIM], tr.y)), weights = tr.w,
    var.monotone = NULL, n.trees = NTREE, interaction.dep...
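One common approach, compatible with the calls above, is to encode the misclassification costs directly in the observation weights; a sketch (cost values and label vector are illustrative):

  set.seed(1)
  tr.y <- rbinom(100, 1, 0.3)       # stand-in labels
  cost.fn <- 5                      # false negative 5x as costly (illustrative)
  cost.fp <- 1
  tr.w <- ifelse(tr.y == 1, cost.fn, cost.fp)
  ## pass as w = tr.w in gbm.fit, or weights = tr.w in gbm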
2008 Sep 18
1
caret package: arguments passed to the classification or regression routine
...on. Here is the code I used and the error:

gbm.test <- train(x.enet, y.matrix[, 7],
    method = "gbm",
    distribution = list(name = "quantile", alpha = 0.5),
    verbose = FALSE,
    trControl = trainControl(method = "cv", number = 5),
    tuneGrid = gbmGrid)

Model 1: interaction.depth=1, shrinkage=0.1, n.trees=300
collapsing over other values of n.trees
Error in gbm.fit(trainX, modY, interaction.depth = tuneValue$.interaction.depth, :
  formal argument "distribution" matched by multiple actual arguments

The same error occurred with distribution="laplace". I also tried...
2011 Dec 05
1
finding interpolated values along an empirical parametric curve
...1))
lambdaf <- c(expression(~widehat(beta)^OLS), ".005", ".01", ".02", ".04", ".08")
op <- par(mar = c(4, 4, 1, 1) + 0.2, xpd = TRUE)
with(pd, {
  plot(norm.beta, log.det, type = "b", cex.lab = 1.25, pch = 16, cex = 1.5, col = clr,
       xlab = 'shrinkage: ||b||', ylab = 'variance: log |Var(b)|')
  text(norm.beta, log.det, lambdaf, cex = 1.25, pos = 2)
  text(min(norm.beta), max(log.det), "Variance vs. Shrinkage", cex = 1.5, pos = 4)
})
# How to find the (x,y) positions for these values of lambda along the curve of...
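Since both plotted coordinates are functions of lambda, one way to place intermediate lambda values on the curve is to interpolate each coordinate against lambda separately; a base-R sketch with made-up curve data standing in for pd:

  ## made-up (lambda, x, y) triples standing in for the ridge-trace curve
  lambda    <- c(0, .005, .01, .02, .04, .08)
  norm.beta <- c(2.0, 1.7, 1.5, 1.2, 0.9, 0.6)
  log.det   <- c(-1.0, -1.8, -2.3, -2.9, -3.4, -3.8)

  new.lambda <- c(.003, .015, .06)
  x.new <- spline(lambda, norm.beta, xout = new.lambda)$y
  y.new <- spline(lambda, log.det,   xout = new.lambda)$y
  cbind(new.lambda, x.new, y.new)   # positions to pass to points()/text()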
2003 Sep 23
1
Very small estimated random effect variance (lme)
...cts model (lme), of the type: lme1 <- lme(y ~ x, random=~x|group, ...) For some datasets, I obtain very small standard deviations of the random effects. I compared these to the standard deviations of the slope and intercept from an lmList approach. Of course, the SD from the lme is always smaller (shrinkage estimator), but in some cases (the problem cases) the SD from the lme seems way too small. E.g.: SD of intercept = 0.14, SD of slope = 0.0004, SD residual = 0.11. An lmList fit gives a slope SD of 0.07. I have about n=6 observations per group, and about 20-100 groups depending on the dataset. thank you...
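A compact way to reproduce the comparison being described, with nlme's Orthodont data standing in for the poster's dataset:

  library(nlme)

  lme1 <- lme(distance ~ age, random = ~ age | Subject, data = Orthodont)
  VarCorr(lme1)            # model-based SDs: intercept, slope, residual

  fl <- lmList(distance ~ age | Subject, data = Orthodont)
  apply(coef(fl), 2, sd)   # raw between-group SDs of the per-group OLS fits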
2013 Mar 24
3
Parallelizing GBM
...rallelization. I normally rely on gbm.fit for speed reasons, and I usually call it this way:

gbm_model <- gbm.fit(trainRF, prices_train,
    offset = NULL, misc = NULL, distribution = "multinomial",
    w = NULL, var.monotone = NULL, n.trees = 50,
    interaction.depth = 5, n.minobsinnode = 10, shrinkage = 0.001,
    bag.fraction = 0.5, nTrain = (n_train/2), keep.data = FALSE,
    verbose = TRUE, var.names = NULL, response.name = NULL)

Does anybody know an easy way to parallelize the model (in this case it means simply having 4 cores on the same machine working on the problem)? Any suggestion is welcom...
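Boosting is sequential, so a single gbm.fit is hard to split across cores, but independent fits (a tuning grid, CV folds) parallelize trivially; a sketch with the parallel package (mclapply forks, so Unix-alikes only; the gbm.fit call is left as a commented placeholder):

  library(parallel)

  grid <- c(0.001, 0.005, 0.01, 0.05)   # illustrative shrinkage grid

  fits <- mclapply(grid, function(s) {
    ## one independent model per core; substitute the gbm.fit call above:
    ## gbm.fit(trainRF, prices_train, shrinkage = s, ...)
    s   # placeholder so the sketch runs standalone
  }, mc.cores = 4)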
2010 Apr 26
3
R.GBM package
Hi Greg, I am new to the gbm package. Can boosted decision trees be implemented with the 'gbm' package, or can 'gbm' only be used for regression? If so, do I need to combine the rpart and gbm commands? Thanks so much! -- Sincerely, Changbin
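For the record, gbm grows its trees internally (no rpart needed) and handles classification via distribution = "bernoulli"; a minimal sketch on simulated data:

  library(gbm)
  set.seed(1)
  n <- 500
  x <- runif(n)
  d <- data.frame(x, y = rbinom(n, 1, plogis(4 * x - 2)))   # 0/1 outcome

  fit <- gbm(y ~ x, data = d, distribution = "bernoulli",
             n.trees = 500, shrinkage = 0.05, interaction.depth = 2)
  p <- predict(fit, newdata = d, n.trees = 500, type = "response")
  head(p)   # predicted probabilities of class 1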