Displaying 20 results from an estimated 5000 matches similar to: "How to select a reasonable shrinkage coefficient in stepplr?"
2008 Oct 06
2
stepplr
Hello everybody,
I am trying to install the stepPlr package under Windows (http://www.maths.bris.ac.uk/R/web/packages/stepPlr/index.html) in order to use the function plr, but I am still having trouble finding the right link for this purpose!
I am very thankful for your help!
Samor
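In case it helps later readers, a minimal sketch of installing stepPlr and calling plr() on toy data (assuming the package is available from CRAN; the data and settings below are illustrative, not from the original post):

install.packages("stepPlr")                 # assumes CRAN provides a Windows binary
library(stepPlr)

set.seed(1)
x <- matrix(rnorm(100 * 3), 100, 3)         # 100 observations, 3 predictors
y <- rbinom(100, 1, plogis(x[, 1]))         # binary response
fit <- plr(x, y, lambda = 1e-4)             # penalized (quadratic) logistic regression
fit                                         # the print method summarizes the fit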
2005 Feb 15
1
shrinkage estimates in lme
Hello. Slope estimates in lme are shrinkage estimates which pull the
OLS slope estimates towards the population estimates, the degree of
which depends on the group sample size and the distance between the
group-based estimate and the overall population estimate. Although
these shrinkage estimates are said to be more precise with respect to the
true values, they are also biased. So there is a
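A hedged illustration of that shrinkage (not from the original thread), using the Orthodont data shipped with nlme: the per-subject slopes implied by lme() sit between the individual OLS slopes and the overall fixed-effect slope.

library(nlme)

ols <- lmList(distance ~ age | Subject, data = Orthodont)   # separate OLS fit per subject
mix <- lme(distance ~ age, random = ~ age | Subject, data = Orthodont)

# Per-subject slopes: raw OLS vs. the shrunken (fixed effect + BLUP) estimates
cbind(ols = coef(ols)[rownames(coef(mix)), "age"],
      lme = coef(mix)[, "age"])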
2005 Jun 01
0
determine the shrinkage threshold in PAMR?
1. According to the PAMR documentation, the shrinkage
threshold is determined by cross-validation. Does this
mean that the user need not tune any parameter?
2. I tried two applications using PAMR, the results
are very disappointing. The attached are the
cross-validation results. You can see that the
classification errors are relatively high (0.2 at
best) in the case of two-category classification,
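For what it's worth, a hedged sketch of how the threshold is usually handled with pamr: pamr.cv() evaluates a whole grid of thresholds, but the user still has to pick one (a common rule of thumb is the largest threshold attaining the minimum CV error). Toy data below; nothing here comes from the original application.

library(pamr)
set.seed(1)
x <- matrix(rnorm(1000 * 40), nrow = 1000)            # 1000 features x 40 samples
y <- factor(rep(c("A", "B"), each = 20))
x[1:50, y == "B"] <- x[1:50, y == "B"] + 1            # 50 informative features
d <- list(x = x, y = y)

fit <- pamr.train(d)
cv  <- pamr.cv(fit, d)
pamr.plotcv(cv)                                       # CV error versus threshold
thr <- max(cv$threshold[cv$error == min(cv$error)])   # rule-of-thumb choice
pamr.predict(fit, d$x, threshold = thr)               # class predictions at that threshold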
2006 May 27
2
boosting - second posting
Hi
I am using boosting for a classification and prediction problem.
For some reason it is giving me an outcome that doesn't fall between 0
and 1 for the predictions. I have tried type="response" but it made no
difference.
Can anyone see what I am doing wrong?
Screen output shown below:
> boost.model <- gbm(as.factor(train$simNuance) ~ ., # formula
+
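A hedged guess at the usual cause (not a confirmed diagnosis of this script): gbm's bernoulli loss expects a numeric 0/1 response rather than a factor, and predict.gbm() needs n.trees together with type = "response" to return probabilities. The names train, test and simNuance follow the post; everything else is illustrative.

library(gbm)

train$y01 <- as.numeric(train$simNuance) - 1           # assumes a two-level factor
boost.model <- gbm(y01 ~ . - simNuance, data = train,
                   distribution = "bernoulli",
                   n.trees = 2000, interaction.depth = 3,
                   shrinkage = 0.01, cv.folds = 5)

best  <- gbm.perf(boost.model, method = "cv")          # tree count chosen by CV
p.hat <- predict(boost.model, newdata = test,
                 n.trees = best, type = "response")    # probabilities in (0, 1)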
2008 May 06
1
mgcv::gam shrinkage of smooths
In Dr. Wood's book on GAMs, he suggests in section 4.1.6 that it might be
useful to shrink a single smooth by adding S = S + epsilon*I to the penalty
matrix S. The context was the need to be able to shrink the term to zero if
appropriate. I'd like to do this in order to shrink the coefficients towards
zero (irrespective of the penalty for "wiggliness") - but not necessarily
all the
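For reference, mgcv already offers two documented routes to this kind of shrinkage (a sketch with simulated data; see ?smooth.terms and ?gam.selection): the shrinkage bases bs = "ts" / bs = "cs", which modify the penalty in roughly the S + epsilon*I spirit so a whole term can be shrunk to zero, and gam(..., select = TRUE), which adds an extra penalty on each smooth's null space.

library(mgcv)
set.seed(1)
dat <- gamSim(1, n = 400, dist = "normal", scale = 2)   # x3 has no real effect

b1 <- gam(y ~ s(x0, bs = "ts") + s(x1, bs = "ts") +
              s(x2, bs = "ts") + s(x3, bs = "ts"), data = dat)
b2 <- gam(y ~ s(x0) + s(x1) + s(x2) + s(x3), data = dat, select = TRUE)

summary(b1)   # the uninformative smooth is shrunk towards zero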
2008 Sep 18
1
caret package: arguments passed to the classification or regression routine
Hi,
I am having problems passing arguments to method="gbm" using the train()
function.
I would like to train gbm using the laplace distribution or the quantile
distribution.
here is the code I used and the error:
gbm.test <- train(x.enet, y.matrix[, 7],
                  method = "gbm",
                  distribution = list(name = "quantile", alpha = 0.5), verbose = FALSE,
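A hedged sketch rather than a confirmed fix: extra model arguments normally reach gbm() through train()'s "...", but some caret versions set distribution themselves from the outcome type, so the cleanest sanity check is to fit gbm directly with the quantile loss and confirm the argument form works. x.enet and y.matrix are the objects from the post.

library(gbm)

# Direct gbm fit with the quantile (pinball) loss, alpha = 0.5
fit <- gbm.fit(x = x.enet, y = y.matrix[, 7],
               distribution = list(name = "quantile", alpha = 0.5),
               n.trees = 1000, interaction.depth = 2,
               shrinkage = 0.01, verbose = FALSE)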
2013 Jun 23
1
Which is the final model for a Boosted Regression Tree (GBM)?
Hi R User,
I was trying to find the final model in the following example using Boosted Regression Trees (GBM). The program gives the fitted values, but I wanted to calculate a fitted value by hand to understand the model in depth. Would you give me some hints on what the final model is for this example?
Thanks
KG
-------
The following script I used
#-----------------------
library(dismo)
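Since the script is cut off above, here is only a hedged sketch of how the "final model" can be inspected by hand for a generic gbm fit: the prediction is the initial value plus the sum of the shrinkage-scaled contributions of the individual trees, and those contributions can be recovered from predict() itself. The object names model and dat are placeholders.

library(gbm)

K   <- model$n.trees
cum <- predict(model, newdata = dat, n.trees = 1:K)     # link-scale predictions after 1..K trees
contrib <- cbind(cum[, 1] - model$initF,                # contribution of tree 1
                 t(apply(cum, 1, diff)))                # contributions of trees 2..K

# Hand-built fitted value for observation 1 = initial value + tree contributions
model$initF + sum(contrib[1, ])                         # matches cum[1, K]

pretty.gbm.tree(model, i.tree = 1)                      # splits of the first tree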
2010 Apr 13
0
extract Shrinkage intensity lambda and lambda.var
Does anyone know how to extract the shrinkage intensity lambda and lambda.var
values after running cov.shrink(x)?
thanks,
KZ
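Assuming cov.shrink() here is the one from the corpcor package, the two intensities should be stored as attributes of the returned matrix; a minimal sketch:

library(corpcor)
set.seed(1)
x <- matrix(rnorm(20 * 6), 20, 6)

s <- cov.shrink(x)
attr(s, "lambda")       # shrinkage intensity for the correlations
attr(s, "lambda.var")   # shrinkage intensity for the variances
attributes(s)           # everything stored on the estimate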
2012 Dec 08
0
Oracle Approximating Shrinkage in R?
Hi,
Can anyone point me to an implementation in R of the oracle
approximating shrinkage technique for covariance matrices? Rseek,
Google, etc. aren't turning anything up for me.
Thanks in advance,
Matt Considine
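I don't know of a packaged implementation either; below is only a hedged, minimal sketch following the OAS formula in Chen, Wiesel, Eldar & Hero (2010), which should be checked against the paper before any serious use.

oas.cov <- function(x) {
  n <- nrow(x); p <- ncol(x)
  S <- cov(x) * (n - 1) / n                 # 1/n (ML) sample covariance
  trS  <- sum(diag(S))
  trS2 <- sum(S * S)                        # tr(S %*% S) for a symmetric S
  num  <- (1 - 2 / p) * trS2 + trS^2
  den  <- (n + 1 - 2 / p) * (trS2 - trS^2 / p)
  rho  <- if (den <= 0) 1 else min(1, num / den)
  (1 - rho) * S + rho * diag(trS / p, p)    # shrink towards the scaled identity
}

set.seed(1)
x <- matrix(rnorm(15 * 10), 15, 10)         # n = 15 observations, p = 10 variables
oas.cov(x)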
2006 May 25
0
boosting
Hi
I am using boosting for a classification and prediction problem.
For some reason it is giving me an outcome that doesn't fall between 0
and 1 for the predictions. I have tried type="response" but it made no
difference.
Can anyone see what I am doing wrong?
Screen output shown below:
> boost.model <- gbm(as.factor(train$simNuance) ~ ., # formula
+
2006 Dec 31
0
(no subject)
> > If one compares the random effect estimates, in fact, one sees that
> > they are in the correct proportion, with the expected signs. They are
> > just approximately eight orders of magnitude too small. Is this a bug?
>
> BLUPs are essentially shrinkage estimates, where the shrinkage is
> determined by the magnitude of the variance. Lower variance means more
> shrinkage towards
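A hedged illustration of the quoted point (not the original poster's model): the smaller the estimated random-effect variance relative to the residual variance, the more the BLUPs are pulled towards zero.

library(lme4)
set.seed(1)
g <- factor(rep(1:20, each = 5))
u <- rnorm(20, sd = 0.1)                     # tiny between-group variance
y <- 5 + u[g] + rnorm(100, sd = 1)
fit <- lmer(y ~ 1 + (1 | g))

raw  <- tapply(y, g, mean) - fixef(fit)      # raw group deviations from the overall mean
blup <- ranef(fit)$g[, 1]                    # shrunken deviations
cbind(raw, blup)                             # BLUPs sit much closer to zero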
2009 Jun 17
1
gbm for cost-sensitive binary classification?
I recently used gbm for a binary classification problem. As expected, it gets very good results, based on area under the ROC curve with 7-fold cross-validation. However, the application (malware detection) is cost-sensitive: getting an FP (classifying a clean sample as dirty) is much worse than getting an FN (missing a dirty sample). I would like to tune the gbm model to be biased towards a very low FP rate.
For this
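Two generic, hedged options rather than a recipe from this thread: up-weight the clean class so false positives cost more during fitting, and/or choose the classification threshold on held-out data to meet a target FP rate. All object names (d.train, d.valid, y) are placeholders.

library(gbm)

w   <- ifelse(d.train$y == 0, 10, 1)                    # 0 = clean, 1 = dirty (assumed coding)
fit <- gbm(y ~ ., data = d.train, distribution = "bernoulli", weights = w,
           n.trees = 2000, interaction.depth = 3, shrinkage = 0.01)

p   <- predict(fit, newdata = d.valid, n.trees = 2000, type = "response")
thr.grid <- sort(unique(p))
fpr <- sapply(thr.grid, function(t) mean(p[d.valid$y == 0] >= t))   # FP rate per threshold
ok  <- which(fpr <= 0.01)                               # thresholds meeting a 1% FP target
thr <- if (length(ok)) thr.grid[min(ok)] else 1         # fall back to "never flag"
pred <- as.numeric(p >= thr)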
2010 Feb 28
1
Gradient Boosting Trees with correlated predictors in gbm
Dear R users,
I’m trying to understand how correlated predictors impact the Relative
Importance measure in Stochastic Boosting Trees (J. Friedman). As Friedman
described “…with single decision trees (referring to Breiman’s CART
algorithm), the relative importance measure is augmented by a strategy
involving surrogate splits intended to uncover the masking of influential
variables by others
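A hedged sketch of the effect being asked about: with two nearly collinear predictors, gbm tends to split the relative influence between them rather than crediting a single one (there are no surrogate splits as in CART). Simulated data only.

library(gbm)
set.seed(1)
n  <- 2000
x1 <- rnorm(n)
x2 <- x1 + rnorm(n, sd = 0.1)                 # x2 is almost a copy of x1
x3 <- rnorm(n)
y  <- x1 + 0.5 * x3 + rnorm(n)
d  <- data.frame(y, x1, x2, x3)

fit <- gbm(y ~ ., data = d, distribution = "gaussian",
           n.trees = 1000, interaction.depth = 2, shrinkage = 0.05)
summary(fit, n.trees = 1000)                  # relative influence table and barplot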
2010 Jun 09
1
Finding the bootstrapped coefficient of variation and the stderr on the CV(boot)
Dear R-Helpers,
I am trying to bootstrap the coefficient of variation on a suite of
vectors; here I provide an example using one of the vectors in my
study. When I ran this script with the vector x <- c(0.625,
0.071428571, 0.133333333, 0.125, 0), it returned CV(boot) [the second
one], and stderr(boot) [the second one] without problem. However, when
I ran it with the vector in the
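Since the script itself isn't shown, here is only a hedged, minimal version of that kind of calculation with the boot package and the vector quoted above:

library(boot)
x <- c(0.625, 0.071428571, 0.133333333, 0.125, 0)

cv.stat <- function(data, idx) sd(data[idx]) / mean(data[idx])   # CV of one resample
set.seed(1)
b <- boot(x, cv.stat, R = 2000)
b                              # original CV, bootstrap bias and standard error
mean(b$t, na.rm = TRUE)        # CV(boot); na.rm guards against all-zero resamples
sd(b$t, na.rm = TRUE)          # stderr(boot)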
2008 Aug 05
1
Confidence interval for the coefficient of variation
Dear,
We are trying to determine the (one-sided) CI for the coefficient of
variation in a small sample (say n = 10), with mean 100 and standard
deviation 21.
It appears though that the R-function ci.cv() and our simulation do not
agree.
The R-code:
library(MBESS)
n = 10
ci.cv(mean = 100, sd = 21, n = 10, conf.level = 0.9)
U10.95 <- 0.3551754
ci.cv(mean = 100, sd = 21, n = 10, conf.level =
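One hedged way to check the reported limit by simulation: if U = 0.3551754 really is a one-sided 90% upper limit for the CV given xbar = 100, s = 21, n = 10, then data generated with true CV equal to U should give a sample CV at or below 0.21 roughly 10% of the time (roughly 5% would instead suggest the upper end of a two-sided 90% interval).

set.seed(1)
U <- 0.3551754
k <- replicate(1e5, {
  s <- rnorm(10, mean = 100, sd = 100 * U)   # population CV equal to U
  sd(s) / mean(s)
})
mean(k <= 0.21)                              # compare with 0.10 vs 0.05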
2007 Jun 14
0
Confidence interval for coefficient of variation
This is a function I coded a few years ago to calculate a confidence
interval for a coefficient of variation. The code is based on a paper
by Mark Vangel in The American Statistician. I have not used the
function much, but it could be useful for comparing CVs from
different groups.
Kevin Wright
confint.cv <- function(x, alpha = 0.05, method = "modmckay") {
# Calculate the
2012 Mar 01
1
GLM with regularization
Hello,
Thank you in advance for what is probably not a new question, but I am new to R.
Do any packages offer something like GLM plus regularization? So far the
closest I see is ridge regression in MASS, but I think I need something
like a GLM, in particular regularized binomial versions of polynomial
regression.
Also, I am not sure how some of the K-fold cross-validation helpers out
there
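In case it helps: a hedged sketch of what is probably being asked for with glmnet, which fits penalized GLMs (ridge, lasso, elastic net) and does the K-fold cross-validation for the penalty internally; polynomial terms can be built with poly(). Simulated data only.

library(glmnet)
set.seed(1)
x <- runif(200, -2, 2)
y <- rbinom(200, 1, plogis(1 - x + 0.5 * x^2))
X <- poly(x, degree = 3, raw = TRUE)                       # polynomial design matrix

cvfit <- cv.glmnet(X, y, family = "binomial", alpha = 0)   # alpha = 0 gives ridge
coef(cvfit, s = "lambda.min")                              # coefficients at the CV-chosen penalty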
2005 Nov 30
1
Coefficient of Variance (log values) !
Hi,
I am calculating the coefficient of variation (CV) for my ELISA data. For
normal values I use the standard formula CV = SD/Mean * 100. Now I
would like to calculate the CV for log values. Can anyone please
suggest how I can do this in R?
Thanks in Advance.
Regards,
Sharon
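A hedged pointer rather than a definitive answer: for log-scale assay data the usual summary is the geometric CV, computed from the standard deviation of the natural-log values as 100 * sqrt(exp(sd_ln^2) - 1); if the values are log10, multiply their SD by log(10) first.

gcv <- function(x) {                      # x: raw (untransformed) positive values
  s <- sd(log(x))                         # SD on the natural-log scale
  100 * sqrt(exp(s^2) - 1)
}

x <- c(110, 95, 130, 102, 88, 120)        # illustrative ELISA-style readings
gcv(x)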
2009 Aug 05
0
Statistical analyst position in Cambridge, MA
Health Insight Technologies, a VC-backed startup with offices in Cambridge,
MA, is looking for a motivated Statistical Analyst who is
passionate about creating statistical analyses that will provide insight
into the quality, safety and efficiency of the health care system.
Requirements for the position include:
- 2-5 years direct experience working with R
- Extensive experience with data
2011 May 28
1
Questions regarding the lasso and glmnet
Hi all. Sorry for the long email. I have been trying to find someone local to work on this with me, without much luck. I went in to our local stats consulting service here, and the guy there told me that I already know more about model selection than he does. :-< He pointed me towards another professor that can perhaps help, but that prof is busy until mid-June, so I want to get as much