Displaying 20 results from an estimated 6000 matches similar to: "[OT] - standard errors for parameter estimates under ridge regression and lasso?"
2008 Jan 14
1
[Off Topic] searching for a quote
Dear community,
I'm trying to track down a quote, but can't recall the source or the
exact structure - not very helpful, I know - something along the lines
that:
80% of [applied] statistics is linear regression ...
?
Does this ring a bell for anyone?
Thanks,
Andrew
--
Andrew Robinson
Department of Mathematics and Statistics Tel: +61-3-8344-9763
University of
2007 Feb 10
0
Can we change environment within the browser?
Dear R-helpers,
when in the browser, is it possible to change the environment, so as
to be able to easily access (print, manipulate) objects in the parent,
or elsewhere?
I know that it is possible to evaluate expressions in different
environments, using eval(), but I would prefer to avoid that if
possible.
Thanks,
Andrew
--
Andrew Robinson
Department of Mathematics and Statistics
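A minimal sketch of the eval() route the post mentions (the functions f() and g() are made up for illustration):

f <- function() {
  x <- 1:5
  g(environment())                # pass f()'s environment down explicitly
}
g <- function(env) {
  print(eval(quote(x), env))      # print an object that lives in f()'s frame
  eval(quote(x <- x * 10), env)   # modify it in place
  eval(quote(x), env)
}
f()

## When already stopped at a Browse prompt, the same idea applies: capture the
## frame of interest with sys.frame()/parent.frame() and eval() into it, or
## call utils::recover() (e.g. via options(error = recover)) to choose which
## frame to browse.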
2006 Dec 10
0
lmer, gamma family, log link: interpreting random effects
Dear all,
I'm curious about how to interpret the results of the following code.
The first model is directly from the help page of lmer; the second is
the same model but using the Gamma family with log link. The fixed
effects make sense, because
y = 251.40510 + 10.46729 * Days
is about the same as
log(y) = 5.53613298 + 0.03502057 * Days
but the random effects seem quite
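A sketch reconstructing the two fits with the sleepstudy data from lme4's help page; note that in current lme4 the Gamma/log-link model is fitted with glmer() rather than lmer(..., family = ...), and it may emit convergence warnings:

library(lme4)
data(sleepstudy)

## Gaussian LMM, identity link: E[Reaction] = b0 + b1 * Days
m1 <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)
fixef(m1)                     # roughly 251.4 and 10.5, as quoted above

## Gamma GLMM, log link: log E[Reaction] = b0 + b1 * Days
m2 <- glmer(Reaction ~ Days + (Days | Subject),
            family = Gamma(link = "log"), data = sleepstudy)
fixef(m2)                     # intercept and slope now on the log scale

## The random-effect variances of m2 are on the log scale too, i.e. they act
## multiplicatively, which is why they look so different from m1's.
VarCorr(m1)
VarCorr(m2)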
2007 Jun 14
0
How to get a point estimate from the studentized bootstrap?
Dear Friends and Colleagues,
I'm puzzling over how to interpret or use some bootstrap intervals. I
think that I know what I should do, but I want to check with
knowledgeable people first!
I'm using a studentized non-parametric bootstrap to estimate 95%
confidence intervals for three parameters. I estimate the variance of
the bootstrap replicates using another bootstrap. The script
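A minimal sketch of the setup described, assuming the boot package; the data and statistic are made up. The usual point estimate is the statistic on the original sample (b$t0[1]); the studentized interval only adjusts the interval endpoints:

library(boot)
set.seed(1)
x <- rgamma(50, shape = 2)

mean_with_var <- function(d, i) {
  d2 <- d[i]
  ## inner bootstrap to estimate the variance of this replicate
  v <- var(boot(d2, function(dd, ii) mean(dd[ii]), R = 50)$t)
  c(mean(d2), v)
}

b <- boot(x, mean_with_var, R = 999)
boot.ci(b, type = "stud")
b$t0[1]                       # the point estimate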
2006 Oct 24
1
Cook's Distance in GLM (PR#9316)
Hi Community,
I'm trying to reconcile Cook's Distances computed in glm. The
following snippet of code shows that the Cook's Distances contours on
the plot of Residuals v Leverage do not seem to be the same as the
values produced by cooks.distance() or in the Cook's Distance against
observation number plot.
counts <- c(18,17,15,20,10,20,25,13,12)
outcome <- gl(3,1,9)
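The snippet matches the Dobson example on the ?glm help page; a sketch completing it (the treatment factor and Poisson family are assumptions based on that page) and showing the two displays being compared:

counts    <- c(18, 17, 15, 20, 10, 20, 25, 13, 12)
outcome   <- gl(3, 1, 9)
treatment <- gl(3, 3)
fit <- glm(counts ~ outcome + treatment, family = poisson())

cooks.distance(fit)           # the values behind the observation-number plot
plot(fit, which = 4)          # Cook's distance vs observation number
plot(fit, which = 5)          # Residuals vs Leverage, with Cook's contours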
2007 Nov 06
1
A suggestion for an amendment to tapply
Dear R-developers,
when tapply() is invoked on factors that have empty levels, it returns
NA. This behaviour is in accord with the tapply documentation, and is
reasonable in many cases. However, when FUN is sum, it would also
seem reasonable to return 0 instead of NA, because "the sum of an
empty set is zero, by definition."
I'd like to raise a discussion of the possibility of an
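A small illustration of the behaviour being discussed; later versions of R (3.4.0 and up) added a default argument to tapply() that gives the result the post asks for:

x <- c(1, 2, 3)
f <- factor(c("a", "a", "b"), levels = c("a", "b", "c"))
tapply(x, f, sum)                # the empty level "c" comes back as NA
sum(numeric(0))                  # 0, "the sum of an empty set"
tapply(x, f, sum, default = 0)   # R >= 3.4.0: empty levels become 0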
2008 Aug 04
1
Decomposing tests of interaction terms in mixed-effects models
Dear R colleagues,
a friend and I are trying to develop a modest workflow for the problem
of decomposing tests of higher-order terms into interpretable sets of
tests of lower order terms with conditioning.
For example, if the interaction between A (3 levels) and C (2 levels)
is significant, it may be of interest to ask whether or not A is
significant at level 1 of C and level 2 of C.
The
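A minimal sketch of one common way to test A within each level of C (the data frame is made up, and the random-effects part of the post's models is omitted for brevity):

set.seed(1)
d <- data.frame(A = gl(3, 20), C = gl(2, 10, 60), y = rnorm(60))

## Nested ("simple effects") parameterisation: a separate A effect per level of C
fit_nested <- lm(y ~ C / A, data = d)
summary(fit_nested)           # the C1:A and C2:A rows are the conditional tests

## Or fit and test A separately within each level of C:
anova(lm(y ~ A, data = d, subset = C == "1"))
anova(lm(y ~ A, data = d, subset = C == "2"))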
2010 Jan 06
0
parcor 0.2-2 - Regularized Partial Correlation Matrices with (adaptive) Lasso, PLS, and Ridge Regression
Dear R-users,
we are happy to announce the release of our R package parcor.
The package contains tools to estimate the matrix of partial
correlations based on different regularized regression methods: Lasso,
adaptive Lasso, PLS, and Ridge Regression. In addition, parcor provides
cross-validation based model selection for Lasso, adaptive Lasso and
Ridge Regression.
More details can be found
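A conceptual sketch of the regression approach behind such packages, not parcor's own API: regress each variable on the others with a ridge penalty and combine the two coefficients for each pair into a partial correlation:

library(MASS)
set.seed(1)
n <- 100; p <- 5
X <- matrix(rnorm(n * p), n, p)

beta <- matrix(0, p, p)
for (i in seq_len(p)) {
  fit <- lm.ridge(X[, i] ~ X[, -i], lambda = 1)
  beta[i, -i] <- coef(fit)[-1]          # drop the intercept
}
## rho_ij = sign(beta_ij) * sqrt(beta_ij * beta_ji), assuming the signs agree
pcor <- sign(beta) * sqrt(abs(beta * t(beta)))
diag(pcor) <- 1
round(pcor, 2)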
2017 Oct 31
0
lasso and ridge regression
Dear All
The problem is about regularization methods in multiple regression when the
independent variables are collinear. A modified regularization method is
proposed, with two tuning parameters l1 and l2 (Lambda 1 and Lambda 2) and
their product l1*l2, such that l1 takes care of the ridge property and l2
takes care of the LASSO property.
The proposed method is given
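The description resembles the elastic net, which also combines an L1 and an L2 penalty; a minimal glmnet sketch for comparison (glmnet's alpha/lambda parameterisation is not the same as the l1/l2 pair described above, and the data are simulated):

library(glmnet)
set.seed(1)
X <- matrix(rnorm(100 * 20), 100, 20)
y <- X[, 1] - 2 * X[, 2] + rnorm(100)

fit   <- glmnet(X, y, alpha = 0.5)    # alpha = 0 is pure ridge, 1 is pure lasso
cvfit <- cv.glmnet(X, y, alpha = 0.5) # cross-validated choice of lambda
coef(cvfit, s = "lambda.min")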
2011 Jun 14
2
Off-topic: (Simple?) Random Sampling when n is a random variable
Hi everyone,
I'm involved in a discussion with a colleague. He suggested a sample
design for a finite-sized process that (to all intents and purposes)
involves tossing a coin and examining the unit if the coin shows
Heads.
I should emphasize that we're both approaching the problem from a
design-based sampling theory point of view. So I have no argument
about the appropriateness of the
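A small simulation of the design described, with made-up population values: each unit is examined with probability 1/2, so the realised sample size n is Binomial, and the Horvitz-Thompson estimator of the total stays design-unbiased:

set.seed(1)
N <- 200                          # finite population size
y <- rgamma(N, shape = 3)         # made-up population values

one_sample <- function() {
  take <- rbinom(N, 1, 0.5) == 1  # coin toss per unit
  c(n = sum(take), ht_total = sum(y[take]) / 0.5)
}
sims <- t(replicate(5000, one_sample()))
mean(sims[, "ht_total"]); sum(y)  # estimator is unbiased over the design
table(sims[, "n"])                # n itself is Binomial(N, 1/2)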
2007 Mar 16
3
ARIMA standard error
Hi,
Can anyone explain how the standard error in arima() is calculated?
Also, how can I extract it from the Arima object? I don't see it in there.
> x <- rnorm(1000)
> a <- arima(x, order = c(4, 0, 0))
> a
Call:
arima(x = x, order = c(4, 0, 0))
Coefficients:
          ar1     ar2     ar3      ar4  intercept
      -0.0451  0.0448  0.0139  -0.0688     0.0010
s.e.
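The printed s.e. row comes from the estimated covariance matrix of the coefficients, which arima() stores in the fit as var.coef (see the Value section of ?arima):

set.seed(1)
x <- rnorm(1000)
a <- arima(x, order = c(4, 0, 0))
sqrt(diag(a$var.coef))        # the standard errors that print(a) shows as s.e.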
2006 Feb 10
1
Splitting printed output in Sweave
Dear R community,
I'm trying to figure out if there is any way to split the printed output
of some commands, for example summary.lme, so that I can intersperse
comments in Sweave. I don't mind running the command numerous times and
masking various portions of the output, or saving the output as an object
and printing it, but I can't figure out how to do either. Does anyone
have any
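A minimal sketch of the capture.output() route, using an lm() fit as a stand-in for summary.lme: capture the printed output as a character vector and cat() whichever lines belong in each chunk:

fit <- lm(dist ~ speed, data = cars)      # stand-in for the lme fit
out <- capture.output(summary(fit))
cat(out[1:8], sep = "\n")                 # first part, in one chunk
## ... commentary in the .Rnw file ...
cat(out[9:length(out)], sep = "\n")       # the rest, in a later chunk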
2005 Sep 08
1
Creating very small plots (2.5 cm wide) in Sweave
Hi everyone,
I was wondering if anyone has any code they could share for creating
thumbnail plots in Sweave. For example, I'd like a plot like the
following:
y <- c(40, 46, 39, 44, 23, 36, 70, 39, 30, 73, 53, 74)
x <- c(6, 4, 3, 6, 1, 5, 6, 2, 1, 8, 4, 6)
opar <- par(mar=c(3,3,0,0))
plot(x, y, xlab="", ylab="")
abline(h=mean(y), col="red")
par(opar)
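A sketch of the usual Sweave recipe: the chunk options set the size of the R graphics device in inches, while \setkeys{Gin}{...} controls the width at which LaTeX displays the figure (2.5 cm here). The chunk name and sizes are illustrative:

\setkeys{Gin}{width=2.5cm}
<<thumb, fig=TRUE, echo=FALSE, width=1.2, height=1.2>>=
y <- c(40, 46, 39, 44, 23, 36, 70, 39, 30, 73, 53, 74)
x <- c(6, 4, 3, 6, 1, 5, 6, 2, 1, 8, 4, 6)
opar <- par(mar = c(3, 3, 0, 0))
plot(x, y, xlab = "", ylab = "")
abline(h = mean(y), col = "red")
par(opar)
@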
2006 Feb 19
3
Changing predictor order in lm()
Dear community,
can anyone provide a snippet of code to force the lm() to fit a model with
terms in the formula in an arbitrary order? I am interested in something
like:
lm(y ~ A * B + C, data=data)
where the interaction of A and B should be in the formula before C. My
goal is to simplify my presentation of models using the anova() statement.
I have found that this should be possible using
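A sketch of the usual answer, which the post is presumably alluding to: build the terms object with keep.order = TRUE so that lm() does not re-sort main effects ahead of the interaction (the data frame is made up):

d <- data.frame(y = rnorm(24), A = gl(2, 12), B = gl(3, 4, 24), C = runif(24))
fit <- lm(terms(y ~ A * B + C, keep.order = TRUE), data = d)
anova(fit)                    # rows appear in the order written: A, B, A:B, C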
2015 Oct 05
2
authorship and citation
As a fourth option, I wonder if the first author could fork the package?
Presumably, appropriately cited, a fork is permitted by the license under
which it was released. Then the original package, by both authors, still
exists (and a final version could point to the new one) and the new
package, citing the previous version appropriately, is by a single author.
The page of CRAN's policies
2012 Dec 27
1
Ridge Regression variable selection
Unlike L1 (lasso) regression or elastic net (mixture of L1 and L2), L2 norm
regression (ridge regression) does not select variables. Selection of
variables would not work properly, and it's unclear why you would want to
omit "apparently" weak variables anyway.
Frank
maths123 wrote
> I have a .txt file containing a dataset with 500 samples. There are 10
> variables.
>
>
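A small illustration of the point with glmnet, using the 500-by-10 shape from the quoted post (the data themselves are simulated): the lasso zeroes coefficients exactly, ridge only shrinks them:

library(glmnet)
set.seed(1)
X <- matrix(rnorm(500 * 10), 500, 10)
y <- X[, 1] + 0.5 * X[, 2] + rnorm(500)

coef(cv.glmnet(X, y, alpha = 1), s = "lambda.1se")   # lasso: several exact zeros
coef(cv.glmnet(X, y, alpha = 0), s = "lambda.1se")   # ridge: all shrunken, none zero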
2005 Sep 25
1
R CMD build produces tar error under FreeBSD 5.4
Hi R-helpers,
I am trying to build a package under FreeBSD 5.4-RELEASE #0 using R
Version 2.1.1.
I have constructed a package using package.skeleton(), when I try
$ R CMD build foo
* checking for file 'foo/DESCRIPTION' ... OK
* preparing 'foo':
* checking DESCRIPTION meta-information ... OK
* cleaning src
* removing junk files
tar: Option -L is not permitted in mode -x
Error:
2006 Apr 07
2
Command line support tools - suggestions?
Dear Community,
I'm interested in developing a package that could ease the
command-line learning curve for new users. It would provide more
detailed syntax checking and commentary as feedback. It would try to
anticipate common new-user errors, and provide feedback to help
correct them.
As a trivial example, instead of
> mean(c(1,2,NA))
[1] NA
we might have
> mean(c(1,2,NA))
[1]
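A toy sketch of the kind of feedback being proposed; the wrapper name friendly_mean() is made up for illustration:

friendly_mean <- function(x, ...) {
  if (anyNA(x))
    message("Note: 'x' contains NA values; mean() returns NA unless you ",
            "also pass na.rm = TRUE.")
  mean(x, ...)
}
friendly_mean(c(1, 2, NA))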
2006 Jul 02
0
Rassist - Student-friendly package
The Rassist package has been loaded to CRAN. This package is designed
to make R easier for new users, by providing extra checks and
feedback.
Presently the package functionality includes:
* offers an alternative help facility, eg(.), with examples first,
with additional examples included. eg() offers a start help menu,
and eg(.) incorporates help.search(.) automatically. It also