Displaying 20 results from an estimated 132 matches for "overfitting".
2007 Oct 03
1
How to avoid overfitting in gam(mgcv)
Dear listers,
I'm using gam (from mgcv) for semi-parametric regression on small and
noisy datasets (10 to 200 observations), and facing a problem of
overfitting.
According to the book (Simon N. Wood, Generalized Additive Models: An
Introduction with R), it is suggested to avoid overfitting by inflating
the effective degrees of freedom in the GCV evaluation with an
increased "gamma" value (e.g. 1.4). But in my case, it didn't make a
significant chang...
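For anyone finding this thread later: the gamma inflation Wood suggests is an argument to gam() itself. A minimal sketch on simulated data (the dataset and variable names are illustrative, not the poster's):

```r
library(mgcv)

set.seed(1)
n <- 60                                  # small, noisy dataset
x <- runif(n)
y <- sin(2 * pi * x) + rnorm(n, sd = 0.5)

fit_plain <- gam(y ~ s(x))               # default GCV smoothness selection
fit_gamma <- gam(y ~ s(x), gamma = 1.4)  # inflate the EDF cost in the GCV score

# The gamma = 1.4 fit should use no more (usually fewer) effective df
sum(fit_plain$edf)
sum(fit_gamma$edf)
```

Each unit of effective degrees of freedom counts as 1.4 in the GCV score, so wigglier fits are penalised harder; if that still isn't enough on very small samples, a fixed basis dimension (k in s()) is the other knob.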
2004 Dec 22
2
GAM: Overfitting
I am analyzing particulate matter data (PM10) on a small data set (147
observations). I fitted a semi-parametric model and am worried about
overfitting. How can one check for model fit in GAM?
Jean G. Orelien
2008 Feb 16
2
Possible overfitting of a GAM
The subject is a Generalized Additive Model. Experts caution us against
overfitting the data, which can cause inaccurate results. I am not a
statistician (my background is in Computer Science). Perhaps some kind soul
would take a look and vet the model for overfitting the data.
The study estimated the ebb and flow of traffic through a voting place. Just
one voting place was st...
2017 Nov 21
0
Do I need to transform backtest returns before using pbo (probability of backtest overfitting) package functions?
Hello,
I'm trying to understand how to use the pbo package by looking at a
vignette. I'm curious about a part of the vignette that creates simulated
returns data. The package author transforms his simulated returns in a way
that I'm unfamiliar with, and that I haven't been able to find an
explanation for after searching around. I'm curious if I need to replicate
the
2013 Jan 15
0
e1071 SVM, cross-validation and overfitting
I am accustomed to the LIBSVM package, which provides cross-validation
on training with the -v option
% svm-train -v 5 ...
This does 5-fold cross-validation while building the model and avoids
overfitting.
But I don't see how to accomplish that in the e1071 package. (I
learned that svm(... cross=5 ...) only _tests_ using cross-validation
-- it doesn't affect the training.) Can
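For what it's worth, the usual way to get LIBSVM-style CV-driven selection in e1071 is tune(), which cross-validates over a parameter grid and refits the best model; svm(..., cross = 5) only reports a CV accuracy for the parameters you passed. A sketch on iris (the cost grid is illustrative):

```r
library(e1071)

data(iris)
# 5-fold cross-validation over a grid of cost values; unlike
# svm(..., cross = 5), tune() uses the CV error to pick a model
tuned <- tune(svm, Species ~ ., data = iris,
              ranges = list(cost = 2^(-2:4)),
              tunecontrol = tune.control(cross = 5))

best <- tuned$best.model        # svm refit on all data with the chosen cost
tuned$best.parameters$cost      # the selected cost value
```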
2010 Apr 08
2
Overfitting/Calibration plots (Statistics question)
...dicted
response [Y hat] on the horizontal axis).
According to Frank Harrell's "Regression Modeling Strategies" book
(pp. 61-63), when making such a plot on new data (having obtained a
model from other data) we should expect the points to be around a line
with slope < 1, indicating overfitting. As he writes, "Typically, low
predictions will be too low and high predictions too high."
However, when I make these plots, both with real data and with simple
simulated data, I get the opposite: the points are scattered around a
line with slope >1. Low predictions are too high and h...
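A base-R sketch of the check Harrell describes (simulated data, illustrative setup): fit an over-parameterised model on one half, then regress observed on predicted in the other half. Note that the regression direction matters: observed-on-predicted gives the shrinkage slope, while predicted-on-observed roughly inverts it, which is one thing worth ruling out when the slope comes out above 1.

```r
set.seed(2)
n <- 200; p <- 20
X <- matrix(rnorm(n * p), n, p)
y <- X[, 1] + rnorm(n)                  # only the first predictor matters
train <- 1:100; test <- 101:200

d <- data.frame(y = y, X)
fit <- lm(y ~ ., data = d[train, ])     # deliberately overfit: 20 predictors
pred <- predict(fit, newdata = d[test, ])

# Calibration: regress OBSERVED on PREDICTED in the new data;
# a slope below 1 is the classic signature of overfitting
slope <- coef(lm(d$y[test] ~ pred))[2]
slope

plot(pred, d$y[test], xlab = "predicted", ylab = "observed")
abline(0, 1)
</code>
```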
2006 Feb 28
3
does svm have a CV to obtain the best "cost" parameter?
Hi all,
I am using the "svm" command in the e1071 package.
Does it have an automatic way of setting the "cost" parameter?
I changed a few values for the "cost" parameter but I hope there is a
systematic way of obtaining the best "cost" value.
I noticed that there is a "cross" (Cross validation) parameter in the "svm"
function.
But I
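To answer the question directly: svm() itself has no automatic search; the "cross" argument only reports a CV accuracy for the cost you gave it. The systematic sweep is what tune.svm() provides. A sketch (the grid of powers of two is illustrative, not a recommendation):

```r
library(e1071)

data(iris)
# Sweep cost over powers of two; tune.svm() cross-validates each value
# (10-fold by default) and reports the one with the lowest CV error
obj <- tune.svm(Species ~ ., data = iris, cost = 2^(-2:6))

summary(obj)            # CV error for every cost tried
obj$best.parameters     # the systematically chosen cost
```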
2010 Jun 29
1
Model validation and penalization with rms package
...pentrace tells me that of the penalties 1, 2,..., 60, corrected AIC is
maximised by a penalty of 9. This is consistent with the corrected R^2
plot, which shows a maximum somewhere around 10. However, a penalty of
9 still gives an R^2 optimism of 0.09 (training R^2=0.28, test
R^2=0.19), suggesting overfitting.
Do we just have to live with this R^2 optimism? It can be decreased by
taking a bigger penalty, but then the corrected R^2 is reduced. Also,
a penalty of 9 gives a corrected slope of about 1.17 (corrected slope
of 1 is achieved with a penalty of about 1 or 2).
Thanks for any help/advice you can...
2017 Nov 21
0
Do I need to transform backtest returns before using pbo (probability of backtest overfitting) package functions?
Hi Joe,
The centering and re-scaling is done for the purposes of his example, and
also to be consistent with his definition of the sharpe function.
In particular, note that the sharpe function has the rf (risk-free)
parameter with a default value of 0.03/252, i.e. an ANNUAL 3% rate
converted to a DAILY rate, expressed in decimal.
That means that the other argument to this function, x, should be DAILY
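The scale-matching point can be made concrete. The function below is a hypothetical Sharpe calculation consistent with the description above, not the pbo vignette's exact code; the point is only that rf and x must be on the same (here daily, decimal) scale:

```r
# Annual 3% risk-free rate expressed as a daily decimal rate
rf_daily <- 0.03 / 252

# A sharpe function consistent with that default: x must then be
# DAILY decimal returns, or the rf subtraction is on the wrong scale
sharpe <- function(x, rf = 0.03 / 252) {
  mean(x - rf) / sd(x)
}

set.seed(3)
daily_returns <- rnorm(252, mean = 0.0005, sd = 0.01)
sharpe(daily_returns)                 # daily-scale Sharpe
sharpe(daily_returns) * sqrt(252)     # rough annualisation
```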
2018 May 04
2
RFC: Are auto-generated assertions a good practice?
...y names are rendered -
but there's no option to produce only ranges without names (wouldn't really
make sense)).
> but I don't see how the complete auto-generated assertions could be worse
> at detecting a miscompile than incomplete manually-generated assertions?
>
Generally overfitting wouldn't result in being bad at detecting failures,
but in excess false positives - if things in the output unrelated to the issue
change (in intentional/benign ways) & cause the tests to fail often &
become a burden for the project.
Not suggesting that's the case with these particula...
2010 Jul 14
1
question about SVM in e1071
Hi,
I have a question about the parameter C (cost) in the svm function in e1071. I
thought a larger C is more prone to overfitting than a smaller C, and hence leads to
more support vectors. However, using the Wisconsin breast cancer example on
the link:
http://planatscher.net/svmtut/svmtut.html
I found that the largest cost has the fewest support vectors, which is contrary
to what I think. please see the scripts below:
Am I misunde...
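One way to see the relationship directly is to count support vectors across costs on a two-class problem (dataset and cost grid illustrative). With a soft margin, a larger C penalises violations more, so the margin narrows and fewer points sit on or inside it; the tutorial's observation is the expected behaviour:

```r
library(e1071)

data(iris)
two <- droplevels(subset(iris, Species != "setosa"))  # two-class problem

n_sv <- sapply(c(0.1, 1, 10, 100), function(C) {
  m <- svm(Species ~ ., data = two, kernel = "linear", cost = C)
  m$tot.nSV                      # total number of support vectors
})
n_sv   # typically decreases as cost grows: larger C shrinks the margin
```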
2017 Nov 21
2
Do I need to transform backtest returns before using pbo (probability of backtest overfitting) package functions?
Wrong list.
Post on r-sig-finance instead.
Cheers,
Bert
On Nov 20, 2017 11:25 PM, "Joe O" <joerodonnell at gmail.com> wrote:
Hello,
I'm trying to understand how to use the pbo package by looking at a
vignette. I'm curious about a part of the vignette that creates simulated
returns data. The package author transforms his simulated returns in a way
that I'm
2002 Mar 01
2
step, leaps, lasso, LSE or what?
...thods that are available for
selecting
variables in a regression without simply imposing my own bias (having "good
judgement"). The methods implimented in leaps and step and stepAIC seem to
fall into the general class of stepwise procedures. But these are commonly
condemmed for inducing overfitting.
In Hastie, Tibshirani and Friedman "The Elements of Statistical Learning"
chapter 3,
they describe a number of procedures that seem better. The use of
cross-validation
in the training stage presumably helps guard against overfitting. They seem
particularly favorable to shrinkage thro...
2013 Mar 01
2
solving x in a polynomial function
Hi there,
Does anyone know how I solve for x from a given y in a polynomial
function? Here's some example code:
## example data
a <- 1:10
b <- c(1, 2, 2.5, 3, 3.5, 4, 6, 7, 7.5, 8)
po.lm <- lm(a ~ b + I(b^2) + I(b^3) + I(b^4)); summary(po.lm)
(please ignore that the model is severely overfit; that's not the point).
Let's say I want to solve for the value b where a = 5.5.
Any thoughts? I did
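On the actual question: subtract the target from the intercept and hand the coefficients to polyroot(), or bracket the crossing with uniroot(). A sketch using the example's own fit:

```r
a <- 1:10
b <- c(1, 2, 2.5, 3, 3.5, 4, 6, 7, 7.5, 8)
po.lm <- lm(a ~ b + I(b^2) + I(b^3) + I(b^4))

target <- 5.5
# Coefficients in increasing order of power, with the target subtracted
# from the intercept: roots of this polynomial are the b where a == 5.5
cf <- coef(po.lm)
cf[1] <- cf[1] - target
roots <- polyroot(cf)

# Keep (numerically) real roots inside the observed range of b
real_roots <- Re(roots)[abs(Im(roots)) < 1e-8]
in_range <- real_roots[real_roots >= min(b) & real_roots <= max(b)]
in_range

# Equivalent bracketing approach via the fitted model:
f <- function(x) predict(po.lm, data.frame(b = x)) - target
r_uni <- uniroot(f, c(min(b), max(b)))$root
r_uni
```

polyroot() returns all (complex) roots at once, so it also reveals whether the curve crosses the target more than once inside the data range; uniroot() needs a sign change over the supplied interval.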
2018 May 04
0
RFC: Are auto-generated assertions a good practice?
I understand the overfit argument (but in most cases it just shows that a
unit test isn't minimized)...but I don't see how the complete
auto-generated assertions could be worse at detecting a miscompile than
incomplete manually-generated assertions?
The whole point of auto-generating complete checks is to catch
miscompiles/regressions sooner. Ie, before they get committed and result in
2018 May 04
2
RFC: Are auto-generated assertions a good practice?
Yep - all about balance.
The main risks are tests that overfit (golden files being the worst case -
checking that the entire output matches /exactly/ - this is what FileCheck
is intended to help avoid) and maintainability. In the case of the
autogenerated FileCheck lines I've seen so far - they seem like they still
walk a fairly good line of checking exactly what's intended. Though I
2017 Nov 21
0
Do I need to transform backtest returns before using pbo (probability of backtest overfitting) package functions?
Hi Eric,
Thank you, that helps a lot. If I'm understanding correctly, if I'm wanting
to use actual returns from backtests rather than simulated returns, I would
need to make sure my risk-adjusted return measure, sharpe ratio in this
case, matches up in scale with my returns (i.e. daily returns with daily
sharpe, monthly with monthly, etc). And I wouldn't need to transform
returns like the
2017 Nov 21
1
Do I need to transform backtest returns before using pbo (probability of backtest overfitting) package functions?
Correct
Sent from my iPhone
> On 21 Nov 2017, at 22:42, Joe O <joerodonnell at gmail.com> wrote:
>
> Hi Eric,
>
> Thank you, that helps a lot. If I'm understanding correctly, if I'm wanting to use actual returns from backtests rather than simulated returns, I would need to make sure my risk-adjusted return measure, sharpe ratio in this case, matches up in scale with
2018 May 04
0
RFC: Are auto-generated assertions a good practice?
...'s no option to produce only ranges without names (wouldn't really
> make sense)).
>
>
>> but I don't see how the complete auto-generated assertions could be worse
>> at detecting a miscompile than incomplete manually-generated assertions?
>>
>
> Generally overfitting wouldn't result in being bad at detecting failures,
> but in excess false positives - if things in the output unrelated to the issue
> change (in intentional/benign ways) & cause the tests to fail often &
> become a burden for the project.
>
> Not suggesting that's the c...
2017 Nov 21
2
Do I need to transform backtest returns before using pbo (probability of backtest overfitting) package functions?
[re-sending - previous email went out by accident before complete]
Hi Joe,
The centering and re-scaling is done for the purposes of his example, and
also to be consistent with his definition of the sharpe function.
In particular, note that the sharpe function has the rf (risk-free)
parameter with a default value of 0.03/252, i.e. an ANNUAL 3% rate
converted to a DAILY rate, expressed in decimal.
That