Displaying 20 results from an estimated 4791 matches for "approximates".
2003 May 08
2
approximation of CDF
Hi all,
is there any package in R capable of smooth approximation of a CDF
based on a given sample?
(Thus, I am not speaking about ecdf.)
In particular, I very much expect the approximation to satisfy the
property: f(x0) <= f(x1) for x0 < x1, where x0 and x1 belong to the
range of the given sample.
A polynomial approximation would be OK for me as well.
P.S.
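A minimal sketch (not part of the original question), with a hypothetical sample x: numerically integrating a kernel density estimate gives a smooth, nondecreasing approximation of the CDF.
set.seed(1)
x <- rnorm(200)                       # hypothetical sample
d <- density(x)                       # smooth kernel density estimate on a fine grid
cdf <- cumsum(d$y) * diff(d$x)[1]     # numerical integration of the density
cdf <- cdf / max(cdf)                 # normalise so the approximation ends at 1
Fhat <- approxfun(d$x, cdf)           # Fhat(x0) <= Fhat(x1) for x0 < x1 by construction
Fhat(0)                               # close to 0.5 for a standard normal sample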
2011 Feb 18
3
Confidence Intervals on Standard Curve
Hi, I wonder if anyone could advise me on this:
I've been trying to make a standard curve in R with lm() from some
standards measured on a spectrophotometer, so that I can express the
curve as a formula and obtain values for my treated samples by plugging
readings into the formula, instead of judging things by eye with a
curve drawn by hand.
It is a curve and so I used the
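A minimal sketch (not from the original message), with made-up calibration data: predict() on an lm() fit returns confidence intervals for the fitted standard curve.
conc <- c(0, 5, 10, 20, 40, 80)                       # hypothetical standards
absorb <- c(0.02, 0.11, 0.20, 0.38, 0.70, 1.25)       # hypothetical readings
fit <- lm(absorb ~ poly(conc, 2))                     # quadratic standard curve
newd <- data.frame(conc = seq(0, 80, length = 100))
ci <- predict(fit, newd, interval = "confidence")     # columns: fit, lwr, upr
matplot(newd$conc, ci, type = "l", lty = c(1, 2, 2),
        xlab = "concentration", ylab = "absorbance")
points(conc, absorb)                                  # the measured standards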
2003 May 08
1
AW: approximation of CDF
> Almost any method of fitting a density estimate would work, on
> integrating (numerically) the result.
That is a nice idea as far as the monotonicity property is concerned,
which would be obtained automatically, but I am going to use the
results of the approximation analytically.
> In particular, look at package polspline, where
> p(old)logspline does the integration for you.
thank you, I am going to
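A minimal sketch (not from the original message) of the polspline suggestion above; the sample is hypothetical.
# install.packages("polspline")
library(polspline)
set.seed(1)
x <- rnorm(200)              # hypothetical sample
fit <- logspline(x)          # fit a logspline density to the sample
plogspline(0, fit)           # smooth, monotone CDF approximation evaluated at 0
qlogspline(0.5, fit)         # its inverse: the median of the fitted distribution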
2017 Sep 25
5
bowed linear approximations
Dear Rich,
Assuming that I understand what you want to do, try adding the following to your script (which, by the way, is more complicated than it needs to be):
xx <- 10:50                            # grid of x values for prediction
m <- lm(y ~ x)                         # linear fit of y on x
yy <- predict(m, data.frame(x=xx))
lines(spline(xx, yy), col="blue")      # draw the fitted values as a curve
m <- lm(y ~ log(x))                    # fit that is linear in log(x)
yy <- predict(m, data.frame(x=xx))
points(xx, yy, col="magenta")
The first set of
2018 Mar 17
2
Clang executable sizes and build stats
Hi all,
I recently did a run where I built clang executables on FreeBSD 12-CURRENT [1], from trunk r250000 (2015-10-11) all through r327700 (2018-03-16), with increments of 100 revisions. This is mainly meant as an archive, for easily doing bisections, but there are also some interesting statistics.
From r250000 through r327700:
* the total (stripped) executable size grew by approximately 43%
*
2002 Sep 11
1
rational approximations to the normal cdf
In the R source, nmath/pnorm.c contains the
code for a rational function approximation
for the normal cdf. These constants are listed:
const double a[5] = {
    2.2352520354606839287,
    161.02823106855587881,
    1067.6894854603709582,
    18154.981253343561249,
    0.065682337918207449113
};
The source file cites a paper by Cody (1969)
and states that these
2017 Sep 26
0
bowed linear approximations
Hi Rich,
If I understand your comment about "uniformly distributed along the log=x
axis" then I think John's (second) set of commands needs a change to the
definition of xx, as follows:
xx <- exp(seq(from = log(min(x)), to = log(max(x)), length = 50))  # 50 points evenly spaced on the log scale
m <- lm(y ~ log(x))
yy <- predict(m, data.frame(x = xx))
points(xx, yy, col = "red")
HTH,
Eric
On Mon, Sep 25, 2017 at
2017 Sep 25
0
bowed linear approximations
Hello,
Please run the following code snippet and note the resulting plot:
x <- c(10, 50)
y <- c(0.9444483, 0.7680123)
plot(x,y,type="b",log="x")
for (i in 1:50) {
  xx <- exp(runif(1, log(min(x)), log(max(x))))      # random x, uniform on the log scale
  yy <- approx(x, y, xout = xx, method = "linear")   # linear interpolation on the original scale
  points(xx, yy$y)
}
Notice the "log=x" plot parameter and the resulting "bow" in the
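A minimal sketch (not from the thread), reusing x and y from the snippet above: interpolating against log(x) instead places the interpolated points on the straight segment that the log="x" plot draws.
xx2 <- exp(runif(50, log(min(x)), log(max(x))))          # 50 random points, uniform on the log scale
yy2 <- approx(log(x), y, xout = log(xx2), method = "linear")
points(xx2, yy2$y, col = "red", pch = 3)                 # these fall on the plotted segment, no bow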
2011 Jul 22
3
Cox model approximations (was "comparing SAS and R survival....)
For time scales that are truly discrete, Cox proposed the "exact partial
likelihood". I call that the "exact" method and SAS calls it the
"discrete" method. What we compute is precisely the same; however, they
use a clever algorithm which is faster. To make things even more
confusing, Prentice introduced an "exact marginal likelihood" which is
not
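A minimal sketch (not from the original message), using the survival package's built-in lung data, of the tie-handling options being discussed.
library(survival)
f_efron   <- coxph(Surv(time, status) ~ age + sex, data = lung)                   # Efron approximation (default)
f_breslow <- coxph(Surv(time, status) ~ age + sex, data = lung, ties = "breslow") # Breslow approximation
f_exact   <- coxph(Surv(time, status) ~ age + sex, data = lung, ties = "exact")   # Cox's exact partial likelihood
rbind(efron = coef(f_efron), breslow = coef(f_breslow), exact = coef(f_exact))    # compare the coefficients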
2008 Dec 23
1
Approximate Entropy?
Dear guRus,
is there a package that calculates the Approximate Entropy (ApEn) of a
time series?
RSiteSearch only gave me a similar question in 2004, which appears not
to have been answered:
http://finzi.psych.upenn.edu/R/Rhelp02a/archive/28830.html
RSeek.org didn't yield any results at all.
Happy holidays (where appropriate),
Stephan
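A minimal sketch (not an answer from the thread) that computes ApEn directly from its definition; the series is hypothetical and the code is O(N^2), so it is only meant for short series.
apen <- function(ts, m = 2, r = 0.2 * sd(ts)) {
  phi <- function(dim) {
    emb <- embed(ts, dim)                                # all length-'dim' windows of the series
    n <- nrow(emb)
    C <- sapply(seq_len(n), function(i) {
      d <- apply(abs(sweep(emb, 2, emb[i, ])), 1, max)   # Chebyshev distance to window i
      sum(d <= r) / n                                    # fraction of windows within tolerance r
    })
    mean(log(C))
  }
  phi(m) - phi(m + 1)                                    # ApEn(m, r, N)
}
set.seed(1)
apen(rnorm(200))                                         # larger values indicate a more irregular series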
2010 May 17
2
best polynomial approximation
Dear R-users,
I learned today that there exists an interesting topic in numerical
analysis named "best polynomial approximation" (BSA). Given a function
f, the BSA of degree k, say pk, is the polynomial such that
pk = arginf_p sup(|f - p|), the infimum taken over polynomials p of degree k.
Although, given some regularity conditions on f, pk is unique, pk IS NOT
calculated with least squares. A quick google tour shows a rich field of
research
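A minimal sketch (not from the original message) of a standard stand-in: interpolating at Chebyshev nodes gives a near-minimax polynomial without any least-squares fitting. The target function and interval are hypothetical.
f <- function(x) exp(x)                                   # hypothetical target function on [-1, 1]
k <- 5                                                    # polynomial degree
nodes <- cos((2 * (0:k) + 1) / (2 * (k + 1)) * pi)        # the k+1 Chebyshev nodes
p <- lm(f(nodes) ~ poly(nodes, k, raw = TRUE))            # exact interpolation: k+1 points, k+1 coefficients
xs <- seq(-1, 1, length = 500)
max(abs(f(xs) - predict(p, data.frame(nodes = xs))))      # sup-norm error, close to the minimax error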
2015 Jan 12
3
[LLVMdev] NP-hard problems in the LLVM optimizer?
Hi all.
I've heard a couple of times that some of the problems solved by various
passes in the optimizer are indeed NP-hard, even though the instances
are small enough to be tractable (and solved very quickly).
Is this true? If so, which are these problems?
Register allocation? Instruction scheduling?
Are they solved exactly or by approximations?
Or not solved at all (the need to solve them is
2006 Jan 31
1
approximation to ln \Phi(x)
I am using pnorm() with the log.p=T argument to get approximations to ln \Phi(x), and qnorm() with the log.p=T argument to get estimates of \Phi^{-1}(exp(x)). What approximations are used in these two functions (I noticed in the source pnorm.c that it doesn't look like Abramowitz and Stegun), and where can I find the citation?
Thanks,
Richard Morey
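A minimal sketch (not from the original message) of the two calls being asked about.
pnorm(-10, log.p = TRUE)    # ln Phi(-10), computed accurately far into the lower tail
qnorm(-50, log.p = TRUE)    # Phi^{-1}(exp(-50)), i.e. the quantile for a log-probability of -50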
2018 Mar 17
0
[cfe-dev] Clang executable sizes and build stats
Thanks for raising this. This is something we've recently been looking at
too at Sony, as over the course of PS4's lifetime so far we've seen our
clang executable on Windows approximately double in size, which isn't ideal
for things like distributed build systems. A graph of clang.exe size on
our internal staging branch matches yours closely, with it being more of a
death by a
2018 Mar 17
2
[cfe-dev] Clang executable sizes and build stats
I'm sure the x86 scheduler models are causing bloat. Every time a single
instruction appears on a line by itself like this in a scheduler model:
def: InstRW<[SBWriteResGroup2], (instregex "ANDNPDrr")>;
It causes that instruction to be its own group in the generated output. And
it's replicated for each CPU. We should look into making better use of regular
expressions or taking
2023 Apr 21
1
Confusion about ks.test() handling of ties and exact vs approximate results
Hello,
Today I was investigating ks.test() with two numerical arguments (x and y) and was left a bit confused about the policy behind handling ties.
I might be missing something, so sorry in advance, but here is what confuses me:
The documentation states: "The presence of ties always generates a warning, since continuous distributions do not generate them"
But when I run a test with
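A minimal sketch (not from the original message) that reproduces the warning being quoted; the data are made up so that ties are guaranteed.
set.seed(1)
x <- sample(1:5, 30, replace = TRUE)   # discrete values, so ties within and across samples
y <- sample(1:5, 30, replace = TRUE)
ks.test(x, y)                          # warns about ties; the reported p-value is approximate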
2009 Apr 06
2
approximation function
Hi,
Given a set of values (non-time-series data), what approximation functions could be used to determine the trend of the values?
Cheers,
Carol
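A minimal sketch (not an answer from the thread): a lowess smoother against the observation index is one simple way to show the trend of a plain sequence of values; the data here are made up.
set.seed(1)
v <- cumsum(rnorm(100))                # hypothetical sequence of values
idx <- seq_along(v)
plot(idx, v)
lines(lowess(idx, v), col = "red")     # smooth of value against position, i.e. the trend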
2010 Nov 01
2
Post-processing of approximated irregular time series
Hi all,
Issue: I merged two zoo objects (a regular and an irregular one). After the merge I used the function 'na.approx' to also obtain values at the resolution of the regular time series.
Problem: After the approximation, some rows at the beginning or the end of the zoo object disappear due to the 'na.approx' algorithm. Now I just want to have all the rows of the regular time series
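A minimal sketch (not from the original message), with a toy zoo series, of two ways to keep the leading or trailing rows that na.approx() would otherwise drop.
library(zoo)
z <- zoo(c(NA, 1, NA, 3, NA), 1:5)     # hypothetical merged series with NAs at the ends
na.approx(z)                           # rows 1 and 5 disappear
na.approx(z, na.rm = FALSE)            # all rows kept; the ends stay NA
na.approx(z, rule = 2)                 # all rows kept; the ends take the nearest interior value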
2017 Sep 26
0
bowed linear approximations
Dear Rich,
I think that it's generally a bad idea to give statistical (as opposed to simply technical) advice by email without knowing the context of the research. I think that you'd do well to seek help from a statistician, and not just do what I suggest below.
Interpolating the data only makes sense if there's no random component to the response (mag in your data). Otherwise, it
2011 May 15
5
Question on approximations of full logistic regression model
Hi,
I am trying to construct a logistic regression model from my data (104
patients and 25 events). I built a full model consisting of five
predictors with the use of penalization via the rms package (lrm, pentrace,
etc.) because of the events-per-variable issue. Then I tried to approximate
the full model by a step-down technique, predicting L from all of the
component variables using ordinary least squares
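A minimal sketch (not from the original message) of the approximation step described above, with a hypothetical data frame d, outcome event and predictors x1..x5; lrm, ols and fastbw are from the rms package.
library(rms)
full <- lrm(event ~ x1 + x2 + x3 + x4 + x5, data = d, x = TRUE, y = TRUE)  # full model
d$L <- predict(full)                               # linear predictor L of the full fit
app <- ols(L ~ x1 + x2 + x3 + x4 + x5, data = d)   # approximate the full model with OLS
fastbw(app)                                        # step-down: which terms can be dropped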