Displaying 20 results from an estimated 4000 matches similar to: "(1-1e-100)==1 true?"
2011 Mar 29
2
normal distribution and floating point traps (?): unexpected behavior
dear all,
here's a couple of questions that puzzled me in these last hours:
##### issue 1: qnorm(1-1e-100) != -qnorm(1e-100)
qnorm(1-1e-10) == -qnorm(1e-10)
# turns out to be FALSE. OK, I'm not a computer scientist,
# but I had a look at The R Inferno, so I write:
all.equal(qnorm(1-1e-10) , -qnorm(1e-10))
# which turns TRUE, as one would expect, but
all.equal(qnorm(1-1e-100) ,
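A minimal illustration of what is going on (my addition, not from the thread): in double precision 1 - 1e-100 rounds to exactly 1, so the tail probability is destroyed before qnorm() is ever called; asking for the upper tail directly avoids the cancellation.
(1 - 1e-100) == 1                    # TRUE: 1e-100 is far below machine epsilon
qnorm(1 - 1e-100)                    # Inf, because the argument is exactly 1
qnorm(1e-100, lower.tail = FALSE)    # about 21.27: the upper-tail quantile, computed directly
-qnorm(1e-100)                       # the same value, by symmetry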
2004 Aug 06
3
Bug in qnorm or pnorm?
I found the following strange behavior using qnorm() and pnorm():
> x<-8.21;x-qnorm(pnorm(x))
[1] 0.0004638484
> x<-8.22;x-qnorm(pnorm(x))
[1] 0.01046385
> x<-8.23;x-qnorm(pnorm(x))
[1] 0.02046385
> x<-8.24;x-qnorm(pnorm(x))
[1] 0.03046385
> x<-8.25;x-qnorm(pnorm(x))
[1] 0.04046385
> x<-8.26;x-qnorm(pnorm(x))
[1] 0.05046385
> x<-8.27;x-qnorm(pnorm(x))
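A sketch of the likely explanation (my reading, with approximate numbers): around x = 8.2, pnorm(x) is within a couple of ULPs of 1, so qnorm(pnorm(x)) can only land on a few representable values; keeping the calculation in the upper tail, optionally on the log scale, round-trips cleanly.
x <- 8.23
1 - pnorm(x)                               # ~1.1e-16: the tail information is essentially gone
p.up <- pnorm(x, lower.tail = FALSE)       # ~9.4e-17, computed with full precision
x - qnorm(p.up, lower.tail = FALSE)        # ~0: the round trip now works
lp <- pnorm(40, lower.tail = FALSE, log.p = TRUE)
qnorm(lp, lower.tail = FALSE, log.p = TRUE)   # ~40 recovered; the log scale reaches much further out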
2019 Jun 21
4
Calculation of e^{z^2/2} for a normal deviate z
You may want to look into using the log option to qnorm
e.g., in round figures:
> log(1e-300)
[1] -690.7755
> qnorm(-691, log=TRUE)
[1] -37.05315
> exp(37^2/2)
[1] 1.881797e+297
> exp(-37^2/2)
[1] 5.314068e-298
Notice that floating point representation cuts out at 1e+/-308 or so. If you want to go outside that range, you may need explicit manipulation of the log values. qnorm()
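A small sketch of the "explicit manipulation of the log values" idea (my illustration; the specific numbers are only for show): keep z^2/2 as a logarithm and exponentiate only quantities that stay within double range.
z <- qnorm(-720, log.p = TRUE)     # about -37.8; the tail is specified on the log scale
z^2 / 2                            # about 715: fine as a number ...
exp(z^2 / 2)                       # ... but e^{z^2/2} itself overflows to Inf
z2 <- qnorm(-730, log.p = TRUE)    # a second deviate, slightly further out
exp(z^2 / 2 - z2^2 / 2)            # a ratio of the two overflowing terms is still finite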
2011 Sep 03
3
question with uniroot function
Dear all,
I have the following problem with the uniroot function. I want to find
roots of the function "Fp2", which is defined below.
Fz <- function(z){0.8*pnorm(z)+p1*pnorm(z-u1)+(0.2-p1)*pnorm(z-u2)}
Fp <- function(t){(1-Fz(abs(qnorm(1-(t/2)))))+(Fz(-abs(qnorm(1-(t/2)))))}
Fp2 <- function(t) {Fp(t)-0.8*t/alpha}
th <- uniroot(Fp2, lower =0, upper =1,
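The call is cut off above, but one thing worth checking first (a hedged sketch; p1, u1, u2 and alpha are not shown in the excerpt, so placeholder values are used here): uniroot() needs a sign change on [lower, upper], and t = 0 is a trivial root of Fp2, so it helps to evaluate the endpoints and start the search just above 0.
p1 <- 0.1; u1 <- 1; u2 <- 2; alpha <- 0.05        # placeholders, not the poster's values
Fz  <- function(z) 0.8*pnorm(z) + p1*pnorm(z - u1) + (0.2 - p1)*pnorm(z - u2)
Fp  <- function(t) (1 - Fz(abs(qnorm(1 - t/2)))) + Fz(-abs(qnorm(1 - t/2)))
Fp2 <- function(t) Fp(t) - 0.8*t/alpha
Fp2(1e-8); Fp2(1)                                 # check for opposite signs at the endpoints
uniroot(Fp2, lower = 1e-8, upper = 1)$root        # t = 0 is a trivial root (Fp2(0) == 0), so avoid it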
2019 Jun 23
2
Calculation of e^{z^2/2} for a normal deviate z
I agree with many of the sentiments about the wisdom of computing very
small p-values (although the example below may win some kind of a prize:
I've seen people talking about p-values of the order of 10^(-2000), but
never 10^(-(10^8)) !). That said, there are several tricks for
getting more reasonable sums of very small probabilities. The first is
to scale the p-values by dividing the
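A sketch of the scaling idea in R (my illustration, not the poster's code): the standard log-sum-exp trick factors out the largest log-probability before exponentiating.
logsumexp <- function(lp) {
  m <- max(lp)                     # factor out the largest term ...
  m + log(sum(exp(lp - m)))        # ... so the exponentials cannot all underflow
}
lp <- pnorm(c(-40, -41, -42), log.p = TRUE)   # log p-values around -805 to -887
exp(lp)                                       # the plain probabilities underflow to 0
logsumexp(lp)                                 # but the log of their sum is fine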
2001 Jul 02
2
Shapiro-Wilk test
Hi,
does the Shapiro-Wilk test in R-1.3.0 work correctly? Maybe it does, but can
anybody tell me why the following sample doesn't give "W = 1" and
"p-value = 1":
R> x<-1:9/10;x
[1] 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9
R> shapiro.test(qnorm(x))
Shapiro-Wilk normality test
data: qnorm(x)
W = 0.9925, p-value = 0.9986
I can't imagine a sample being
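For what it is worth (my addition): W = 1 should not be expected here, because the W statistic compares the ordered sample with expected normal order statistics rather than with equally spaced quantiles, and even a "perfect" normal grid only approaches W = 1 as n grows.
shapiro.test(qnorm((1:9)/10))$statistic       # ~0.9925, as in the post
shapiro.test(qnorm(ppoints(9)))$statistic     # plotting positions: closer to 1, still not 1
shapiro.test(qnorm(ppoints(500)))$statistic   # approaches 1 as the sample size grows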
2012 Apr 24
2
Some Help Needed
Dear all,
I need to do some calculations; the code used is below. I get an
error message when I choose k to be large, say greater than 25.
The error message is
"Error in integrate(temp, lower = 0, upper = 1, k, x, rho, m) :
the integral is probably divergent".
Can anyone give some help on resolving this. Thanks.
Hannah
m <- 100
alpha <- 0.05
rho <- 0.1
F0
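The integrand "temp" is not shown in the excerpt, so this is only a generic sketch of two things that often help when integrate() gives up as k grows (a made-up, sharply peaked integrand stands in for the real one): raise the subdivision limit, and split the range where the integrand is concentrated.
temp <- function(u, k) dbeta(u, k + 1, k + 1)   # hypothetical stand-in, peaked near u = 0.5
k <- 30
integrate(temp, 0, 1, k = k, subdivisions = 2000L)    # more subdivisions than the default 100
integrate(temp, 0, 0.5, k = k)$value +
  integrate(temp, 0.5, 1, k = k)$value                # or split the interval at the peak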
2010 Oct 03
2
sampling from normal distribution
Hello
If I want to resample from the tails of a normal distribution, are these commands equivalent?
upper tail: qnorm(runif(n, pnorm(b), 1)) if b is an upper tail boundary
or
upper tail: qnorm((1-p) + p*runif(n)) if p is the probability of each interval (the observations are divided into intervals)
Regards
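A quick check of the equivalence (my sketch): with p = pnorm(b, lower.tail = FALSE), both forms map uniform draws into the upper tail, and for boundaries far in the tail a third form keeps more precision.
set.seed(1)
b <- 2; n <- 1e5
p <- pnorm(b, lower.tail = FALSE)               # upper-tail probability beyond b
x1 <- qnorm(runif(n, pnorm(b), 1))              # form 1
x2 <- qnorm((1 - p) + p * runif(n))             # form 2: the same distribution
x3 <- qnorm(p * runif(n), lower.tail = FALSE)   # form 3: numerically better when b is far out
summary(x1); summary(x2); summary(x3)           # all are N(0,1) truncated to (b, Inf)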
2019 Jun 21
4
Calculation of e^{z^2/2} for a normal deviate z
Hello,
Well, try it:
p <- .Machine$double.eps^seq(0.5, 1, by = 0.05)
z <- qnorm(p/2)
pnorm(z)
# [1] 7.450581e-09 1.228888e-09 2.026908e-10 3.343152e-11 5.514145e-12
# [6] 9.094947e-13 1.500107e-13 2.474254e-14 4.080996e-15 6.731134e-16
#[11] 1.110223e-16
p/2
# [1] 7.450581e-09 1.228888e-09 2.026908e-10 3.343152e-11 5.514145e-12
# [6] 9.094947e-13 1.500107e-13 2.474254e-14 4.080996e-15
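The check above stops at p around .Machine$double.eps; with the log.p argument the same round trip keeps working for far smaller p-values (my continuation of the example):
lp <- c(-50, -300, -700)       # log p-values, i.e. p from ~2e-22 down to ~1e-304
z  <- qnorm(lp, log.p = TRUE)
pnorm(z, log.p = TRUE)         # recovers lp
exp(lp)                        # the plain probabilities; the last is close to the underflow limit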
2009 Feb 06
1
16 digits and beyond? R64-bit a solution?
Hi,
I am working with some extremely small p-values and I want to capture
the corresponding quantiles.
I see the help file says:
'qnorm' is based on Wichura's algorithm AS 241 which provides
precise results up to about 16 digits.
What happens after the 16th digit?
If I am running R on a 64-bit server, can that improve the chances that
beyond the 16th digit I still have
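A 64-bit build does not change this: R's numeric type is a 64-bit double either way, with roughly 16 significant digits. What does help with extremely small p-values is passing them to qnorm() on the log scale (a small sketch with a made-up p-value of 1e-400, which cannot even be stored as a double):
log.p <- -400 * log(10)                    # log(1e-400), computed without underflow
qnorm(log.p, log.p = TRUE)                 # about -42.8: the corresponding quantile
pnorm(qnorm(log.p, log.p = TRUE), log.p = TRUE) / log(10)   # ~ -400, i.e. back to 1e-400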
2011 May 30
1
Error in minimizing an integrand using optim
Hi,
I'm not sure if my code itself is correct. Here's what I'm trying to do:
minimize the integral of a function of a Gaussian-distributed variable 'x' over
the interval qnorm(0.999) to Inf by changing the value of the parameter 'mu'; mu is
the shift in the mean of 'x'.
Code:
# x follows gaussian distribution
# fx2 to be minimized by changing values of mu
# integration to be done over
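The rest of the code is cut off, so the details below are guesswork: a hedged sketch of the overall shape, with a hypothetical objective standing in for the poster's fx2. Since mu is one-dimensional, optimize() with an explicit interval is usually a better fit than optim().
fx2 <- function(mu) {                      # hypothetical stand-in for the real objective
  integrate(function(x) dnorm(x, mean = mu, sd = 1),
            lower = qnorm(0.999), upper = Inf)$value
}
optimize(fx2, interval = c(-5, 5))         # 1-D minimisation over mu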
2010 Nov 12
4
dnorm and qnorm
Hello all,
I have a question about basic statistics. Given a PDF value of 0.328161,
how can I recover the value -0.625 in R? It is like reversing the dnorm
function, but I do not know how to do it in R.
> pdf.xb <- dnorm(-0.625)
> pdf.xb
[1] 0.328161
> qnorm(pdf.xb)
[1] -0.444997
> pnorm(pdf.xb)
[1] 0.628605
Many thanks,
Edwin
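qnorm() inverts pnorm() (the CDF), not dnorm() (the density), which is why qnorm(0.328161) does not return -0.625. The density can be inverted directly by solving exp(-x^2/2)/sqrt(2*pi) = d for x (my sketch; the sign of x cannot be recovered):
d <- dnorm(-0.625)                     # 0.328161...
x <- sqrt(-2 * log(sqrt(2 * pi) * d))  # invert the standard normal density
c(-x, x)                               # -0.625  0.625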
2012 Oct 17
1
how R implement qnorm()
I wonder if anyone knows the mathematical process by which R calculates the
quantile. The reason I ask is solely curiosity. I know the probability of a normal
distribution is calculated by integrating the Gaussian function, which
can be implemented easily (see code), while the calculation of the quantile
(or Zα) in R is a bit confusing, as it requires the inverse error function (X
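As the qnorm() help page (quoted in another thread above) says, R uses Wichura's algorithm AS 241, a rational approximation good to about 16 digits. Purely as an illustration of the relationship, the quantile can also be obtained by numerically inverting pnorm(), although that is not what R actually does:
my_qnorm <- function(p)               # illustration only: invert the CDF numerically
  uniroot(function(z) pnorm(z) - p, interval = c(-10, 10), tol = 1e-12)$root
c(my_qnorm(0.975), qnorm(0.975))      # both ~ 1.959964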
2006 Jan 31
1
approximation to ln \Phi(x)
I am using pnorm() with the log.p=T argument to get approximations to ln \Phi(x) and qnorm with the log.p=T argument to get estimates of \Phi^{-1}(exp(x)). What approximations are used in these two functions (I noticed in the source pnorm.c it doesn't look like Abramowitz and Stegun) and where can I find the citation?
Thanks,
Richard Morey
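The references are given on the ?pnorm and ?qnorm help pages (the qnorm page, quoted elsewhere in these threads, names Wichura's AS 241). Independently of the citation, the log.p = TRUE path is worth using because it is far more accurate than taking log() of the result (a quick check):
log(pnorm(-40))                # -Inf: pnorm(-40) has already underflowed to 0
pnorm(-40, log.p = TRUE)       # about -804.6, computed directly on the log scale
qnorm(pnorm(-40, log.p = TRUE), log.p = TRUE)   # -40 recovered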
2006 Dec 13
2
persp() problem
Dear list,
I have a problem with persp()
x <- u1data # first column in attached data
y <- u2data # second column in attached data
f <- function(x,y){qgev(pnorm(rhoF*qnorm(pnorm((qnorm(y)-rho2*qnorm(x)/sqrt(1-rho2^2))))
+sqrt(1-rhoF^2)*qnorm(0.95)),-0.3935119, 0.4227890,
0.2701648)}
z <- outer(x,y,f)
persp(x,y,z)
R will display:
"Error in persp.default(x, y,
2006 Oct 27
3
Power of test
What would be the R formula for a two-sided test?
I have a formula for a one-sided test:
powertest <- function(a, m0, m1, n, s) {
  t1 <- -qnorm(1 - a)
  num <- abs(m0 - m1) * sqrt(n)
  t2 <- num / s
  pow <- pnorm(t1 + t2)
  pow  # return the power explicitly
}
Would you please let me know if you know of one?
Thank you,
ej
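For a two-sided z-test the usual approximation uses alpha/2 in the critical value and adds the (normally negligible) probability of rejecting on the wrong side. A sketch in the same style as the one-sided function above:
powertest2 <- function(a, m0, m1, n, s) {
  crit  <- qnorm(1 - a/2)                  # two-sided critical value
  delta <- abs(m0 - m1) * sqrt(n) / s
  pnorm(-crit + delta) + pnorm(-crit - delta)
}
powertest2(0.05, m0 = 0, m1 = 0.5, n = 50, s = 1)   # ~0.94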
2019 Jun 24
2
Calculation of e^{z^2/2} for a normal deviate z
>>>>> William Dunlap via R-devel
>>>>> on Sun, 23 Jun 2019 10:34:47 -0700 writes:
> include/Rmath.h declares a set of 'logspace' functions for use at the C
> level. I don't think there are core R functions that call
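For completeness (my addition): the C-level helpers referred to here, logspace_add() and logspace_sub() in Rmath.h, compute log(exp(x) + exp(y)) and log(exp(x) - exp(y)) without leaving the log scale. An R-level equivalent of the first is a one-liner:
logspace_add <- function(a, b) pmax(a, b) + log1p(exp(-abs(a - b)))
logspace_add(pnorm(-40, log.p = TRUE), pnorm(-41, log.p = TRUE))   # ~ -804.6, no underflow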
2000 Jan 12
1
Usage of p/d/qnorm
Hello,
could you please help: I am looking for a way to formulate test accuracy
measures such as test sensitivity, specificity, predictive values, and
correct classification rate using p/d/qnorm. The tests' primary values
follow a bimodal distribution, which is modelled by a mixture of two normal
distributions:
p * dnorm((x - u1) / s1) / s1 +
(1 - p) * dnorm((x - u2) / s2) / s2
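One way to write the measures (my sketch; it assumes component 1 is the non-diseased group, component 2 the diseased group with u2 > u1, and p the weight of component 1, none of which is stated above): at a cutoff they are tail areas of the two components, which is where pnorm comes in.
u1 <- 0; s1 <- 1; u2 <- 3; s2 <- 1.5; p <- 0.7; cut <- 1.5   # illustrative values only
sens <- pnorm((cut - u2) / s2, lower.tail = FALSE)  # P(value > cut | diseased)
spec <- pnorm((cut - u1) / s1)                      # P(value <= cut | non-diseased)
prev <- 1 - p                                       # prevalence = weight of component 2
ppv  <- sens * prev / (sens * prev + (1 - spec) * (1 - prev))
npv  <- spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
ccr  <- sens * prev + spec * (1 - prev)             # correct classification rate
c(sensitivity = sens, specificity = spec, PPV = ppv, NPV = npv, CCR = ccr)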
2011 Aug 02
3
how to get the percentile of a number in a vector
I'm familiar with the quantile() command, but what if I have a specific
number and want to know its location (percentile) within a vector? I know that
for known distributions (for example the normal distribution) there are pnorm and
qnorm, but how can I do this with an arbitrary vector?
thanks in advance
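For an arbitrary sample the empirical CDF plays the role of pnorm and quantile() the role of qnorm (a quick sketch):
set.seed(1)
x <- rnorm(1000)
mean(x <= 0.5)        # percentile of the value 0.5 within x (empirical CDF)
ecdf(x)(0.5)          # the same, via ecdf()
quantile(x, 0.69)     # and back the other way: the value at a given percentile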
2005 Feb 21
4
rnorm??
I am wondering whether there is a bug in rnorm.
When generating rnorm(1000000) and counting
the cases > 4 and the cases < (-4) I get rather
unexpectedly low counts for the latter. The problem goes away
when using qnorm(runif(1000000)).
Fritz Scholz, PhD
Applied Statistics Group
Boeing Phantom Works
fritz.scholz at pss.boeing.com
425-865-3623
Tu/We 206-542-6545 (most likely)
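A quick way to quantify "unexpectedly low" (my sketch): compare the observed tail counts with the expected count n * pnorm(-4), about 32 per tail for n = 10^6, and with the inversion-sampling counts suggested in the post.
set.seed(1)
n <- 1e6
z <- rnorm(n)
c(upper = sum(z > 4), lower = sum(z < -4),
  expected = n * pnorm(-4))                 # ~31.7 expected in each tail
z2 <- qnorm(runif(n))                       # inversion sampling, as in the post
c(upper = sum(z2 > 4), lower = sum(z2 < -4))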