This is also strange:
qbeta <- function (p, shape1, shape2, ncp = 0, lower.tail = TRUE,
                   log.p = FALSE)
{
    if (missing(ncp))
        .Call(C_qbeta, p, shape1, shape2, lower.tail, log.p)
    else
        .Call(C_qnbeta, p, shape1, shape2, ncp, lower.tail, log.p)
}
Since the default value of the non-centrality parameter is 0, the logic above
looks wrong: when ncp = 0 is supplied explicitly, C_qnbeta is called rather
than C_qbeta.
Am I missing something?
Ravi
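The dispatch Ravi describes can be probed directly from R: the two branches
are reachable on the same inputs by either omitting ncp or passing ncp = 0
explicitly. As an editor's illustration (not part of the original message),
comparing the results shows the two algorithms agree closely:

```r
## Editor's illustration of the dispatch:
## omitting ncp takes the central branch (C_qbeta), while an explicit
## ncp = 0 takes the non-central branch (C_qnbeta).
q.central    <- qbeta(0.3, shape1 = 2, shape2 = 5)           # missing(ncp) is TRUE
q.noncentral <- qbeta(0.3, shape1 = 2, shape2 = 5, ncp = 0)  # missing(ncp) is FALSE

## The two algorithms should agree up to their respective accuracies:
stopifnot(all.equal(q.central, q.noncentral, tolerance = 1e-6))
```

Note that missing(ncp) reports whether the caller supplied the argument, not
what its value is, so the test is not redundant with the default value.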
________________________________
From: R-devel <r-devel-bounces at r-project.org> on behalf of J C Nash
<profjcnash at gmail.com>
Sent: Thursday, March 26, 2020 10:40:05 AM
To: Martin Maechler
Cc: r-devel at r-project.org
Subject: Re: [Rd] unstable corner of parameter space for qbeta?
Despite the need to focus on pbeta, I'm still willing to put in some effort.
But I find it really helps to have 2-3 others involved, since the questions back
and forth keep matters moving forward. Volunteers?
Thanks to Martin for detailed comments.
JN
On 2020-03-26 10:34 a.m., Martin Maechler wrote:
>>>>>> J C Nash
>>>>>>     on Thu, 26 Mar 2020 09:29:53 -0400 writes:
>
> > Given that a number of us are housebound, it might be a good time to try to
> > improve the approximation. It's not an area where I have much expertise, but in
> > looking at the qbeta.c code I see a lot of root-finding, where I do have some
> > background. However, I'm very reluctant to work alone on this, and will ask
> > interested others to email off-list. If there are others, I'll report back.
>
> Hi John.
> Yes, qbeta() {in its "main branches"} does zero finding, but
> zero finding of pbeta(...) - p*, and I tried to explain in my
> last e-mail that the real problem is that pbeta() itself is not
> accurate enough in some unstable corners ...
> The order of fixing should typically be
> 1) fix pbeta()
> 2) look at qbeta(), which then may not even need a fix, because its
>    problems may have been entirely a consequence of pbeta()'s
>    inaccuracies.
> And if there are cases where the qbeta() problems are not
> only pbeta's "fault", it is still true that the fixes that
> would still be needed crucially depend on the detailed
> working of the function whose zero(s) are sought, i.e., pbeta()
>
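Martin's description of qbeta()'s main branches, i.e. finding the zero of
pbeta(...) - p*, can be sketched with a generic root-finder (an editor's
illustration only; the actual qbeta.c code uses specialized starting values
and iterations, not uniroot()):

```r
## Editor's sketch: the quantile as the zero of pbeta(x, a, b) - p.
## This is NOT the actual qbeta.c algorithm, only the zero-finding idea.
p <- 0.9; a <- 2; b <- 3
root <- uniroot(function(x) pbeta(x, a, b) - p,
                interval = c(0, 1), tol = 1e-12)$root

## The generic root agrees with qbeta() here ...
stopifnot(all.equal(root, qbeta(p, a, b), tolerance = 1e-8))
## ... but any inaccuracy in pbeta() feeds straight into the root,
## which is why pbeta() needs fixing first.
```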
> > Ben: Do you have an idea of the parameter region where the approximation
> > is poor?
> > I think that it would be smart to focus on that to start with.
>
> ----------------------------
>
> Rmpfr matrix-/vector - products:
>
> > Martin: On a separate precision matter, did you get my query early in the
> > year about double-length accumulation of inner products of vectors in
> > Rmpfr? R-help more or less implied that Rmpfr does NOT use extra length.
> > I've been using David Smith's FM Fortran, where DOT_PRODUCT does use
> > double length, but it would be nice to have that in R. My attempts to
> > find "easy" workarounds have not been successful, but I'll admit that
> > other things took precedence.
>
> Well, the current development version of 'Rmpfr' on R-forge now
> contains facilities to enlarge the precision of the computations
> by a factor 'fPrec', with default 'fPrec = 1';
> notably, x %*% y (where `%*%` cannot take more than two
> arguments) has a counterpart matmult(x, y, ....), which allows
> more arguments, namely 'fPrec', or directly 'precBits';
> and of course there are crossprod() and tcrossprod(), which one
> should use when applicable; they also got the 'fPrec' and
> 'precBits' arguments.
>
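The interface Martin describes can be sketched as follows (a hedged sketch
against the R-forge development version of 'Rmpfr'; the function name
matmult() and the 'fPrec'/'precBits' arguments are taken from his message,
and their exact behaviour in a released version may differ):

```r
## Sketch only -- assumes the R-forge development 'Rmpfr' with matmult()
## and the 'fPrec' / 'precBits' arguments described above.
library(Rmpfr)

x <- mpfr(matrix(1:6 / 7, 2, 3), precBits = 128)
y <- mpfr(matrix(1:6 / 7, 3, 2), precBits = 128)

z0 <- x %*% y                        # all computations at 128 bits
z1 <- matmult(x, y, fPrec = 2)       # intermediate precision doubled
z2 <- matmult(x, y, precBits = 256)  # or request an absolute precision
zc <- crossprod(x, fPrec = 2)        # crossprod()/tcrossprod() take them too
```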
> {The %*% etc. precision increase still does not work optimally
> efficiency-wise, as it simply increases the precision of all
> computations by just increasing the precision of x and y (the
> inputs).}
>
> The whole matrix and matrix-vector arithmetic is still
> comparably slow in Rmpfr ... mostly because I valued human time
> (mine!) much higher than computer time in its implementation.
> That's one reason I would never want to double the precision
> everywhere, as it decreases speed even more, and often
> unnecessarily: doubling the accuracy is basically a "worst-case
> scenario" precaution.
>
> Martin
>
______________________________________________
R-devel at r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel
>>>>> Ravi Varadhan
>>>>>     on Thu, 26 Mar 2020 18:33:43 +0000 writes:

    > This is also strange:

    > qbeta <- function (p, shape1, shape2, ncp = 0, lower.tail = TRUE,
    >                    log.p = FALSE)
    > {
    >     if (missing(ncp))
    >         .Call(C_qbeta, p, shape1, shape2, lower.tail, log.p)
    >     else
    >         .Call(C_qnbeta, p, shape1, shape2, ncp, lower.tail, log.p)
    > }

    > Since the default value is 0 for non-centrality, it seems like the
    > logic above is wrong.  When ncp = 0, C_qnbeta would be called
    > rather than C_qbeta.
    > Am I missing something?

Yes.  This has been on purpose (forever) and I think the help page
mentions that - though probably a bit subtly.
The way it is now, one can use both algorithms to compute what in
principle should be the main thing.

    > Ravi