similar to: choose(n, k) as n approaches k

Displaying 20 results from an estimated 1100 matches similar to: "choose(n, k) as n approaches k"

2020 Jan 14
2
[R] choose(n, k) as n approaches k
> On 14 Jan 2020, at 16:21 , Duncan Murdoch <murdoch.duncan at gmail.com> wrote: > > On 14/01/2020 10:07 a.m., peter dalgaard wrote: >> Yep, that looks wrong (probably want to continue discussion over on R-devel) >> I think the culprit is here (in src/nmath/choose.c) >> if (k < k_small_max) { >> int j; >> if(n-k < k
2020 Jan 14
4
[R] choose(n, k) as n approaches k
OK, I see what you mean. But in those cases, we don't get the catastrophic failures from the if (k < 0) return 0.; if (k == 0) return 1.; /* else: k >= 1 */ part, because at that point k is sure to be integer, possibly after rounding. It is when n-k is approximately but not exactly zero and we should return 1, that we either return 0 (negative case) or n
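For readers skimming this thread, the corner case being described can be reproduced with three calls. This is a sketch restating the thread's report (the exact values depend on the R version in which the bug was present):

```r
## Near-integer corner case from this thread: when n - k is approximately
## but not exactly zero, the thread reports choose() returning 0 or ~n
## where the expected answer is ~1.
k <- 4
eps <- 1e-8           # small enough that k +/- eps still counts as a "near-integer"
choose(k, k)          # 1, as expected
choose(k - eps, k)    # n - k slightly negative: reported to return 0
choose(k + eps, k)    # n - k slightly positive: reported to return ~n instead of ~1
```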
2020 Jan 14
1
[R] choose(n, k) as n approaches k
Yep, that looks wrong (probably want to continue discussion over on R-devel) I think the culprit is here (in src/nmath/choose.c) if (k < k_small_max) { int j; if(n-k < k && n >= 0 && R_IS_INT(n)) k = n-k; /* <- Symmetry */ if (k < 0) return 0.; if (k == 0) return 1.; /* else: k >= 1 */ if n is a near-integer, then k
2020 Jan 15
1
[R] choose(n, k) as n approaches k
That crossed my mind too, but presumably someone designed choose() to handle the near-integer cases specially. Otherwise, we already have beta() -- you just need to remember what the connection is ;-). I would expect that it has to do with the binomial and negative binomial distributions, but I can't offhand picture a calculation that leads to integer k, n plus/minus a tiny numerical error
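For reference, the beta() connection hinted at above can be written out directly. A sketch (the helper name beta_choose is made up here, not something from the thread):

```r
## For the continuous extension, choose(n, k) = 1 / ((n + 1) * beta(n - k + 1, k + 1)),
## because gamma(n + 2) = (n + 1) * gamma(n + 1).
beta_choose <- function(n, k) 1 / ((n + 1) * beta(n - k + 1, k + 1))
beta_choose(5, 2)   # 10
choose(5, 2)        # 10
```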
2020 Jan 14
0
[R] choose(n, k) as n approaches k
On 14/01/2020 10:50 a.m., peter dalgaard wrote: > > >> On 14 Jan 2020, at 16:21 , Duncan Murdoch <murdoch.duncan at gmail.com> wrote: >> >> On 14/01/2020 10:07 a.m., peter dalgaard wrote: >>> Yep, that looks wrong (probably want to continue discussion over on R-devel) >>> I think the culprit is here (in src/nmath/choose.c) >>> if (k
2020 Jan 14
0
[R] choose(n, k) as n approaches k
On 14/01/2020 10:07 a.m., peter dalgaard wrote: > Yep, that looks wrong (probably want to continue discussion over on R-devel) > > I think the culprit is here (in src/nmath/choose.c) > > if (k < k_small_max) { > int j; > if(n-k < k && n >= 0 && R_IS_INT(n)) k = n-k; /* <- Symmetry */ > if (k < 0) return 0.;
2020 Jan 14
0
[R] choose(n, k) as n approaches k
At the risk of throwing oil on a fire: if we are talking about fractional values of choose(), doesn't it make sense to look to the gamma function for the correct analytic continuation? In particular, k<0 may not imply the function should evaluate to zero until we get k<=-1. Example: ``` r choose(5, 4) #> [1] 5 gchoose <- function(n, k) { gamma(n+1)/(gamma(n+1-k) * gamma(k+1))
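The excerpt above is cut off mid-definition; here is a completed, runnable version of the gamma-based continuation it sketches (the example calls are illustrative, not from the original message):

```r
## Analytic continuation of the binomial coefficient via the gamma function.
gchoose <- function(n, k) gamma(n + 1) / (gamma(n + 1 - k) * gamma(k + 1))
gchoose(5, 4)    # 5, matching choose(5, 4)
gchoose(5, 4.5)  # defined via gamma() rather than being forced to 0 or NaN
```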
2009 Dec 15
3
RFC: lchoose() vs lfactorial() etc
lgamma(x) and lfactorial(x) are defined to return ln|Gamma(x)| {= log(abs(gamma(x)))} or ln|Gamma(x+1)| respectively. Unfortunately, we haven't chosen the analogous definition for lchoose(). So, currently > lchoose(1/2, 1:10) [1] -0.6931472 -2.0794415 NaN -3.2425924 NaN -3.8869494 [7] NaN -4.3357508 NaN -4.6805913 Warning message: In
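For context, the ln|choose| analogue the RFC argues for can be sketched directly from lgamma(), which already returns ln|Gamma(x)| (the helper name labschoose is hypothetical):

```r
## log(abs(choose(n, k))) built from lgamma(); since |a*b/c| = |a|*|b|/|c|,
## the absolute values compose term by term.
labschoose <- function(n, k) lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
labschoose(1/2, 1:10)        # finite where lchoose(1/2, 1:10) shows NaN above
log(abs(choose(1/2, 1:10)))  # the same values (up to rounding), the long way round
```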
2008 Mar 19
1
choose incorrect for fractional and some negative integer values (PR#10766)
choose(-5,-7) uses integer arguments (as specified in Help) and returns a numeric value that is incorrect. Either the function or the documentation should be fixed. If the function is not fixed, a warning or an error would be helpful. The fact that choose(n,k) usually returns choose(n,round(k,0)) is not obvious from either the output or the documentation. I suggest issuing a warning when
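The calls at issue, restated as a sketch of the report's claims (behaviour may have changed in later R versions):

```r
## Per the 2008 report, choose() quietly behaves like choose(n, round(k, 0)) and
## accepts argument combinations the documentation does not cover.
choose(-5, -7)   # returns a numeric value the report considers incorrect
choose(5, 2.4)   # per the report, silently computed as choose(5, round(2.4, 0))
```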
2019 Sep 29
2
typeof(getOption("warn")) is "integer" instead of "double" in R unstable (2019-09-27 r77229)? Reproducible?
Hi, I have a failing unit test in my package tryCatchLog on the CRAN build infrastructure (https://cran.r-project.org/web/checks/check_results_tryCatchLog.html) with "R Under development (unstable) (2019-09-27 r77229)"; the unit test just ensures consistent behaviour of R (not of my package) as a precondition. The failing unit test is caused by >
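The precondition in question boils down to a one-line check; a sketch of the assertion (not the package's actual test code):

```r
## Historically TRUE; reported to fail under r77229, where the default
## value of options("warn") became an integer rather than a double.
stopifnot(typeof(getOption("warn")) == "double")
```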
2019 Sep 29
2
typeof(getOption("warn")) is "integer" instead of "double" in R unstable (2019-09-27 r77229)? Reproducible?
Thanks a lot for pointing out the reason (and yes, I am testing a bit too stringently in this case - it's my old testing disease ;-) For other readers: the R-devel NEWS is a good source for finding the reasons behind such changes: https://stat.ethz.ch/R-manual/R-devel/doc/html/NEWS.html On Sun, 2019-09-29 at 08:33 -0400, Duncan Murdoch wrote: > On 29/09/2019 7:55 a.m., nospam at altfeld-im.de wrote:
2020 Aug 10
2
qnbinom with small size is slow
Thanks Ben for verifying the issue. It is always reassuring to hear when others can reproduce the problem. I wrote a small patch that fixes the issue (https://github.com/r-devel/r-svn/pull/11): diff --git a/src/nmath/qnbinom.c b/src/nmath/qnbinom.c index b313ce56b2..d2e8d98759 100644 --- a/src/nmath/qnbinom.c +++ b/src/nmath/qnbinom.c @@ -104,6 +104,7 @@ double qnbinom(double p, double size,
2000 Dec 13
0
choose(n, k) for k>n: An inconsistency?
It took me by surprise to find that choose(4,5) delivers [1] NaN Warning message: NaNs produced in: choose(n, k) If we look at choose(4,5) as the number of ways of choosing 5 objects from 4, I would have expected 0 as the result. Furthermore, dhyper(5,4,6,5) does deliver 0, and this is essentially equivalent to choose(4,5)*choose(6,0)/choose(10,5).
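The consistency argument from the message, written out as a sketch (the NaN result is what the 2000 message reports; with the choose.c logic quoted earlier in this listing, k > n would instead return 0):

```r
## Reported in 2000: choose(4, 5) gave NaN with a warning, while the
## hypergeometric density treats "drawing 5 successes from 4" as impossible.
choose(4, 5)                                 # NaN per the 2000 message
dhyper(5, 4, 6, 5)                           # 0
choose(4, 5) * choose(6, 0) / choose(10, 5)  # would be 0 if choose(4, 5) were 0
```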
2020 Aug 07
2
qnbinom with small size is slow
Hi all, I recently noticed that `qnbinom()` can take a long time to calculate a result if the `size` argument is very small. For example qnbinom(0.5, mu = 3, size = 1e-10) takes ~30 seconds on my computer. I used gdb to step through the qnbinom.c implementation and noticed that in line 106 (https://github.com/wch/r-source/blob/f8d4d7d48051860cc695b99db9be9cf439aee743/src/nmath/qnbinom.c#L106)
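The reproduction from the message, for anyone who wants to time it themselves (the ~30 second figure is from the original report and will vary by machine and R version; the patch in the pull request above targets this case):

```r
## Reported slowdown: an extremely small `size` makes the quantile
## computation in qnbinom.c take a very long time.
system.time(qnbinom(0.5, mu = 3, size = 1e-10))
```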
2010 Jun 11
0
[LLVMdev] approaches to profiling jitted code?
Can anyone comment on how they are approaching profiling of JITted code in their LLVM backends? For Linux and Windows, I'm wondering which tools are best to integrate with (be it gprof or callgrind on Linux, or VTune on either platform). Thanks b.
2005 Aug 11
0
clustering or homogeneity approaches?
Hi, there: I have a question on the following dataset > rbind(t2[which(t4>0.3),][1:3,], t2[1:3,]) # don't worry about what this line means [,1] [,2] [,3] [,4] [,5] [1,] 34.216166 96.928587 330.125990 330.183222 330.201215 [2,] 2.819183 8.134491 8.275841 8.525256 8.828448 [3,] 2.819183 7.541680 7.550333 8.374636 8.690998 [4,] 4.672551
2007 Apr 20
1
Approaches of Frailty estimation: coxme vs coxph(...frailty(id, dist='gauss'))
Dear List, In documents (Therneau, 2003: On mixed-effect Cox models, ...), as far as I came to know, coxme penalizes the partial likelihood (Ripatti, Palmgren, 2000), whereas frailtyPenal (in the frailtypack package) uses the penalized full-likelihood approach (Rondeau et al, 2003). How, then, do coxme and coxph(...frailty(id, dist='gauss')) differ? Just in the coding algorithm, or in
2011 Feb 01
0
general question on approaches to getting data from data providers
My question, buried in this rant, is "is there a mailing list or other means for identifying sites with information likely to be important to many R users, where the data is difficult to obtain due to the site's choice of technology?" Quite often, people here ask questions about scraping HTML to get various types of "public" information (public being a bit debatable when
2013 Mar 20
0
Hands-on Webinar: Advances in Regression: Modern Ensemble and Data Mining Approaches (no charge)
Hands-on Webinar (no charge) Advances in Regression: Modern Ensemble and Data Mining Approaches **Part of the series: The Evolution of Regression from Classical Linear Regression to Modern Ensembles Register Now for Parts 3, 4: https://www1.gotomeeting.com/register/500959705 **All registrants will automatically receive access to recordings of Parts 1 & 2. Course Abstract: Overcoming Linear
2004 Jan 29
0
Any approaches to server usage reporting/metrics
Has anyone seen or implemented SAMBA server usage reporting? The sort of thing that can be reported to management. Perhaps even any ideas on how to approach this, even if it would need to be built in-house? My intention is to show "how much" the server is used, such as number of users, number of connections, etc. As you can imagine, I'd like to find a way to show what we get