similar to: The function cummax() seems to have a bug.

Displaying 20 results from an estimated 700 matches similar to: "The function cummax() seems to have a bug."

2014 Jul 14
2
cummax / cummin for complex numbers
Dear all, in R 3.1.0, this is happening:
> cummin(c(1+1i,2-3i,4+5i))
Error in cummin(c(1 + (0+1i), 2 - (0+3i), 4 + (0+5i))) : 'cummax' not defined for complex numbers
> cummax(c(1+1i,2-3i,4+5i))
Error in cummax(c(1 + (0+1i), 2 - (0+3i), 4 + (0+5i))) : 'cummin' not defined for complex numbers
It may be fixed in R-devel, but I thought I'd mention it to make sure
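Since complex numbers have no total order, cummax() and cummin() cannot be defined for them directly (note also that the two error messages above are crossed). If ordering by modulus happens to make sense for the data, which is an assumption and not anything base R provides, a running extreme can still be tracked by hand:

```r
z <- c(1+1i, 2-3i, 4+5i)
cummax(Mod(z))  # running maximum of the moduli
## index of the element attaining each running maximum, ordered by modulus
idx <- Reduce(function(i, j) if (Mod(z[j]) > Mod(z[i])) j else i,
              seq_along(z), accumulate = TRUE)
z[idx]
```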
2015 Sep 20
2
Long vectors: Missing values and R_xlen_t?
Is there a missing value constant defined for R_xlen_t, cf. NA_INTEGER (== R_NaInt == INT_MIN) for int(eger)? If not, is it correct to assume that missing values should be taken care of/tested for before coercing from int or double? Thank you, Henrik
2011 Oct 04
1
a question about sort and BH
Hi, I have two questions I want to ask. 1. If I have a matrix like this, and I want to figure out the rows whose values in the 3rd column are less than 0.05, how can I do it with R?
hsa-let-7a--MBTD1 0.528239197 2.41E-05
hsa-let-7a--APOBEC1 0.507869409 5.51E-05
hsa-let-7a--PAPOLA 0.470451884 0.000221774
hsa-let-7a--NF2 0.469280186 0.000231065
hsa-let-7a--SLC17A5
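For the first question, plain logical subsetting is enough; a minimal sketch, assuming the three columns live in a data frame (a character matrix would need as.numeric() on the p-value column first):

```r
d <- data.frame(pair = c("hsa-let-7a--MBTD1", "hsa-let-7a--APOBEC1",
                         "hsa-let-7a--PAPOLA", "hsa-let-7a--NF2"),
                cor  = c(0.528239197, 0.507869409, 0.470451884, 0.469280186),
                p    = c(2.41e-05, 5.51e-05, 0.000221774, 0.000231065))
d[d$p < 0.05, ]  # keep rows whose 3rd column is below 0.05 (here, all of them)
```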
2017 Jan 20
1
NaN behavior of cumsum
Hi! I noticed that cumsum behaves differently from the other cumulative functions with respect to NaN values:
> values <- c(1,2,NaN,1)
> for (f in c(cumsum, cumprod, cummin, cummax)) print(f(values))
[1] 1 3 NA NA
[1] 1 2 NaN NaN
[1] 1 1 NaN NaN
[1] 1 2 NaN NaN
The reason is that cumsum (in cum.c:33) contains an explicit check for ISNAN. Is that intentional? IMHO, ISNA would be better
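The distinction matters because at the C level ISNAN() is true for both NA and NaN, while ISNA() is true only for NA; the same asymmetry is visible at the R level:

```r
values <- c(1, 2, NaN, 1)
cumsum(values)   # 1 3 NA NA   (as reported: the ISNAN check replaces NaN with NA)
cumprod(values)  # 1 2 NaN NaN (arithmetic propagates NaN unchanged)
is.na(NaN)       # TRUE : NaN counts as a missing value
is.nan(NA)       # FALSE: NA is not NaN
```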
2005 May 13
1
Lowest data level since DateX
Hello, I'm dealing with financial time series. I'm trying to find out X in this sentence: "The most recent close is the lowest level since X (date)." Here's an example of what I'm looking for:
library(fBasics)
data(DowJones30)
tail(DowJones30[,1:5], n=10)
I need to come up with a vector that would look like this AA AXP T ... 2000-12-21
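One way to compute X for a single series: find the latest earlier day whose close was below the most recent close. A minimal sketch, assuming a plain numeric vector of closes and a parallel vector of dates (not the fBasics/timeSeries API):

```r
lowest_since <- function(close, dates) {
  n <- length(close)
  below <- which(close[-n] < close[n])       # earlier closes below today's
  if (length(below)) dates[max(below) + 1]   # day after the last lower close
  else dates[1]                              # lowest over the whole sample
}
```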
2010 May 05
1
testInstalledBasic question
Hi, I'm currently in the process of writing an R-installation SOP for my company. As part of that process I'm using the recommendations from the 'R Installation and Administration' document, section 3.2, "Testing an installation". This is done on an XP machine, using the latest binary of 2.11.0. The binary is downloaded and then installed from the installer. I then
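For reference, the tests described in section 3.2 are driven from the tools package (assuming the test files were included at install time, which the Windows installer makes optional):

```r
library(tools)
testInstalledBasic("basic")            # the basic test set
testInstalledPackages(scope = "base")  # tests for the base packages
```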
2013 Jul 12
1
robustbase compilation problem: probably boneheaded? maybe 32-bit?
With a recent SVN build (R Under development (unstable) (2013-07-10 r63264) -- "Unsuffered Consequences"), I'm having trouble installing the robustbase package. The bottom line is that I *think* it's a 32-bit-system problem, but I could easily be mistaken. robustbase is passing its package checks: http://cran.r-project.org/web/checks/check_results_robustbase.html ... but from
2018 Apr 19
2
R Bug: write.table for matrix of more than 2,147,483,648 elements
On 18/04/2018 5:08 PM, Tousey, Colton wrote:
> Hello,
> I want to report a bug in R that is limiting my ability to export a matrix with write.csv or write.table with over 2,147,483,648 elements (C's int limit). I found this bug already reported before: https://bugs.r-project.org/bugzilla/show_bug.cgi?id=17182. However, there appears to be no solution or fixes in upcoming
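Until the limit is lifted in R itself, writing the matrix in row chunks sidesteps it, since each call then stays below the int limit. A workaround sketch (write_big and its chunk size are illustrative names, not a proposed fix):

```r
write_big <- function(m, file, chunk_rows = 1e6L) {
  con <- file(file, open = "wt")
  on.exit(close(con))
  for (start in seq(1L, nrow(m), by = chunk_rows)) {
    rows <- start:min(start + chunk_rows - 1L, nrow(m))
    write.table(m[rows, , drop = FALSE], con, sep = ",",
                row.names = FALSE, col.names = (start == 1L))
  }
}
```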
2018 Apr 19
3
R Bug: write.table for matrix of more than 2,147,483,648 elements
On 19/04/2018 at 09:30, Tomas Kalibera wrote:
> On 04/19/2018 02:06 AM, Duncan Murdoch wrote:
>> On 18/04/2018 5:08 PM, Tousey, Colton wrote:
>>> Hello,
>>> I want to report a bug in R that is limiting my ability to export a matrix with write.csv or write.table with over 2,147,483,648 elements (C's int limit). I found
2012 Nov 15
1
bug with mapply() on an S4 object
Hi, Starting with ordinary vectors, so we know what to expect:
> mapply(function(x, y) {x * y}, 101:106, rep(1:3, 2))
[1] 101 204 309 104 210 318
> mapply(function(x, y) {x * y}, 101:106, 1:3)
[1] 101 204 309 104 210 318
Now with an S4 object:
setClass("A", representation(aa="integer"))
a <- new("A", aa=101:106)
> length(a)
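The excerpt breaks off at length(a). For mapply() to iterate over such an object it needs at least working length() and [[ methods, so a fuller reproduction would look like the sketch below; whether base mapply() actually honours these S4 methods is precisely what the report is about:

```r
setClass("A", representation(aa = "integer"))
setMethod("length", "A", function(x) length(x@aa))
setMethod("[[", "A", function(x, i, j, ...) x@aa[[i]])
a <- new("A", aa = 101:106)
length(a)                                      # 6
mapply(function(x, y) x * y, a, rep(1:3, 2))   # same as the vector case?
```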
2019 May 01
3
anyNA() performance on vectors of POSIXct
Inside the anyNA() function, the legacy any(is.na()) code is used if x is an OBJECT(). A vector of POSIXct is an OBJECT(), but it also has TYPEOF(x) == REALSXP, so it skips ITERATE_BY_REGION, which is typically 5x faster in my testing. Is the OBJECT() condition really necessary, or could it be moved after the switch() for the individual TYPEOF(x)
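The cost is easy to see from the R level; a rough timing sketch (vector size arbitrary):

```r
x <- Sys.time() + seq_len(1e7)    # POSIXct: OBJECT() is true, TYPEOF is REALSXP
system.time(anyNA(x))             # falls back to any(is.na(x))
system.time(anyNA(unclass(x)))    # plain double: takes the fast path
```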
2019 Jan 05
1
unsorted - suggestion for performance improvement and ALTREP support for POSIXct
I believe the performance of isUnsorted() in sort.c could be improved by calling REAL() once (outside of the for loop), rather than calling it twice inside the loop. As an aside, it is implemented in the faster way in doSort() (sort.c line 401). The example below shows, for vectors of doubles, the performance improvement from moving REAL() outside the for loop. # example as implemented in
2015 May 06
1
Shouldn't vector indexing with negative out-of-range index give an error?
On Wed, May 6, 2015 at 1:33 AM, Martin Maechler <maechler at lynne.stat.math.ethz.ch> wrote:
>>>>> John Chambers <jmc at stat.stanford.edu>
>>>>> on Tue, 5 May 2015 08:39:46 -0700 writes:
> > When someone suggests that we "might have had a reason" for some peculiarity in the original S, my usual reaction is "Or
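For context, the behaviour under discussion:

```r
x <- c(10, 20, 30)
x[-5]     # 10 20 30 -- out-of-range negative subscript, silently ignored
x[5]      # NA       -- out-of-range positive subscript extends with NA
x[-(1:2)] # 30       -- in-range negative subscripts drop elements as usual
```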
2008 Jan 07
3
Seeking a more efficient way to find partition maxima
Hi. Suppose I have a vector that I partition into disjoint, contiguous subvectors. For example, let v = c(1,4,2,6,7,5), partition it into three subvectors, v1 = v[1:3], v2 = v[4], v3 = v[5:6]. I want to find the maximum element of each subvector. In this example, max(v1) is 4, max(v2) is 6, max(v3) is 7. If I knew that the successive subvector maxima would never decrease, as in the example,
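A direct vectorised route is to encode the partition as a grouping vector and take per-group maxima; a minimal sketch using the example's cut points:

```r
v <- c(1, 4, 2, 6, 7, 5)
starts <- c(1, 4, 5)                    # first index of each subvector
grp <- rep(seq_along(starts), diff(c(starts, length(v) + 1L)))
tapply(v, grp, max)                     # 4 6 7
```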
2009 Nov 11
2
partial cumsum
Hello, I am searching for a function to calculate "partial" cumsums. For example, it should calculate the cumulative sums until an NA appears, and restart the cumsum calculation after the NA. This:
x <- c(1, 2, 3, NA, 5, 6, 7, 8, 9, 10)
should become this:
1 3 6 NA 5 11 18 26 35 45
any ideas? thank you and best regards, stefan
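One way: group the vector on the NAs, cumsum within each group with the NA treated as zero, then restore the NAs; a minimal sketch:

```r
partial_cumsum <- function(x) {
  grp <- cumsum(is.na(x))     # start a new group at every NA
  x0 <- x
  x0[is.na(x0)] <- 0          # neutral element so cumsum can run through
  out <- ave(x0, grp, FUN = cumsum)
  out[is.na(x)] <- NA         # put the NAs back
  out
}
partial_cumsum(c(1, 2, 3, NA, 5, 6, 7, 8, 9, 10))
# 1 3 6 NA 5 11 18 26 35 45
```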
2019 Jan 11
2
strtoi output of empty string inconsistent across platforms
Identified as root cause of a bug in data.table: https://github.com/Rdatatable/data.table/issues/3267 On my machine, strtoi("", base = 2L) produces NA_integer_ (which seems consistent with ?strtoi: "Values which cannot be interpreted as integers or would overflow are returned as NA_integer_"). But on all the other machines I've seen, 0L is returned. This seems to be
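strtoi() defers to the platform's C strtol(), which is where the divergence for "" comes from. A defensive wrapper pins the empty-string case down regardless of platform (a sketch, not the eventual fix):

```r
strtoi("", base = 2L)  # NA_integer_ on some platforms, 0L on others
strtoi2 <- function(x, base = 10L) {
  out <- strtoi(x, base = base)
  out[!nzchar(x)] <- NA_integer_  # force the documented NA for ""
  out
}
```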
2019 Jan 11
2
strtoi output of empty string inconsistent across platforms
>>>>> Martin Maechler
>>>>> on Fri, 11 Jan 2019 09:44:14 +0100 writes:
>>>>> Michael Chirico
>>>>> on Fri, 11 Jan 2019 14:36:17 +0800 writes:
>> Identified as root cause of a bug in data.table:
>> https://github.com/Rdatatable/data.table/issues/3267
>> On my machine, strtoi("", base =
2011 Aug 29
3
How to safely using OpenMP pragma inside a .C() function?
I am trying to parallelize part of a C function that is called from R (via .C) using OpenMP's "parallel for" pragma. I get mixed results: some runs finish with no problem, but some lead to R crashing after issuing a long error message involving memory violations. I found this post, which describes how a .Call() function can be made to avoid crashing R by raising the stack limit:
2019 Sep 16
1
Error: package or namespace load failed for ‘utils’
>>>>> Laurent Gautier
>>>>> on Sun, 15 Sep 2019 15:01:09 -0400 writes:
> In case a search engine leads someone with the same issue here, I am documenting the point I reached:
> I can reproduce the issue with a small example when forcing R to not load any package at startup time (using an Renviron file):
> ``` package <-
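The fenced block is cut off above; for anyone trying to reproduce, the documented way to keep R from loading its default packages at startup is the R_DEFAULT_PACKAGES environment variable (see ?Startup), e.g. in an Renviron file:

```
## Renviron: attach only the base package at startup
R_DEFAULT_PACKAGES=NULL
```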
2013 Jul 20
1
BH correction with p.adjust
Dear List, I have been trying to use p.adjust() to do BH multiple test correction and have gotten some unexpected results. I thought that the equation for this was: pBH = p*n/i, where p is the original p value, n is the number of tests, and i is the rank of the p value. However, when I try to recreate the corrected p from my most significant value, it does not match up to the one computed by the
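The missing step is the monotonicity enforcement: BH is p*n/i followed by a cumulative minimum taken from the largest p value downward (fittingly for this page, via cummin()), capped at 1. A sketch reproducing p.adjust(), assuming p is sorted in increasing order:

```r
p <- c(0.001, 0.008, 0.039, 0.041, 0.042, 0.060, 0.074, 0.205)
n <- length(p); i <- seq_len(n)
manual <- pmin(1, rev(cummin(rev(p * n / i))))
all.equal(manual, p.adjust(p, method = "BH"))  # TRUE
```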