similar to: sum(data.frame(),na.rm=TRUE) still throws an Error

Displaying 20 results from an estimated 10000 matches similar to: "sum(data.frame(),na.rm=TRUE) still throws an Error"

2003 Apr 28
2
sum(..., na.rm=TRUE) oddity
Hi all, I get two different results when using sum() and the na.rm switch. The result is correct when na.rm=FALSE. Linux Red Hat 7.3, R version 1.6.1. I've had no luck searching the mail archives, so I was hoping somebody could explain/check this one for me. I will need to apply the function to missing data, simple as it is. Code: x <- matrix(runif(20, 0, 5) %/% 1, 4, 5) # random matrix
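A minimal sketch of the kind of check described above (the discrepancy itself was specific to R 1.6.1; on current R both calls agree when there are no NAs):

    set.seed(1)                               # seed added for reproducibility
    x <- matrix(runif(20, 0, 5) %/% 1, 4, 5)  # random 4x5 matrix as in the post
    sum(x, na.rm = FALSE)                     # plain sum
    sum(x, na.rm = TRUE)                      # identical when there are no NAs
    x[2, 3] <- NA                             # introduce a missing value
    sum(x, na.rm = FALSE)                     # NA propagates
    sum(x, na.rm = TRUE)                      # NA dropped before summing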
2002 Apr 25
4
sum() with na.rm=TRUE, again
Hi: I remember a post several days ago by Jon Baron, concerning the behavior of sum() when one sets na.rm=TRUE: the result will be a zero sum for a vector of all NAs, as here, for the second row:

> ss <- data.frame(x=c(1,NA,3,4), y=c(2,NA,4,NA))
> ss
   x  y
1  1  2
2 NA NA
3  3  4
4  4 NA
> apply(ss, 1, sum, na.rm=TRUE)
1 2 3 4
3 0 7 4

I am rather alarmed by that zero, because
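The zero is not a bug in apply(): sum() of an empty vector is 0 by definition, and na.rm=TRUE removes every element of an all-NA row before summing. A small sketch, including an illustrative guard (sumOrNA is a made-up name, not from the thread):

    sum(numeric(0))               # 0 -- the empty sum is zero by definition
    sum(c(NA, NA), na.rm = TRUE)  # 0 -- all elements removed, empty sum again
    # guard that returns NA for all-NA rows instead of 0:
    sumOrNA <- function(x) if (all(is.na(x))) NA else sum(x, na.rm = TRUE)
    ss <- data.frame(x = c(1, NA, 3, 4), y = c(2, NA, 4, NA))
    apply(ss, 1, sumOrNA)         # 3 NA 7 4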
2015 Jun 01
0
sum(..., na.rm=FALSE): Summing over NA_real_ values much more expensive than non-NAs for na.rm=FALSE? Hmm...
This is a great example of how you can fail to figure something out after two hours of troubleshooting, yet a few minutes after posting to R-devel the answer just jumps out at you (is there a word for this other than "impatient"?). Let me answer my own question. The discrepancy between my sum2() code and the internal code for base::sum() is that the latter uses LDOUBLE = long double (on some system
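The long-double accumulator is easy to observe as a precision effect (platform-dependent; typical x86 builds shown):

    (1e16 + 1) - 1e16       # 0 -- plain 64-bit double arithmetic loses the 1
    sum(c(1e16, 1, -1e16))  # 1 on builds where long double is wider than double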
2008 May 02
0
Using option na.rm=TRUE in function sd does not work for matrix with complete columns of NAs (PR#11364)
Dear R-developers, according to the "what's new" section in version R 2.7.0, there has been a change in the working of co[rv] and so also in sd and var. I am afraid that the use of function sd with option "na.rm=T" has not been changed appropriately. So the following problem exists with missing data: > sessionInfo() R version 2.7.0 (2008-04-22)
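On current R, sd() no longer applies column-wise to a whole matrix (that behavior went away after the 2.7.x era), so a hedged reproduction has to go through apply(); tryCatch() covers versions where the all-NA column errors rather than returning NA:

    m <- cbind(a = c(1, 2, 3), b = c(NA_real_, NA_real_, NA_real_))
    apply(m, 2, function(col)
        tryCatch(sd(col, na.rm = TRUE), error = function(e) NA_real_))
    # a: 1, b: NA -- the all-NA column has no data left after na.rm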
2009 Oct 29
1
weighted.mean uses zero when na.rm=TRUE (PR#14032)
The weighted.mean() function replaces NA values with 0.0 when the user specifies na.rm=TRUE:

x <- c(101, 102, NA)
mean(x, na.rm=TRUE)                        # 101.5, correct
weighted.mean(x, na.rm=TRUE)               # 67.66667, wrong
weighted.mean(x, w=c(1,1,1), na.rm=TRUE)   # 67.66667, wrong
weighted.mean(x, w=c(1,1,1)/3, na.rm=TRUE) # 67.66667, wrong

The weights are
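The wrong value is exactly what you get if the NA is zeroed instead of dropped; a sketch of the arithmetic and of what na.rm=TRUE should do (this bug, PR#14032, was fixed long ago, and current weighted.mean() agrees with mean()):

    x <- c(101, 102, NA)
    w <- c(1, 1, 1)
    sum(c(101, 102, 0)) / 3          # 67.66667 -- NA zeroed, weights kept
    ok <- !is.na(x)                  # what na.rm = TRUE should do:
    sum(w[ok] * x[ok]) / sum(w[ok])  # 101.5, matching mean(x, na.rm = TRUE)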
2006 Aug 01
1
Global setting for na.rm=TRUE
Hello! Is it possible to set na.rm=TRUE in a global way? I am constantly forgetting this when performing analyses. I agree that one should be careful with this when developing code, but not necessarily so in data analysis. Lep pozdrav / With regards, Gregor Gorjanc ---------------------------------------------------------------------- University of Ljubljana PhD student
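There is no such global option in base R; a common workaround is a pair of thin wrappers for interactive analysis (the capitalized names are just illustrative):

    Sum  <- function(...) sum(...,  na.rm = TRUE)
    Mean <- function(...) mean(..., na.rm = TRUE)
    Mean(c(1, 2, NA))   # 1.5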
2010 Jul 23
1
na.rm=TRUE
POS=sum(x[-1][x[-1]>0],na.rm=TRUE) -- is this the correct syntax?
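The expression parses and does what it appears to; note that subscripting with a logical vector containing NA produces NA elements, which is why na.rm=TRUE is needed at all. A worked sketch with made-up data:

    x <- c(10, -2, NA, 5, -1, 3)    # illustrative values only
    x[-1]                           # -2 NA  5 -1  3 (first element dropped)
    x[-1] > 0                       # FALSE NA TRUE FALSE TRUE
    x[-1][x[-1] > 0]                # NA 5 3 -- the NA index yields NA
    POS <- sum(x[-1][x[-1] > 0], na.rm = TRUE)
    POS                             # 8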
2011 May 12
1
do.call and applying na.rm=TRUE
Hi all! I need to do something really simple using do.call. If I want to call the mean function inside do.call, how do I apply the condition na.rm=TRUE? So, I use do.call(mean, list(x)) where x is my data. This works fine if there are no NAs. Thanks, John
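Named arguments simply go into the argument list; a minimal sketch:

    x <- c(1, 2, NA, 4)
    do.call(mean, list(x, na.rm = TRUE))        # 2.333333
    do.call("mean", list(x = x, na.rm = TRUE))  # equivalent spelling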
2007 Jun 20
2
Expected behavior from: all(c(NA, NA, NA) < NA, na.rm = TRUE)?
Hi all, Came across this curious behavior in: R version 2.5.0 Patched (2007-06-05 r41831). A simplified example is:

> all(c(NA, NA, NA) > NA, na.rm = TRUE)
[1] TRUE

Is this expected by definition? If one reduces this to individual comparisons, such as:

> NA > NA
[1] NA
> all(NA > NA)
[1] NA
> all(NA > NA, na.rm = TRUE)
[1] TRUE

the initial comparison on the 3
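It is expected: na.rm=TRUE removes all three NA comparison results, leaving a zero-length logical vector, and all() of an empty vector is TRUE by convention (vacuous truth). A sketch:

    c(NA, NA, NA) > NA                     # NA NA NA
    all(logical(0))                        # TRUE  -- empty 'all' is vacuously true
    any(logical(0))                        # FALSE -- empty 'any' is false
    all(c(NA, NA, NA) > NA, na.rm = TRUE)  # hence TRUE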
2011 Aug 12
2
rollapply.zoo() with na.rm=TRUE
Hi. I'm comparing output from rollapply.zoo, as produced by two versions of R and package zoo. I'm illustrating with an example from an R-help posting 'Zoo - bug ???' dated 2010-07-13. My question is not about the first version, or the questions raised in that posting, because the behaviour there is as documented. I'm puzzled as to why na.rm is no longer passed to mean, i.e. why
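With current zoo, extra arguments in ... are passed through rollapply() to FUN, so this works as documented; a sketch assuming the zoo package is installed:

    library(zoo)
    z <- zoo(c(1, 2, NA, 4, 5))
    rollapply(z, width = 3, FUN = mean, na.rm = TRUE)  # 1.5 3.0 4.5
    rollapply(z, width = 3, FUN = mean)                # NA NA NA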
2008 Jul 08
1
aggregate() function and na.rm = TRUE
All, I've been using aggregate() to compute means and standard deviations at time/treatment combinations for a longitudinal dataset, using na.rm = TRUE for missing data. This was working fine before, but now when I re-run some old code it isn't. I've backtracked my steps and can't seem to find out why it was working before but not now. In any event, below is a reproducible
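One subtlety worth checking when old aggregate() code changes behavior: the formula method applies na.action (na.omit by default) before FUN ever sees the data, while the by-list method passes the NAs through to FUN. A sketch with made-up data:

    d <- data.frame(time = c(1, 1, 2, 2), y = c(10, NA, 30, 40))
    aggregate(d$y, by = list(time = d$time), FUN = mean, na.rm = TRUE)
    aggregate(y ~ time, data = d, FUN = mean)  # NA rows already dropped here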
2007 Dec 11
1
[Kurt.Hornik@wu-wien.ac.at: Re: range( <dates>, na.rm = TRUE )] (PR#10508)
------- Start of forwarded message -------
Date: Tue, 13 Nov 2007 21:44:57 +0100
To: Steve Mongin <sjm at ccbr.umn.edu>
Cc: cran at r-project.org
Subject: Re: range( <dates>, na.rm = TRUE )
In-Reply-To: <200711062044.OAA14064 at minnow.ccbr.umn.edu>
Reply-To: Kurt.Hornik at wu-wien.ac.at
From: Kurt Hornik <Kurt.Hornik at wu-wien.ac.at>
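On current R, range() on Date vectors handles na.rm=TRUE as expected (the historical details of the 2007 report are in PR#10508):

    d <- as.Date(c("2007-11-01", NA, "2007-11-13"))
    range(d, na.rm = TRUE)  # "2007-11-01" "2007-11-13"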
2015 Jun 01
2
sum(..., na.rm=FALSE): Summing over NA_real_ values much more expensive than non-NAs for na.rm=FALSE? Hmm...
I'm observing that base::sum(x, na.rm=FALSE) for typeof(x) == "double" is much more time consuming when there are missing values than when there are not. I'm observing this on both Windows and Linux, and it's quite surprising to me. Currently, my main suspect is how R was built. The second suspect is my brain. I hope that someone can clarify the below
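A rough reproduction sketch; the absolute timings, and even the size of the gap, depend on the platform and on how R was built:

    x <- runif(1e7)                             # no missing values
    y <- x
    y[seq(1, length(y), by = 10)] <- NA_real_   # sprinkle in NAs
    system.time(for (i in 1:20) sum(x, na.rm = FALSE))
    system.time(for (i in 1:20) sum(y, na.rm = FALSE))  # can be much slower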
2017 Jun 06
0
sum() returns NA on a long *logical* vector when nb of TRUE values exceeds 2^31
>>>>> Hervé Pagès <hpages at fredhutch.org>
>>>>> on Fri, 2 Jun 2017 04:05:15 -0700 writes:

> Hi, I have a long numeric vector 'xx' and I want to use sum() to count the number of elements that satisfy some criteria like non-zero values or values lower than a certain threshold etc... The problem is: sum()
2017 Jun 02
0
sum() returns NA on a long *logical* vector when nb of TRUE values exceeds 2^31
I second this feature request (it's understandable that this and possibly other parts of the code were left behind / forgotten after the introduction of long vectors). I think mean() avoids full copies, so in the meanwhile you can work around this limitation using: countTRUE <- function(x, na.rm = FALSE) { nx <- length(x) if (nx < .Machine$integer.max) return(sum(x, na.rm =
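The quoted workaround is cut off above; a chunked completion in the same spirit (my sketch, not the original author's exact code) that never lets a single sum() exceed integer range:

    countTRUE <- function(x, na.rm = FALSE, chunk = 2^20) {
      nx <- length(x)
      if (nx < .Machine$integer.max) return(sum(x, na.rm = na.rm))
      total <- 0                            # accumulate in double
      for (from in seq(1, nx, by = chunk)) {
        to <- min(from + chunk - 1, nx)
        total <- total + sum(x[from:to], na.rm = na.rm)  # each chunk fits in int
      }
      total
    }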
2007 Dec 11
2
range( <dates>, na.rm = TRUE ) (PR#10508)
(Drats! Jitterbug is playing tricks with the PR# again. Attempting to refile so that we can kill PR#10509) Peter Dalgaard wrote:
> Kurt.Hornik at wu-wien.ac.at wrote:
>> ------- Start of forwarded message -------
>> Date: Tue, 13 Nov 2007 21:44:57 +0100
>> To: Steve Mongin <sjm at ccbr.umn.edu>
>> Cc: cran at r-project.org
>> Subject: Re: range(
2018 Jan 27
0
sum() returns NA on a long *logical* vector when nb of TRUE values exceeds 2^31
>>>>> Henrik Bengtsson <henrik.bengtsson at gmail.com>
>>>>> on Thu, 25 Jan 2018 09:30:42 -0800 writes:

> Just following up on this old thread since matrixStats 0.53.0 is now out, which supports this use case:
>> x <- rep(TRUE, times = 2^31)
>> y <- sum(x)
>> y
> [1] NA
> Warning message:
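Per this announcement, the use case looks roughly like the following, assuming matrixStats (>= 0.53.0) is installed; sum2() with mode = "double" accumulates without integer overflow:

    library(matrixStats)
    x <- rep(TRUE, times = 2^31)   # needs roughly 8.6 GB of RAM
    sum2(x, mode = "double")       # 2147483648 -- no NA, no warning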
2019 May 10
0
[R] approx with NAs --> new argument 'na.rm=TRUE' ?!
I have now committed a version "fulfilling" your wish, partly at least, to R-devel. In the new approx(*, na.rm=FALSE) cases, how NAs are treated depends on the 4 different extrapolation rules {1, 2, 1:2, 2:1}. The main reason was that I kept the low-level C code doing more or less what it did before, which automatically used 'rule' to determine these
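In recent R, approx() carries the na.rm argument this thread introduced; a minimal sketch of the two modes (extrapolation behavior under na.rm=FALSE additionally depends on 'rule', per the post):

    x <- c(1, 2, 3, 4)
    y <- c(1, NA, 3, 4)
    approx(x, y, xout = 2, na.rm = TRUE)$y   # 2 -- NA pair dropped, then interpolated
    approx(x, y, xout = 2, na.rm = FALSE)$y  # NA -- the missing value propagates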
2017 Jun 07
1
sum() returns NA on a long *logical* vector when nb of TRUE values exceeds 2^31
>>>>> Martin Maechler <maechler at stat.math.ethz.ch>
>>>>> on Tue, 6 Jun 2017 09:45:44 +0200 writes:

>>>>> Hervé Pagès <hpages at fredhutch.org>
>>>>> on Fri, 2 Jun 2017 04:05:15 -0700 writes:

>> Hi, I have a long numeric vector 'xx' and I want to use sum() to count the number of
2018 Feb 01
0
sum() returns NA on a long *logical* vector when nb of TRUE values exceeds 2^31
>>>>> Hervé Pagès <hpages at fredhutch.org>
>>>>> on Tue, 30 Jan 2018 13:30:18 -0800 writes:

> Hi Martin, Henrik,
> Thanks for the follow-up.
> @Martin: I vote for 2) without *any* hesitation :-)
> (and uniformity could be restored at some point in the future by having prod(), rowSums(), colSums(), and others