similar to: Logical inconsistency

Displaying 20 results from an estimated 1000 matches similar to: "Logical inconsistency"

2008 Apr 24
2
problem with "which"
Hi, I'm having trouble with the "which" or the "seq" function, I'm not sure. Here's an example:
> lat=seq(1,2,by=0.1)
> lat
 [1] 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 1.9 2.0
> which(lat==1)
[1] 1
> which(lat==1.1)
[1] 2
> which(lat==1.2)
[1] 3
> which(lat==1.3)
[1] 4
> which(lat==1.4)
[1] 5
> which(lat==1.5)
[1] 6
>
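The snippet is cut off before the failing comparison, but the usual cause is that some values produced by seq() differ from the matching decimal literal by one bit, so == returns FALSE. Comparing within a small tolerance sidesteps this; the helper near() below is a sketch, not an existing function:

lat <- seq(1, 2, by = 0.1)
# compare within a small tolerance instead of testing exact equality
near <- function(x, target, tol = 1e-9) which(abs(x - target) < tol)
near(lat, 1.7)   # 8, whether or not lat[8] is bit-for-bit equal to 1.7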
2010 Nov 09
2
Help with Iterator
Dear Experts, The following is my "Iterator". When I try to write a new function with itel, I get an error. This is what I have:
> supDist<-function(x,y) return(max(abs(x-y)))
>
> myIterator <- function(xinit,f,data=NULL,eps=1e-6,itmax=5,verbose=FALSE) {
+   xold<-xinit
+   itel<-0
+   repeat {
+     xnew<-f(xold,data)
+     if (verbose) {
+       cat(
+
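The quoted definition is truncated after cat(. A completed sketch of a fixed-point iterator with the same signature is shown below; the cat() output, the convergence test, and the cos() example are assumptions for illustration, not the poster's original code:

supDist <- function(x, y) max(abs(x - y))

myIterator <- function(xinit, f, data = NULL, eps = 1e-6, itmax = 5, verbose = FALSE) {
  xold <- xinit
  itel <- 0
  repeat {
    xnew <- f(xold, data)
    if (verbose) {
      cat("itel:", itel, "change:", supDist(xold, xnew), "\n")
    }
    # stop when successive iterates agree to within eps, or after itmax steps
    if (supDist(xold, xnew) < eps || itel >= itmax) return(xnew)
    xold <- xnew
    itel <- itel + 1
  }
}

myIterator(1, function(x, data) cos(x), verbose = TRUE)   # iterates toward cos's fixed point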
2008 Feb 12
4
0.45<0.45 = TRUE (PR#10744)
Dear developer, in my version of R (2.4.0) as well as in a more recent version (2.6.0) on different computers, we found this problem:
> a<-(58/40-1)
> a
[1] 0.45
> b<-(18/40)
> b
[1] 0.45
> a<b
[1] TRUE
> a==b
[1] FALSE
>
Something seems wrong here. But if we do
> c<-0.45
> d<-0.45
> c<d
[1] FALSE
then everything is ok. If we use 59
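This is the behaviour described in R FAQ 7.31: 58/40-1 and 18/40 round to two slightly different doubles that both print as 0.45 at the default 7 significant digits. A short sketch of how to see and handle it:

a <- 58/40 - 1
b <- 18/40
sprintf("%.20f", c(a, b))   # shows the two underlying doubles, which differ slightly
a < b                       # TRUE, because a is a hair below b
isTRUE(all.equal(a, b))     # TRUE: equal up to a numerical tolerance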
2008 Jun 12
4
problem with function "rep"
To whom it may concern, I am currently writing a program where I need to use the function rep. The results I get are quite confusing. Given two vectors A and B, I want to replicate A[1] B[1] times, A[2] B[2] times and so on. All the entries of vector B are positive integers. My problem comes from the fact that if I sum up all the elements of B, I get a certain value x (for example 10000). And if
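For reference, rep(A, times = B) performs exactly this element-wise replication, and the result has length sum(B) when B really holds integers; a length mismatch usually means B was produced by floating-point arithmetic and should be rounded first. A small sketch with made-up vectors, since the poster's data is not shown:

A <- c(10, 20, 30)
B <- c(2, 0, 3)
rep(A, times = B)                      # 10 10 30 30 30
length(rep(A, times = B)) == sum(B)    # TRUE for true integer counts
# if B came out of a floating-point calculation, snap it to integers first:
B <- round(B)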
2005 Dec 29
2
How to fit all points into plot?
Hi, I have a problem when I want to add new points (or a new line) to the graph. Some points (or parts of the line) are not shown on the graph because they lie beyond the scale of the axis. Is there a way to overcome this so all points (or the entire line) are shown on the graph? Here's an example of my problem:
colors = c("red", "blue")
plot(x=rnorm(100,0,1),
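Points added with points() or lines() are clipped to the axis limits chosen by the first plot() call, so the usual fix is to compute xlim/ylim over all the series up front. A minimal sketch with made-up data:

y1 <- rnorm(100, 0, 1)
y2 <- rnorm(100, 0, 3)                        # a wider second series, for illustration
plot(y1, col = "red", ylim = range(y1, y2))   # limits cover both series
points(y2, col = "blue")                      # nothing falls outside the plotting region now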
2007 Jul 22
1
Package design, placement of legacy functions
I have a function XOLD() from a nearly verbatim port of legacy FORTRAN in a package. I have reimplemented this function as XNEW() using much cleaner native R and built-in functions of R. I have switched the package to XNEW(), but for historical reasons would like to retain XOLD() somewhere in the package directory structure. An assertion through a README or other will point to
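One conventional option, beyond a README, is to keep the legacy name exported but have it hand off to the rewrite with a deprecation notice. A sketch, with XNEW standing in for the package's actual new implementation:

XNEW <- function(x) x^2          # placeholder for the real reimplementation
XOLD <- function(...) {
  .Deprecated("XNEW")            # emits a warning naming the replacement
  XNEW(...)
}
XOLD(3)                          # returns 9, with a deprecation warning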
2008 Mar 03
7
help for the first poster- a simple question
Hi, there, I cannot get an accurate value for a calculation. For example:
ld<-sqrt(1*0.05*0.95*0.05*0.95)
0.05*0.95-ld = -6.938894e-18
0.05*0.95-ld==0 is FALSE.
I met this problem in my program; how can I handle it? Thanks. xj.
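Mathematically the difference is 0, but sqrt() and the products each round to the nearest double, leaving a residue of about 7e-18. Comparing against a tolerance, rather than against exactly 0, is the standard handling; a sketch:

ld <- sqrt(1 * 0.05 * 0.95 * 0.05 * 0.95)
0.05 * 0.95 - ld                  # about -6.9e-18, not exactly zero
abs(0.05 * 0.95 - ld) < 1e-12     # TRUE: treat the two values as equal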
2009 Aug 01
5
incorrect result (41/10-1/10)%%1 (PR#13863)
Full_Name: jan hattendorf Version: 2.9.0 OS: XP Submission from: (NULL) (213.3.108.185)
I get an incorrect result for
(41/10-1/10)%%1
[1] 1
The error did not occur with the other numbers I tried (1, 11, 21, 31, 51, ...).
test <- rep(NA, 1000)
for(i in 1:1000){
  test[i] <- i/10-1/10
}
test[test%%1==0]
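This is the same floating-point effect rather than a bug in %%: 41/10 - 1/10 is stored as a value just under 4, so the remainder modulo 1 is just under 1 (printed as 1). Rounding before the modulus, or testing nearness to a whole number, gives the intended answer; a sketch:

x <- 41/10 - 1/10
x %% 1                     # prints 1: really a value a hair below 1
round(x) %% 1              # 0, after snapping x back to the intended integer 4
abs(x - round(x)) < 1e-8   # TRUE: x is a whole number up to rounding error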
2010 Jul 10
7
Need help on date calculation
Hi all, please see my code:
> library(zoo)
> a <- as.yearmon("March-2010", "%B-%Y")
> b <- as.yearmon("May-2010", "%B-%Y")
>
> nn <- (b-a)*12 # number of months in between them
> nn
[1] 2
> as.integer(nn)
[1] 1
What is the correct way to find the number of months between "a" and "b", still
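(b-a)*12 is stored as a value fractionally below 2, which print() shows as 2 but as.integer() truncates to 1. Rounding before the integer conversion gives the intended month count; a sketch (assumes the zoo package is installed):

library(zoo)
a <- as.yearmon("March-2010", "%B-%Y")
b <- as.yearmon("May-2010", "%B-%Y")
nn <- (b - a) * 12
as.integer(round(nn))   # 2; as.integer() alone truncates toward zero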
2009 Apr 17
3
Modular Arithmetic Error?
Hi, I'm using the '%%' operator in some code, and am running into the following erroneous outcome:
> 1.2 %% 0.2
[1] 0.2
Unless I'm very mistaken, the result should be 0 (indeed, 12 %% 2 does result in 0). Furthermore:
> 1.20000000000000001 %% 0.2
[1] 0.2
> (1.2+1e17) %% .2
[1] 0
Warning message: probable complete loss of accuracy in modulus (Warning
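Neither 1.2 nor 0.2 is exactly representable in binary, and 1.2/0.2 comes out fractionally below 6, so the remainder is essentially a full period of 0.2 rather than 0. Working in integer units, or testing the remainder against a tolerance, avoids the surprise; a sketch:

r <- 1.2 %% 0.2
r                          # prints 0.2: one whole period, minus rounding error
12 %% 2                    # 0: the same arithmetic in integer units (tenths) is exact
min(r, 0.2 - r) < 1e-9     # TRUE: the remainder is 0 modulo 0.2, up to rounding error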
2017 Jun 07
3
An R question
Hi all, In checking my R code, I encountered the following problem. Is there a way to fix this? I tried to specify options(digits=), but it did not fix the problem. Thanks so much for your help! Hanna
> cdf(pmass)[2,2]==pcum[2,2]
[1] FALSE
> cdf(pmass)[2,2]
[1] 0.9999758
> pcum[2,2]
[1] 0.9999758
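The two entries print identically because R shows only 7 significant digits by default, while == compares every bit. Printing the full precision reveals the difference, and all.equal() is the appropriate comparison; a sketch with stand-in values, since pmass and pcum are not shown in the post:

x <- 0.9999758
y <- x + 1e-12                  # a second value that prints the same as x
x == y                          # FALSE
format(c(x, y), digits = 17)    # the full-precision representations differ
isTRUE(all.equal(x, y))         # TRUE under all.equal's default tolerance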
2006 Jul 07
2
BUG in " == " ? (PR#9065)
Hello, here is the version of R that I use:
> version
               _
platform       i486-pc-linux-gnu
arch           i486
os             linux-gnu
system         i486, linux-gnu
status
major          2
minor          3.1
year           2006
month          06
day            01
svn rev        38247
language       R
version.string Version 2.3.1 (2006-06-01)
And here is one of the sequences of
2009 Jun 08
4
seq(...) strange logical value
Do you have any idea why I get FALSE everywhere after this instruction?
> seq(0, 1, by=0.1) == 0.3
 [1] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
But after a different step it's ok:
> seq(0, 1, by=0.1) == 0.4
 [1] FALSE FALSE FALSE FALSE  TRUE FALSE FALSE FALSE FALSE FALSE FALSE
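The fourth grid point from seq(0, 1, by=0.1) is 3*0.1, which rounds to a double a tiny bit above 0.3, while 4*0.1 happens to coincide with the double for 0.4; which decimals match is essentially luck. Building the grid from integers, or comparing within a tolerance, is reliable; a sketch:

s <- seq(0, 1, by = 0.1)
s[4] - 0.3                 # a tiny positive leftover, so s[4] == 0.3 is FALSE
s2 <- (0:10) / 10          # build the grid from integers, then divide once
s2 == 0.3                  # TRUE in position 4: 3/10 rounds to the same double as 0.3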
2010 Dec 20
6
sample() issue
> length(sample(25000, 25000*(1-.55)))
[1] 11249
> 25000*(1-.55)
[1] 11250
> length(sample(25000, 11250))
[1] 11250
> length(sample(25000, 25000*.45))
[1] 11250
So the question is, why do I get 11249 out of the first command and not 11250? I can't figure this one out. Thanks Cory
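25000*(1-.55) is stored as a value just under 11250; print() rounds it to 11250, but sample() truncates its size argument, which drops one element. Rounding the computed size before passing it to sample() restores the intended count; a sketch:

n <- 25000 * (1 - .55)
print(n, digits = 17)              # just under 11250, although it prints as 11250 by default
length(sample(25000, round(n)))    # 11250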
2009 Sep 13
2
How can I get "predict.lm" results with manual calculations ? (a floating point problem)
Hello dear r-help group, I am turning to you for help with FAQ number 7.31: "Why doesn't R think these numbers are equal?" http://cran.r-project.org/doc/FAQ/R-FAQ.html#Why-doesn_0027t-R-think-these-numbers-are-equal_003f *My story* is this: I wish to run many lm predictions and need to have them run fast. Using predict.lm is relatively slow, so I tried having it run faster by
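The common fast route is to multiply the model matrix by the coefficient vector yourself; the results then agree with predict.lm() only up to floating-point rounding, so they should be compared with all.equal() rather than ==. A sketch with a made-up model (fit and newdata stand in for the poster's objects):

fit <- lm(mpg ~ wt + hp, data = mtcars)
newdata <- mtcars[1:5, ]
manual  <- as.vector(model.matrix(~ wt + hp, newdata) %*% coef(fit))
builtin <- unname(predict(fit, newdata))
manual == builtin            # not guaranteed element-wise TRUE
all.equal(manual, builtin)   # TRUE: any differences are at rounding-error level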
2007 May 28
1
off-topic: affine transformation matrix
This may sound like a very naive question, but... given two lists of coordinate pairs (x,y - Cartesian space), is there any simple way to compute the affine transformation matrix in R? I have a set of data which is offset from where I know it should be. I have coordinates of the current data, and matching coordinates of where the data should be. I need to compute the composition of the affine
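With matching source and target coordinates, the six parameters of a 2-D affine transform can be estimated by least squares, for example with lm() and a matrix response. A sketch with made-up points:

# source coordinates and their known target positions (made up for illustration)
x <- c(0, 1, 1, 0);  y <- c(0, 0, 1, 1)
X <- 2 + 0.5 * x - 0.3 * y           # target = translation + linear part
Y <- 1 + 0.3 * x + 0.5 * y
fit <- lm(cbind(X, Y) ~ x + y)       # one least-squares fit per output coordinate
coef(fit)                            # rows: intercept, x, y; columns: X, Y
cbind(1, x, y) %*% coef(fit)         # applies the fitted transform, recovering X and Y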
2009 Jun 19
1
cut with floating point, a bug?
With floating point numbers I'm seeing 'cut' putting values in the wrong bands. An example below places 0.3 in (0.3,0.6], i.e. 0.3 > 0.3.
> x = 1:5*.1
> x
[1] 0.1 0.2 0.3 0.4 0.5
> cut(x, br=c(0,.3,.6))
[1] (0,0.3]   (0,0.3]   (0.3,0.6] (0.3,0.6] (0.3,0.6]
Levels: (0,0.3] (0.3,0.6]
I'm sure this is probably the same issue documented in the FAQ (7.31 Why doesn't R
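Indeed: 3*0.1 is stored as a double slightly above 0.3, so it genuinely falls above the 0.3 breakpoint. Rounding the data (or nudging the breaks away from the data values) puts it in the expected band; a sketch:

x <- 1:5 * .1
print(x[3], digits = 17)                # slightly above 0.3, hence the "wrong" band
cut(round(x, 10), br = c(0, .3, .6))    # 0.3 now lands in (0,0.3]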
2017 Jun 07
0
An R question
Hi, Check the FAQ 7.31 https://cran.rstudio.com/doc/FAQ/R-FAQ.html#Why-doesn_0027t-R-think-these-numbers-are-equal_003f And read the posting guide too... https://www.r-project.org/posting-guide.html HTH, Ivan -- Dr. Ivan Calandra TraCEr, Laboratory for Traceology and Controlled Experiments MONREPOS Archaeological Research Centre and Museum for Human Behavioural Evolution Schloss Monrepos 56567
2009 May 25
2
inconsistency in ?factor
In the almost current development version (2009-05-22 r48594) and also in https://svn.r-project.org/R/trunk/src/library/base/man/factor.Rd ?factor contains (compare the formulations marked by ^^^^^^) \section{Warning}{ The interpretation of a factor depends on both the codes and the \code{"levels"} attribute. Be careful only to compare factors with the same set of levels (in
2006 Dec 09
2
Floating point maths in R
Hi, I am not sure if this is just me using R (R-2.3.1 and R-2.4.0) in the wrong way or if there is a more serious bug. I was having problems getting some calculations to add up so I ran the following tests:
> (2.34567 - 2.00000) == 0.34567   <------- should be true
[1] FALSE
> (2.23-2.00) == 0.23   <------- should be true
[1] FALSE
> 4-2==2
[1] TRUE
> (4-2)==2
[1] TRUE
>
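Neither comparison is a bug: 2.34567 - 2.00000 rounds to a double that differs from the double for 0.34567 by an amount on the order of .Machine$double.eps, which is exactly what FAQ 7.31 describes. A sketch of how to inspect and handle it:

(2.34567 - 2.00000) - 0.34567                    # a tiny nonzero residue, not exactly 0
.Machine$double.eps                              # ~2.2e-16, the relative spacing of doubles near 1
isTRUE(all.equal(2.34567 - 2.00000, 0.34567))    # TRUE: equal within tolerance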