similar to: Where precision change

Displaying 20 results from an estimated 80000 matches similar to: "Where precision change"

2013 Apr 24
1
Floating point precision causing undesirable behaviour when printing as.POSIXlt times with microseconds?
Dear list, When using as.POSIXlt with times measured down to microseconds, the default format.POSIXlt seems to cause some possibly undesirable behaviour. According to the code in format.POSIXlt the maximum accuracy of printing fractional seconds is 1 microsecond, but if I do:
options( digits.secs = 6 )
as.POSIXlt( 1.000002 , tz="", origin="1970-01-01")
as.POSIXlt( 1.999998 ,
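A minimal sketch of the behaviour being described (the exact printed values are platform-dependent, so treat the comments as indicative): format.POSIXlt apparently truncates rather than rounds fractional seconds, so a double stored just below a microsecond boundary prints one microsecond low.
options(digits.secs = 6)
t1 <- as.POSIXlt(1.000002, tz = "UTC", origin = "1970-01-01")
t2 <- as.POSIXlt(1.999998, tz = "UTC", origin = "1970-01-01")
format(t1)  # may print ...:01.000001 because 1.000002 is stored slightly low
format(t2)  # may print ...:01.999997 for the same reason
sprintf("%.7f", as.numeric(as.POSIXct(t2)))  # shows the underlying double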
2020 Feb 29
3
dput()
I think Robin knows about FAQ 7.31/floating point (he is the author of 'Brobdingnag', among other numerical packages). I agree that this is surprising (to me). To reframe this question: is there a way to get an *exact* ASCII representation of a numeric value (i.e., guaranteeing the restored value is identical() to the original)? .deparseOpts has "digits17": Real and finite complex
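A minimal sketch with the documented "digits17" deparse option (17 significant digits are enough to round-trip any double):
x <- 0.1 + 0.2
dput(x)                        # 0.3 at the default precision
dput(x, control = "digits17")  # 0.30000000000000004
identical(x, 0.30000000000000004)  # TRUE: the value round-trips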
2010 Oct 27
3
Increase R precision
Hello everyone. When I execute the following in R
> (18-46)/(45-93)
[1] 0.5833333
I get too little precision for what I am trying to deal with. Is it possible to increase the precision for this and for other operations? For example, OpenOffice Calc returns 0.58333333333333300000 for this operation. I would like to thank you for your help.
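The value already carries full double precision; only the default 7-digit printing hides it. A short sketch (base R only):
x <- (18-46)/(45-93)
print(x, digits = 17)  # 0.58333333333333337
sprintf("%.20f", x)    # the stored value written with 20 decimals
options(digits = 15)   # raise the session-wide display precision
x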
2010 Mar 04
2
precision issue?
Hi R Gurus, I am trying to figure out what is going on here.
> a <- 68.08
> b <- a-1.55
> a-b
[1] 1.55
> a-b == 1.55
[1] FALSE
> round(a-b,2) == 1.55
[1] TRUE
> round(a-b,15) == 1.55
[1] FALSE
Why should (a - b) == 1.55 fail when in fact b has been defined to be a - 1.55? Is this a precision issue? How do I correct this? Alex
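The usual remedy for this kind of comparison is a tolerance instead of ==; a minimal sketch:
a <- 68.08
b <- a - 1.55
a - b == 1.55                   # FALSE: the subtraction reintroduces rounding error
isTRUE(all.equal(a - b, 1.55))  # TRUE: all.equal() compares with a small tolerance
abs((a - b) - 1.55) < 1e-9      # the same idea with an explicit tolerance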
2004 Sep 17
1
controlling printing precision in paste()
Rene, Look at ?format. Sean
On Sep 17, 2004, at 9:21 AM, RenE J.V. Bertin wrote:
> Hello,
>
> I can't seem to find the way to modify the precision with which
> paste() prints its floating point numbers, more precisely the number
> of decimal digits printed. This is apparently not controlled by
> options( digits= ), and there is no appropriate argument to paste()
>
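As the reply suggests, format the number first and paste the resulting string; a small sketch (sprintf() is an equally valid route):
x <- pi
paste("pi is", x)                      # as.character() gives ~15 digits
paste("pi is", format(x, digits = 3))  # "pi is 3.14"
paste("pi is", sprintf("%.3f", x))     # "pi is 3.142"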
2018 Feb 26
3
Precision in R
Hi, Why does sum() on a 10-item vector produce a different value than its counterpart on a 2-item vector? I understand the problems related to arithmetic precision when storing decimal numbers in binary format, but shouldn't the errors be equal regardless of the method used? See my example:
> options(digits=22)
> x=rep(.1,10)
> x
[1] 0.10000000000000001 0.10000000000000001
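The errors are not equal because they depend on every intermediate result; a sketch (exact values are platform-dependent, and sum() may accumulate in a wider type than repeated + does):
options(digits = 22)
Reduce(`+`, rep(0.1, 10))  # left-to-right double addition: ~0.9999999999999998889777
0.5 + 0.5                  # exactly 1, since 0.5 is exact in binary
sum(rep(0.1, 10)) == 1     # may be TRUE or FALSE depending on how sum() accumulates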
2011 Nov 17
2
read.table with double precision
Dear all, I have a txt file with the following contents:
1 50.7906430000000 6.06349800000000
2 50.7907380000000 6.06347100000000
3 50.7910810000000 6.06338000000000
4 50.7911890000000 6.06355200000000
I am using read.table('myfile.txt',sep=" ") which unfortunately returns only integers and not the doubles that are required to store the 50.7906430000000. What can I do to force it
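read.table() does store such columns as doubles; only the 7-digit default printing makes them look truncated. A sketch using the file name from the post:
d <- read.table("myfile.txt", sep = " ")
class(d$V2)               # "numeric": already double precision
print(d$V2, digits = 15)  # shows the full stored values
# To be explicit about column types:
d <- read.table("myfile.txt", sep = " ",
                colClasses = c("integer", "numeric", "numeric"))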
2005 Nov 01
5
Unexpected result from binary greater than operator
Hi All, I recently encountered results that I did not expect, exhibited by the following code snippet:
test <- function() {
  minX <- 4.2
  min0 <- 4.1
  sigmaG <- 0.1
  Diff <- minX-min0
  print(c(Diff=Diff,sigmaG=sigmaG))
  cat("is Diff > sigmaG?:", Diff > sigmaG,"\n")
  cat("is (4.2 - 4.1) > 0.1?:",(4.2 - 4.1) >
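A sketch of what the snippet uncovers: 4.2 - 4.1 lands slightly above 0.1 in binary, so the strict comparison is TRUE even though the quantities are algebraically equal.
Diff <- 4.2 - 4.1
print(Diff, digits = 17)  # 0.10000000000000053
Diff > 0.1                # TRUE, surprisingly
# A tolerance removes the surprise:
(Diff - 0.1) > .Machine$double.eps^0.5  # FALSE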
2010 Jul 29
1
precision of minus operation and if statments
Hi Everyone, as part of a larger script, I need to insert the result of a simple minus operation into an if statement. I have noticed that the precision that appears on the screen is not the precision with which R stores the result of the minus operation, and that this difference alters the result of the if statement. For example, when running this simple script:
> a=0.90
> b=0.95
>
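The standard remedy for if() on a computed difference is a tolerant comparison; a minimal sketch:
a <- 0.90
b <- 0.95
if (b - a == 0.05) "equal" else "not equal"                   # "not equal"
if (isTRUE(all.equal(b - a, 0.05))) "equal" else "not equal"  # "equal"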
2017 May 24
1
precision of do_arith() in arithmetic.c
To the R development team: First of all, thank you so much for maintaining the wonderful R software. Dr. Ahn has recently reported an error in the wilcox.test() function, suggesting that the error may arise from abs() and rank(). I had a quick check, and the problem may come from the precision of the results of arithmetic functions:
87.7-89.1+1.4
# > 87.7-89.1+1.4
# [1]
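A sketch of the check (the exact residual is platform-dependent, roughly 1e-14 in magnitude; a strict == on such a residual is the kind of thing that can upset rank()):
87.7 - 89.1 + 1.4                      # prints 0 at the default 7 digits
print(87.7 - 89.1 + 1.4, digits = 17)  # a tiny nonzero residual, not 0
(87.7 - 89.1 + 1.4) == 0               # FALSE
abs(87.7 - 89.1 + 1.4) < 1e-10         # TRUE under an explicit tolerance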
2020 Mar 02
2
dput()
On 02/03/2020 3:24 a.m., Martin Maechler wrote:
>>>>>> robin hankin
>>>>>>     on Sun, 1 Mar 2020 09:26:24 +1300 writes:
>
>     > Thanks guys, I guess I should have referred to FAQ 7.31
>     > (which I am indeed very familiar with) to avoid
>     > misunderstanding. I have always used dput() to clarify
>     > 7.31-type
2020 Feb 29
2
dput()
Thanks guys, I guess I should have referred to FAQ 7.31 (which I am indeed very familiar with) to avoid misunderstanding. I have always used dput() to clarify 7.31-type issues. The description in ?dput implies [to me at any rate] that there will be no floating-point roundoff in its output. I hadn't realised that 'deparsing' as discussed in dput.Rd includes precision roundoff issues.
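For a genuinely exact round-trip, dput() also documents "hexNumeric", which writes the binary value with no decimal rounding at all; a short sketch:
x <- 0.1 + 0.2
dput(x, control = "hexNumeric")  # e.g. 0x1.3333333333334p-2
y <- eval(parse(text = deparse(x, control = "hexNumeric")))
identical(x, y)                  # TRUE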
2012 Jun 18
6
Inconsistency using seq
Hi all, Is there any problem of precision when using seq()? For example:
x <- seq(0,4,0.1)
x[4] prints as 0.3, BUT:
x[4] - 0.3 = 5.551115e-17
This means that when I use this condition within an if clause, it does not find values of 0.3 at x[4], because x[4] is not precisely 0.3. Is there a bug in seq()?
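Not a bug: seq() builds each element as from + i*by, and 3*0.1 is not the same double as the literal 0.3. A sketch, including an integer-grid workaround:
x <- seq(0, 4, 0.1)
x[4] == 0.3                   # FALSE
isTRUE(all.equal(x[4], 0.3))  # TRUE: tolerant comparison
x <- (0:40) / 10              # generate on an integer grid instead
x[4] == 0.3                   # TRUE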
2011 May 25
1
Time and db precision
I have a loop that regularly checks for new data to analyse in my database. In order to facilitate this, the database table has a timestamp column with the time that the data was inserted into the database. Something like this:
while (....) {
  load("timetoken.Rdata")
  df <- sqlQuery(con, paste("SELECT * FROM tabledf WHERE timestamp > ", timetoken, sep = ""))
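One hedged sketch of a workaround, assuming the RODBC connection 'con' and the table from the post: keep the token as text with explicit fractional seconds, so nothing is lost converting POSIXct to a SQL timestamp.
library(RODBC)
timetoken <- format(Sys.time(), "%Y-%m-%d %H:%M:%OS6")  # 6 sub-second digits
query <- paste0("SELECT * FROM tabledf WHERE timestamp > '", timetoken, "'")
df <- sqlQuery(con, query)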
2010 May 02
2
Calculation error
Dear Rxperts, Running the following code:
=======================================================
twlo=10; twhi=20; wt=154; vd=0.5; cl=0.046; tau=6; t=3; F=1
wtkg <- wt/2.2       # convert lbs to kg
vd.pt <- wtkg * vd   # compute weight-based vd (L)
cl.pt <- wtkg * cl   # compute CL (L/hr)
k <- cl.pt/vd.pt     # compute k (hr^-1)
cave <-
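Threads titled like this usually come down to comparing a computed value against a bound with ==; purely as an illustration (this window check is my assumption, not code from the post):
# Hypothetical helper: tolerant therapeutic-window check
within_window <- function(x, lo, hi, tol = sqrt(.Machine$double.eps)) {
  x > lo - tol & x < hi + tol
}
within_window(9.9999999999, twlo, twhi)  # TRUE: within tolerance of the bound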
2012 Aug 24
3
R minimal calculation error
Hi, I'm doing some easy calculations to normalize some values. It looks like this:
x = mean(a+b+c+d ...)
a = a-x
b = b-x
c = c-x
d = d-x
...
mean(a+b+c+d ...)  ---> should now be 0!
However, I'm getting results like -2.315223e-18. This is really close to 0, but not very aesthetic. Can I prevent this? Or is this behaviour desired? Thank you very much! Burtan
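Yes, this is expected double-precision behaviour, and it can be tidied for display; a minimal sketch with made-up data:
set.seed(1)
a <- rnorm(5)
a <- a - mean(a)
mean(a)             # something like -2e-17, not exactly 0
zapsmall(mean(a))   # 0: suppresses negligible noise when printing
round(mean(a), 10)  # 0: explicit rounding works too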
2010 Nov 28
5
unexpected behavior using round to 2 digits on randomly generated numbers
Hello! I stumbled upon something odd that took a while to track down, and I wanted to run it by here to see if I should submit a bug report. For randomly generated numbers (from a variety of distributions) rounding them to specifically 2 digits and then multiplying them by 100 produces strange results on about 8% of cases. The problematic numbers display as I would have expected, but do not
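A sketch of the effect with one value known to misbehave (whether a given number hits it depends on its binary representation, hence the ~8%):
x <- 0.29                 # displays as 0.29
y <- round(x, 2) * 100
print(y, digits = 17)     # 28.999999999999996 on IEEE-754 doubles
floor(y)                  # 28: the "strange result"
round(y)                  # 29: re-round after scaling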
2010 Jan 08
3
Newbie question on precision
Hi all, How can I get R to change the default precision value? For example:
> x=0.99999999999999999
> 1-x
[1] 0
Is there a way that I can get a non-zero value using some parameter, or some package? Many thanks.
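No option helps here: 0.99999999999999999 is closer to 1 than to any other double, so it becomes exactly 1 before the subtraction ever runs. Extended precision needs a package; a sketch with Rmpfr (assumed installed):
x <- 0.99999999999999999
1 - x                     # 0: nothing left to recover
library(Rmpfr)
x <- mpfr("0.99999999999999999", precBits = 120)
1 - x                     # about 1e-17, retained at 120-bit precision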
2011 Nov 15
2
Controlling the precision of the digits printed
Has anyone come across the right combinations to print a limited number of digits? My trial and error approach is taking too much time. Here is what I have tried:
> op <- options()
> a <- c(1e-10,1,2,3,.5,.25)
> names(a) <- c("A", "B", "C", "D", "E", "F")
> # default
> a
A B C D
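A few standard combinations, as replies in such threads tend to list; a short sketch:
a <- c(A = 1e-10, B = 1, C = 2, D = 3, E = 0.5, F = 0.25)
print(a, digits = 3)   # one-off print, options() untouched
format(a, digits = 3)  # character vector, 3 significant digits
signif(a, 3)           # rounds the values themselves
sprintf("%.3g", a)     # per-element C-style formatting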
2006 Dec 05
1
double precision
Hi, I am attempting to query a data frame from a MySQL database. One of the variables is a unique identification number ("numeric") 18 digits long. I am struggling to retrieve this variable exactly, without any rounding. The function I am using is sqlQuery(), with an ODBC connection. Querying directly results in the double being rounded towards the end (e.g. 6527600583317876352 instead of
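Doubles keep only about 15-16 significant digits, so an 18-digit ID cannot survive as numeric; a hedged sketch of two workarounds (the query, table, and connection 'con' are assumptions):
ids <- sqlQuery(con, "SELECT CAST(id AS CHAR) AS id FROM mytable",
                stringsAsFactors = FALSE)   # retrieve as text
library(bit64)                              # or keep it as a 64-bit integer
as.integer64("6527600583317876352")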