Displaying 20 results from an estimated 20000 matches similar to: "readBin is much slower for raw input than for a file"
2009 Aug 11
1
readBin() arg check has unnecessary overhead (patch included)
Dear all,
The version of readBin() in R-devel includes a use of match(), through
`%in%`, which can affect its performance significantly. By using
primitives instead of the rather expensive call to match(), I reduce
the time spent inside readBin() by more than 30% in some of my code
(part of the tractor.base package). A simple patch that does this is
given below. This passes "make
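A rough, hypothetical illustration of the kind of change described (not the actual patch): checking a scalar argument against a small fixed set with %in% goes through match(), while chained == comparisons use only primitives.
check_with_match <- function(what) what %in% c("integer", "double", "raw")
check_with_primitives <- function(what) what == "integer" || what == "double" || what == "raw"
# Both return TRUE/FALSE; the second avoids the match() call, e.g.
# system.time(for (i in 1:1e6) check_with_match("raw"))
# system.time(for (i in 1:1e6) check_with_primitives("raw"))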
2011 Mar 29
2
Reading 64-bit integers
Dear all,
I see from some previous threads that support for 64-bit integers in R
may be an aim for future versions, but in the meantime I'm wondering
whether it is possible to read in integers of greater than 32 bits at
all. Judging from ?readBin, it should be possible to read 8-byte
integers to some degree, but it is clearly limited in practice by R's
internal 32-bit integer type:
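A minimal sketch of what ?readBin does allow (file name hypothetical, little-endian assumed): 8-byte integers can be requested, but they are coerced into R's 32-bit integer type on the way in.
con <- file("int64.bin", "rb")
x <- readBin(con, what = "integer", size = 8, n = 10, endian = "little")
close(con)
# Values outside the 32-bit range come back as NA; the bit64 package's
# integer64 class is one way to hold full 64-bit values in R.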
2010 Mar 25
1
RODBC: reading binary data from a TXT field belonging to a PostgreSQL table
Dear R-List,
I am working with binary data that I want to store in a PostgreSQL
database. I decided to use a TXT field. I read my binary file with the
readBin function and I succeed in storing the data in the database, but I
have some trouble extracting it: the correct number of bytes is
stored in the TXT field, but when I access the data, the extracted
data frame is truncated!
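For the readBin part of that workflow, a minimal sketch (file name hypothetical) is to read the whole file as a raw vector and check its length against the file size before handing it to the database layer.
path <- "payload.bin"
bytes <- readBin(path, what = "raw", n = file.info(path)$size)
length(bytes) == file.info(path)$size   # should be TRUE before any storage step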
2008 Apr 28
4
R 2.7.0, match() and strings containing \0 - bug?
Hi,
A piece of my code that uses readBin() to read a certain file type is
behaving strangely with R 2.7.0. This seems to be because of a failure
to match() strings after using rawToChar() when the original was
terminated with a "\0" character. Direct equality testing with ==
still works as expected. I can reproduce this as follows:
> x <- "foo"
> y <-
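A common workaround, sketched here rather than taken from the thread, is to drop nul bytes from the raw vector before rawToChar(), so the resulting string behaves normally in match().
bytes <- as.raw(c(0x66, 0x6f, 0x6f, 0x00))     # "foo" plus a terminating nul
clean <- rawToChar(bytes[bytes != as.raw(0)])  # strip the nul byte(s)
clean == "foo"                                 # TRUE
match(clean, c("foo", "bar"))                  # 1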
2008 Jul 25
1
serialize() to via temporary file is heaps faster than doing it directly (on Windows)
Hi,
FYI, I just noticed that on Windows (but not Linux) it is orders of
magnitude (below it's 50x) faster to serialize() an object to a
temporary file and then read it back, than to serialize it to an object
directly. This has, for instance, an impact on how fast digest::digest()
can provide a checksum.
Example:
x <- 1:1e7;
t1 <- system.time(raw1 <- serialize(x, connection=NULL));
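A sketch of the file-based route being compared against (variable names and details assumed, not the poster's exact code):
t2 <- system.time({
    tf <- tempfile()
    con <- file(tf, "wb")
    serialize(x, connection = con)
    close(con)
    raw2 <- readBin(tf, what = "raw", n = file.info(tf)$size)
    unlink(tf)
})
# raw2 should hold the same serialized bytes as raw1 when both use the default format.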
2006 Jun 28
2
read file with readBin (the file was saved with a C-routine)
Hello!
I have problems using readBin() to read files that were written in C with fwrite(). In the C file there is the following code:
fwrite(MyitINI,sizeof(itINItype),1,outfile);
where MyitINI is a structure of the following form
typedef struct {
    int KernelFileSave;       /* Determines whether the system matrix is saved or not */
    char KernelFileName[200]; /* A-matrix name
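A sketch of reading those two leading fields back with readBin (file name assumed; a 4-byte int and a 200-byte fixed char array, ignoring any struct padding and assuming native endianness):
con <- file("MyitINI.bin", "rb")
KernelFileSave <- readBin(con, what = "integer", size = 4, n = 1)
name_bytes <- readBin(con, what = "raw", n = 200)
# keep only the bytes before the first nul terminator
KernelFileName <- rawToChar(name_bytes[cumsum(name_bytes == as.raw(0)) == 0])
close(con)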
2009 May 18
2
readBin on binary non-blocking connections (Windows & Unix differences/bugs)
R-devel:
I am encountering a consistency issue using socketConnection and
readBin with *non-blocking* connections on Unix and Windows XP (no
Vista to test).
I am a bit confused by the behavior of *non-blocking* connections
under Windows specifically. When readBin is called on a non-blocking
connection and there is no data to read on the socket, the connection
under Unix will return a vector of
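A sketch of the polling pattern this behaviour implies (host and port hypothetical): on Unix, at least, readBin() on a non-blocking connection returns a zero-length vector when nothing is available.
con <- socketConnection("localhost", port = 6011, blocking = FALSE, open = "rb")
repeat {
    chunk <- readBin(con, what = "raw", n = 4096)
    if (length(chunk) == 0) { Sys.sleep(0.1); next }   # nothing to read yet
    # ... process chunk ...
    break
}
close(con)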
2013 May 08
1
getting corrupted data when using readBin() after seek() on a gzfile connection
Hi,
I'm running into more issues when reading data from a gzfile connection.
If I read the data sequentially with successive calls to readBin(), the
data I get looks ok. But if I call seek() between the successive calls
to readBin(), I get corrupted data.
Here is a (hopefully) reproducible example. See my sessionInfo() at the
end (I'm not on Windows, where, according to the man page,
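One workaround, sketched here with a hypothetical file and offsets, is to avoid seek() on the gzfile connection altogether and skip forward by reading and discarding bytes:
con <- gzfile("data.gz", "rb")
header <- readBin(con, what = "raw", n = 100)
invisible(readBin(con, what = "raw", n = 1024))   # skip 1024 bytes instead of seek()
payload <- readBin(con, what = "double", size = 8, n = 50)
close(con)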
2006 Sep 27
2
Single Precision (4 byte) floats with readBin
I would like to use readBin to read a binary data
file. Most of the data is 4-byte floating point but,
for some reason, only double precision appears to be
offered. I tried
fVariable=readBin(iFile,what=single());
and got 35.87879 which looks believable except that
the correct value is 3.030303. I then tried
fVariable=readBin(iFile,what=single(),4);
and got
[1] 3.831111e+10 6.657199e+10
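The usual way to read 4-byte floats, sketched with a hypothetical file and assumed little-endian byte order: single() has mode "numeric", so passing it as what silently reads 8-byte doubles; it is the size argument that selects single precision.
con <- file("data.bin", "rb")
fVariable <- readBin(con, what = "numeric", size = 4, n = 100, endian = "little")
close(con)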
2019 Nov 18
2
readBin should check that its endian argument is a legal value
I think it would be helpful if readBin checked that its endian argument is
a legal value.
Why? I was reviewing some of our code and noticed that the author had
readBin(..., endian="network") and never having heard of "network", I
looked at the man page for readBin, and it hadn't heard of "network"
either. Not good.
I then looked at the R code for readBin, which
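A sketch of the kind of check being asked for (wrapper name hypothetical); the values documented for endian are "big", "little" and "swap".
read_bin_checked <- function(con, what, n = 1L, size = NA_integer_,
                             signed = TRUE, endian = .Platform$endian) {
    stopifnot(endian %in% c("big", "little", "swap"))
    readBin(con, what, n = n, size = size, signed = signed, endian = endian)
}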
2006 Jun 02
1
Typo fix for readBin.Rd
Hi,
The man page for readBin has a small typo:
--- a/src/library/base/man/readBin.Rd
+++ b/src/library/base/man/readBin.Rd
@@ -58,7 +58,7 @@ writeBin(object, con, size = NA, endian
\code{readBin} and \code{writeBin} read and write C-style
zero-terminated character strings. Input strings are limited to 10000
- characters. \code{\link{readChar}} and \code{\code{writeChar}}
+
2007 Dec 31
1
readBin differences on Windows and Linux/mac
I have been trying to use the gunzip function in the R.utils package. It
opens a connection to a gzfile, uses readBin to read from that connection,
and then uses writeBin to write out the raw data to a new file. This works
as expected under linux/mac, but under Windows, I get:
Error in readBin(inn, what= raw(0), size = 1, n=BFR.SIZE) :
negative length vectors are not allowed
A simple
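The copy loop in question looks roughly like the sketch below (file names and buffer size hypothetical): raw chunks are read from the gz connection and written out until readBin() returns a zero-length vector.
BFR.SIZE <- 1e7
inn <- gzfile("foo.gz", "rb")
out <- file("foo", "wb")
repeat {
    bfr <- readBin(inn, what = raw(0), size = 1, n = BFR.SIZE)
    if (length(bfr) == 0) break
    writeBin(bfr, out)
}
close(inn); close(out)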
2002 Mar 05
3
reading 2-byte integers using readBin and connections
Hi folks:
This may be a stupid question, but I cannot seem to find a way to tell
readBin that I want to read 2-byte integers from the connection. The input
file is 150,720 bytes long containing 75,360 short (2-byte) integers. But
specifying "integer" or "int" for what in readBin only returns me a vector
of length 37680, leading me to believe that sizeof(integer) or
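The missing piece is the size argument; a sketch (file name, signedness and endianness assumed):
con <- file("shorts.bin", "rb")
v <- readBin(con, what = "integer", size = 2, n = 75360,
             signed = TRUE, endian = "little")
close(con)
length(v)   # 75360 two-byte integers, not 37680 four-byte ones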
2005 Oct 12
1
Questions about readBin function (Was: dec2bin?)
Hi,
The latest version of R changed readBin() and writeBin(), which "now
support raw vectors as well as filenames and connections". As a result I am
working on retiring the "raw2bin" and "bin2raw" functions from the "caTools"
package, which do exactly the same. Thanks to Prof.
Ripley for bringing this change to my attention.
Which brings me
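A minimal sketch of that raw-vector interface: writeBin() can fill a raw vector and readBin() can read straight back from it, with no file or connection involved.
r <- writeBin(1:3, raw(), size = 4, endian = "little")
length(r)                                                           # 12 bytes
readBin(r, what = "integer", size = 4, n = 3, endian = "little")    # 1 2 3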
2002 Nov 29
2
readBin or writeBin adds extra nulls (PR#2333)
Full_Name: Ken Yap
Version: 1.6.1
OS: Linux (SuSE 8.0)
Submission from: (NULL) (129.78.64.5)
I'm trying to copy a file using readBin and writeBin. (The reason is to be able
to pipe PostScript or PDF output to a socket later; this is just an experiment.)
I do:
zz <- file("foo.ps", "rb")
r <- readBin(zz, character(), 1000000)
yy <- file("bar.ps",
2011 Sep 01
4
readBin fails to read large files
Posting for a friend
Begin forwarded message:
From: "Geier, Florian" <florian.geier08@imperial.ac.uk<mailto:florian.geier08@imperial.ac.uk>>
Subject: Fwd: readBin fails to read large files
Date: September 1, 2011 4:10:53 PM GMT+01:00
To:
Begin forwarded message:
Date: 1 September 2011 16:01:45 GMT+01:00
Subject: readBin fails to read large files
Dear all,
I am trying
2009 May 11
3
readBin: read from defined offset TO defined offset?
Hello,
With the help of "seek" I can start "readBin" from any byte offset within my
file that I deem appropriate.
What I would like to do is to be able to define the endpoint of that read as
well. Is there any solution to that already out there?
Thanks for any hints, Joh
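There is no end-offset argument, but seek() plus an n computed from the two offsets covers exactly that span; a sketch with hypothetical offsets:
start <- 1000; end <- 5000                 # byte offsets, end exclusive
con <- file("data.bin", "rb")
seek(con, where = start)
chunk <- readBin(con, what = "raw", n = end - start)
close(con)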
2006 Oct 21
2
Possible bugs in 'seek' and 'readBin'
I found that
seek(..., origin = 'current', ...)
and
readBin(..., what = 'integer', ...)
or 'int'
do not work correctly.
Has anyone had the same experience?
2004 Feb 13
2
Readbin and file position
I have a binary file which is an image with multiple bands, arranged in BSQ
format such that R, B and G are all N x M sized matrices (corresponding to
Red, Blue and Green colors respectively). The BSQ file arranges the data as
[R, B, G], so to access the B matrix, I have to read forward N x M + 1
samples. Is there a fast way to define a variable as the B matrix
exclusively (e.g. Can I
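A sketch of jumping straight to the second band of a BSQ file (dimensions, sample size, signedness and band order are all assumptions here):
N <- 512; M <- 512; bytes_per_sample <- 2
con <- file("image.bsq", "rb")
seek(con, where = N * M * bytes_per_sample)          # skip past the first band
B <- matrix(readBin(con, what = "integer", size = bytes_per_sample,
                    n = N * M, signed = FALSE, endian = "little"),
            nrow = N, ncol = M)
close(con)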
2012 Feb 15
1
Using readBin to read binary "unformatted" output files from Fortran?
Hello,
I'm wondering if I can get some help with reading Fortran binary "unformatted" output files into R.
The Fortran output files were generated on Ubuntu 10.04 LTS using gfortran 4.4, on a 32-bit Intel Core 2 Duo 3.16 GHz machine, with little-endian byte order and record marker lengths equal to 4.
The machine on which I'm currently trying to read this Fortran output file is a MacBook Pro
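A sketch of reading one gfortran sequential unformatted record under the settings described (file name and payload type assumed): a 4-byte little-endian length marker, the payload, then a trailing marker that should match.
con <- file("output.dat", "rb")
nbytes <- readBin(con, what = "integer", size = 4, n = 1, endian = "little")
record <- readBin(con, what = "double", size = 8, n = nbytes / 8, endian = "little")
trailer <- readBin(con, what = "integer", size = 4, n = 1, endian = "little")
stopifnot(trailer == nbytes)
close(con)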