Displaying 20 results from an estimated 11000 matches similar to: "Modifying a particular column in a tab-delimited file"
2010 Dec 28
4
Reading sas7bdat files into R
Hi All,
I am trying to import a .sas7bdat file into R.
I tried using Hmisc package's sasxport.get() function.
temp <- sasxport.get("path\abcd.sas7bdat")
I get an error that says "Error in lookup.xport(file) : file not in SAS transfer format"
I am not familiar with SAS transfer format.
Could somebody please clarify what it is that I am missing?
Thanks for your
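A minimal sketch of one way to read a native sas7bdat file, assuming the haven package (not mentioned in the original post) is installed; sasxport.get() only handles SAS XPORT (.xpt) transport files, which is why it rejects a .sas7bdat:
library(haven)
# forward slashes (or doubled backslashes) also avoid the "\a" escape problem in the path
temp <- read_sas("path/abcd.sas7bdat")
head(temp)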
2012 May 04
2
read.table() vs read.delim() any difference??
Hi,
I have a tab-separated file with 206 rows and 30 columns.
I read the file into R using the read.table() function. I checked the dim()
of the data frame created in R; it had only 103 rows (exactly half) and 30
columns. Then I tried reading the file using the read.delim() function, and
this time dim() showed 206 rows and 30 columns, as expected.
Reading the read.table() R-help documentation, I
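The halving is most likely caused by read.table()'s defaults: quote = "\"'" and comment.char = "#", so a lone apostrophe or # inside a field can glue two lines together, while read.delim() presets quote = "\"", comment.char = "" and fill = TRUE. A hedged sketch (the file name is made up):
x <- read.table("mydata.txt", sep = "\t", header = TRUE,
                quote = "", comment.char = "")
dim(x)   # should now report 206 rows and 30 columns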
2010 Dec 09
4
lapply getting names of the list
Hello All,
I have a toy dataframe like this. It has 8 columns separated by tab.
Name SampleID Al1 Al2 X Y R Th
rs191191 A1 A B 0.999 0.09 0.78 0.090
abc928291 A1 B J 0.3838 0.3839 0.028 0.888
abcnab A1 H K 0.3939 0.939 0.3939 0.77
rx82922 B1 J K 0.3838 0.393
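The excerpt is cut off, but a common pattern for keeping the list names visible inside lapply() is to loop over names(); a sketch assuming the toy data frame is called dat and the goal is one summary per SampleID:
by_sample <- split(dat, dat$SampleID)
res <- lapply(names(by_sample), function(nm) {
  d <- by_sample[[nm]]
  data.frame(SampleID = nm, meanX = mean(d$X), meanY = mean(d$Y))
})
do.call(rbind, res)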
2012 Nov 17
1
Strange problem with reading a pipe delimited file
I am trying to read in a pipe delimited file that has rows with varying number of columns, here is my sample data:
A|B|C|D
A|B|C|D|E|F
A|B|C|D|E
A|B|C|D|E|F|G|H|I
A|B|C|D
A|B|C|D|E|F|G|H|I|J
You can see that line 6 has 10 columns. Yet I can't explain why R does this:
> test <- read.delim("mypaths4.txt", sep="|", quote=NULL, header=F,
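read.table()/read.delim() guess the number of columns from the first five lines unless told otherwise, so the 10-field line later in the file is what trips it up. A sketch using count.fields() to size the widest row, with the file name from the post:
n <- max(count.fields("mypaths4.txt", sep = "|"))
test <- read.table("mypaths4.txt", sep = "|", header = FALSE,
                   fill = TRUE, col.names = paste0("V", seq_len(n)))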
2014 Jul 01
1
combining data from multiple read.delim() invocations.
Is there a better way to do the following? I have data in a number of tab
delimited files. I am using read.delim() to read them, in a loop. I am
invoking my code on Linux Fedora 20, from the BASH command line, using
Rscript. The code I'm using looks like:
arguments <- commandArgs(trailingOnly=TRUE);
# initialize the capped_data data.frame
capped_data <- data.frame(lpar="NULL",
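Instead of growing capped_data row by row, one common idiom is to read every file into a list and bind once at the end; a sketch, assuming all files share the same columns:
arguments <- commandArgs(trailingOnly = TRUE)
pieces <- lapply(arguments, read.delim, stringsAsFactors = FALSE)
capped_data <- do.call(rbind, pieces)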
2010 Oct 26
3
Reading in a tab-delimited file
Hi all,
I have a total newbie question, but I could really use some help.
I need to read in this file:
SampleID Disease
E-CBIL-28-raw-cel-1435145228.cel 1
E-CBIL-28-raw-cel-1435145451.cel 2
E-CBIL-28-raw-cel-1435145479.cel 2
E-CBIL-28-raw-cel-1435145132.cel 3
E-CBIL-28-raw-cel-1435145417.cel 3
E-CBIL-28-raw-cel-1435145301.cel 2
E-CBIL-28-raw-cel-1435145558.cel 1
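A minimal sketch, assuming the file is tab-delimited and saved as "samples.txt" (a made-up name):
samples <- read.delim("samples.txt", header = TRUE,
                      colClasses = c("character", "integer"))
str(samples)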
2010 Jul 18
2
Import of specific column of many space-delimited text files
Hi,
I have about 300 space-delimited text files and from each file I want to
import one specific column into R to create a data frame where all imported
columns are included.
Is there a smart way to do so?
Thanks!
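A sketch of one way to do it, assuming all 300 files live in one directory, have equally many rows, and the wanted column is, say, the 3rd of 5 (both assumptions need adjusting):
files <- list.files("data_dir", pattern = "\\.txt$", full.names = TRUE)
cols <- lapply(files, function(f)
  read.table(f, colClasses = c("NULL", "NULL", "numeric", "NULL", "NULL"))[[1]])
combined <- do.call(cbind, cols)
colnames(combined) <- basename(files)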
2005 Apr 24
1
large dataset import, aggregation and reshape
Dear useRs
We have a data set (comma-delimited) with 12 million rows and 5
columns (in fact many more, but we need only 4 of them): id, factor 'a'
(5 levels), factor 'b' (15 levels), date-stamp, numeric measurement. We
run R on SuSE Linux 9.1 with 2 GB RAM (and a 3.5 GB swap file).
On average we have 30 obs. per id. We want to aggregate (e.g. sum of the
measurements under
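A sketch of the memory-friendly route: read only the needed columns via colClasses and aggregate once; the five-column layout and file name are assumptions, not from the post:
cc <- c("character", "factor", "factor", "NULL", "numeric")  # skip the date-stamp here
d  <- read.csv("big.csv", header = FALSE, colClasses = cc)
names(d) <- c("id", "a", "b", "x")
agg <- aggregate(x ~ id + a + b, data = d, FUN = sum)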
2008 May 30
1
Skipping columns to save memory
I have a very large tab delimited file (~ 1.97 GB) that I need to read
in to R. The data contain 10 columns and there are millions of rows.
I need all rows of the data, but I only need the first column in the
data. I was looking at ?read.delim and trying to see if it is possible
to tell this function to read in only the first column and skip the others.
The help file says the number of
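The documented trick is colClasses: a "NULL" entry makes read.delim() skip that column entirely, so only the first column is ever stored. A sketch with a made-up file name:
first_col <- read.delim("bigfile.txt",
                        colClasses = c("character", rep("NULL", 9)))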
2009 Sep 23
1
read.delim very slow in reading files with lots of columns
Hi,
I am trying to read a tab-delimited file into R (Ver. 2.8). The machine I am using is a 64-bit Linux box with 16 GB of RAM.
The file is basically a matrix(~600x700000) and as large as 3GB.
The read.delim() ran extremely slow (hours) even with a subset of the file (31 MB with 6x700000)
I monitored the memory usage, and found it constantly only took less than 1% of 16GB memory.
Does read.delim()
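Much of that time goes into guessing a class for each of the ~700000 columns. If the file really is a plain numeric matrix (no header, no row names -- an assumption), scan() into a matrix is usually far quicker; a sketch:
v <- scan("matrix.txt", what = double(), sep = "\t")
m <- matrix(v, nrow = 600, byrow = TRUE)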
2010 Oct 09
1
ncdf installation in R
Hi All,
I am trying to install ncdf package on a Linux 64-bit machine.
I successfully installed netcdf using this command,
./configure --prefix=/home/challar/netcdf/ --disable-netcdf4
I then tried to install ncdf package in R
R CMD INSTALL --configure-args="-with-netcdf_incdir=/home/challar/netcdf/include -with-netcdf_libdir=/home/challar/netcdf/lib" ncdf_1.6.3.tar.gz
I get this
2012 Feb 08
2
Problems reading tab-delim files using read.table and read.delim
Hello,
I used read.xlsx to read in Excel files, but for large files it turned out
not to be very efficient.
For that reason I use a programme which writes each sheet in an Excel file
into tab-delim txt files.
After that I tried using read.table and read.delim to read in those txt
files. Unfortunately, the results
are not as expected. To show you what I mean I created a tiny Excel sheet
with some
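The example is cut off, but two frequent culprits with Excel-exported text are quote characters inside cells and Excel's UTF-16 "Unicode Text" encoding; reading the workbook directly is another option. Hedged sketches with made-up file names:
x <- read.delim("sheet1.txt", quote = "", comment.char = "",
                stringsAsFactors = FALSE)
# or skip the txt detour, assuming the readxl package is acceptable:
library(readxl)
y <- read_excel("workbook.xlsx", sheet = 1)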
2010 May 04
2
read.table: skipping trailing delimiters
Hi,
I am trying to read a tab-delimited file that has trailing tab
delimiters. It's a simple file with two legitimate fields. I'm using the
first as row.names, and the second should be the only column in the
resulting data frame.
Initially, R was filling the last column with NA's, but I was able to
stop that by setting
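The excerpt stops mid-sentence; one way to drop the phantom trailing field is to declare it "NULL" in colClasses, assuming each line ends with exactly one extra tab and there is no header line:
x <- read.delim("twocol.txt", header = FALSE, row.names = 1,
                colClasses = c("character", "numeric", "NULL"))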
2007 May 20
3
Why does a multi-column, tab-delimited file have only one column after reading it in with read.table and sep="\t"?
Dear all:
I have a tab-delimited file as follows:
AGE WEIGHT PROTEIN ........
6 20 3 ........
8 39 4 ........
I tried to read it in as follows:
data <- read.table(file, sep="\t", header=T);
but there is only one column in the data after reading it in:
dim(data);
[1] 200 1
the column name is "AGE...WEIGHT...........PROTEIN...."
Any quick suggestion will be appreciated.
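A mangled single column name like "AGE...WEIGHT...PROTEIN" usually means the fields are separated by runs of spaces rather than real tab characters; the default sep = "" splits on any whitespace. A sketch:
data <- read.table(file, header = TRUE)   # sep = "" is the default
dim(data)                                 # should now show all columns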
2012 Jan 19
2
Reading in tab (and space) delimited data within a script XXXX
Hello everyone,
I use Bob Muenchen's approach for reading in "in-stream" (to use SAS
parlance) delimited data within a script. This works great:
mystring <-
"id,workshop,gender,q1,q2,q3,q4
1,1,f,1,1,5,1
2,2,f,2,1,4,1
3,1,f,2,2,4,3
4,2, ,3,1, ,3
5,1,m,4,5,2,4
6,2,m,5,4,5,5
7,1,m,5,3,4,4
8,2,m,4,5,5,5"
mydata <- read.table( textConnection(mystring),
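The call is truncated above; a sketch of how it is usually completed (the extra arguments are guesses, not the original poster's), plus the text= shortcut that newer versions of R accept:
mydata <- read.table(textConnection(mystring), header = TRUE, sep = ",",
                     strip.white = TRUE, na.strings = "")
# equivalently, without an explicit connection:
mydata <- read.table(text = mystring, header = TRUE, sep = ",",
                     strip.white = TRUE, na.strings = "")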
2012 Apr 04
3
Remove carriage return in writing tab-delimited file.
I am having problems with the write.table function. I can write a tab-delimited
file just fine, but for each line in my matrix it inserts a carriage return
when I don't want it to.
For example my matrix might be:
ID V1 V2 V3
FARY1004 1 2 3
FARY2067 2 3 1
FARY4587 2 2 2
And I want the written file to be:
FARY1004 1 2 3FARY2067 2 3 1FARY4587 2 2 2
TIA
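write.table()'s eol argument sets what is written at the end of every line, so eol = "" gives the run-together output shown above. A sketch, assuming m is the matrix from the example:
write.table(m, "out.txt", sep = "\t", eol = "",
            quote = FALSE, row.names = FALSE, col.names = FALSE)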
2008 May 25
2
How to create tab-delimited text files
Hello Friends
I want to generate a tab-delimited text file of every user's information at once.
I have a button called EXPORT on my page; when I click this button,
I want to generate a tab-delimited text file for every user's information.
Can anybody give me a snippet of code or any idea for this?
Thanks
--
Posted via http://www.ruby-forum.com/.
2003 Mar 05
3
reading in tab delimited data in a loop
Dear all,
I need to read in 4 sets of tab-delimited data in a loop. The 4 data sets are called "simu1.dat", "simu2.dat" and so on. I know what I need on the right-hand side of the read.table expression, but I can't get the left-hand side of it to work (see the line in bold below). Can you kindly help? Many thanks.
simu1 <- matrix(0,30,3)
simu2 <- matrix(0,30,3)
simu3
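assign() is the usual way to build the left-hand side name inside a loop, though keeping the four data sets in a list is often easier; a sketch (the read.table arguments are placeholders for whatever the right-hand side needs):
for (i in 1:4) {
  assign(paste0("simu", i), read.table(paste0("simu", i, ".dat")))
}
# or keep them together in one list:
simu <- lapply(1:4, function(i) read.table(paste0("simu", i, ".dat")))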
2010 Mar 23
2
Saving tab/csv delimited data with NaN's
Hello,
I am working with multiple simulated data sets with missing values. I would
like to store these data sets in either tab-delimited or .csv format,
with missing values marked as NaN's instead of NA's.
I read the import/export document, which mentions that the write.table
command converts NaN's to NA. Is there any other way I can store the
NaN's? I tried the write syntax
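write.table()'s na argument is the string used for every missing value, and NaN counts as missing; if the simulated data contain NaN but no genuine NA (an assumption), setting na = "NaN" preserves them. A sketch with made-up object and file names:
write.table(simdat, "sim01.txt", sep = "\t",
            na = "NaN", quote = FALSE, row.names = FALSE)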
2007 Dec 06
1
Building package - tab delimited example data issue
Hello,
I'm trying to integrate example data in the shape of a tab delimited ASCII
file into my package and therefore dropped it into the data subdirectory.
The build works out just fine, but when I attempt to install I get:
** building package indices ...
Error in scan(file, what, nmax, sep, dec, quote, skip, nlines,
na.strings, :
line 1 did not have 500 elements
Calls: <Anonymous>
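Plain .txt/.tab files under data/ are loaded with read.table(header = TRUE) when the package indices are built, so any layout that call cannot parse breaks the install. One workaround is to ship the raw file under inst/extdata/ and read it on demand; a sketch with placeholder names:
path <- system.file("extdata", "example.txt", package = "mypackage")
example_data <- read.delim(path, header = TRUE)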