similar to: problems with read.csv

Displaying 20 results from an estimated 3000 matches similar to: "problems with read.csv"

2009 Oct 23
2
extract day or month as in Splus
Dear all, I am writing to ask for help finding R code that does the same thing as the following Splus code: dates <- c("02/27/1992", "02/27/1992", "01/14/1992", "02/28/1992", "02/01/1992") timeDate(as.character(dates),in.format="%m/%d/%Y","%a") [1] Thu Thu Tue Fri Sat Could anyone give me some R code to get the
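A minimal base-R sketch of the same result, assuming the goal is the three-letter weekday abbreviation (labels depend on the locale):

dates <- c("02/27/1992", "02/27/1992", "01/14/1992", "02/28/1992", "02/01/1992")
format(as.Date(dates, format = "%m/%d/%Y"), "%a")
# [1] "Thu" "Thu" "Tue" "Fri" "Sat"
# weekdays(as.Date(dates, "%m/%d/%Y")) returns the full day names instead.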
2010 Feb 05
2
sum a particular column by group
Dear all, I have a table like this: > eds R.ID Region Gender Agegr Time nvisits 1 1 A F 60--64 1:00 1 2 2 O F 55--59 1:20 1 3 3 O F 55--59 3:45 3 4 4 S M 60--64 1:10 3 5 5 W F 55--59 12:30 1 6
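A minimal sketch with aggregate(), assuming eds is a data frame and the goal is the total of nvisits per group (add more terms on the right of the formula to group by several columns):

aggregate(nvisits ~ Region, data = eds, FUN = sum)
# or, for a single grouping column:
# tapply(eds$nvisits, eds$Region, sum)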
2012 Jul 22
2
Frame Column to List (conversion)
Hi, Input Format: excel file (XLS) Column 1: Gene ID (alphanumeric) Columns 2-10: numeric data. inData = read.xls(<fileName>) geneLabel = inData[, 1] - column 1 stored in geneLabel tempData = inData[, 2:10] expValues = data.matrix(tempData) - convert frame into matrix format expValues has the matrix format needed for analysis. I need to bind gene labels as . I
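If the intent is to attach the gene labels to the numeric matrix as row names (an assumption, since the message is cut off at that point), a minimal sketch:

rownames(expValues) <- as.character(geneLabel)   # carry the gene IDs along with the matrix
# expValues["GeneX", ]   # rows can then be indexed by gene ID (GeneX is a placeholder)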
2008 May 13
2
array dimension changes with assignment
Why does the assignment of a 3178x93 object to another 3178x93 object remove the dimension attribute? > GT <- array(dim = c(6,nrow(InData),ncol(InSNPs))) > dim(GT) [1] 6 3178 93 > SNP1 <- InSNPs[InData[,"C1"],] > dim(SNP1) [1] 3178 93 > SNP2 <- InSNPs[InData[,"C2"],] > dim(SNP2) [1] 3178 93 > dim(pmin(SNP1,SNP2)) [1] 3178 93
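The posted output does not show where the dim attribute is actually lost, but a common cause is single-row or single-column subsetting, which drops dimensions unless drop = FALSE is given; a small illustration:

m <- matrix(1:6, nrow = 3)
dim(m[1, ])                 # NULL: the row has collapsed to a plain vector
dim(m[1, , drop = FALSE])   # 1 2: the matrix shape is kept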
2012 Feb 03
1
incomplete final line found on <name of my sourced function file>
Dear R-ers, I hope there is a really simple solution to my problem. I've written a function that I saved in an .r file. I source this file in my code. For a while it worked fine. But then when I run the line: source("F mylineplot.r") I started getting a warning: In readLines(file) : incomplete final line found on 'F mylineplot.r' I have no idea why - I tried to check and
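The warning usually just means the last line of the sourced file has no terminating newline character; one way to fix it is to re-save the file with a final Enter in an editor, or to append a newline from R (file name taken from the source() call above):

cat("\n", file = "F mylineplot.r", append = TRUE)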
2003 Nov 04
5
read.spss Error reading system-file header
Is there any documentation on what kind of SPSS file can and cannot be read by read.spss? Alternatively, how can one modify or "clean" an SPSS file to make it readable by read.spss? What properties must a *.sav file have before read.spss can read it? The file in this example is 270KB, with 5 rows and 173 columns. I have no trouble reading larger files with read.spss, so it's not
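For reference, a minimal read.spss() call from the foreign package ("example.sav" is a placeholder file name):

library(foreign)
dat <- read.spss("example.sav", to.data.frame = TRUE)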
2006 Nov 20
2
problem with loop to put data into array with missing data for some files
Dear R-help community, The main goal of this message is to find a way of skipping a month/year file that does not exist in a loop (leaving its slot in the data.out array as NA) and moving on to the next year/month to carry on filling data.out with real precipitation data. The situation so far: I downloaded 50 years' worth of GRIB data files from the NCEP data
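A hedged sketch of the skipping logic, assuming data.out is pre-filled with NA and the file names can be built from year and month (years and the naming pattern below are placeholders):

for (y in years) {
  for (m in 1:12) {
    f <- sprintf("precip_%04d_%02d.grb", y, m)   # placeholder file-name pattern
    if (!file.exists(f)) next                    # slot stays NA, move on
    # ... read the GRIB file and copy its values into data.out here ...
  }
}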
2004 Apr 29
1
I'm trying to use package ts (decompose). How do you set up the data? See attached. Thanks
InDATA <-read.table("C:/Data/May 2004/season.txt",header=T) X <- decompose(InDATA) print(X) Period Connections Q1 67519 Q2 69713 Q3 68920 Q4 69452 Q1 70015 Q2 59273 Q3 57063 Q4 65596 Q1 73527 Q2 58586 Q3 69522 Q4 60091 Q1 51686 Q2 63490 Q3 55702 Q4 53200 Q1 51033 Q2 48175 Q3 52709 Q4 50106 Q1 50855 Q2 43466 Q3 48190 Q4 41702 Q1 48747 Q2 51441 Q3 42537
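decompose() expects a ts object with a seasonal frequency rather than a raw data frame, so the quarterly series needs to be wrapped first; a minimal sketch using the Connections column shown in the printed data:

InDATA <- read.table("C:/Data/May 2004/season.txt", header = TRUE)
x <- ts(InDATA$Connections, frequency = 4)   # quarterly data
plot(decompose(x))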
2006 Nov 20
3
problem with loop to put data into array with missing data for some files
Hi Jenny, If you want a general solution I understand. However, I just downloaded the file fine (as far as I can tell), so you are welcome to have a copy. I can email it to you if you want. I do not think your test for NA is valid, i.e. if(test != "NA"){ } I think you should use if(is.na(test)){ } or something similar. J --- John Seers Institute of Food Research Norwich
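A one-line illustration of the difference:

test <- NA
test != "NA"   # NA, so if(test != "NA") fails with "missing value where TRUE/FALSE needed"
is.na(test)    # TRUE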
2009 Nov 24
1
keep empty subsets using aggregate
Dear all, I am struggling with a small problem. By using aggregate, the empty subsets are removed. I need each empty subset to be 0. Any suggestions will be appreciated. Code: edref = aggregate(rep(1,times=dim(eds)[1]),list(eds[,11], eds[,7], eds[,27]), sum) Thanks in advance, Betty
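One hedged alternative: because the statistic here is just a count, table() (or xtabs()) keeps empty cells as 0; the column positions are taken from the aggregate() call above:

counts <- table(eds[, 11], eds[, 7], eds[, 27])   # zero counts are retained
edref  <- as.data.frame(counts)                   # long format, comparable to aggregate()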
2006 Sep 07
2
Getting subframe type=verbatim on 16 bit files
Here's how I set up the data for processing: // For moving data into 32 bit shape uint8_t *buffer8 = NULL; uint16_t *buffer16 = NULL; uint32_t *buffer32 = NULL; unsigned sample32; unsigned sample, channel; uint32_t bitsPerSample = this->get_bits_per_sample(); numFrames = inData.GetSize();
2006 Dec 22
1
Powercom BNT-1200AP driver
Hello. What about a driver for my UPS? I have modified the powercom driver myself, but I did not get it entirely right. Nevertheless, the driver somehow works. I now have the protocol spec for the Powercom BNT series and have attached it to this email. Might it be of interest to the authors? -- mailto:alex@reutman.ru JID: alex@reutman.ru ICQ: 5052225
2004 Sep 20
1
rsync version 2.6.3pre1 protocol version 28
Hi, this is possibly a bug report (I'm not sure if this is a feature). It's related to the --keep-dirlinks option, when combined with --delete . I have the following directory structure on server A: ls -lR software software: total 238 drwxr-xr-x 2 biolord bioinf 1024 Sep 20 10:49 EMBOSS/ lrwxrwxrwx 1 biolord bioinf 6 Feb 5 2003 MSE -> EMBOSS/ lrwxrwxrwx 1
2004 Mar 11
0
Function OPTIM()
Hello, I'm trying to reformulate my problem below so there is less data entry each time the package is run. This is the 'inefficient' version; below (shown in blue in the original message) is what I would like it to look like, but I can't get it to work. InDATA <- read.table("C:/Data/March 2004/DATA2.txt",header=T) WO=dim(InDATA)[1] DI=dim(InDATA)[2]-1 B <- matrix(rep(0,WO*DI), c(WO,DI)) j=1
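If the aim is simply to turn the data columns of InDATA into a numeric matrix without filling it element by element, a hedged sketch (assuming column 1 is the one excluded, as DI = ncol - 1 suggests):

InDATA <- read.table("C:/Data/March 2004/DATA2.txt", header = TRUE)
B <- as.matrix(InDATA[, -1])             # all columns except the first
# B <- matrix(0, nrow = WO, ncol = DI)   # or an explicit zero matrix, if one is needed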
2010 Oct 06
0
multiple record types from a single file efficiently?
The current population survey March supplements contain records on households, families and individuals, each with a distinct record type, all in the same file. I'm trying to read these files efficiently; the following function reads the data file "indata", the records are described in lists contained in "dd_by_type", and flag_pos gives the character position in the data
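A hedged sketch of the usual pattern: read every line once, use the record-type flag at flag_pos to split the lines, then parse each subset with its own fixed-width layout (the record-type codes and widths below are placeholders):

lines   <- readLines("indata")
rectype <- substr(lines, flag_pos, flag_pos)           # single-character flag assumed
hh <- read.fwf(textConnection(lines[rectype == "1"]),  # household records, say
               widths = c(1, 5, 2))                    # placeholder widths
pp <- read.fwf(textConnection(lines[rectype == "3"]),  # person records, say
               widths = c(1, 5, 2, 2))                 # placeholder widths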
2011 Sep 28
1
using the system command
Hi, I started playing around with a function for using StatTransfer (version 10) for importing data. This started as a simple task but it's not working and so now I'm very frustrated. I'm using R version 2.13 on Windows 7. The function, called fn.importData, is: function(file = NULL, type = NULL){ ## ## create statTransfer command file -
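A hedged sketch of the general shape such a wrapper can take; the st.exe location and the command-file syntax are assumptions about the StatTransfer command processor, not verified:

fn.importData <- function(file = NULL, type = NULL) {
  cmdfile <- tempfile(fileext = ".stcmd")
  writeLines(c(sprintf("copy %s temp.csv", file), "quit"), cmdfile)      # assumed syntax
  system(paste0('"C:/Program Files/StatTransfer10/st.exe" ', cmdfile))   # assumed path
  read.csv("temp.csv")
}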
2010 Oct 03
2
Read file
Dear R-users, I would like to know how I could read a file with lines of different lengths. I need to read this file and create an output to feed my database. So after reading I'll need to create an output like this: "INSERT INTO TEMP (DATA,STATION,VAR1,VAR2) VALUES (20100910,837460, 39,390)" I mean, each line should be read. But I don't know how to do this when these lines have different
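A hedged sketch with readLines()/strsplit(), which copes with lines of different lengths; the separator and field order are assumptions since the file layout is not shown:

lines  <- readLines("obs.txt")                     # placeholder file name
fields <- strsplit(lines, "\\s+")                  # whitespace-separated, assumed
sql <- vapply(fields, function(f)
  sprintf("INSERT INTO TEMP (DATA,STATION,VAR1,VAR2) VALUES (%s,%s,%s,%s)",
          f[1], f[2], f[3], f[4]), character(1))
writeLines(sql, "inserts.sql")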
2012 Nov 26
2
puzzling RODBC error
Dear all, I'm trying to connect to an MSAccess database (ArcGIS personal geodatabase). I keep getting an error about the channel when using sqlQuery(). However, sqlTables() does not complain about the channel and lists all tables in the database. If I try sqlFetch(), then R crashes. I'm happy to hear suggestions on how to solve this. Best regards, Thierry > MDB <-
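For reference, a minimal RODBC sketch on Windows; the .mdb path and table name are placeholders, and whether sqlFetch()/sqlQuery() then succeed depends on the geodatabase tables themselves:

library(RODBC)
MDB <- odbcConnectAccess2007("C:/data/geodb.mdb")   # or odbcConnectAccess() with the older driver
sqlTables(MDB)
dat <- sqlQuery(MDB, "SELECT * FROM some_table")    # some_table is a placeholder
odbcClose(MDB)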
2007 Jan 28
2
help with RandomForest classwt option
Hello there, I am working on an extremely unbalanced two-class classification problem. I want to use "classwt" together with down-sampling. Checking rfNews() in R, it looks like classwt is not working yet. Then I looked at the software from Salford. I did not find a down-sampling option. I am wondering if you have any experience dealing with this problem. Do you
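One hedged way to approximate down-sampling without classwt is the strata/sampsize pair in randomForest(), which draws a balanced bootstrap sample for every tree (x and y are placeholders for the predictors and the class factor):

library(randomForest)
n_min <- min(table(y))                          # size of the rare class
fit <- randomForest(x, y, strata = y,
                    sampsize = c(n_min, n_min), # equal draws from each class
                    ntree = 500)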
2017 Sep 19
3
what do you think about write.table(... qmethod = "excel")?
Last week one of our clients reported trouble with a csv file I generated with write.table. He said that columns with quotes around character variables were rejected by their data importer, which had been revised to match the way Microsoft Excel uses quotation marks in character variables. I explained to them that quoted character variables are virtuous and wise, of course, but they say Microsoft Excel
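For reference, a hedged sketch of what write.table() already offers: qmethod = "double" escapes embedded quotes by doubling them, which is the convention Excel-style CSV readers expect, and write.csv() uses it by default:

d <- data.frame(x = c('has "quotes"', "plain"), y = 1:2)
write.table(d, "out.csv", sep = ",", row.names = FALSE, qmethod = "double")
# write.csv(d, "out.csv", row.names = FALSE)   # same quoting convention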