similar to: Merging vector data into one file

Displaying 20 results from an estimated 50000 matches similar to: "Merging vector data into one file"

2009 May 01
1
question on aggregate
Hi, I am trying to sum column information in a list with 3 instances. For example: ID Traversed ID Traversed ID Traversed 1 5 1 7 1 8 2 8 2 11 2 7 3 11 3 22 3 16 What I want to do is sum the
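A minimal sketch of one way to do this, assuming the three tables are stored as data frames in a list (here called res, a name invented for the example), each with the ID and Traversed columns from the post:

res <- list(
  data.frame(ID = 1:3, Traversed = c(5, 8, 11)),
  data.frame(ID = 1:3, Traversed = c(7, 11, 22)),
  data.frame(ID = 1:3, Traversed = c(8, 7, 16))
)
# stack the three data frames and sum Traversed within each ID
combined <- do.call(rbind, res)
aggregate(Traversed ~ ID, data = combined, FUN = sum)
#   ID Traversed
# 1  1        20
# 2  2        26
# 3  3        49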
2008 Aug 12
2
Parsing array data
Hi, I read in csv files with the following code: res <- vector(mode="list",length=3) for(i in 1: length(res)) res[[i]]<-read.csv(file=paste("/Users/markaltaweel/Desktop/Output/HydroDataOutput",i,".csv",sep=""),header=T,sep=",") This allows me to load the data into an array of length 3, with the res array containing my data from the csv
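If the eventual goal is one combined table rather than a list, a possible follow-up step (the file path and read settings are the poster's and are not verified here):

res <- vector(mode = "list", length = 3)
for (i in seq_along(res)) {
  res[[i]] <- read.csv(paste("/Users/markaltaweel/Desktop/Output/HydroDataOutput",
                             i, ".csv", sep = ""), header = TRUE)
}
# bind the three data frames row-wise into a single data frame
hydro <- do.call(rbind, res)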
2008 Aug 20
2
Line of best fit
Hi, I have a scatter plot, with an equation that best fits the scatter plot expressed as: 1/x^.6. I know for normal linear regression lines you can use the abline() command; however, since my best fit line is not linear, how can I draw my line on the scatter plot in a similar fashion to abline(). Thanks for everyone's help again. I appreciate this board's advice. Mark
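One way to overlay a non-linear curve is curve() with add = TRUE; a sketch with simulated data, assuming the fitted relationship really is y = 1/x^0.6:

set.seed(1)
x <- runif(50, 0.5, 5)
y <- 1 / x^0.6 + rnorm(50, sd = 0.05)
plot(x, y)
# curve() evaluates the expression over the plotted x-range and draws it on top
curve(1 / x^0.6, add = TRUE, col = "red")
# equivalently: lines(sort(x), 1 / sort(x)^0.6, col = "blue")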
2008 Aug 13
3
Conditional statement used in sapply()
Hi, I have data stored in a list that I would like to aggregate and perform some basic stats. However, I would like to apply conditional statements so that not all the data are used. Basically, I want to get a specific variable, do some basic functions (such as a mean), but only use the values in each element that match the condition. The code I used is below: >
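The post is cut off before the condition, so here is only a generic sketch; the list, the value column, and the threshold of 10 are all invented placeholders:

mylist <- list(
  data.frame(value = c(3, 12, 15)),
  data.frame(value = c(8, 20, 2)),
  data.frame(value = c(11, 13, 9))
)
# mean of 'value' within each list element, using only rows meeting the condition
sapply(mylist, function(d) mean(d$value[d$value > 10]))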
2010 Feb 01
2
Loading data from folder
Hi, I am trying to load csv type data from a folder. However, rather than syntax that simply loads one file at a time I was wondering if there is a method that loads all data from a specific folder. For instance, I have the following data files in a folder nebraskaStats: 10-1-2009_10-7-2009.txt 10-2-2009_10-8-2009.txt 10-3-2009_10-9-2009.txt ....etc. (245 total files in folder) Each file
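A common pattern is list.files() plus lapply(); a sketch assuming the files are comma-separated with a header row (the folder name is from the post, the read settings are guesses):

files <- list.files("nebraskaStats", pattern = "\\.txt$", full.names = TRUE)
# read every file in the folder into a named list of data frames
dat <- lapply(files, read.csv, header = TRUE)
names(dat) <- basename(files)
# optionally stack them into one data frame
all.dat <- do.call(rbind, dat)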
2009 Sep 27
2
zoo: merging aggregated zoo-objects fails
Dear all, I have several text files looking like this: 9063032 19700201 22:00 174.067 9063032 19700201 23:00 174.076 9063032 19700202 00:00 174.085 9063032 19700202 01:00 174.091 9063032 19700202 02:00 174.094 9063032 19700202 03:00 174.091 9063032 19700202 04:00 174.082 9063032 19700202 05:00 174.079 And I run this loop: for (j in 1:nr.of.files) { #Import: DF <-
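A sketch of one way to read, aggregate, and merge such files with the zoo package, assuming each file holds the four whitespace-separated columns shown (id, date, hour, value); the aggregation to daily means is a guess since the loop is truncated:

library(zoo)
files <- list.files(pattern = "\\.txt$")
zlist <- lapply(files, function(f) {
  DF <- read.table(f, col.names = c("id", "date", "hour", "value"))
  # build a POSIXct index from the date and hour columns
  idx <- as.POSIXct(paste(DF$date, DF$hour), format = "%Y%m%d %H:%M", tz = "GMT")
  z <- zoo(DF$value, order.by = idx)
  # collapse the hourly values to daily means before merging
  aggregate(z, as.Date(index(z)), mean)
})
# merge all series on their common (daily) index
merged <- do.call(merge, zlist)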
2009 Nov 23
3
FUN argument to return a vector in aggregate function
Hi All, I am currently doing the following to compute summary statistics of aggregated data: a = aggregate(warpbreaks$breaks, warpbreaks[,-1], mean) b = aggregate(warpbreaks$breaks, warpbreaks[,-1], sum) c = aggregate(warpbreaks$breaks, warpbreaks[,-1], length) ans = cbind(a, b[,3], c[,3]) This seems unnecessarily complex to me so I tried > aggregate(warpbreaks$breaks, warpbreaks[,-1],
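One way to get all three statistics in a single call is a FUN that returns a named vector; a sketch with the same warpbreaks data:

ans <- aggregate(breaks ~ wool + tension, data = warpbreaks,
                 FUN = function(x) c(mean = mean(x), sum = sum(x), n = length(x)))
# the statistics come back as a matrix column; flatten it into ordinary columns
ans <- do.call(data.frame, ans)
head(ans)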
2010 Feb 08
1
Follow-up Question: data frames; matching/merging
Wow.. thanks for the deluge of responses! Aggregate seems like the way to go here. But, suppose that instead of integers in column V2, I actually have dates (and instead of keeping the minimum integer, I want to keep the earliest date): > df =
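aggregate() with min also works on Date columns; a sketch with invented column names V1 (the grouping id) and V2 (the date), since the example data frame is cut off:

df <- data.frame(V1 = c("a", "a", "b", "b"),
                 V2 = as.Date(c("2010-02-08", "2010-01-15", "2010-03-01", "2010-02-20")))
# keep the earliest date within each id
aggregate(V2 ~ V1, data = df, FUN = min)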
2011 Jul 14
2
cbind in aggregate formula - based on an existing object (vector)
Hello! I am aggregating using a formula in aggregate - of the type: aggregate(cbind(var1,var2,var3)~factor1+factor2,sum,data=mydata) However, I actually have an object (vector of my variables to be aggregated): myvars<-c("var1","var2","var3") I'd like my aggregate formula (its "cbind" part) to be able to use my "myvars" object. Is it
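One option is to paste the formula together as text and convert it with as.formula(); a sketch with made-up data that has the columns named in myvars plus the two factors:

set.seed(1)
mydata <- data.frame(factor1 = rep(c("a", "b"), each = 4),
                     factor2 = rep(c("x", "y"), 4),
                     var1 = rnorm(8), var2 = rnorm(8), var3 = rnorm(8))
myvars <- c("var1", "var2", "var3")
# build "cbind(var1, var2, var3) ~ factor1 + factor2" from the vector of names
f <- as.formula(paste("cbind(", paste(myvars, collapse = ", "), ") ~ factor1 + factor2"))
aggregate(f, data = mydata, FUN = sum)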
2010 Apr 16
4
score counts in an aggregate function
Dear R-Users, I have a big data set "mydata" with repeated observations and some missing values. It looks like the format below: userid sex item score1 score2 1 0 1 1 1 1 0 2 0 1 1 0 3 NA 1 1 0 4 1 0 2 1 1 0 1 2 1 2 NA 1 2 1 3 1
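The post is truncated, so the exact summary wanted is unclear; a generic sketch for counting and summing non-missing scores per user, with made-up data in the same shape:

mydata <- data.frame(userid = c(1, 1, 1, 2, 2, 2),
                     score1 = c(1, 0, NA, 1, NA, 1),
                     score2 = c(1, 1, 0, 0, 1, 1))
# count the non-missing observations of each score column per user
aggregate(cbind(score1, score2) ~ userid, data = mydata,
          FUN = function(x) sum(!is.na(x)), na.action = na.pass)
# or sum the scores while skipping the NAs
aggregate(cbind(score1, score2) ~ userid, data = mydata,
          FUN = function(x) sum(x, na.rm = TRUE), na.action = na.pass)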
2011 May 17
1
Zero counts in an aggregate function
Dear R-users, I've searched for an answer to my question, but so far haven't been able to find a solution. It's likely a simple issue, but I have no idea how to do this. A simplified version of my (very large) data set looks like this: > > bugs FRUIT SEED_ID SURVIVE 1 1 A 1 2 1 B 1 3 1 C 1 4 1 D 0 5 1 E
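The question is cut off, but if the goal is a per-FRUIT total that still shows groups summing to zero, a sketch (the column meanings are guessed from the snippet):

bugs <- data.frame(FRUIT = c(1, 1, 1, 1, 2, 2, 2),
                   SEED_ID = c("A", "B", "C", "D", "A", "B", "C"),
                   SURVIVE = c(1, 1, 1, 0, 0, 0, 0))
# surviving seeds per fruit; a fruit with no survivors shows 0, not NA
aggregate(SURVIVE ~ FRUIT, data = bugs, FUN = sum)
# cross-tabulation alternative
xtabs(SURVIVE ~ FRUIT, data = bugs)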
2008 Nov 13
2
(no subject)
Hi, Browse[1]> d4 EVDO_Rev Session_Setup FCA bin counts 50 NA 0 5 1 1 51 NA 0 5 2 1 52 NA 0 5 3 1 53 NA 0 5 4 1 54 NA 0 5 5 1 55 NA 0 5 6 1 56 NA 0 5 7 1 57 NA 0 5 8
2006 Apr 28
3
aggregating columns in a data frame in different ways
I would like to use aggregate() to combine statistics for several days in a data frame. My data frame looks similar to this: date type count value 1 2006-04-01 A 10 99.6 2 2006-04-01 B 4 33.2 3 2006-04-02 A 22 43.2 4 2006-04-02 B 8 44.9 5 2006-04-03 A 12 12.4 6 2006-04-03 B 14 18.5 ('date' is a factor, and my
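If, say, count should be summed and value averaged within each date, one sketch is to run two aggregates and merge them (the choice of statistics is a guess, since the post is truncated):

d <- data.frame(date = rep(c("2006-04-01", "2006-04-02", "2006-04-03"), each = 2),
                type = rep(c("A", "B"), 3),
                count = c(10, 4, 22, 8, 12, 14),
                value = c(99.6, 33.2, 43.2, 44.9, 12.4, 18.5))
sums  <- aggregate(count ~ date, data = d, FUN = sum)
means <- aggregate(value ~ date, data = d, FUN = mean)
# one row per date with the summed count and the mean value
merge(sums, means, by = "date")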
2010 Jan 26
2
Large dataset importing, columns merging and splitting
Dear All, I have a large data set that looks like this: CVX 20070201 9 30 51 73.25 81400 0 CVX 20070201 9 30 51 73.25 100 0 CVX 20070201 9 30 51 73.25 100 0 CVX 20070201 9 30 51 73.25 300 0 First, I would like to import it by merging column 3 4 and 5, since that is the timestamp. Then, I would like to aggregate the data by splitting them in bins of 5 minutes size, therefore from 93000 up to
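A sketch of one approach: build the timestamp from columns 3-5 after import, then bin with cut(); the column names are invented here since the file has no header in the snippet:

x <- read.table(text = "
CVX 20070201 9 30 51 73.25 81400 0
CVX 20070201 9 30 51 73.25 100 0
CVX 20070201 9 33 10 73.30 300 0",
col.names = c("sym", "date", "hh", "mm", "ss", "price", "size", "flag"))
# paste date + hour + minute + second into one POSIXct timestamp
x$time <- as.POSIXct(sprintf("%d %02d:%02d:%02d", x$date, x$hh, x$mm, x$ss),
                     format = "%Y%m%d %H:%M:%S")
# assign each record to a 5-minute bin, then aggregate, e.g. total size per bin
x$bin <- cut(x$time, breaks = "5 min")
aggregate(size ~ bin, data = x, FUN = sum)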
2009 Aug 18
1
aggregating values at discrete irregular time intervals into hourly values
Hello R users, I'm a newbie to R (and programming software at large) and I need some help summing up event data at discrete, irregular time intervals into an hourly frequency. Here is an example of my time series frame (irregular time-series object - irts in the tseries package): time value 2008-12-19 19:11:03 GMT 1 2008-12-19 19:12:00 GMT 0 2008-12-19
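One way to collapse irregular event times to hourly totals is to truncate each timestamp to its hour and aggregate; a sketch using a plain POSIXct column rather than the irts object from the post:

events <- data.frame(
  time  = as.POSIXct(c("2008-12-19 19:11:03", "2008-12-19 19:12:00",
                       "2008-12-19 20:05:30"), tz = "GMT"),
  value = c(1, 0, 2)
)
# truncate each timestamp to the start of its hour, then sum values per hour
events$hour <- as.POSIXct(trunc(events$time, units = "hours"))
aggregate(value ~ hour, data = events, FUN = sum)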
2009 Dec 08
6
conditionally merging adjacent rows in a data frame
Hi, I have a data frame and want to merge adjacent rows if some condition is met. There's an obvious solution using a loop but it is prohibitively slow because my data frame is large. Is there an efficient canonical solution for that? > head(d) rt dur tid mood roi x 55 5523 200 4 subj 9 5 56 5523 52 4 subj 7 31 57 5523 209 4 subj 4 9 58 5523 188 4 subj 4 7
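Without the full condition it is hard to be specific, but the usual loop-free pattern is to build a run id with cumsum() and aggregate within runs; a sketch that merges adjacent rows sharing the same roi (that merge rule is invented for illustration):

d <- data.frame(rt = 5523, dur = c(200, 52, 209, 188),
                roi = c(9, 7, 4, 4), x = c(5, 31, 9, 7))
# start a new run whenever roi changes from the previous row
run <- cumsum(c(TRUE, d$roi[-1] != d$roi[-nrow(d)]))
# collapse each run: sum dur, keep the first roi and x
merged <- data.frame(roi = tapply(d$roi, run, `[`, 1),
                     dur = tapply(d$dur, run, sum),
                     x   = tapply(d$x, run, `[`, 1))
merged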
2011 Dec 15
1
Trouble converting hourly data into daily data
Hello, I have a data frame with hourly or sub-hourly weather records that span several years, and from that data frame I'm trying to select only the records taken closest to noon for each day. Here's what I've done so far: #Add a column to the data frame showing the difference between noon and the observation time (I converted time to a 0-1 scale so 0.5 represents noon):
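A sketch of one approach, assuming the frame has a date column and the 0-1 fractional time column described in the post (the names used here are invented):

wx <- data.frame(date = as.Date(c("2011-01-01", "2011-01-01", "2011-01-02", "2011-01-02")),
                 frac = c(0.25, 0.55, 0.40, 0.95),
                 temp = c(3.1, 5.2, 2.8, 1.0))
# distance of each observation time from noon (0.5 on the 0-1 scale)
wx$diff <- abs(wx$frac - 0.5)
# within each date, keep the single row closest to noon
noon <- do.call(rbind, lapply(split(wx, wx$date),
                              function(d) d[which.min(d$diff), ]))
noon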
2008 Jun 17
1
resultant column names from reshape::cast, with a fun.aggregate vector
try this: scores.melt = data.frame(grade = floor(runif(100, 1,10)), variable = 'score', value = rnorm(100)); cast(scores.melt, grade ~ variable, fun.aggregate = c(mean, length)) it has the nice column names of: grade score_mean score_length 1 1 0.08788535 8 2 2 0.16720313 15 3 3 0.41046299 7 4 4 0.13928356 13 ... but
2013 Jan 22
4
Simple use of dcast (reshape2 package)
Suppose I have a small dataframe > aa Target Eaten ID 50 TPP 0 1 51 TPP 1 2 52 TPP 3 3 53 TPP 1 4 54 TPP 2 5 50.1 GPA 9 1 51.1 GPA 11 2 52.1 GPA 8 3 53.1 GPA 8 4 54.1 GPA 10 5 And I want to reshape it into ID TPP GPA 1 1 0 9 2 2 1 11 3 3 3 8 4 4 1 8 5 5 2 10 I realise that
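With reshape2 this is a one-liner; a sketch reproducing the small data frame from the post:

library(reshape2)
aa <- data.frame(Target = rep(c("TPP", "GPA"), each = 5),
                 Eaten  = c(0, 1, 3, 1, 2, 9, 11, 8, 8, 10),
                 ID     = rep(1:5, 2))
# one row per ID, one column per Target, filled with Eaten
dcast(aa, ID ~ Target, value.var = "Eaten")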
2004 May 12
1
summary table newbie question
I've got a newbie question and I got a little lost in the "table helps". I've got a data.frame I would like to summarize as a (and pardon for the lack of correct vernacular) data collection matrix. My data looks like, stand siteindex age acres pct.acres 1 232 116 45 8477.3105 0.56159458 2 234 121 25 11120.1530 0.73667441 3 235 132 25