similar to: R combining many vectors of predictable name into one data frame

Displaying 20 results from an estimated 100 matches similar to: "R combining many vectors of predictable name into one data frame"

2012 Apr 03
2
Grouping and/or splitting
I have a dataframe imported from a csv file, shown below:
Houseid,Personid,Tripid,taz
1,1,1,4
1,1,2,7
2,1,1,96
2,1,2,4
2,1,3,2
2,2,1,58
There are three groups identified by the combination of the first and second columns. How do I split this data frame? I tried aa <- split(inpfil, inpfil[,1:2]) but it has problems. The desired output is
aa[1]
Houseid,Personid,Tripid,taz
1,1,1,4
1,1,2,7
aa[2]
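A minimal sketch of the split, using the data from the post; the usual snag with split() on two columns is that it also creates empty groups for unobserved Houseid/Personid combinations, which drop = TRUE avoids:

# rebuild the example data from the post
inpfil <- read.csv(text = "Houseid,Personid,Tripid,taz
1,1,1,4
1,1,2,7
2,1,1,96
2,1,2,4
2,1,3,2
2,2,1,58")

# split on the combination of the first two columns;
# drop = TRUE discards the empty groups for unobserved combinations
aa <- split(inpfil, inpfil[, c("Houseid", "Personid")], drop = TRUE)
aa[[1]]   # the two rows for Houseid 1, Personid 1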
2006 May 10
3
Unique?
Hello, I have a sample data set that looks like:
YEAR MONTH DAY CONTINUE SPL TIMEFISH TIMEUNIT AREA COUNTY DEPTH DEPUNIT GEAR TRIPID CONVUNIT
1992 1 26 1 SP0073928 8 H 7 25 4 NA 1000000 02163399054 161
1992 1 26 1 SP0073928 8 H 7 25 4 NA 1000000 02163399054 8
1992 1 26 2 SP0004228 8 H 7 25 4 NA 1000000 02163399054 161
1992 1 26 2 SP0004228 8 H 7 25 4 NA 1000000 02163399054 8
1992
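The message is cut off here, but if the goal is to reduce this to unique records, duplicated() is the usual tool; a minimal sketch on invented data of the same shape, assuming the key columns are TRIPID and SPL:

# toy data in the same shape as the post (values invented for illustration)
catch <- data.frame(YEAR = 1992, MONTH = 1, DAY = 26,
                    SPL = c("SP0073928", "SP0073928", "SP0004228", "SP0004228"),
                    TRIPID = "02163399054",
                    CONVUNIT = c(161, 8, 161, 8))

unique(catch)                                       # drop rows that are complete duplicates
catch[!duplicated(catch[, c("TRIPID", "SPL")]), ]   # keep one row per TRIPID/SPL combination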
2006 May 03
4
Aggregate?
Hello, I have a data set with a grouping variable (TRIPID) and several other variables. TRIPID is repeated in some areas, and I would like to use a function like aggregate to sum the variable UNITS according to TRIPID. However, I would also like to retain the other variables as they are in the data set, alongside the new summed UNITS for each TRIPID. So what I have is something like this: YEAR MONTH DAY
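The example data are cut off above, but a common pattern for this is to aggregate UNITS by TRIPID and merge the result back onto one row per trip, or simply to attach the per-trip total with ave(); a minimal sketch on invented data, assuming a UNITS column as described:

# toy data: TRIPID repeats and UNITS should be summed per TRIPID
trips <- data.frame(YEAR = 1992, MONTH = 1, DAY = c(26, 26, 27),
                    TRIPID = c("A1", "A1", "B2"),
                    UNITS = c(161, 8, 40))

sums <- aggregate(UNITS ~ TRIPID, data = trips, FUN = sum)        # one row per TRIPID
merge(trips[!duplicated(trips$TRIPID), names(trips) != "UNITS"],  # other variables, first row per trip
      sums, by = "TRIPID")

# or keep every row and just add the per-trip total as a new column
trips$UNITS_TOTAL <- ave(trips$UNITS, trips$TRIPID, FUN = sum)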
2012 Jun 08
1
noob requesting help
I'm fairly new to R and still learning how to use it. I could really use some help with the following problem. I have a huge .csv file containing thousands of measurements on 34 different birds. Measurements include longitude, latitude, altitude, speed, time, etc. All birds have a different number (ranging from 121 to 542). All measurements have a tripID (1 for the first trip of every bird, 2
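The question itself is truncated above, so this is only a sketch of a common first step with such tracking data: splitting the measurements into one piece per bird and trip. The column names bird and tripID are assumptions, and the values are invented:

# toy data in the described shape (bird numbers and trip ids invented)
tracks <- data.frame(bird   = c(121, 121, 121, 542, 542),
                     tripID = c(1, 1, 2, 1, 1),
                     speed  = c(12.1, 13.4, 9.8, 15.0, 14.2))

by_trip <- split(tracks, list(tracks$bird, tracks$tripID), drop = TRUE)
sapply(by_trip, nrow)   # number of measurements per bird/trip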
2008 Oct 30
1
Trying to "expand" some data - Newbie needs help
I want to calculate "expansion factors" for elements in my dataframe based on a 2-d cross classification. Since I'll have "missing values" (many combinations will have no record) I'll need a second "expansion factor" for each "row". I've included my "work to date" below, but I'm not very close to getting this right. My
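The post stops before defining the expansion factors, so this is only a sketch of the usual building block: tabulate the 2-d cross classification and join the cell counts back onto the rows. The grouping names g1/g2 and the target of 10 per cell are placeholders:

# toy data with two classification variables (names and values invented)
dd <- data.frame(g1 = c("a", "a", "b", "b", "b"),
                 g2 = c("x", "y", "x", "x", "y"))

counts <- as.data.frame(table(dd$g1, dd$g2), responseName = "n")
names(counts)[1:2] <- c("g1", "g2")

dd <- merge(dd, counts, by = c("g1", "g2"))   # observed count for each row's cell
dd$expansion <- 10 / dd$n                     # replace 10 with the real target per cell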
2007 Dec 21
0
FW: faking IB multi-rail with multihomed clients
Guys, For those of you not party to the original email exchange, this is about how we can aggregate bandwidth across both rails of a dual-rail IB cluster using current lustre/LNET (i.e. before we have implemented transparent LNET support for failover and bandwidth aggregation across multiple networks). The following 2 points are fundamental - everything below is a direct consequence... 1. LNET
2012 Apr 03
1
Compare by row and insert previous row value (Or non Time Series Lag)
I have the following sample dataset (CSV input here: http://goo.gl/YR8LP. CSV output here: http://goo.gl/EFCC8) which I want to transform as follows. For each person in a household I want to create two new variables, OrigTAZ and DestTAZ. It should take the value in TripendTAZ and put that in DestTAZ. For OrigTAZ it should put the value of TripendTAZ from the previous row. For the first trip of every
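A minimal base-R sketch of the lag step; the column names follow the wording of the post (the linked CSVs aren't visible here), and the first trip's OrigTAZ is left NA because the post is cut off before it says what that should be:

# toy data: trips already ordered within each household/person
trips <- data.frame(Houseid = c(1, 1, 1, 2, 2),
                    Personid = c(1, 1, 1, 1, 1),
                    Tripid = c(1, 2, 3, 1, 2),
                    TripendTAZ = c(7, 4, 2, 96, 4))

trips <- trips[order(trips$Houseid, trips$Personid, trips$Tripid), ]
trips$DestTAZ <- trips$TripendTAZ
trips$OrigTAZ <- ave(trips$TripendTAZ, trips$Houseid, trips$Personid,
                     FUN = function(z) c(NA, head(z, -1)))   # previous TripendTAZ within each person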
2001 Nov 17
1
rsync hangs or exits without copying anything
I am trying to mirror a file system using rsync. The command I am using is of the form: rsync -a /fs/home/6/ /usr/fs/home/6 /fs/home/6/ is an NFS file system, /usr/fs/home/6 is a local disk. With versions 2.4.6 and 2.4.7pre1, rsync hangs at random places during the building file list phase. I tried with and without -v option(s) and tried breaking the file system down into smaller chunks with no
2013 Mar 13
3
Assign the number to each group of multiple rows
Dear R users, My data have a repeating "beh" parameter: 1 or 2, the type of animal behavior at subsequent locations. I need to assign a unique number to each sequence of locations. My data is:
>data=data.frame(row=seq(1:10),beh=c(1,1,1,2,2,2,1,1,2,2))
>attach(data)
>data
  row beh
1   1   1
2   2   1
3   3   1
4   4   2
5   5   2
6   6   2
7   7   1
8
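One way to number consecutive runs of the same beh value is rle(); a minimal sketch on the data from the post (the new column name seq_id is arbitrary):

data <- data.frame(row = seq(1:10), beh = c(1, 1, 1, 2, 2, 2, 1, 1, 2, 2))

r <- rle(data$beh)                                            # lengths of consecutive runs of beh
data$seq_id <- rep(seq_along(r$lengths), times = r$lengths)   # 1,1,1,2,2,2,3,3,4,4

# an equivalent one-liner: cumsum(c(TRUE, diff(data$beh) != 0))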
2012 Oct 03
1
Retraction: Protocol stacking: gluster over NFS
Hi All, Well, it <http://goo.gl/hzxyw> was too good to be true. Under extreme, extended IO on a 48-core node, some part of the NFS stack collapses and leads to an IO lockup thru NFS. We've replicated it on 48-core and 64-core nodes, but don't know yet whether it acts similarly on lower-core-count nodes. Though I haven't had time to figure out exactly /how/ it collapses, I
2006 May 16
3
subset
Hello everyone, I have a large dataset (x) with some rows that have duplicate variables that I would like to remove. I find which rows are the duplicates with X1<-which(duplicated(x)). That gives me the rows with duplicated variables. Now, how can I remove just those rows from the original data frame? I think I can create a new data frame without the duplicates using subset. I have tried:
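A minimal sketch of the usual idiom; subset() isn't needed, since logical indexing on duplicated() (or unique()) does it directly. The toy x below stands in for the poster's data frame:

x <- data.frame(id = c(1, 1, 2, 3, 3), val = c("a", "a", "b", "c", "c"))   # toy stand-in

x_clean <- x[!duplicated(x), ]   # drop rows that are exact duplicates of an earlier row
x_clean <- unique(x)             # equivalent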
2002 Oct 16
0
rsync hangs/stalls on long filenames
Hi! I'm using (the excellent) rsync utility to do backup on several large www-servers. The backup machine runs in daemon mode. However, once in a while the backup hangs/stalls eternally. Using lsof I've been able to determine that this happens only when rsync has reached some file with an unusually long path. Details: ### General ### - Both machines involved run linux, kernel 2.4.19 -
2009 Jun 30
1
v1.2.rc8 released
http://dovecot.org/releases/1.2/rc/dovecot-1.2.rc8.tar.gz http://dovecot.org/releases/1.2/rc/dovecot-1.2.rc8.tar.gz.sig Last few fixes before tomorrow's v1.2.0 release. Also this release was built in dovecot.org to make sure I can make a usable Dovecot release while not at work/home. :) - Fixed building LDAP as plugin - Fixed starting up in OS X
2012 Aug 21
0
IP over IB support.
Hello, I'm trying to set up a lxc container with access to a Infiniband card with IP over IB. I have scripts written to create the device in the container and add all the proper cgroup permissions on the host. I can access the card from the container, but there is no Infiniband network device present. I can't create the ib0 and ib1, and I have come to believe this is because they
2019 Apr 12
2
[PATCH] drm: remove redundant 'default n' from Kconfig
'default n' is the default value for any bool or tristate Kconfig setting so there is no need to write it explicitly. Also since commit f467c5640c29 ("kconfig: only write '# CONFIG_FOO is not set' for visible symbols") the Kconfig behavior is the same regardless of 'default n' being present or not: ... One side effect of (and the main motivation for)
2019 Apr 12
0
[PATCH] drm: remove redundant 'default n' from Kconfig
On Fri, 12 Apr 2019, Bartlomiej Zolnierkiewicz <b.zolnierkie at samsung.com> wrote: > 'default n' is the default value for any bool or tristate Kconfig > setting so there is no need to write it explicitly. > > Also since commit f467c5640c29 ("kconfig: only write '# CONFIG_FOO > is not set' for visible symbols") the Kconfig behavior is the same >
2012 Jul 26
2
kernel parameters for improving gluster writes on millions of small writes (long)
This is a continuation of my previous posts about improving write perf when trapping millions of small writes to a gluster filesystem. I was able to improve write perf by ~30x by running STDOUT thru gzip to consolidate and reduce the output stream. Today, another similar problem, having to do with yet another bioinformatics program (which these days typically handle the 'short reads' that
2013 Sep 02
1
R dataframe and looping help
Hi, You may try this:
dat1<- read.table(text="
CustID TripDate Store Bread Butter Milk Eggs
1 2-Jan-12 a 2 0 2 1
1 6-Jan-12 c 0 3 3 0
1 9-Jan-12 a 3 3 0 0
1 31-Mar-13 a 3 0 0 0
2 31-Aug-12 a 0 3 3 0
2 24-Sep-12 a 3 3 0 0
2 25-Sep-12 b 3 0 0 0
",sep="",header=TRUE,stringsAsFactors=FALSE)
dat2<- dat1[,-c(1:3)]
res<- lapply(seq_len(ncol(dat2)),function(i)