similar to: Aggregating an its series

Displaying 20 results from an estimated 110 matches similar to: "Aggregating an its series"

2006 Apr 09
0
(IT WAS) Aggregating an its series
Just strip off the hours component of the dates, then take a subset of the data where the hour is <= 12. I did not execute this, so you might need to change it a bit:
hours <- as.integer(format(dates(base), "%H"))
new.data <- base[hours <= 12, ]
aggregate(new.data, by = list(as.factor(format(dates(new.data), "%Y%m%d"))), mean, na.rm = T)
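A runnable sketch of the same idea in base R, with POSIXct timestamps standing in for the 'its'/chron-style dates() used above; 'base' here is simulated hourly data, not the poster's object:
## simulate ten days of hourly observations
times <- seq(as.POSIXct("2006-04-01 00:00", tz = "UTC"),
             by = "hour", length.out = 240)
base <- data.frame(time = times, value = rnorm(240))
## keep only observations at or before noon
hours <- as.integer(format(base$time, "%H"))
new.data <- base[hours <= 12, ]
## daily mean of the morning observations
aggregate(new.data$value,
          by = list(day = format(new.data$time, "%Y%m%d")),
          FUN = mean, na.rm = TRUE)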
2007 Feb 28
1
vfs_shadow and [homes]
Hi, I was able to successfully run vfs_shadow on a samba share with a Win XP shadow copy client, but I think it's currently not implemented that one could export [homes] with this vfs object, because AFAIK each @GMT-snap has to reside directly under the samba share. But [homes] is a virtual share representing different shares (depending on the user). Has anybody hints about using vfs_shadow with
2002 Jun 01
1
Compile Problem
Since version 1.73, I have not been able to compile syslinux using the following command line: make syslinux. It gives me this error message: make: *** No rule to make target `/usr/lib/gcc-lib/i386-redhat-linux/egcs-2.91.66/include/stddef.h', needed by `syslinux.o'. Stop. The last version that worked for me was 1.72.
2002 Jun 03
0
Very Nice!
Hey Peter! Great work on the SYSLINUX project! I spent three days noodling over how to accomplish *exactly* what your ISOLINUX/MEMDISK combination does! Even a blind hog finds an acorn from time to time. I have it working great using the crpack utility suite from http://www.nu2.nu and one of Bart Lagerweij's techniques employs your SYSLINUX utilities - you probably already know all about
2002 Jun 02
1
(fwd from t216@zkb.ch) IBM Mainframe port for rsync
How cool! -- Martin [An embedded message was scrubbed: From: Hartmut Schaefer <t216@zkb.ch>, Subject: IBM Mainframe port for rsync, Date: Fri, 17 May 2002 09:28:27 +0200, Size: 41998, Url: http://lists.samba.org/archive/rsync/attachments/20020602/1e7c0e5a/attachment.eml]
2004 Aug 06
1
bug in cvs version of icecast2?
Hi! I found out that icecast will crash when trying to stream a title or artist with % in the name. The cause seems to be in stats.c, line 158, where the text is sent as a format string to vsnprintf. This could possibly be used for an exploit too. The solution I came up with is to call stats_event instead of stats_event_args from format_vorbis_get_buffer in format_vorbis.c. I've included a
2002 Jun 08
1
[Bug 269] New: OpenSSH doesn't compile with dynamic OpenSSL libraries
http://bugzilla.mindrot.org/show_bug.cgi?id=269
Summary: OpenSSH doesn't compile with dynamic OpenSSL libraries
Product: Portable OpenSSH
Version: -current
Platform: UltraSparc
OS/Version: Solaris
Status: NEW
Severity: normal
Priority: P2
Component: Build system
AssignedTo: openssh-unix-dev at
2002 May 28
2
rsync 2.5.4 (probably 2.5.5 too) server handles SIGPIPE very poorly
(I am not on the rsync mailing list, so if you send a response to this message to the list, please be sure to CC me.) I first reported this bug to Red Hat in <URL:https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=65350>. If you run rsync with a subshell through ssh.com's ssh and sshd and then kill the client with ctrl-C, the rsync server process running on the remote machine grows
2017 Nov 14
0
Aggregating Data
R-Help: I created a "shortdate" for the purpose of aggregating each var (s72 ... s119) by daily sum, but I am not sure how to handle this using a POSIXlt object.
> myData$shortdate <- strftime(myData$time, format="%Y/%m/%d")
> head(myData)
                 time s72 s79 s82 s83 s116 s119  shortdate
1 2016-10-03 00:00:00   0   0   1   0    0    0 2016/10/03
2 2016-10-03 01:00:00
2017 Nov 14
0
Aggregating Data
R-Help: Please disregard, as I figured something out, unless there is a more elegant way ...
myData.sum <- aggregate(x = myData[c("s72","s79","s82","s83","s116","s119")],
                        FUN = sum,
                        by = list(Group.date = myData$shortdate))
> head(myData.sum)
  Group.date s72 s79 s82 s83 s116 s119
1
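The same pipeline as a self-contained sketch; the column names come from the post, but the hourly values are simulated here:
## simulate two days of hourly counts for the posted columns
set.seed(1)
myData <- data.frame(
  time = seq(as.POSIXct("2016-10-03 00:00", tz = "UTC"),
             by = "hour", length.out = 48),
  s72 = rpois(48, 1), s79 = rpois(48, 1), s82 = rpois(48, 1),
  s83 = rpois(48, 1), s116 = rpois(48, 1), s119 = rpois(48, 1))
myData$shortdate <- strftime(myData$time, format = "%Y/%m/%d")
## daily sums, one row per shortdate
myData.sum <- aggregate(
  x = myData[c("s72","s79","s82","s83","s116","s119")],
  FUN = sum,
  by = list(Group.date = myData$shortdate))
head(myData.sum)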
2002 Nov 06
1
Aggregating a List
Hi all, There must be a really obvious R solution to this, but I can't figure out how to aggregate a list. For instance, if I read.table the following from a file:
  Val1 Val2
A    3    4
A    5    6
B    4    4
I would like to take the mean (or median) across any/all rows of type "A" to end up with the structure:
  Val1 Val2
A    4    5
B    4    4
in this case. How would I go about doing that w/o doing a
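One base R answer, using aggregate(); 'df' stands in for the poster's read.table() result:
df <- data.frame(grp = c("A", "A", "B"),
                 Val1 = c(3, 5, 4),
                 Val2 = c(4, 6, 4))
## mean of every value column within each level of grp
aggregate(df[c("Val1", "Val2")], by = list(grp = df$grp), FUN = mean)
#   grp Val1 Val2
# 1   A    4    5
# 2   B    4    4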
2006 Jan 19
0
aggregating variables with pca
hello R_team, having performed a PCA on my fitted model with the function: data <- na.omit(dataset); data.pca <- prcomp(data, scale = TRUE), I've decided to aggregate two variables that are highly correlated. My first question is: how can I combine the two variables into one new predictor? And secondly: how can I predict with the newly created variable in a new dataset? Guess I need the
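A sketch of one common approach, under the assumption that the goal is to replace the two correlated predictors with their first principal component; x1 and x2 are hypothetical stand-ins for the poster's variables:
set.seed(42)
x1 <- rnorm(100)
x2 <- x1 + rnorm(100, sd = 0.1)     # strongly correlated with x1
old <- data.frame(x1 = x1, x2 = x2)
## PC1 of the two variables becomes the new combined predictor
pca <- prcomp(old[, c("x1", "x2")], scale. = TRUE)
old$comb <- pca$x[, 1]
## for new data, predict() reuses the stored centering, scaling and rotation
new <- data.frame(x1 = rnorm(10), x2 = rnorm(10))
new$comb <- predict(pca, newdata = new)[, 1]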
2007 Mar 28
2
aggregating data with Zoo
Is there a way of aggregating 'zoo' daily data according to day of week? e.g. all Thursdays. I came across the 'nextfri' function in the documentation but am unsure how to change this so that any day of the week can be aggregated. I have used POSIX to arrange the data (not as a 'zoo' series) according to day of week, but am curious whether I've missed a similar option available
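One possible answer: aggregate.zoo() accepts any grouping vector with the same length as the index, so weekdays() works for every day of the week, and plain subsetting extracts a single day (weekday names are locale-dependent; the series below is simulated):
library(zoo)
z <- zoo(rnorm(28),
         order.by = seq(as.Date("2007-03-01"), by = "day", length.out = 28))
z[weekdays(index(z)) == "Thursday"]               # just the Thursdays
aggregate(z, by = weekdays(index(z)), FUN = mean) # mean for each weekday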
2010 Aug 01
1
aggregating a daily zoo object to a weekly zoo object
Dear R People: I'm trying to convert a daily zoo object to a weekly zoo object:
xdate <- seq(as.Date("2002-01-01"), as.Date("2010-07-10"), by="day")
library(zoo)
length(xdate)
xt <- zoo(rnorm(3113), order=xdate)
xdat2 <- seq(index(xt)[1], index(xt)[3113], by="week")
xt.w <- aggregate(xt, by=xdat2, mean)
Error: length(time(x)) ==
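The error arises because 'by' must give one grouping value per existing observation, not a target index. A sketch of one fix, mapping every date to the Monday that starts its week via cut() (this also uses the correct 'order.by' argument to zoo()):
library(zoo)
xdate <- seq(as.Date("2002-01-01"), as.Date("2010-07-10"), by = "day")
xt <- zoo(rnorm(length(xdate)), order.by = xdate)
## one mean per calendar week, indexed by the week's start date
xt.w <- aggregate(xt, by = as.Date(cut(index(xt), "week")), FUN = mean)
head(xt.w)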
2011 Oct 17
0
Aggregating Survey responses for weighting
I have about 27,000 survey responses from across about 150 Bus Routes, each with potentially 100 stops. I've recorded the total Ons and Offs for each stop on each bus run, as well as the stop pair each survey response corresponds to. I wish to create weights based on the On and Off stop for each line and direction. This will create a very sparse "half table" (observations by
2005 Oct 24
0
aggregating using several functions
Dear R users, I would like to aggregate a data frame using several functions at once (e.g., mean plus standard error). How can I make this work using aggregate()? The help file says scalar functions are needed; can anyone help? Below is the code for my "meanse" function, which I'd like to use like this: aggregate(dataframe, list(factorA, factorB), meanse). Thanks for your help!
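A sketch of one workaround, with a guess at what the poster's 'meanse' computes (mean plus standard error of the mean); recent versions of aggregate() tolerate a vector-valued FUN and return the result as a matrix column:
meanse <- function(x) c(mean = mean(x, na.rm = TRUE),
                        se = sd(x, na.rm = TRUE) / sqrt(sum(!is.na(x))))
df <- data.frame(factorA = rep(c("a", "b"), each = 10), y = rnorm(20))
## one row per factor level, with a two-column matrix of mean and se
aggregate(y ~ factorA, data = df, FUN = meanse)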
2006 Oct 18
0
Aggregating a data frame (was: Re: new R-user needs help)
Please use an informative subject for the sake of the archives. Here are several solutions:
aggregate(DF[4:8], DF[2], mean)
library(doBy)
summaryBy(x1 + x2 + x3 + x4 + x5 ~ name, DF, FUN = mean)
# if the Exp, name and id columns are factors then this can be reduced to
library(doBy)
summaryBy(. ~ name, DF, FUN = mean)
library(reshape)
cast(melt(DF, id = 1:3), name ~ variable, fun = mean)
On
2009 Jul 28
2
aggregating strings
I am currently summarising a data set by collapsing data based on common identifiers in a column. I am using the 'aggregate' function to summarise numeric columns, i.e. "aggregate(dat[,3], list(dat$gene), mean)". I also wish to summarise text columns e.g. by concatenating values in a comma separated list, but the aggregate function can only return scalar values and so something
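One way around the scalar limitation is to let FUN itself do the collapsing with paste(); 'dat' below mimics the posted gene table, and the 'note' column is hypothetical:
dat <- data.frame(gene = c("g1", "g1", "g2"),
                  note = c("alpha", "beta", "gamma"),
                  expr = c(3, 5, 4))
aggregate(dat["expr"], by = list(gene = dat$gene), FUN = mean)
## concatenate the text values within each gene into one string
aggregate(dat["note"], by = list(gene = dat$gene),
          FUN = function(x) paste(x, collapse = ", "))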
2011 Aug 05
2
Aggregating data
I aggregated my data:
aggresults <- aggregate(results, by=list(results$a, results$b, results$c), FUN=mean, na.rm=TRUE)
results has about 8000 lines of data, and aggresults has about 80 lines. I would like to create a separate variable for each of the 80 aggregates, each containing the 100 lines that were aggregated. I would also like to create plots for each of those 80 datasets. Is
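split() produces exactly those per-group subsets, and lapply() can then draw one plot per subset; 'results' is simulated below with the grouping columns a, b and c from the post plus a hypothetical measurement y:
set.seed(1)
results <- data.frame(a = sample(c("a1", "a2"), 8000, replace = TRUE),
                      b = sample(c("b1", "b2"), 8000, replace = TRUE),
                      c = sample(1:5, 8000, replace = TRUE),
                      y = rnorm(8000))
## one data frame per combination of a, b and c
groups <- split(results, interaction(results$a, results$b, results$c,
                                     drop = TRUE))
length(groups)
## one plot per group, titled with the group label
invisible(lapply(names(groups), function(nm)
  plot(groups[[nm]]$y, main = nm, ylab = "y")))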
2010 Sep 08
1
Aggregating data from two data frames
Dear all, I'm working with two data frames. The first frame (agg_data) consists of two columns. agg_data[,1] is a unique ID for each row and agg_data[,2] contains a continuous variable. The second data frame (geo_data) consists of several columns. One of these columns (geo_data$ZCTA) corresponds to the unique ID in the first data frame. The problem is that only a subset of the unique ID
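The message cuts off, but merge() is the usual base R tool for this kind of join; a minimal sketch with made-up IDs, keeping only the geo_data rows whose ZCTA matches an ID in agg_data:
agg_data <- data.frame(ID = c("10001", "10002"), value = c(1.5, 2.7))
geo_data <- data.frame(ZCTA = c("10001", "10002", "10003"),
                       pop = c(100, 200, 300))
## inner join on the shared identifier
merge(geo_data, agg_data, by.x = "ZCTA", by.y = "ID")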