similar to: scaling to multiple data files

Displaying 20 results from an estimated 1000 matches similar to: "scaling to multiple data files"

2018 Dec 18
2
should we do this time-consuming transform in InstCombine?
Hi, there is an opportunity in InstCombine for the following instruction pattern:
%mul = mul nsw i32 %b, %a
%cmp = icmp sgt i32 %mul, -1
%sub = sub i32 0, %a
%mul2 = mul nsw i32 %sub, %b
%cond = select i1 %cmp, i32 %mul, i32 %mul2
Source code for the above pattern: return (a*b) >= 0 ? (a*b) : -a*b; Currently, LLVM (-O3) cannot recognize this as abs(a*b). I initially thought we could do this in
2009 Jun 07
1
Must be a better way to collate sequenced data
I have data that looks like this: time_stamp (seconds), user_id. The data is partially ordered by time, in that sometimes transactions occur at the same timestamp. The output I want is collated by transaction time on a per-user basis, normalized by the maximum number of transactions per user, and aggregated over each day. So, if the users have 50 transactions in the first day and 20 transactions
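A minimal sketch of one way to read the question above, assuming "collated" means counting transactions per user per day and normalising by each user's maximum daily count (the data frame and column names below are made up):

set.seed(1)
dat <- data.frame(
  time_stamp = as.POSIXct("2009-06-01") + sample(0:(3 * 86400), 200, replace = TRUE),
  user_id    = sample(1:5, 200, replace = TRUE)
)
dat$day <- as.Date(dat$time_stamp)

# transactions per user per day
counts <- aggregate(time_stamp ~ user_id + day, data = dat, FUN = length)
names(counts)[3] <- "n"

# normalise by each user's maximum daily transaction count
counts$norm <- counts$n / ave(counts$n, counts$user_id, FUN = max)
head(counts)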
2018 Dec 18
2
should we do this time-consuming transform in InstCombine?
Hi Roman, thanks for your good idea. I think it solves the abs issue very well, and I can continue with my work now ^-^. But if it is not abs and there is no select:
%res = OP i32 %b, %a
%sub = sub i32 0, %b
%res2 = OP i32 %sub, %a
theoretically, we can still do the following transform for the above pattern:
%res2 = OP i32 %sub, %a ==> %res2 = sub i32 0, %res
Not sure whether we can do it
2012 Jul 02
2
using "na.locf" from package zoo to fill NA gaps
Hi everybody, I have a small question about the function "na.locf" from the package "zoo". I saw in the help that this function is able to fill NA gaps with the last value before the NA gap (or with the next value). But is it possible to fill my NA gaps according to the last AND the next value at the same time? Actually, I want R to fill my gaps with the method of
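If "according to the last AND the next value" means interpolating between them, zoo itself provides na.approx() (linear) and na.spline(); a minimal sketch on made-up data:

library(zoo)

z <- zoo(c(1, NA, NA, 4, NA, 6))   # default index 1:6

na.approx(z)   # fills each gap by linear interpolation between the
               # last value before and the next value after the gap
# na.spline(z) # spline-based alternative, also in zoo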
2011 Sep 30
1
last observation carried forward +1
Hi R-helpers, I'm looking for a vectorised function which does missing-value replacement as in last observation carried forward in the zoo package, but instead of a plain LOCF, I would like the function to add +1 each time a missing value occurs. See below for an example.
> require(zoo)
> x <- 5:15
> x[4:7] <- NA
> coredata(na.locf(zoo(x)))
[1] 5 6 7 7 7 7 7 12 13
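A minimal vectorised sketch of the behaviour asked for above (LOCF plus the distance, in positions, from the last observed value), under the assumption that the example should end up as 5:15 again:

library(zoo)

x <- 5:15
x[4:7] <- NA

# position of the last non-missing value for every element
last_pos <- na.locf(ifelse(is.na(x), NA, seq_along(x)))

# carry the last observation forward and add the offset since it was last seen
filled <- na.locf(x) + (seq_along(x) - last_pos)
filled
# [1]  5  6  7  8  9 10 11 12 13 14 15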
2011 Jun 15
0
specifying interactions in a gam model with "by"
I'm confused by the difference in the fit of a gam model (in package mgcv) when I specify an interaction in different ways. I would appreciate it if someone could explain the cause of these differences. For example: x <- c(105, 124, 124, 124, 144, 144, 150, 176, 178, 178, 206, 206, 212, 215, 215, 227, 229, 229, 229, 234, 234, 254, 254, 290, 290, 303, 334, 334, 334, 344,
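For reference, a small made-up example of the two specifications such threads usually contrast: a per-level smooth via by= (normally together with the factor main effect, because by-smooths are centred within each level) versus a single common smooth.

library(mgcv)

set.seed(1)
d <- data.frame(x = runif(200), g = factor(rep(1:2, each = 100)))
d$y <- ifelse(d$g == 1, sin(2 * pi * d$x), cos(2 * pi * d$x)) + rnorm(200, sd = 0.2)

# separate smooth of x for each level of g (interaction); the main effect of g
# is added explicitly because each by-smooth is centred
m1 <- gam(y ~ g + s(x, by = g), data = d)

# one common smooth of x; g only shifts the intercept (no interaction)
m2 <- gam(y ~ g + s(x), data = d)

AIC(m1, m2)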
2013 Mar 18
2
data.frame with NA
I have this little data.frame: http://dl.dropbox.com/u/102669/nanotna.rdata Two columns contain NA, so the best thing to do is use the na.locf function (with fromLast = T). But na.locf doesn't work because the NAs in my data.frame are not recognized as real NA. Is there a way to substitute the fake NAs with real NA? In that case na.locf should work. Thank you
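A small sketch of the usual fix, on a made-up data.frame where "NA" is a character string rather than a real missing value (the linked .rdata file is not reproduced here):

library(zoo)

df <- data.frame(a = c("1.5", "NA", "2.0"),
                 b = c("NA", "3", "4"),
                 stringsAsFactors = FALSE)

# coerce to numeric: the string "NA" becomes a real NA
# (R warns "NAs introduced by coercion", which is expected here)
df[] <- lapply(df, function(col) as.numeric(as.character(col)))

# now na.locf(fromLast = TRUE) works; na.rm = FALSE keeps the original length
df[] <- lapply(df, na.locf, fromLast = TRUE, na.rm = FALSE)
df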
2006 Feb 06
5
lme4: Error in getResponseFormula(form) : "Form" must be a two sided formula
I'm sure I'm being stupid, so flame away... R 2.2.1 on Windoze (boohoo), latest updates of packages. I'm exploring a dataset (land) with three variables, looking at a narrowly unbalanced two-group (GROUP) ANCOVA of a randomised controlled trial, analysing the endpoint score (SFQ.LOCF.ENDPOINT) with the baseline score (SFQ.BASELINE) entered as a covariate, and the following works fine: > res.same
2010 Feb 22
2
Creating regularly spaced time series from irregular one
Hello, I have a series of intraday (high-frequency) price data in the form of a POSIX timestamp followed by the value. I successfully loaded that into an "its" package object. I would like to create from it a regularly spaced time series of prices (for example 1 min, 5 min, etc. apart) so I could calculate returns. There is an interpolation function locf() that, for a timestamp with value NA, uses the last
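A common zoo idiom for this, sketched on made-up POSIXct prices (an "its" object would first need converting, e.g. with as.zoo): merge the irregular series onto a regular grid, carry the last price forward, and keep only the grid timestamps.

library(zoo)

# made-up irregular intraday prices with a POSIXct index
times  <- as.POSIXct("2010-02-22 09:30:00") + c(0, 37, 95, 240, 310)
prices <- zoo(c(100.1, 100.3, 100.2, 100.5, 100.4), times)

# regular 1-minute grid spanning the data
grid <- seq(start(prices), end(prices), by = "1 min")

# merge onto the grid, carry the last observed price forward,
# then drop the original irregular timestamps
reg <- na.locf(merge(prices, zoo(, grid)))
reg <- reg[index(reg) %in% grid]
reg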
2011 Feb 01
1
sqlsave and mysql database with autoincremental column
Hello, I'm trying to modify my R script to use RODBC instead of DBI/RMySQL (no more ready-to-use package for Windows). I would like to copy a data.frame of 44 columns to a table of 45 columns (the 45th is an auto-increment column). With the following commands: colnames(df) <- a vector with the names of the 44 columns
2013 Apr 29
1
how to add new rows in a dataframe?
Hi,
dat1 <- read.table(text="
id  t  scores
2   0  1.2
2   2  2.3
2   3  3.6
2   4  5.6
2   6  7.8
3   0  1.6
3
2003 Nov 14
4
LOCF - Last Observation Carried Forward
Hi! Is there a possibility in R to carry out LOCF (Last Observation Carried Forward) analysis, or to create a new data frame (array, matrix) with LOCF? Or some helpful functions or packages? Karl
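One widely used option is na.locf() from the zoo package; a minimal sketch on made-up data:

library(zoo)

x <- c(2, NA, NA, 5, NA, 7)
na.locf(x)   # each NA replaced by the last observed value: 2 2 2 5 5 7

# for a data frame, apply it per column, e.g. df[] <- lapply(df, na.locf, na.rm = FALSE)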
2004 Dec 23
0
zoo 0.9-1
Dear useRs, a new and much improved version of the zoo package for indexed totally ordered observations (such as irregular time series) is available from CRAN. It allows indexing observations with time/index vectors of arbitrary class and extends many of the standard generic functions also available for "ts" objects. Additionally, it allows conversion from/to other (irregular) time
2010 Apr 12
1
N'th of month working day problem
Dear Gabor, thanks for your reply. However:
> tail(DJd)
           ^DJI.Close
2010-04-01   10927.07
2010-04-05   10973.55
2010-04-06   10969.99
2010-04-07   10897.52
2010-04-08   10927.07
*2010-04-09   10997.35*
> tail(ag)
2009-11-30 10344.84
2009-12-31 10428.05
2010-01-31 10067.33
2010-02-28 10325.26
2010-03-31 10856.63
*2010-04-30 10997.35*
It seems the script "makes up"
2010 Jun 30
1
merge.zoo and fill
Hello again, I merge different zoo time series with prices at different dates. This returns a multivariate zoo object with NAs at various points, i.e.,
2010-02-28   NA   NA   NA    NA 850.2 2444.4     NA  NA      NA    NA     NA     NA      NA
2010-03-01 61.1 55.3 61.5 81.24    NA     NA 1712.2 3.3 11139.3 163.7 2242.4 9015.6 109.791
2010-03-31
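A small sketch of the usual follow-up, with made-up series: na.locf() applied to the merged object fills each column with the last available price.

library(zoo)

z1 <- zoo(c(850.2, 861.0), as.Date(c("2010-02-28", "2010-03-31")))
z2 <- zoo(c(61.1, 62.5),   as.Date(c("2010-03-01", "2010-03-31")))

m <- merge(z1, z2)                  # union of the indexes, NA where a series has no value
filled <- na.locf(m, na.rm = FALSE) # carry the last observation forward, column by column
filled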
2013 Mar 10
0
max row
Hi, using
c11 <- 0.01
c12 <- 0.01
c1 <- 0.10
c2 <- 0.10
one possible problem is that:
dim(res5)
# [1] 513  20
res6 <- aggregate(. ~ m1 + n1 + m + n, data = res5[, c(1:6, 9:12, 21:24)], max)
# Error in `[.data.frame`(res5, , c(1:6, 9:12, 21:24)) :
#   undefined columns selected
A.K.
2010 Jun 30
2
merging and adding time series
Hello, I have two series (that can have different frequencies or missing values). I merge them and use na.locf, getting a zoo object with a common index and two core columns. How can I add these columns to get a new zoo series? Any other way of adding two asynchronous series? Regards
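A minimal sketch with made-up data: after merge() and na.locf(), the two core columns can be summed row-wise to give a new univariate zoo series.

library(zoo)

z1 <- zoo(c(1, 2, 3), as.Date("2010-06-01") + c(0, 2, 4))
z2 <- zoo(c(10, 20),  as.Date("2010-06-01") + c(1, 4))

m <- na.locf(merge(z1, z2), na.rm = FALSE)   # common index, LOCF-filled columns

# sum the two columns; na.rm = TRUE ignores the leading NA in z2
total <- zoo(rowSums(coredata(m), na.rm = TRUE), index(m))
total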
2010 Sep 20
2
Substitute NAs by zero
Hello, how can I substitute all NA values with zero in an R zoo series? I've been reading about na.locf and na.omit, but I think neither of them does what I need. Thanks.
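A minimal sketch on a made-up series: zoo's na.fill() does this directly, or the core data can be modified in place.

library(zoo)

z <- zoo(c(1, NA, 3, NA), as.Date("2010-09-01") + 0:3)

na.fill(z, 0)                          # every NA replaced by 0

# equivalently, replace in the core data directly
coredata(z)[is.na(coredata(z))] <- 0
z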
2013 Mar 26
1
Shifting cells and removing blanks
Hi, I've been struggling with this problem. Initially I thought something like na.locf would help, but I'm at a dead end. I have a data set like this:
ID Prod1 Prod2 Prod3 Prod4 Prod5
01   A     -     B     -     C
02   -     F     -     G     -
03   H     -     -     -     J
And I would like to remove all
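A sketch of one way to do what the truncated question appears to ask for (drop the "-" placeholders in each row and shift the remaining products to the left, padding with NA); the reconstruction of the data is assumed:

df <- data.frame(ID    = c("01", "02", "03"),
                 Prod1 = c("A", "-", "H"), Prod2 = c("-", "F", "-"),
                 Prod3 = c("B", "-", "-"), Prod4 = c("-", "G", "-"),
                 Prod5 = c("C", "-", "J"), stringsAsFactors = FALSE)

# for each row, keep the non-"-" entries and pad with NA on the right
shifted <- t(apply(df[-1], 1, function(r) {
  keep <- unname(r[r != "-"])
  c(keep, rep(NA, length(r) - length(keep)))
}))

result <- data.frame(df["ID"], shifted, stringsAsFactors = FALSE)
names(result) <- names(df)
result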