similar to: (Newbie) Aggregate for NA values

Displaying 20 results from an estimated 200 matches similar to: "(Newbie) Aggregate for NA values"

2018 Feb 02
1
R-gui sessions end when executing C-code
Hi, I'm trying to develop some C code to find the fixed point of a contraction mapping. The code compiles and gives the right results when executed from R; however, the R GUI session is frequently terminated. I suspect an access violation, given the exception code 0xc0000005 in the error report Windows 10 gives me. It is the first time I'm writing any C code, so I'm guessing I
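For reference, a minimal pure-R sketch of fixed-point iteration for a contraction mapping; the mapping f() below is a hypothetical stand-in, and such an R version can serve as a baseline while debugging a .C implementation.

f <- function(x) cos(x)                      # example contraction on [0, 1]
fixed_point <- function(f, x0, tol = 1e-10, max_iter = 1000L) {
  x <- x0
  for (i in seq_len(max_iter)) {
    x_new <- f(x)
    if (max(abs(x_new - x)) < tol) return(x_new)   # converged
    x <- x_new
  }
  warning("did not converge within max_iter iterations")
  x
}
fixed_point(f, 0.5)   # approaches ~0.739085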
2016 Apr 23
2
if-conversion
Hi, > On Apr 22, 2016, at 8:27 PM, Hal Finkel via llvm-dev <llvm-dev at lists.llvm.org> wrote: > > Hi Rob, > > The problem here is that the d[i] array is only conditionally accessed, and so we can't if-convert the loop body. The compiler does not know that d[i] is actually dereferenceable for all i from 0 to 15 (the array might be shorter and p[i] is 0 for i past the end
2006 Feb 24
2
Minor documentation improvement
Gentlemen, In the documentation for reshape, the argument "direction" is not listed in the function signature; however, it is explained in the parameter descriptions below. I am using R 2.2.1. Out of curiosity: is the R core team still an all-male affair? I don't think I have seen a single lady's name. -- -- Vivek Satsangi Student, Rochester, NY USA
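For reference, a minimal example of stats::reshape() with 'direction' supplied explicitly (the data frame and column names here are made up):

wide <- data.frame(id = 1:3, x1990 = c(1, 2, 3), x1991 = c(4, 5, 6))
long <- reshape(wide, direction = "long",
                varying = c("x1990", "x1991"), v.names = "x",
                timevar = "year", times = c(1990, 1991), idvar = "id")
long   # one row per id-year combination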
2006 Feb 17
3
(Newbie) Functions on vectors
Folks, I want to make the following function more efficient by vectorizing it:

getCriterionDecisionDate <- function (quarter, year) {
  if (length(quarter) != length(year))
    stop("Quarter and year vectors of unequal length!");
  ret <- character(0);
  for (i in 1:length(quarter)) {
    currQuarter <- quarter[i];
    currYear <- year[i];
    if ((currQuarter < 1) |
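The excerpt cuts off before the actual date rule, but the loop-plus-if pattern can usually be replaced by vectorized comparisons. A sketch under the purely illustrative assumption that the decision date is the first day of the following quarter:

getCriterionDecisionDate <- function(quarter, year) {
  if (length(quarter) != length(year))
    stop("Quarter and year vectors of unequal length!")
  if (any(quarter < 1 | quarter > 4))
    stop("Quarter values must be between 1 and 4")
  # hypothetical rule: first day of the quarter after the given one
  nextQ <- ifelse(quarter == 4, 1, quarter + 1)
  nextY <- ifelse(quarter == 4, year + 1, year)
  as.Date(paste(nextY, (nextQ - 1) * 3 + 1, 1, sep = "-"))
}
getCriterionDecisionDate(c(1, 4), c(2005, 2005))   # "2005-04-01" "2006-01-01"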
2013 Apr 30
3
Line similarity
Folks, This is probably a "help me google this properly, please"-type of question. In TIBCO Spotfire, there is a procedure called "line similarity". I use this to determine which observations show a growing, stable or declining pattern... sort of like a mini-regression on the time-line for each observation. So of the input is
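A rough R analogue, assuming each row of a matrix holds one observation's time series (all names and the +/- 0.1 slope cutoff below are made up): fit a slope per row and classify it.

set.seed(1)
m <- matrix(rnorm(40), nrow = 5, ncol = 8)     # 5 series, 8 time points
tt <- seq_len(ncol(m))
slopes <- apply(m, 1, function(y) coef(lm(y ~ tt))[2])
trend <- cut(slopes, breaks = c(-Inf, -0.1, 0.1, Inf),
             labels = c("declining", "stable", "growing"))
data.frame(slope = slopes, trend = trend)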
2005 Dec 08
2
Commented version of the home page graphics code
Folks, I was drawn to R, like many others, partly for the opportunity to draw nice, colorful graphs (occasionally ones with meaning, too :-) ). I am still quite a newbie to R. As such, I have been trying to understand the code for the graphics on the home page (the ones from the 2004 contest -- the dendrogram, the cluster plot with different coloured circles, etc.) I was wondering whether anyone
2006 Jan 18
3
Possible improvement in lm
Folks, I do a series of regressions (one for each quarter in the dataset) and then go and extract the residuals from each stored lm object that is returned as follows: vResiduals <- as.vector(unlist(resid(lQuarterlyRegressions[[i]]))); Here lQuarterlyRegressions is a vector of objects returned by lm(). Next, I may go find outliers using identify() on a plot or do some other analysis which
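Assuming lQuarterlyRegressions really is a list of lm() fits, the residuals can be pulled out a bit more directly; resid() already returns a plain numeric vector, so the as.vector(unlist(...)) wrapper is not needed.

vResiduals <- resid(lQuarterlyRegressions[[i]])      # residuals of the i-th fit
lResid <- lapply(lQuarterlyRegressions, resid)       # residuals of every fit, as a list
vAllResid <- unlist(lResid)                          # or one long vector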
2006 Mar 07
2
(newbie) Accessing the pieces of a 'by' object
Folks, I know that I can do the following using a loop. That's been a lot easier for me to write and understand. But I am trying to force myself to use more vectorized / matrix-based code so that eventually I will become a better R programmer. I have a dataframe that has some values by Year, Quarter and Ranking. The variable of interest is the return (F3MRet), which is to be weight-averaged within the
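A sketch without an explicit loop, assuming the data frame is called df and the weight column is called Weight (both names are guesses): compute the weighted mean of F3MRet inside each Year/Quarter/Ranking cell with tapply(), then flatten the result.

idx <- df[c("Year", "Quarter", "Ranking")]
wavg <- tapply(seq_len(nrow(df)), idx,
               function(i) weighted.mean(df$F3MRet[i], df$Weight[i], na.rm = TRUE))
# wavg is a 3-way array of group means; turn it into a data frame
out <- na.omit(as.data.frame(as.table(wavg), responseName = "wF3MRet"))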
2009 Nov 18
2
Median on Aggregated data
Folks, I have the following code, which works fine on smaller data sets. For larger datasets it runs out of memory and is far too slow, because we are essentially creating large vectors with rep() and then calling median() on them. (I learned this approach from a post on the web.) Below that, I have written the corresponding SAS code. The SAS code works fast because I can just tell the proc
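The expansion with rep() can be avoided entirely by computing the median straight from the aggregated (value, count) pairs; a sketch (the value/count column names are assumptions):

weighted_median <- function(value, count) {
  o <- order(value)
  value <- value[o]
  cum <- cumsum(count[o])
  n <- cum[length(cum)]
  if (n %% 2 == 1)
    value[which(cum >= (n + 1) / 2)[1]]
  else
    (value[which(cum >= n / 2)[1]] + value[which(cum >= n / 2 + 1)[1]]) / 2
}
# sanity check against the expand-with-rep() approach
v <- c(1, 3, 5); k <- c(2, 1, 4)
weighted_median(v, k) == median(rep(v, k))   # TRUE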
2005 Nov 24
1
Suggested add to the documentation for the identify() function
Folks, 1. Is there a more appropriate list (r-devel?) for posting such suggestions? I am a newbie to R, and doubtless will have some suggestions for the documentation -- some good, others not quite so. I would actually like to help give back to the community (I was motivated by Prof. Ripley's 2001 talk in which he had commented that open source software users rarely give back anything.) --
2005 Nov 21
1
Cacheing in read.table/ attached data?
Disclaimer/Apology: I am an R newbie. I am seeing some behaviour that seems to me to be the result of some caching going on at some level, and perhaps this is expected behaviour. I would just like to understand the basic rules. What I have is a file with some data. I read it in and then do a summary on the resulting dataframe. I find that some values are completely outside the expected range,
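One possible explanation (only a guess from the excerpt): if attach() is involved, the attached copy of an earlier data frame can mask the freshly read one, which looks a lot like caching. A small sketch of that behaviour:

dat <- data.frame(x = 1:3)
attach(dat)
x                  # 1 2 3, taken from the attached copy
dat$x <- dat$x * 100
x                  # still 1 2 3: the attached copy is a snapshot, not a live view
detach(dat)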
2016 Apr 22
2
if-conversion
Hi. I'm trying to vectorize the following piece of code with Loop Vectorizer (from LLVM distribution Nov 2015), but no vectorization takes place:

int *Test(int *res, int *c, int *d, int *p) {
  int i;
  for (i = 0; i < 16; i++) {
    //res[i] = (p[i] == 0) ? c[i] : d[i];
    res[i] = (p[i] == 0) ? res[i] : res[i] + d[i];
2006 Mar 15
1
(newbie) Weighted qqplot?
Folks, Normally, in a data frame, one observation counts as one observation of the distribution. Thus one can easily produce a CDF and (in Splus at least) use cdf.compare to compare the CDFs (BTW: what is the R equivalent of the SPlus cdf.compare() function, if any?). However, if each point should not count equally, how can I weight the points before comparing the distributions? I was thinking of
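One way to build a weighted empirical CDF in R and then compare two of them visually; a sketch with made-up data x and weights w:

weighted_ecdf <- function(x, w) {
  o <- order(x)
  cw <- cumsum(w[o]) / sum(w)
  # right-continuous step function giving P(X <= t)
  approxfun(x[o], cw, method = "constant", yleft = 0, yright = 1, ties = max)
}
set.seed(2)
x <- rnorm(100); w <- runif(100)
F1 <- weighted_ecdf(x, w)
curve(F1(x), from = -3, to = 3, ylab = "weighted CDF")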
2006 Jan 15
8
/ Operator not meaningful for factors
Folks, I have a very basic question. The solution eludes me, perhaps because of my own lack of creativity. I am not attaching a fully reproducible session because the issue may well be because of the way the data file is, and the data file is large (and I don't know whether I can legally distribute it). If people can suggest things that might be wrong in my data or the way that I am reading it,
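A frequent cause of "not meaningful for factors" is a numeric column that was read in as a factor, often because of a stray non-numeric entry; a small sketch of the usual diagnosis and fix (column f is hypothetical):

f <- factor(c("1.5", "2.5", "oops"))
as.numeric(f)                            # 1 2 3 -- internal codes, not the values
x <- as.numeric(as.character(f))         # 1.5 2.5 NA, with a coercion warning
which(is.na(x))                          # rows whose entries are not numeric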
2006 Feb 13
2
R-help, specifying the places to decimal
Hello R experts, Is there any way to specify the number of places after the decimal point to keep? For example, I have values coming out as 0.160325923 but I only want 4 decimal places, say 0.1603. Is there any way to do that? I am no expert in R and this may sound simple to many; sorry. Thanks for any help. With regards, Subhabrata
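Two common options, depending on whether the stored value should change or only its printed form:

x <- 0.160325923
round(x, 4)          # 0.1603   -- changes the value itself
sprintf("%.4f", x)   # "0.1603" -- character string for display only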
2024 Jul 25
1
please help generate a square correlation matrix
At 20:47 on 25/07/2024, Yuan Chun Ding wrote: > Hi Rui, > > You are always very helpful!! Thank you, > > I just modified your R code, as below, to remove rows with zero values in both columns of each pair for my real data. > > Ding > > dat <- gene22mut.coded > r <- P <- matrix(NA, nrow = 22L, ncol = 22L, > dimnames = list(names(dat),
2024 Jul 26
1
please help generate a square correlation matrix
If I have understood the request, I'm not sure that omitting all 0 pairs for each pair of columns makes much sense, but be that as it may, here's another way to do it by using the 'FUN' argument of combn to encapsulate any calculations that you do. I just use cor() as the calculation -- you can use anything you like that takes two vectors of 0's and 1's and produces fixed
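A sketch along those lines: dat is assumed to be the thread's data frame of 0/1 columns, and rows that are zero in both members of a pair are dropped before cor() is called (whether that filtering is statistically sensible is exactly what the follow-up post questions).

pair_cor <- function(v) {
  x <- dat[[v[1]]]; y <- dat[[v[2]]]
  keep <- !(x == 0 & y == 0)                   # drop rows that are 0 in both columns
  if (sum(keep) > 1) cor(x[keep], y[keep]) else NA_real_
}
pairs <- combn(names(dat), 2)                  # all column pairs
vals  <- combn(names(dat), 2, FUN = pair_cor)  # one correlation per pair
r <- matrix(NA_real_, ncol(dat), ncol(dat),
            dimnames = list(names(dat), names(dat)))
r[cbind(pairs[1, ], pairs[2, ])] <- vals       # fill upper triangle
r[cbind(pairs[2, ], pairs[1, ])] <- vals       # mirror to keep r symmetric
diag(r) <- 1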
2024 Jul 27
1
please help generate a square correlation matrix
Let's go back to the original posting. >> in each column, less than 10% of the values are 1; most of them are 0; >> so I want to remove a row with a value of zero in both columns when calculating the correlation between two columns. So we're talking about correlations between binary variables. Suppose we have two 0-1-valued
2009 Jun 08
14
script help - '3rd last field'
Hi, I need some logic to work out a value for me - this value is _always_ the 3rd-last field in a string separated by '.', but the string could be 5 or 6 fields long, e.g. foo.bar.VALUE.baz.lala foor.bar.gigi.VALUE.baz.lala I need to find VALUE - if this were Python or something I could do it, but this has to be in shell - any clues? Thanks
2012 Oct 22
3
Remove records from a large dataframe
Hi, I am trying to remove a series of records from a large dataframe. The script I have written works fine but takes a long time to run. Can anyone suggest a quicker way to do this? Here is an example of the code I've written. The end result of this bit of code would be a dataframe with any records relating to ID 1 or ID 4 removed:

#dataframe
id <- c(1,1,1,1,2,2,2,2,2, 3,3,3, 4,4)
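For this kind of filtering a vectorized subset is usually much faster than a loop; a sketch using the post's example id vector (the value column is made up just to have a second column):

id <- c(1,1,1,1, 2,2,2,2,2, 3,3,3, 4,4)
df <- data.frame(id = id, value = seq_along(id))
drop_ids <- c(1, 4)
df_small <- df[!df$id %in% drop_ids, ]   # keep rows whose id is neither 1 nor 4
# equivalently: subset(df, !(id %in% drop_ids))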