similar to: replacing segments of vector by their averages

Displaying 20 results from an estimated 5000 matches similar to: "replacing segments of vector by their averages"

2008 Jun 19
4
Any simple way to subset a vector of strings to those that do not contain a particular substring ?
For example, strings <- c("aaaa", "bbbb", "ccba"). How to get "aaaa", "bbbb" that do not contain "ba" ?
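A minimal sketch of one way, using grepl() to build a logical index of the matches and negating it:
strings <- c("aaaa", "bbbb", "ccba")
strings[!grepl("ba", strings, fixed = TRUE)]   # "aaaa" "bbbb"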
2008 Jul 31
4
Identifying a common suffix in a vector of words, and deleting it
For example, c("dog.is.an.animal", "cat.is.an.animal", "rat.is.an.animal"). How can I identify that the common suffix is ".is.an.animal" and delete it to give c("dog", "cat", "rat") ? Thanks
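If the suffix is already known, sub() with an anchored pattern is enough; a minimal sketch (detecting the common suffix automatically would take extra work, e.g. comparing reversed strings):
words <- c("dog.is.an.animal", "cat.is.an.animal", "rat.is.an.animal")
sub("\\.is\\.an\\.animal$", "", words)   # "dog" "cat" "rat"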
2008 Jul 15
5
counting number of "G" in "TCGGGGGACAATCGGTAACCCGTCT"
Any better solution than this ? sum(strsplit("TCGGGGGACAATCGGTAACCCGTCT", "")[[1]] == "G")
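Two alternatives that avoid splitting into a character vector (sketches only; neither is guaranteed to be faster for a single short string):
x <- "TCGGGGGACAATCGGTAACCCGTCT"
sum(charToRaw(x) == charToRaw("G"))                      # compare raw bytes
lengths(regmatches(x, gregexpr("G", x, fixed = TRUE)))   # count pattern matches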
2013 Mar 06
3
About basic logical operators
Hello everyone, I have a basic question regarding logical operators.
> x <- seq(-1, 1, by = 0.02)
> x
  [1] -1.00 -0.98 -0.96 -0.94 -0.92 -0.90 -0.88 -0.86 -0.84 -0.82 -0.80 -0.78
 [13] -0.76 -0.74 -0.72 -0.70 -0.68 -0.66 -0.64 -0.62 -0.60 -0.58 -0.56 -0.54
 [25] -0.52 -0.50 -0.48 -0.46 -0.44 -0.42 -0.40 -0.38 -0.36 -0.34 -0.32 -0.30
 [37] -0.28 -0.26 -0.24 -0.22 -0.20 -0.18 -0.16
2008 Sep 13
3
Beautify R scripts in Microsoft Word
I am generating a report containing several R scripts in the appendix. Is there any way to "beautify" the R source code in Microsoft Word, similar to what we see in Tinn-R ? Thanks
2008 Jun 10
2
Fast method to compute average values of duplicated IDs
Hi, How do I collapse (average in the simplest case) the values of those duplicated ids (i.e., 2, 5, 6, 9) to give a table of unique ids ? t <- cbind(id=c(1:10, 2,5,6,9), value=rnorm(14))
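A sketch of two common approaches (note that t() is also a base function, so a different object name is usually safer):
dat <- cbind(id = c(1:10, 2, 5, 6, 9), value = rnorm(14))
tapply(dat[, "value"], dat[, "id"], mean)                     # named vector of means, one per id
aggregate(value ~ id, data = as.data.frame(dat), FUN = mean)  # two-column data frame of id and mean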
2008 Jul 08
4
Can R do this ?
I have a folder full of PNGs and JPGs, and would like to consolidate them into a PDF with appropriate titles and labels. Can this be done via R ?
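Yes; one hedged sketch uses the png and jpeg packages to read each image and rasterImage() to draw it on a pdf() page (the folder and output file names below are hypothetical):
library(png)
library(jpeg)
files <- list.files("images", pattern = "\\.(png|jpe?g)$", ignore.case = TRUE, full.names = TRUE)
pdf("album.pdf", width = 8, height = 6)
for (f in files) {
  img <- if (grepl("\\.png$", f, ignore.case = TRUE)) readPNG(f) else readJPEG(f)
  plot.new()
  plot.window(xlim = c(0, 1), ylim = c(0, 1), asp = 1)
  rasterImage(img, 0, 0, 1, 1)   # draw the bitmap to fill the plotting region
  title(main = basename(f))      # label each page with its file name
}
dev.off()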
2009 Jan 06
2
Generating a GUI for R scripts
Hi, I have developed some scripts that basically ask for tab-delimited input files, do some processing, and output several pictures or csv files. Now I need some GUI to wrap on top of the scripts, so that end-users can select their input files, adjust some parameters for processing, and select an output folder or filenames. Please advise me if there are any tools or projects suitable for
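One lightweight starting point is the tcltk package bundled with R, which provides native file and directory pickers; a minimal sketch (run_analysis() is a hypothetical stand-in for the existing processing code):
library(tcltk)
infiles <- tk_choose.files(caption = "Select tab-delimited input files")
outdir  <- tk_choose.dir(caption = "Select output folder")
for (f in infiles) {
  dat <- read.delim(f)
  run_analysis(dat, output_dir = outdir)   # hypothetical processing step
}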
2013 May 27
1
Question about subsetting S4 object in ROCR
Dear list, I'm testing a predictor and I produced nice performance plots with the ROCR package using the three standard commands
pred <- prediction(predictions, labels)
perf <- performance(pred, measure = "tpr", x.measure = "fpr")
plot(perf, col = rainbow(10))
The pred object and the perf object are S4 with the following slots: An object of class "performance"
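If the goal is to pull the underlying numbers out of the S4 object, the slots can be read with @ or slot(); a sketch assuming a single prediction run (so each list slot holds one vector):
fpr     <- perf@x.values[[1]]       # x.values is a list with one vector per run
tpr     <- perf@y.values[[1]]
cutoffs <- perf@alpha.values[[1]]   # score cutoffs corresponding to each point on the curve
head(data.frame(cutoff = cutoffs, fpr = fpr, tpr = tpr))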
2006 Sep 17
2
histogram frequency weighting
Fellow R-helpers, Suppose we create a histogram as follows (although it could be any vector with zeroes in it):
R> lenh <- hist(iris$Sepal.Length, br=seq(4, 8, 0.05))
R> lenh$counts
 [1] 0 0 0 0 0 1 0 3 0 1 0 4 0 2 0 5 0 6 0 10 0 9 0 4 0
[26] 1 0 6 0 7 0 6 0 8 0 7 0 3 0 6 0 6 0 4 0 9 0 7 0 5
[51] 0 2 0 8 0 3 0 4 0 1 0 1 0 3
2008 Jun 23
2
grouping values
I tried aggregate, apply etc., but can't get the right result. For example,
m <- cbind(c(LETTERS[1:5]), c("aa", "bb", "cc", "aa", "cc"))
     [,1] [,2]
[1,] "A"  "aa"
[2,] "B"  "bb"
[3,] "C"  "cc"
[4,] "D"  "aa"
[5,] "E"  "cc"
how to obtain
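The expected output is cut off above, so this is only a guess that the goal is to collect the values in column 1 by the groups in column 2; a sketch:
m <- cbind(LETTERS[1:5], c("aa", "bb", "cc", "aa", "cc"))
split(m[, 1], m[, 2])                            # list: $aa "A" "D", $bb "B", $cc "C" "E"
tapply(m[, 1], m[, 2], paste, collapse = ",")    # "A,D" "B" "C,E"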
2008 Oct 30
2
how to convert data from long to wide format ?
Given a dataframe m
> m
  X Y  V3  V4
1 1 A 0.5 1.2
2 1 B 0.2 1.4
3 2 A 0.1 0.9
How do I convert m to this with V4 as the cell values ?
    A   B
1 1.2 1.4
2 0.9  NA
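Two base-R sketches that give NA for the missing X/Y combination:
m <- data.frame(X = c(1, 1, 2), Y = c("A", "B", "A"),
                V3 = c(0.5, 0.2, 0.1), V4 = c(1.2, 1.4, 0.9))
with(m, tapply(V4, list(X, Y), c))                                               # matrix, rows = X, cols = Y
reshape(m[, c("X", "Y", "V4")], idvar = "X", timevar = "Y", direction = "wide")  # data frame with V4.A, V4.B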
2008 Oct 01
1
Please help me to produce smoothed contour plots
Please help me to produce smoothed contour plots. I have dependent data generated at regular intervals of two independent variables and would like to produce smoothed contour plots - I cannot get interp (akima) to produce cubic interpolations of the data, only linear ones. I'm interested in smoothing as the data generation process is stochastic and produces small variations which I'd
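If exact interpolation is not required, one swapped-in alternative is to smooth the surface with loess() and contour the fitted grid; a sketch assuming a data frame d with columns x, y and z (all names hypothetical):
fit  <- loess(z ~ x + y, data = d, span = 0.3)                 # smaller span = less smoothing
grid <- expand.grid(x = seq(min(d$x), max(d$x), length.out = 50),
                    y = seq(min(d$y), max(d$y), length.out = 50))
grid$z <- predict(fit, newdata = grid)
contour(unique(grid$x), unique(grid$y), matrix(grid$z, nrow = 50))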
2008 Dec 07
5
How to force aggregate to exclude NA ?
The aggregate function does "almost" all that I need to summarize a dataset, except that I can't specify exclusion of NAs without a little bit of hassle.
> set.seed(143)
> m <- data.frame(A=sample(LETTERS[1:5], 20, T), B=sample(LETTERS[1:10], 20, T), C=sample(c(NA, 1:4), 20, T), D=sample(c(NA,1:4), 20, T))
> m
  A B  C  D
1 E I  1 NA
2 A C NA NA
3 D I NA  3
4 C I
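A sketch, assuming the goal is per-group means of C and D that simply ignore the NAs (na.rm is passed through to mean()):
aggregate(m[, c("C", "D")], by = list(A = m$A, B = m$B), FUN = mean, na.rm = TRUE)
By contrast, the formula interface aggregate(cbind(C, D) ~ A + B, data = m, mean) drops every row containing any NA via na.action, which is usually not the same thing.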
2008 Dec 03
2
Speeding up casting a dataframe from long to wide format
Hi, I am casting a dataframe from long to wide format. The same code that works for a smaller dataframe takes a long time (more than two hours and still running) for a larger dataframe of 2495227 rows and ten different predictors. How can I make it more efficient ? wer <- data.frame(Name=c(1:5, 4:5), Type=c(letters[1:5], letters[4:5]), Predictor=c("A", "A",
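The example is cut off, so this is only a generic sketch of one speed-up: skip the casting function and fill the wide matrix directly by two-column indexing (the Value column is a hypothetical stand-in for whatever goes into the cells):
ids   <- unique(wer$Name)
preds <- unique(wer$Predictor)
wide  <- matrix(NA_real_, nrow = length(ids), ncol = length(preds),
                dimnames = list(ids, preds))
wide[cbind(match(wer$Name, ids), match(wer$Predictor, preds))] <- wer$Value
# note: if a Name/Predictor pair occurs more than once, later rows silently overwrite earlier ones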
2008 Nov 26
2
Very slow: using double apply and cor.test to compute correlation p.values for 2 matrices
My two matrices are roughly the sizes of m1 and m2. I tried two nested apply() calls with cor.test to compute the correlation p-values. After more than an hour the code is still running. Please help me make it more efficient. m1 <- matrix(rnorm(100000), ncol=100) m2 <- matrix(rnorm(10000000), ncol=100) cor.pvalues <- apply(m1, 1, function(x) { apply(m2, 1, function(y) { cor.test(x,y)$p.value
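A sketch of a vectorized alternative, assuming Pearson correlations and no missing values: compute all row-vs-row correlations in one cor() call, then convert r to p-values with the same t-statistic cor.test() uses internally.
r <- cor(t(m1), t(m2))        # 1000 x 100000 matrix; process m2 in row blocks if memory is tight
n <- ncol(m1)                 # number of paired observations per correlation
tstat <- r * sqrt((n - 2) / (1 - r^2))
cor.pvalues <- 2 * pt(abs(tstat), df = n - 2, lower.tail = FALSE)   # two-sided p-values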
2009 Jul 21
1
problem with heatmap.2 in package gplots generating non-finite breaks
I have written a wrapper for heatmap.2 called heatmap.w.row.and.col.clust which auto-generates breaks using breaks <- round(seq(from = -20 * stddev, to = 20 * stddev) / 20, digits = 2) # (stddev in this case = 2.5) This has always worked well in the past but now I am getting an error that non-finite breaks are being generated. Drilling down, it seems that my wrapper is generating finite
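Without the rest of the wrapper it is hard to be certain, but non-finite breaks from that expression usually mean stddev itself came out NA, NaN, or Inf (for example, sd() over a slice with missing values or of length one); a defensive sketch:
stddev <- sd(as.vector(x), na.rm = TRUE)   # x being the matrix passed to heatmap.2
stopifnot(is.finite(stddev), stddev > 0)   # fail early with a clear message instead of inside heatmap.2
breaks <- round(seq(from = -20 * stddev, to = 20 * stddev) / 20, digits = 2)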
2008 Dec 04
2
How to optimize this code ?
How do I optimize the for-loop to be reasonably fast for sample.size=100000000 ? You may want to set sample.size=1000 to get an idea of what I am trying to achieve. set.seed(143) A <- matrix(sample(0:1, sample.size, TRUE), ncol=10, dimnames=list(NULL, LETTERS[1:10])) B <- list() for(i in 1:10) { B[[i]] <- apply(combn(LETTERS[1:10], i), 2, function(x) { sum(apply(data.frame(A[,x]), 1,
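The inner function is cut off above, so assuming it counts the rows of A in which every selected column equals 1, replacing the per-row apply() with rowSums() removes the main bottleneck; a sketch:
count_all_ones <- function(cols) sum(rowSums(A[, cols, drop = FALSE]) == length(cols))
B <- lapply(1:10, function(i) apply(combn(LETTERS[1:10], i), 2, count_all_ones))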
2009 Apr 15
2
Split string
> (FICB[,"temp"])
 [1] "0.30" "0.55" "0.45" "2.30" "0.45" "0.30" "0.25" "0.30" "0.30" "1.05" "1.00" "1.00"
[13] "0.30" "0.30" "0.30" "0.55" "0.30" "0.30" "0.30" "0.25" "1.00"
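The question body is cut off, so this is only a guess from the subject line that each value should be split at the decimal point; a sketch:
parts <- strsplit(FICB[, "temp"], ".", fixed = TRUE)   # list of c("0", "30"), c("0", "55"), ...
do.call(rbind, parts)                                  # two-column character matrix of the pieces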
2008 Jun 25
3
selecting values that are unique, instead of selecting unique values
unique(c(1:10, 1)) gives 1:10 (i.e. unique values). Is there any method to get only 2:10 (i.e. values that are unique) ?
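A sketch of two base-R ways to keep only the values that occur exactly once:
v <- c(1:10, 1)
v[!(duplicated(v) | duplicated(v, fromLast = TRUE))]   # 2 3 4 5 6 7 8 9 10, in original order
as.numeric(names(which(table(v) == 1)))                # same values via a frequency table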