Displaying 7 results from an estimated 7 matches for "rowttests".
2008 Sep 12
1
subsetting of factor
...two groups each time for the t-test, a vs. c or b vs. c, but I
don't know how to write the correct code.
Below is my code; the last two lines need to be corrected....
library("genefilter")
ef <- exprs(esetsub)
kk <- factor(esetsub$genotype == c("a", "c"))
tt <- rowttests(ef[,c(1,2,5,6)], kk)
P.S. Columns 1-6 are a, a, b, b, c, c.
According to the documentation, kk should be a factor.
Any suggestions are really appreciated!!
Best regards,
Hui-Yi
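A minimal base-R sketch of the likely fix (the toy `genotype` vector and matrix below stand in for the poster's `esetsub`): `genotype == c("a", "c")` silently recycles the two-element vector position by position, which is not a group test at all; `%in%` selects the right columns, and building the factor from the *subset* gives it exactly the two levels being compared.

```r
# Toy stand-in for the question's data: columns 1-6 are a, a, b, b, c, c.
genotype <- c("a", "a", "b", "b", "c", "c")
ef <- matrix(rnorm(60), nrow = 10)          # 10 probes x 6 samples

sel <- genotype %in% c("a", "c")            # TRUE for columns 1, 2, 5, 6
kk  <- factor(genotype[sel])                # two-level factor: "a", "c"

# With the question's objects this would then be:
# tt <- rowttests(exprs(esetsub)[, sel], kk)   # genefilter
```

Note what the original `==` comparison would have returned here: `genotype == c("a", "c")` recycles to `c(TRUE, FALSE, FALSE, FALSE, FALSE, TRUE)`, i.e. columns 1 and 6 only.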
2009 May 14
1
"Fast" correlation algorithm
...000 rows and 100 columns. I
want to compute the correlation (p-value and correlation coefficient) between each row of the dataset
and some vector (whose length, of course, equals the number of
columns of the dataset).
In short:
For t-test we have:
"normal" algorithm - t.test
"fast" algorithm - rowttests
For correlation:
"normal" algorithm - cor.test
"fast" algorithm - ???
Thanks for the help
--
View this message in context: http://www.nabble.com/%22Fast%22-correlation-algorithm-tp23548016p23548016.html
Sent from the R help mailing list archive at Nabble.com.
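One possible "fast" counterpart, sketched in base R on toy data: a single `cor()` call on the transposed matrix returns every row's coefficient at once, and the p-values follow from the same t transformation that `cor.test` uses for Pearson correlation (`t = r * sqrt((n-2)/(1-r^2))` on n-2 degrees of freedom).

```r
set.seed(1)
m <- matrix(rnorm(200), nrow = 20)      # toy stand-in for the large dataset
v <- rnorm(10)                          # vector, length == ncol(m)

r <- as.vector(cor(t(m), v))            # one coefficient per row, in one call
n <- length(v)
tstat <- r * sqrt((n - 2) / (1 - r^2))
p <- 2 * pt(-abs(tstat), df = n - 2)    # two-sided Pearson p-values

# Agrees with the "normal" algorithm on any single row:
stopifnot(all.equal(p[1], cor.test(m[1, ], v)$p.value))
```

The speed-up comes from `cor()` doing the cross-products as one matrix operation instead of looping `cor.test` over 100000 rows.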
2011 Jul 04
2
clustering based on most significant pvalues does not separate the groups!
Hi all,
I have some microarray data on 40 samples that fall into two groups. I have
a value for 480k probes for each of those samples. I performed a t-test
(rowttests) on each row (giving the indices of the columns for each group),
then used p.adjust() to adjust the p-values for the number of tests
performed. I then selected only the probes with adjusted p-value <= 0.05. I end up
with roughly 2000 probes to do the clustering on, but using pvclust and
hclust, the sample...
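A base-R sketch of the described workflow on toy data (per-row t-tests via `apply` standing in for `rowttests`, BH adjustment, filter, then `hclust` on the samples):

```r
set.seed(42)
grp <- rep(1:2, each = 6)                     # 12 samples in two groups
x   <- matrix(rnorm(12 * 500), nrow = 500)    # 500 toy probes
x[1:50, grp == 2] <- x[1:50, grp == 2] + 2    # 50 genuinely different rows

p    <- apply(x, 1, function(r) t.test(r[grp == 1], r[grp == 2])$p.value)
padj <- p.adjust(p, method = "BH")            # adjust for 500 tests
sig  <- x[padj <= 0.05, , drop = FALSE]       # keep significant probes

hc <- hclust(dist(t(sig)))                    # cluster the 12 samples
```

Note that significance per probe does not guarantee the samples separate in the dendrogram: the Euclidean distances can still be dominated by overall intensity differences or batch structure, so it is often worth scaling the probes (e.g. `dist(t(scale(t(sig))))`) or using a correlation-based distance before concluding the groups don't separate.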
2008 Jun 07
1
strange (to me) p-value distribution
I'm working with a genomic dataset with ~31k end-points and have
performed an F-test across 5 groups for each end-point. The QA
measurements on the individual microarrays all look good. One of the
first things I do in my workflow is take a look at the p-value
distribution. It is my understanding that, if the findings are due to
chance alone, the p-value distribution should be uniform. In
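That understanding is correct, and it is easy to check by simulation: with 5 groups and no real differences, one-way F-test p-values are uniform on [0, 1]. A quick base-R sketch:

```r
set.seed(7)
grp <- factor(rep(1:5, each = 4))        # 5 groups of 4, as in the question
p <- replicate(2000, {
  y <- rnorm(20)                         # pure noise: the null is true
  anova(lm(y ~ grp))[["Pr(>F)"]][1]      # one-way F-test p-value
})
# Roughly uniform: about half the p-values below 0.5, spread over [0, 1]
```

A histogram of `p` (e.g. `hist(p, breaks = 20)`) should look flat; an excess of small p-values indicates real signal, while other shapes (e.g. a spike near 1, or a U-shape) usually point to violated test assumptions or correlation structure rather than signal.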
2010 Sep 13
2
post
Hello,
I have a question regarding how to speed up t.test on a large dataset. For example, I have a table "tab" which looks like:
a b c d e f g h....
1
2
3
4
5
...
100000
dim(tab) is 100000 x 100
I need to do the t.test for each row on two subsets of the columns, i.e. to compare the a, b, d group against the e, f, g group at each row.
subset 1:
a b d
1
2
3
4
5
...
100000
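One way to get the speed-up without extra packages is to vectorize the whole Welch t-test across rows with `rowMeans`/`rowSums`, so the 100000 tests become a handful of matrix operations. A sketch on toy data with the question's column names:

```r
# Vectorized two-sided Welch t-test for every row at once.
row_ttest <- function(tab, g1, g2) {
  x <- tab[, g1, drop = FALSE]; y <- tab[, g2, drop = FALSE]
  n1 <- length(g1); n2 <- length(g2)
  m1 <- rowMeans(x); m2 <- rowMeans(y)
  v1 <- rowSums((x - m1)^2) / (n1 - 1)      # per-row variances
  v2 <- rowSums((y - m2)^2) / (n2 - 1)
  se2 <- v1 / n1 + v2 / n2
  tstat <- (m1 - m2) / sqrt(se2)
  df <- se2^2 / ((v1 / n1)^2 / (n1 - 1) + (v2 / n2)^2 / (n2 - 1))
  2 * pt(-abs(tstat), df)                   # Welch-Satterthwaite p-values
}

set.seed(3)
tab <- matrix(rnorm(80), nrow = 10,
              dimnames = list(NULL, letters[1:8]))   # toy 10 x 8 table
p <- row_ttest(tab, c("a", "b", "d"), c("e", "f", "g"))
```

This reproduces `t.test()`'s default (Welch, two-sided) p-value row by row; `genefilter::rowttests` is a compiled alternative if the equal-variance test is acceptable.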
2008 Nov 30
2
Snow and multi-processing
Dear R gurus,
I have a very embarrassingly parallelizable job that I am trying to speed up with snow on our local cluster. Basically, I am doing ~50,000 t-tests for a series of microarray experiments, one gene at a time. Thus, I can easily spread the load across multiple processors and nodes.
So, I have a master list object that tells me which rows to pick up for each gene to do the t.test on
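A sketch of that pattern using the `parallel` package (which absorbed snow's cluster API; `makeCluster`/`parSapply` work the same way on a PSOCK cluster of remote nodes). The toy matrix and group vector here stand in for the poster's expression data and master list:

```r
library(parallel)

set.seed(9)
expr <- matrix(rnorm(200 * 10), nrow = 200)   # toy: 200 "genes" x 10 arrays
grp  <- rep(1:2, each = 5)

cl <- makeCluster(2)                          # snow-style socket cluster
clusterExport(cl, c("expr", "grp"))           # ship the data to the workers
p.par <- parSapply(cl, seq_len(nrow(expr)), function(i)
  t.test(expr[i, grp == 1], expr[i, grp == 2])$p.value)
stopCluster(cl)

# Same answers as the serial loop, with the load spread across workers:
p.ser <- sapply(seq_len(nrow(expr)), function(i)
  t.test(expr[i, grp == 1], expr[i, grp == 2])$p.value)
```

For ~50,000 genes it is also worth benchmarking a vectorized single-core approach (e.g. `genefilter::rowttests`) first; it can beat a parallelized `t.test` loop because the per-task overhead of shipping work to nodes dominates such small tests.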
2008 Mar 03
3
Calculating the t-test for each row
Hi Everyone,
I need some simple help.
Here is my code
########## will give me 10000 probesets ####################
data.sub <- data.matrix[order(variableprobe, decreasing = TRUE), ][1:10000, ]
dim(data.sub)
write.table(data.sub, file = "c://data_output.csv", sep = ",",
            col.names = NA)
When I export to Excel, it shows me this. This is just a short version.
There
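A base-R sketch of the per-row t-test step that usually follows this export (the column split and names below are made up for illustration; the poster's `data.sub` would already have its own groups): one `t.test` per row via `apply`, with the p-values written out next to the data just as in the question's `write.table` call.

```r
set.seed(11)
# Toy stand-in for data.sub: 50 probes, 4 control + 4 treated columns.
data.sub <- matrix(rnorm(50 * 8), nrow = 50,
                   dimnames = list(paste0("probe", 1:50),
                                   c(paste0("ctl", 1:4), paste0("trt", 1:4))))

pvals <- apply(data.sub, 1,
               function(r) t.test(r[1:4], r[5:8])$p.value)  # one test per row

out <- data.frame(data.sub, p.value = pvals)
f <- tempfile(fileext = ".csv")
write.table(out, file = f, sep = ",", col.names = NA)       # as in the question
```

The `apply` loop is the simple answer; for very large matrices the vectorized or `genefilter::rowttests` approaches discussed in the other threads above are much faster.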