similar to: FDR in p.adjust

Displaying 20 results from an estimated 6000 matches similar to: "FDR in p.adjust"

2006 Sep 15
3
graphics and 'layout' question
Hello, I got stuck with a graphics question: I've 3 figures that I present on a single page (window) via 'layout'. The layout is layout(matrix(c(1,1,2,3), 2, 2, byrow=TRUE)); so that the first plot spans both columns in row one. Now I'd like to magnify the first figure so that it takes 20% more vertical space (i.e. more space for the y-axis). How would I do this in R?
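A minimal sketch of one way to do this, assuming the extra 20% should go to the whole first row: layout() takes a heights argument giving relative row heights.

    # give the first row 20% more vertical space than the second
    layout(matrix(c(1, 1, 2, 3), 2, 2, byrow = TRUE), heights = c(1.2, 1))
    plot(1:10)        # figure 1, spans both columns
    plot(rnorm(10))   # figure 2, bottom left
    hist(rnorm(100))  # figure 3, bottom right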
2005 Jul 01
1
p-values for classification
Dear All, I'm classifying some data with various methods (binary classification). I'm interpreting the results via a confusion matrix, from which I calculate the sensitivity and the FDR. The classifiers are trained on 575 data points and my test set has 50 data points. I'd like to calculate p-values for obtaining <= FDR and >= sensitivity for each classifier. I was thinking about
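One possible approach (my own assumption, not something proposed in the thread): treat the test-set counts as binomial and test them against reference rates with binom.test(); all counts and reference rates below are hypothetical.

    # e.g. 40 true positives out of 45 actual positives, reference sensitivity 0.8
    binom.test(x = 40, n = 45, p = 0.8, alternative = "greater")
    # e.g. 5 false discoveries among 42 predicted positives, reference FDR 0.2
    binom.test(x = 5, n = 42, p = 0.2, alternative = "less")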
2007 Feb 28
2
topTable function from LIMMA
Dear R-Help, I am using the function "topTable" from the LIMMA package. To estimate adjusted P-values there are several options (adjust="fdr" , adjust="BH") as shown below: topTable(fit, number = 10, adjust = "BH", fit$Name) I guess any of these options (fdr, BH, etc.) is using a default of FDR=0.05 which is quite conservative (i.e., very
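For what it's worth, "fdr" and "BH" select the same Benjamini-Hochberg adjustment, and topTable() applies no 0.05 cutoff by default; it simply reports adjusted p-values. A sketch against a current limma (where the argument is adjust.method, and 'fit' is assumed to be an MArrayLM object from lmFit()/eBayes()):

    library(limma)
    topTable(fit, coef = 1, number = 10, adjust.method = "BH")
    # optionally keep only genes with adjusted p-value below 0.05
    topTable(fit, coef = 1, number = Inf, adjust.method = "BH", p.value = 0.05)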
2004 Dec 20
1
[BioC] limma, FDR, and p.adjust
You asked the same question on the Bioconductor mailing list back in August. At that time, you suggested yourself a solution for how the adjusted p-values should be interpreted. I answered your query and told you that your interpretation was correct. So I'm not sure what more can be said, except that you should read the article Wright (1992), which is cited in the help entry for p.adjust(),
2010 Aug 08
1
p.adjust( , fdr)
Hello, I am not sure about p.adjust( , fdr). How are these adjusted p-values obtained? I have read papers on the BH method. For the independent case, we compare the ordered p-values with alpha*i/m, where m is the number of tests. But I have checked, and the result based on the adjusted p-values differs from what I get with the independent-case method. So how are the results of p.adjust( , fdr) computed? And
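A minimal sketch of how the "fdr"/"BH" adjusted p-values relate to the step-up comparison against alpha*i/m: the adjusted value for the i-th ordered p-value is the running minimum (from the largest p downwards) of m*p_(i)/i, capped at 1, i.e. the smallest alpha at which that hypothesis would still be rejected.

    p <- c(0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216)
    m <- length(p)
    o <- order(p)
    bh <- pmin(1, rev(cummin(rev(p[o] * m / seq_len(m)))))[order(o)]
    all.equal(bh, p.adjust(p, method = "fdr"))   # TRUE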
2004 Dec 19
1
limma, FDR, and p.adjust
I am posting this to both R and BioC communities because I believe there is a lot of confusion on this topic in both communities (having searched the mail archives of both) and I am hoping that someone will have information that can be shared with both communities. I have seen countless questions on the BioC list regarding limma (Bioconductor) and its calculation of FDR. Some of them involved
2004 Dec 20
1
Re: [BioC] limma, FDR, and p.adjust
Mark, there is an FDR website link via Yoav Benjamini's homepage, which is: http://www.math.tau.ac.il/%7Eroee/index.htm On it you can download an S-Plus function (under the downloads link) which calculates the false discovery rate threshold alpha level using step-up, step-down, dependence methods, etc. Some changes are required to the plotting code when porting it to R. I removed the
2003 Oct 07
2
R-1.8.0 memory.limit()
Using R-1.8.0 (downloaded and compiled on 2003-10-01) on WinXP, I seem to be unable to determine the maximum memory allocated to R. The help still says to use memory.limit(size=NA), but this returns the value NA. In addition, I have set --max-mem-size=2G but I run out of memory somewhere around 500Mb (which is why I am trying to find out how much memory is allocated). I don't have any other programs
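For reference, a rough sketch of the Windows-only memory queries of that era (these functions were later made defunct, so this applies only to old Windows builds of R):

    memory.limit()            # current limit in Mb
    memory.size()             # memory currently in use, in Mb
    memory.size(max = TRUE)   # maximum memory obtained from the OS so far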
2004 May 13
3
storage of lm objects in a database
Hello, I'd like to use DBI to store lm objects in a database. I have to analyze many linear models and I cannot store them all in a single R session (not enough memory). Also, it'd be nice to have them persistent. Maybe it's possible to create a compact binary representation of the object (the kind of format created by "save"), so that one doesn't need to write
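One way to get such a compact binary representation without intermediate save() files is serialize()/unserialize(); the resulting raw vector can be stored in a BLOB column via DBI. A minimal sketch using RSQLite as a stand-in backend (the table and column names are made up for illustration):

    library(DBI)
    con <- dbConnect(RSQLite::SQLite(), ":memory:")
    dbExecute(con, "CREATE TABLE models (name TEXT, object BLOB)")
    fit <- lm(mpg ~ wt, data = mtcars)
    dbWriteTable(con, "models",
                 data.frame(name = "fit1", object = I(list(serialize(fit, NULL)))),
                 append = TRUE)
    # restore the model later
    raw_obj <- dbGetQuery(con, "SELECT object FROM models WHERE name = 'fit1'")$object[[1]]
    fit2 <- unserialize(raw_obj)
    dbDisconnect(con)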
2005 Jul 14
2
Partek has Dunn-Sidak Multiple Test Correction. Is this the same/similar to any of R's p.adjust.methods?
The Partek package (www.partek.com) allows only two selections for Multiple Test Correction: Bonferroni and Dunn-Sidak. Can anyone suggest why Partek implemented Dunn-Sidak and not the other methods that R has? Is there any particular advantage to the Dunn-Sidak method? R knows about these methods (in R 2.1.1):
> p.adjust.methods
[1] "holm" "hochberg" "hommel"
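p.adjust() has no Sidak option, but the Dunn-Sidak adjustment is a one-liner, so a side-by-side comparison (my own sketch) is straightforward:

    p <- c(0.001, 0.01, 0.02, 0.04, 0.1)
    m <- length(p)
    sidak      <- 1 - (1 - p)^m            # Dunn-Sidak adjusted p-values
    bonferroni <- p.adjust(p, "bonferroni")
    cbind(p, sidak, bonferroni)            # Sidak is slightly less conservative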
2003 Oct 23
6
repeating colors in graph 2
I've tried looking at ?colors and ?palette and if I'm understanding it correctly, I'm supposed to type in (for example) palette(rainbow(13)) before I type in my plot (of 13 lines) if I want 13 different colors. But this does not work. Other things that I have tried besides rainbow give me errors. Am I just doing something completely wrong? Anna
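A minimal sketch of one approach that avoids palette() altogether: pass the colours directly via col=, e.g. with matplot() (the example data here are made up):

    y <- sapply(1:13, function(i) cumsum(rnorm(50)))   # 13 example series
    matplot(y, type = "l", lty = 1, col = rainbow(13))
    # or line by line:
    # for (i in 1:13) lines(y[, i], col = rainbow(13)[i])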
2005 Jan 16
1
p.adjust(<NA>s), was "Re: [BioC] limma and p-values"
I append below a suggested update for p.adjust(). 1. A new method "yh" for control of FDR is included which is valid for any dependency structure. Reference is Benjamini, Y., and Yekutieli, D. (2001). The control of the false discovery rate in multiple testing under dependency. Annals of Statistics 29, 1165-1188. 2. I've re-named the "fdr" method to "bh" but
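In the p.adjust() that eventually shipped, the dependency-robust method was named "BY" (Benjamini-Yekutieli) and "fdr" was kept as an alias of "BH", so the current equivalent is:

    p <- runif(20)^2
    p.adjust(p, method = "BH")   # Benjamini-Hochberg, same as method = "fdr"
    p.adjust(p, method = "BY")   # Benjamini-Yekutieli, valid under dependency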
2003 Oct 17
4
sub data frame by expression
Hi All, I have the following data frame with 54 rows and 4 columns:
> x
                   Ratio  Dose Time Batch
R.010mM.04h.NEW     0.02 010mM  04h   NEW
R.010mM.04h.NEW.1   0.07 010mM  04h   NEW
...
R.010mM.24h.NEW.2   0.06 010mM  24h   NEW
R.010mM.04h.OLD     0.19 010mM  04h   OLD
...
R.010mM.04h.OLD.1   0.49 010mM  04h   OLD
R.100mM.24h.OLD     0.40 100mM  24h   OLD
I'd
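A minimal sketch of selecting such a sub data frame by expression, assuming the goal is to filter on the factor columns shown above:

    # rows for dose 010mM at 4h
    subset(x, Dose == "010mM" & Time == "04h")
    # equivalent indexing form
    x[x$Dose == "010mM" & x$Time == "04h", ]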
2003 Sep 05
3
all values from a data frame
Hello, I've a data frame with 15 columns and 6000 rows, and I need the data in a single vector of size 90000 for a t-test. Is there such a conversion function in R, or would I have to write my own loop over the columns? Thanks for your help + kind regards Arne
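No loop is needed: the columns of a data frame can be collapsed into a single numeric vector directly, e.g. (with made-up data of the stated size):

    df <- as.data.frame(matrix(rnorm(6000 * 15), nrow = 6000, ncol = 15))
    v <- unlist(df, use.names = FALSE)   # length 90000; as.vector(as.matrix(df)) also works
    length(v)
    t.test(v)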
2004 May 10
5
R versus SAS: lm performance
Hello, A colleague of mine has compared the runtime of a linear model + anova in SAS and S+. He got the same results, but SAS took a bit more than a minute whereas S+ took 17 minutes. I've tried it in R (1.9.0) and it took 15 min. Neither machine ran out of memory, and I assume that all machines have similar hardware, but the S+ and SAS machines are on Windows whereas the R machine is Redhat
2006 Mar 08
5
data import problem
Dear All, I'm trying to read a text data file that contains several records separated by a blank line. Each record starts with a row that contains its ID and the number of rows for the record (two columns), then the data table itself, e.g.
123 5
89.1791 1.1024
90.5735 1.1024
92.5666 1.1024
95.0725 1.1024
101.2070 1.1024

321 3
60.1601 1.1024
64.8023 1.1024
70.0593
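A minimal sketch of one way to parse such a file, assuming it is called "records.txt" and each block is an "id nrow" header followed by nrow two-column data lines (the column names x and y are made up):

    lines  <- readLines("records.txt")
    blank  <- lines == ""
    blocks <- split(lines[!blank], cumsum(blank)[!blank])
    parse_block <- function(b) {
      hdr <- as.numeric(strsplit(trimws(b[1]), "[[:space:]]+")[[1]])  # id, nrow
      dat <- read.table(textConnection(b[-1]), col.names = c("x", "y"))
      list(id = hdr[1], data = dat)
    }
    records <- lapply(blocks, parse_block)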
2010 Feb 07
1
p.adjust.Rd sugggestion
L.S. In the current version of p.adjust.Rd, one needs to scroll down to the examples section to find confirmation of one's guess that "fdr" is an alias of "BH". Please find attached a patch which mentions this explicitly. Best, Tobias
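The alias can also be checked directly:

    p <- runif(10)
    identical(p.adjust(p, "fdr"), p.adjust(p, "BH"))   # TRUE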
2004 Jul 26
5
binning a vector
Hello, I was wondering whether there's a function in R that takes two vectors (of the same length) as input and computes mean values for bins (intervals), or even a sliding window over these vectors. I have several x/y data sets (input/response) that I'd like to plot together. Say the x-data for one data set goes from -5 to 14 with 12,000 values; then I'd like to bin the x-vector in steps of
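A minimal sketch of binned means via cut() + tapply(), plus a simple sliding-window mean via stats::filter(), using made-up data over the stated range:

    x <- runif(12000, -5, 14)
    y <- sin(x) + rnorm(12000, sd = 0.2)
    # binned means in steps of 0.5 on the x axis
    bins  <- cut(x, breaks = seq(-5, 14, by = 0.5))
    means <- tapply(y, bins, mean)
    plot(seq(-4.75, 13.75, by = 0.5), means, type = "b", xlab = "x", ylab = "mean y")
    # sliding-window mean of width 101 over y ordered by x
    o <- order(x)
    lines(x[o], stats::filter(y[o], rep(1/101, 101), sides = 2), col = "red")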
2004 Jun 28
1
unbalanced design for anova with low number of replicates
Hello, I'm wondering what's the best way to analyse an unbalanced design with a low number of replicates. I'm not a statistician, and I'm looking for some direction on this problem. I have a two-factor design: factor batch with 3 levels, and factor dose within each batch with 5 levels. Dose level 1 in batch one is replicated 4 times, level 3 is replicated only 2 times. all
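A minimal sketch assuming both factors are fixed and crossed (if dose levels are really nested within batch, batch/dose would be the more appropriate formula): with an unbalanced design the sequential (Type I) sums of squares from anova() depend on term order, so marginal tests such as drop1() with F tests (or car::Anova for Type II) are often preferred. The data frame 'd' below is hypothetical.

    fit <- lm(response ~ batch * dose, data = d)
    anova(fit)               # sequential (Type I) SS: order-dependent when unbalanced
    drop1(fit, test = "F")   # marginal F test for the interaction term
    # car::Anova(fit, type = 2)   # Type II tests, if the car package is installed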