
Displaying 20 results from an estimated 1000 matches similar to: "Partek has Dunn-Sidak Multiple Test Correction. Is this the same/similar to any of R's p.adjust.methods?"
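For reference, the short answer to the query above: the Dunn-Sidak adjustment is not among p.adjust.methods, but it is a one-liner to compute by hand (a minimal sketch, not taken from any of the threads below):

p.adjust.methods                      # lists the built-in methods; Sidak is not among them
p <- c(0.01, 0.02, 0.03)              # example raw p-values
sidak <- 1 - (1 - p)^length(p)        # single-step Dunn-Sidak adjustment
cbind(sidak, bonferroni = p.adjust(p, "bonferroni"))   # Sidak is slightly less conservative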

2010 Feb 07
1
p.adjust.Rd suggestion
L.S. In the current version of ?p.adjust.Rd, one needs to scroll down to the examples section to find confirmation of one's guess that "fdr" is an alias of "BH". Please find attached a patch which mentions this explicitly. Best, Tobias [attachment: p.adjust.Rd.patch]
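A quick confirmation of the alias, as a minimal sketch (not part of the patch itself):

p <- c(0.001, 0.008, 0.039, 0.041, 0.042, 0.06)
identical(p.adjust(p, method = "fdr"), p.adjust(p, method = "BH"))   # TRUE: "fdr" is just an alias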
2005 Jan 16
1
p.adjust(<NA>s), was "Re: [BioC] limma and p-values"
I append below a suggested update for p.adjust(). 1. A new method "yh" for control of FDR is included which is valid for any dependency structure. Reference is Benjamini, Y., and Yekutieli, D. (2001). The control of the false discovery rate in multiple testing under dependency. Annals of Statistics 29, 1165-1188. 2. I've re-named the "fdr" method to "bh" but
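For context, the dependency-robust procedure from that reference is available in current R as method "BY"; a minimal comparison sketch (mine, not part of the proposed update):

p <- c(0.0001, 0.004, 0.019, 0.095, 0.201, 0.5)
p.adjust(p, method = "BH")   # step-up FDR control, valid under independence/positive dependence
p.adjust(p, method = "BY")   # Benjamini-Yekutieli: inflated by sum(1/(1:m)), valid under any dependency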
2010 Aug 08
1
p.adjust( , fdr)
Hello, I am not sure about p.adjust( , fdr). How are these adjusted p-values obtained? I have read papers on the BH method. For the independent case, we compare the ordered p-values with alpha*i/m, where m is the number of tests. But I have checked that the result based on the adjusted p-values is different from the one I get using the independent-case method. So how does the result of p.adjust( , fdr) come about? And
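One way to see the connection (a sketch of my own, not from the thread): the BH-adjusted p-value for the i-th ordered p-value is the smallest alpha at which the step-up rule p_(i) <= alpha*i/m would reject it, and computing that directly reproduces p.adjust's result:

p  <- c(0.01, 0.02, 0.03, 0.04, 0.20)
m  <- length(p)
o  <- order(p, decreasing = TRUE)            # work from the largest p-value down
ro <- order(o)                               # to restore the original order at the end
bh <- pmin(1, cummin(m / (m:1) * p[o]))[ro]  # min over j >= i of m * p_(j) / j
all.equal(bh, p.adjust(p, method = "BH"))    # TRUE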
2011 Oct 04
1
a question about sort and BH
Hi, I have two questions I want to ask. 1. If I have a matrix like this, and I want to pick out the rows whose value in the 3rd column is less than 0.05, how can I do it with R? hsa-let-7a--MBTD1 0.528239197 2.41E-05 hsa-let-7a--APOBEC1 0.507869409 5.51E-05 hsa-let-7a--PAPOLA 0.470451884 0.000221774 hsa-let-7a--NF2 0.469280186 0.000231065 hsa-let-7a--SLC17A5
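A minimal sketch of the usual answer to the first question, using a few of the rows shown (the column names are my own):

dat <- read.table(text = "
hsa-let-7a--MBTD1    0.528239197 2.41e-05
hsa-let-7a--APOBEC1  0.507869409 5.51e-05
hsa-let-7a--PAPOLA   0.470451884 0.000221774
hsa-let-7a--NF2      0.469280186 0.000231065
", col.names = c("pair", "cor", "pval"))
subset(dat, pval < 0.05)      # data-frame style
dat[dat[, 3] < 0.05, ]        # matrix-style indexing works the same way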
2004 Dec 20
1
[BioC] limma, FDR, and p.adjust
You asked the same question on the Bioconductor mailing list back in August. At that time, you suggested yourself a solution for how the adjusted p-values should be interpreted. I answered your query and told you that your interpretation was correct. So I'm not sure what more can be said, except that you should read the article Wright (1992), which is cited in the help entry for p.adjust(),
2018 Jul 23
1
Suggestion for updating `p.adjust` with new method (BKY 2006)
Dear R contributors, I suggest adding a new method to `p.adjust` ("Adjust P-values for Multiple Comparisons", https://stat.ethz.ch/R-manual/R-devel/library/stats/html/p.adjust.html). This new method is published in Benjamini, Krieger, and Yekutieli (2006), "Adaptive linear step-up procedures that control the false discovery rate", Biometrika. https://doi.org/10.1093/biomet/93.3.491 This paper
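For readers of this thread, a hedged sketch of the BKY (2006) two-stage ("TSBH") procedure as I read the paper; the function name bky_reject is made up for illustration, and, if I remember correctly, multtest::mt.rawp2adjp already offers an implementation under the name "TSBH":

bky_reject <- function(p, q = 0.05) {
  m  <- length(p)
  q1 <- q / (1 + q)                       # stage 1: ordinary BH at level q/(1+q)
  r1 <- sum(p.adjust(p, "BH") <= q1)      # number of stage-1 rejections
  if (r1 == 0) return(rep(FALSE, m))      # reject nothing
  if (r1 == m) return(rep(TRUE, m))       # reject everything
  q2 <- q1 * m / (m - r1)                 # stage 2: BH again at the inflated level
  p.adjust(p, "BH") <= q2
}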
1998 Jul 14
1
Are post-hoc tests being developed for R?
Hi- Is anyone working on multiple comparisons of means or post-hoc tests (e.g. Tukey, Bonferroni) for R? I saw in the winter '98 archives of the R mailing lists that these tests had not been implemented yet; I was just wondering if I could look forward to having them. ;-) I also looked through the contributed packages and didn't see anything that offered such tests. I guess I could check
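For anyone landing on this old thread: both have long since been available in base R. A minimal sketch:

fit <- aov(weight ~ group, data = PlantGrowth)
TukeyHSD(fit)                                       # Tukey HSD intervals with adjusted p-values
pairwise.t.test(PlantGrowth$weight, PlantGrowth$group,
                p.adjust.method = "bonferroni")     # Bonferroni-adjusted pairwise comparisons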
1998 Jul 16
1
R-beta: Re: Post-hoc tests
Matt, Here's a Bonferroni-corrected multiple one-sample t-test that I wrote some years ago. It took a while to get it into R, as na.omit doesn't seem to handle vectors and I had to write a quick kludge (na.remove). Another more general point was that I discovered that the help page for t.test gives the name "parameters" for the degrees of freedom, as in S. However, the name
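The original attachment is not reproduced here; a minimal modern sketch of the same idea (my own code, not the poster's function):

multi_t <- function(x, mu = 0) {
  # one-sample t-test per column, dropping NAs, then a Bonferroni correction
  p <- apply(x, 2, function(col) t.test(na.omit(col), mu = mu)$p.value)
  data.frame(raw = p, bonferroni = p.adjust(p, method = "bonferroni"))
}
set.seed(1)
m <- matrix(rnorm(40), ncol = 4)
m[3, 2] <- NA
multi_t(m)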
2011 Sep 30
1
Hi
Hi, there is a question I am confused about. I have a set of data like this: hsa-miR-205--GATA3 0.797882767 1.08E-13 hsa-miR-205--ITGB4 0.750217593 1.85E-11 hsa-miR-187--PGF 0.797604155 3.24E-11 hsa-miR-205--SERPINB5 0.744124886 3.28E-11 hsa-miR-205--PBX1 0.734487224 7.89E-11 hsa-miR-205--MCC 0.72499934 1.80E-10 hsa-miR-205--WNT5B 0.717705259 3.33E-10 hsa-miR-200c--PKN2 0.721746815
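The question is cut off in this excerpt, but given the layout (pair, correlation, raw p-value) a common next step is to attach BH-adjusted p-values; a sketch under that assumption, with made-up column names:

dat <- data.frame(pair = c("hsa-miR-205--GATA3", "hsa-miR-205--ITGB4", "hsa-miR-187--PGF"),
                  cor  = c(0.797882767, 0.750217593, 0.797604155),
                  pval = c(1.08e-13, 1.85e-11, 3.24e-11))
dat$padj <- p.adjust(dat$pval, method = "BH")   # add an FDR-adjusted column
dat[order(dat$padj), ]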
2005 Jun 15
1
2 LDA
Hi, I am using Partek for LDA analysis. For a binary response variable, it generates 2 discriminant functions, one for each of the 2 levels of the response variable. I can simply calculate 2 discriminant scores (say d1 and d2) for each sample using the 2 discriminant functions, and then I can use the following formula to compute the posterior probability for the sample:
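The formula itself is cut off in this excerpt. If d1 and d2 are the usual linear classification-function scores, the standard conversion is the softmax exp(d1) / (exp(d1) + exp(d2)); in R, MASS::lda returns the posterior directly, which makes a handy cross-check (a sketch, not Partek's output):

library(MASS)
two <- droplevels(subset(iris, Species != "virginica"))   # a simple two-class example
fit <- lda(Species ~ ., data = two)
head(predict(fit)$posterior)    # posterior probability for each of the two classes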
2004 Dec 19
1
limma, FDR, and p.adjust
I am posting this to both R and BioC communities because I believe there is a lot of confusion on this topic in both communities (having searched the mail archives of both) and I am hoping that someone will have information that can be shared with both communities. I have seen countless questions on the BioC list regarding limma (Bioconductor) and its calculation of FDR. Some of them involved
2010 Jul 13
6
permutation-based FDR
Hello everyone, I have a small problem... I have about 9000 variables that I have compared against one particular variable with the Wilcoxon test. I have computed the p-values and I wanted to correct them with a permutation-based FDR. I have found an R function, comp.fdr(), that does this correction, but it asks you to supply the variables with their observations and runs the test for you (as far as I understand). I only want
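A hedged sketch of a basic permutation FDR estimate, independent of the comp.fdr() function mentioned above (whose interface I don't know); it assumes you already have the observed p-values and a matrix of p-values from B label permutations (rows = permutations, columns = variables), and perm_fdr is a made-up name:

perm_fdr <- function(p_obs, p_perm, thresholds = sort(unique(p_obs))) {
  sapply(thresholds, function(t) {
    r <- sum(p_obs <= t)               # observed rejections at threshold t
    v <- mean(rowSums(p_perm <= t))    # average rejections per permutation (null)
    if (r == 0) 0 else min(1, v / r)   # crude FDR estimate, capped at 1
  })
}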
2008 Mar 01
2
Newbie: Incorrect number of dimensions
> dim(data.sub)
[1] 10000 140
##### extracting all differentially expressed genes #####
library(multtest)
two_side <- (1 - pt(abs(data.sub), 50)) * 2
diff <- mt.rawp2adjp(two_side)
all_differ <- diff[[1]][37211:10000, ]
all_differ
##### list of differentially expressed genes #####
> probe.names <-
+   all_differ[[2]][all_differ[[1]][, "BY"] <= 0.01]
Error in
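For the record, a hedged sketch of how mt.rawp2adjp() output is usually indexed, building on the objects in the question above; the component and argument names are from my memory of the multtest package, so check ?mt.rawp2adjp:

library(multtest)
rawp <- as.vector(two_side)             # mt.rawp2adjp() expects a plain vector of raw p-values
adj  <- mt.rawp2adjp(rawp, proc = "BY")
str(adj)                                # a list: $adjp (adjusted p-value matrix) and $index
keep      <- adj$adjp[, "BY"] <= 0.01   # rows of $adjp are sorted by increasing raw p-value
which_raw <- adj$index[keep]            # positions of those tests in the original rawp vector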
2009 Feb 11
1
p.adjust; n > length(p) (PR#13519)
Full_Name: Ludo Pagie Version: 2.8.1 OS: linux Submission from: (NULL) (194.171.7.39) p.adjust in stats seems to have a bug in handling n > length(p) for (at least) the methods 'holm' and 'hochberg'. For method 'holm' the relevant code is: i <- 1:n; o <- order(p); ro <- order(o); pmin(1, cummax((n - i + 1) * p[o]))[ro], where p is the
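For readers unfamiliar with the n argument (a small illustration of my own, separate from the bug report): the correction behaves as if there were n tests in total, of which only length(p) p-values are supplied:

p <- c(0.01, 0.02, 0.03)
p.adjust(p, method = "bonferroni", n = 10)   # same as pmin(1, 10 * p)
p.adjust(p, method = "holm", n = 10)         # cummax of (10, 9, 8) * sorted p, capped at 1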
2013 Feb 13
5
off topic: multiple comparisons?
Dear fellow R users, do you know of a place where I can ask a statistics question, specifically about an experimental design? A mailing list, forums, etc. I know this is not the right place for statistics questions, but in case anyone is tempted, here it is. Two treatments, C and H, are applied to a set of species (e.g. 10), and we observe the response of the species to
2005 May 15
3
adjusted p-values with TukeyHSD?
hi list, I have to ask you again, having tried and searched for several days... I want to do a TukeyHSD after an ANOVA, and want to get the adjusted p-values after the Tukey correction. I found the p.adjust function, but it can only correct for "holm", "hochberg", "bonferroni", etc., but not "Tukey". Is it not possible to get adjusted p-values after
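The short answer that usually follows in such threads: TukeyHSD() already reports Tukey-adjusted p-values, so p.adjust() is not needed. A minimal sketch:

fit <- aov(breaks ~ tension, data = warpbreaks)
TukeyHSD(fit)        # the "p adj" column holds the Tukey-adjusted p-values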
2004 Dec 20
1
Re: [BioC] limma, FDR, and p.adjust
Mark, there is an FDR website link via Yoav Benjamini's homepage, which is: http://www.math.tau.ac.il/%7Eroee/index.htm On it you can download an S-Plus function (under the downloads link) which calculates the false discovery rate threshold alpha level using step-up, step-down, dependence methods etc. Some changes are required to the plotting code when porting it to R. I removed the
2009 Mar 18
0
p.adjust(p, n) for n>length(p)
Hi all, I am having a problem with the function "p.adjust" in stats. I have looked at the manuals and searched the R site, but didn't get anything that seems directly relevant. Can anybody throw any light on it or confirm my suspicion that this might be a bug? I am trying to use the p.adjust() function to do Benjamini/Hochberg FDR control on a vector of p-values that are the
2018 Mar 09
0
Package gamlss used inside foreach() and %dopar% fails to find an object
If the code you are running in parallel is complicated, maybe foreach is not sophisticated enough to find all the variables you refer to. Maybe use parallel::clusterExport yourself? But be aware that passing parameters is much safer than directly accessing globals in parallel processing, so this might just be your warning not to do that anyway. -- Sent from my phone. Please excuse my brevity.
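A hedged sketch of the two approaches mentioned, using doParallel; the object names big_obj and fit_one are made up for illustration, and .export matters mainly when the loop lives inside a function or package (as in the gamlss case):

library(parallel)
library(doParallel)
library(foreach)

cl <- makeCluster(2)
registerDoParallel(cl)

big_obj <- data.frame(x = rnorm(100), y = rnorm(100))
fit_one <- function(d) coef(lm(y ~ x, data = d[sample(nrow(d), replace = TRUE), ]))[2]

# Option 1: tell foreach explicitly what to ship to the workers
res <- foreach(i = 1:4, .combine = c, .export = c("big_obj", "fit_one")) %dopar%
  fit_one(big_obj)

# Option 2: export the globals to the workers yourself before the loop
clusterExport(cl, c("big_obj", "fit_one"))

stopCluster(cl)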