This article is rated Start-class on Wikipedia's content assessment scale.
Merge
This page doesn't seem to add anything over Type I and type II errors. I don't think having a separate page for this subject is helpful. Purple Post-its (talk) 14:41, 12 July 2011 (UTC)
I agree with removing this page or editing it substantially. As it stands, I think the definition on this page is simply wrong, or at a minimum highly nonstandard. The numerator is not part of the denominator in this definition, so it makes no sense. HarveyMotulsky (talk) 22:52, 3 June 2017 (UTC)
Aug 2012 expansion
Hello everyone, as part of my "selective inference" MSc coursework I have just finished expanding this article. Regarding the merge proposal above, please see the new section "Difference between type I error rate and false positive rate".
Miron.Avidan (talk) 04 Aug 2012
A different definition of false positive rate
A more obvious definition of the false positive rate (or false discovery rate) is, in the notation used here, FP/(TP + FP): that is, the probability that a positive test is a false positive. It has been used in this sense by Colquhoun 2014[1] and 2017.[2] This article needs to be rewritten, I think.
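For comparison, here is a sketch of the two quantities being discussed, written with the usual confusion-matrix symbols (FP = false positives, TN = true negatives, TP = true positives); the article's own notation may differ, so take the symbols as assumptions rather than as the article's definitions:

% Conventional false positive rate (per-comparison type I error view):
% the fraction of actual negatives that are incorrectly called positive.
\mathrm{FPR} = \frac{FP}{FP + TN}

% The Colquhoun-style usage described above (a false discovery rate):
% the fraction of positive calls that are false.
\mathrm{FDR} = \frac{FP}{FP + TP}

The two differ only in the denominator (all actual negatives versus all positive calls), which seems to be the crux of the disagreement in this thread.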
References
1. Colquhoun, David. "An investigation of the false discovery rate and the misinterpretation of p-values". Royal Society Open Science. Retrieved 19 November 2014.
2. Colquhoun, David. "The reproducibility of research and the misinterpretation of p-values". bioRxiv. Retrieved 3 June 2017.