In statistical decision theory, where decisions must be made in the presence of statistical knowledge, Γ-minimax inference is a minimax approach for dealing with partial prior information. Γ-minimax theory, with applications to statistical estimation, is used to select decision rules when only partial prior information about the distribution of an unknown parameter is available: the information is expressed as a class Γ of plausible priors, and the selected decision rule must minimize the supremum of the payoff over the priors in Γ. In a frequentist treatment the payoff is typically the Bayes risk or the Bayes regret; in a Bayesian treatment it is the posterior expected loss or the posterior regret.
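Formally, writing r(π, δ) for the Bayes risk of a decision rule δ under a prior π, the criterion can be sketched as below; the symbols R, Θ, and the class of rules D are notational assumptions for this sketch, not fixed by the source:

```latex
% Gamma-minimax criterion (notation assumed for this sketch):
% R(theta, delta) is the frequentist risk, Theta the parameter space,
% D the class of candidate decision rules, Gamma the class of priors.
\[
  r(\pi, \delta) = \int_{\Theta} R(\theta, \delta)\, \pi(\mathrm{d}\theta),
  \qquad
  \delta^{*} = \operatorname*{arg\,min}_{\delta \in \mathcal{D}}
               \sup_{\pi \in \Gamma} r(\pi, \delta).
\]
```

When the candidate rules and the priors in Γ are finite sets, the criterion reduces to a max-then-min over a table of Bayes risks. The following minimal Python sketch illustrates this; the risk values and priors are wholly illustrative, not drawn from any particular problem.

```python
import numpy as np

# A minimal sketch of Gamma-minimax selection over finite candidate sets.
# The risk numbers and the priors in Gamma below are illustrative assumptions.

# Frequentist risk R(theta, delta), tabulated on a grid:
# rows index parameter values theta, columns index candidate rules delta.
risk = np.array([
    [0.10, 0.30, 0.20],
    [0.40, 0.10, 0.25],
    [0.20, 0.50, 0.15],
])

# Gamma: a finite class of priors over the three parameter values,
# encoding the partial prior information.
gamma = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.5, 0.3],
])

# Bayes risk r(pi, delta) = sum over theta of pi(theta) * R(theta, delta),
# computed for every prior in Gamma and every candidate rule at once.
bayes_risk = gamma @ risk            # shape: (n_priors, n_rules)

# Gamma-minimax: choose the rule minimizing the worst-case Bayes risk over Gamma.
worst_case = bayes_risk.max(axis=0)  # sup over priors, per rule
best_rule = int(worst_case.argmin())
print(f"Gamma-minimax rule index: {best_rule}, "
      f"worst-case Bayes risk: {worst_case[best_rule]:.3f}")
```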
History
The Γ-minimax principle was proposed by Herbert Robbins[1][2] and I. J. Good[3] to deal with instances of partial prior information that arise in the minimax approach pioneered by Abraham Wald.[4]
References
- ^ Robbins, H. (1951). "Asymptotically Subminimax Solutions of Compound Statistical Decision Problems". Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, Vol. 1. University of California Press, Berkeley, Calif., pp. 131–148
- ^ Robbins, H. (1964). "The Empirical Bayes Approach to Statistical Decision Problems". Ann. Math. Stat., 35, 1–20
- ^ Good, I. J. (1952). "Rational Decisions". J. R. Stat. Soc. Ser. B, 14, 107–114
- ^ Wald, A. (1950). Statistical Decision Functions. Wiley, New York