Talk:Admissible decision rule

Untitled

... an admissible decision rule is a rule for making a decision that is better in some sense than any other rule that may compete with it.

This sentence seems highly inaccurate to me. First, "in some sense" can be read to mean "in a vague sense," while the intended meaning here is "in a specific sense discussed below."

Second, what do we mean by "better"? Sure, they are better for at least one value of θ, but this doesn't make them universally better.

I have changed the sentence to read:

... an admissible decision rule is a rule for making a decision that is "better" than any other rule that may compete with it, in a specific sense defined below.

--Zvika 14:28, 18 February 2006 (UTC)
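
The "specific sense" at stake here is the usual dominance ordering on risk functions; a minimal statement of it in standard notation (R for the risk function, θ for the parameter, δ for a rule; the notation is assumed here, not quoted from the article):

    \delta_1 \text{ dominates } \delta_2 \iff
    R(\theta, \delta_1) \le R(\theta, \delta_2) \ \text{for all } \theta,
    \ \text{with } R(\theta_0, \delta_1) < R(\theta_0, \delta_2) \ \text{for some } \theta_0;
    \qquad
    \delta \text{ is admissible} \iff \text{no rule dominates } \delta.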

This helps a little - but not much. As you point out, just because something is better for one value doesn't mean it is universally better. What the article states below is that admissible rules are better than inadmissible ones. Even if this is true (and I think it is not always true), the natural way to read this sentence is that an admissible rule is better than ANY other rule - as if there couldn't be more than one admissible rule. Obviously, if there is more than one admissible rule, then it is false that any admissible rule is preferable to any other rule.
As for "An admissible rule should be preferred over an inadmissible rule..." this makes it sound like ANY admissible rule should be preferable to ANY inadmissible rule. Personally, I think this is wrong. For example, here is an admissible rule for estimating  : say "seven". This rule cannot be dominated because when theta really is seven, it has zero error. Here is an inadmissible rule: use maximum likelihood estimation and then subtract epsilon from your estimate. This will be better than the constant rule in almost every case. The comparison to Pareto optimality is apt - just because you can't make a Pareto improvement doesn't mean you can't make an improvement. I am changing these but I would be happy to see other uses make further edits (or corrections with explanations if they think I am wrong). --Jdvelasc (talk) 23:30, 5 September 2009 (UTC)Reply

example please

Is it possible to provide a simple example? Or some words about the history of the concept? Bo Jacoby (talk) 03:59, 4 July 2010 (UTC).

I wrote the following in the "Bayesian probability" article:

Wald's result also established the Bayesian formalism as a fundamental technique in such areas of frequentist statistics as point estimation, hypothesis testing, and confidence intervals.

Wald characterized admissible procedures as Bayesian procedures (and limits of Bayesian procedures), making the Bayesian formalism a central technique in such areas of frequentist statistics as parameter estimation, hypothesis testing, and computing confidence intervals, as established by the following articles and then textbooks:

  • Kiefer, J. and Schwartz, R. (1965). "Admissible Bayes character of T²-, R²-, and other fully invariant tests for multivariate normal problems". Annals of Mathematical Statistics. 36: 747–770.
  • Schwartz, R. (1969). "Invariant proper Bayes tests for exponential families". Annals of Mathematical Statistics. 40: 270–283.
  • Hwang, J. T. and Casella, G. (1982). "Minimax confidence sets for the mean of a multivariate normal distribution". Annals of Statistics. 10: 868–881.

These examples could be expanded here, but I lack the time (for some months), alas. Best regards, Kiefer.Wolfowitz (talk) 18:01, 4 July 2010 (UTC)
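
A minimal sketch of what "limits of Bayesian procedures" means in the simplest textbook case (assumptions: X ~ N(θ, 1), prior θ ~ N(0, τ²), squared-error loss; this is a standard example, not drawn from the references above):

    def bayes_rule(x, tau2):
        # Posterior mean of theta given X = x: the Bayes rule under squared-error
        # loss shrinks the observation toward the prior mean 0.
        return (tau2 / (tau2 + 1.0)) * x

    x = 2.5                                     # arbitrary observed value
    for tau2 in [1.0, 10.0, 100.0, 1e4, 1e6]:
        print(f"tau^2 = {tau2:>9.0f}   Bayes estimate = {bayes_rule(x, tau2):.4f}")
    # As tau^2 grows, the (proper-prior) Bayes rules converge pointwise to the MLE
    # delta(x) = x, which is how an admissible rule that is not itself Bayes can
    # still arise as a limit of Bayes procedures in the sense of Wald's complete
    # class results.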

numerous issues

The relationship between the class of normal-form and extensive-form Bayes procedures (i.e., those that minimize the average risk and those that minimize the posterior risk almost everywhere) is subtle. Contrary to what this article claims, these two classes are not equal in general.
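
To make the normal-form/extensive-form distinction concrete: in a finite toy problem the two constructions do produce the same rule, and the point above is that extending this identification beyond such tidy settings needs care. A sketch of both constructions (all numbers arbitrary, written for this thread rather than taken from the article):

    import itertools

    # Toy problem: theta in {0, 1}, X in {0, 1}, action = guess of theta, 0-1 loss.
    prior = [0.6, 0.4]                           # pi(theta)
    lik = [[0.8, 0.2],                           # p(x | theta = 0) for x = 0, 1
           [0.2, 0.8]]                           # p(x | theta = 1) for x = 0, 1

    def loss(theta, a):
        return float(theta != a)

    # Normal form: minimize the average (Bayes) risk over all deterministic rules.
    def bayes_risk(rule):                        # rule[x] is the action taken at x
        return sum(prior[t] * lik[t][x] * loss(t, rule[x])
                   for t in (0, 1) for x in (0, 1))

    normal_form = min(itertools.product((0, 1), repeat=2), key=bayes_risk)

    # Extensive form: for each x, minimize the posterior expected loss.
    extensive_form = []
    for x in (0, 1):
        w = [prior[t] * lik[t][x] for t in (0, 1)]
        post = [wt / sum(w) for wt in w]
        extensive_form.append(min((0, 1), key=lambda a: sum(post[t] * loss(t, a) for t in (0, 1))))

    print("normal-form Bayes rule:   ", normal_form)           # (0, 1) with these numbers
    print("extensive-form Bayes rule:", tuple(extensive_form))  # (0, 1) as well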

Next, the "mild" conditions relating admissibility and generalized Bayes optimality are not so mild. They generally rule out semi- and non-parametric inference problems. There are fewer restrictions relating admissibility and the class of limits of Bayes procedures, but still, usually the action space is required to be a convex subset of a FINITE dimensional Euclidean space, ruling out nonparametric problems, generally. It seems very much open how to close these significant gaps. — Preceding unsigned comment added by 128.100.73.178 (talk) 20:22, 29 July 2017 (UTC)Reply