Talk:Cramér–Rao bound


Alternative expression for CRLB


For the scalar unbiased case two definitions are given, namely:

\operatorname{var}(\hat\theta) \ge \frac{1}{I(\theta)}, \qquad I(\theta) = \operatorname{E}\left[\left(\frac{\partial}{\partial\theta}\log f(X;\theta)\right)^{2}\right] = -\operatorname{E}\left[\frac{\partial^{2}}{\partial\theta^{2}}\log f(X;\theta)\right]

Now, according to the page on Fisher information, the second definition (the one with the second derivative) only holds when certain regularity conditions are met. I think it should be mentioned on this page too, so I added it. I ran into this while calculating the CRB for my own estimator, for which the regularity conditions do not hold, and ended up with a negative Fisher information. — Preceding unsigned comment added by Scloudt (talkcontribs) 12:28, 24 May 2020 (UTC)
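Since a reader ran into a negative Fisher information, a quick symbolic sanity check may help; this is my own sketch (the N(θ, 1) model and all symbol names are assumptions, not from the article), showing the two definitions agreeing when the regularity conditions hold:

```python
# Sketch: compare the two definitions of Fisher information for a
# Gaussian with unknown mean theta and unit variance (hypothetical
# example; regularity conditions hold here, so the two must agree).
import sympy as sp

x, theta = sp.symbols('x theta', real=True)
f = sp.exp(-(x - theta)**2 / 2) / sp.sqrt(2 * sp.pi)  # N(theta, 1) density

logf = sp.log(f)
score = sp.diff(logf, theta)

# Definition 1: variance of the score, E[(d/dtheta log f)^2]
I1 = sp.simplify(sp.integrate(score**2 * f, (x, -sp.oo, sp.oo)))
# Definition 2: negative expected curvature, -E[d^2/dtheta^2 log f]
I2 = sp.simplify(sp.integrate(-sp.diff(logf, theta, 2) * f, (x, -sp.oo, sp.oo)))

print(I1, I2)  # both equal 1 for this model
```

When the regularity conditions fail (e.g. support depending on θ), the two expressions can disagree, which is presumably how a negative value appeared.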

Stupid question: the n strikes me as unintroduced in this article, and also not correct. Where does it come from, or am I missing something? Marcusmueller ettus (talk) 08:43, 20 January 2023 (UTC)

Lower/Upper bound


"Upper bound" is correct in the first paragraph, where we speak of "accuracy". The Cramér–Rao inequality gives the maximum accuracy that can be achieved. Later, we speak about variance, and there it is in fact a lower bound. High variance means low accuracy and vice versa.

This was changed recently, I have changed it back. -BB

I didn't read carefully the first time; if we're talking about variance, then the correct concept is precision, not accuracy. Accuracy ties in with unbiasedness. I feel we should not bring in accuracy or precision here, since traditionally the CRLB is used in direct relation to the variance.
I like it as you have put it now, using precision instead of accuracy. The first sentence now expresses well the basic message of the CRB: It tells you how good any estimator can be, thus limiting the "goodness" by giving its maximum value. -BB

Too Technical


I have a science/engineering background, but can't begin to understand this. If I could suggest a change, I would. —BenFrantzDale


Hi BenFrantzDale. I'm happy to improve the article. What bit don't you understand? Robinh 14:32, 6 January 2006 (UTC)
For starters, a sentence or two on how this inequality is used (i.e., in what field it comes up) would be helpful. I don't know how specialized this topic is, but I like to think I have most of the background needed to get a rough understanding of it. A list of prerequisites, as described on Wikipedia:Make technical articles accessible, would also help. —BenFrantzDale 16:25, 6 January 2006 (UTC)

It's used where statistical estimation is used. Read the article on Fisher information. Michael Hardy 22:54, 9 January 2006 (UTC)

I added an image from Stack Overflow about its implications. It will help people a lot to understand this article. Biggerj1 (talk) 21:26, 25 May 2023 (UTC)

A correction for the example for the Cramér–Rao inequality


My name is Roey and I am a third-year industrial engineering student at T.A.U. I have a correction for the example given here for the Cramér–Rao inequality: the normal distribution formula is incorrect, and for some reason I can't insert the corrected version here. In the example, the (x−m)² term is not divided by 2θ; instead, it is divided by θ, and thus the final result is incorrect. The right result should be 1/(2θ²) (and not 3 times this number). I have worked through this example correctly, but I can't get it in here... Have a good day, and please tell me how I can put the correct answer here.

Roey
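If it helps whoever checks this, I tried Roey's claim symbolically; the density and symbol names below are my own reconstruction (θ taken as the variance, m as the known mean), not the article's exact notation:

```python
# Sketch: Fisher information for the variance parameter theta of a
# normal distribution with known mean m (reconstruction of the claim;
# the correct density divides (x - m)^2 by 2*theta, not by theta).
import sympy as sp

x, m = sp.symbols('x m', real=True)
theta = sp.symbols('theta', positive=True)  # theta = variance

f = sp.exp(-(x - m)**2 / (2 * theta)) / sp.sqrt(2 * sp.pi * theta)
score = sp.diff(sp.log(f), theta)

I = sp.simplify(sp.integrate(score**2 * f, (x, -sp.oo, sp.oo)))
print(I)  # equals 1/(2*theta**2), matching the claimed result
```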

Error in Example


Like Roey, I found that the example for the Gaussian case was mistaken. I have corrected it. Indeed, this example achieves the CR bound. Anyway, I'm not too familiar with this math; I would appreciate it if the original author or another reader could confirm the change.

Viroslav

what log means


Hi, shouldn't the meaning of log be defined here, or, better yet, ln used instead of log?

MrKeuner

more example suggestions


Pursuant to the example, it's common to think of the information as the variance of the score (by definition) or as the negative expectation of the second derivative of the log-likelihood (when it exists); it threw me for a minute to think of the information as the negative expectation of the derivative of the score. In my experience, one generally chooses one or the other representation and not a mix of both.

What "log" means is generally not an issue for likelihood problems (it should be the natural log, so as to cancel the exponential in the Gaussian example), but the standard book notation uses plain "log" with the understanding that it means the natural log.
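The equivalence mentioned in the first paragraph is easy to check numerically; here is a small Monte Carlo sketch of my own (exponential model with rate θ; nothing here is from the article):

```python
# Sketch: for Exponential(rate=theta), the variance of the score and the
# negative expected second derivative of the log-likelihood both estimate
# the same Fisher information, 1/theta^2.
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0
x = rng.exponential(scale=1 / theta, size=200_000)

score = 1 / theta - x          # d/dtheta [log theta - theta*x]
I_var = score.var()            # definition: variance of the score
I_curv = 1 / theta**2          # -E[d^2/dtheta^2 log f] (constant here)

print(I_var, I_curv)           # both close to 1/theta^2 = 0.25
```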

The Cramér–Rao lower bound is not just for unbiased estimators


My understanding is that the CRLB exists for biased estimators too. However, it is considerably simpler in the unbiased case, where the form of the bound does not depend upon the estimator used (it is the same for all unbiased estimators). Might it not be worth stating this somewhere, so as not to give the impression that the CRLB exists ONLY for unbiased estimators?

MB

86.144.38.235 15:15, 11 April 2007 (UTC)

Sounds like a good idea, go ahead and be bold! --Zvika 12:40, 12 April 2007 (UTC)
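For reference while someone writes it up: the biased-estimator version I have seen in estimation texts reads as below (my transcription, with b the bias of the estimator T; please verify against a source before adding):

```latex
% Cramér–Rao bound for an estimator T with bias b(\theta) = E[T] - \theta,
% assuming b is differentiable:
\operatorname{var}(T) \ge \frac{\bigl(1 + b'(\theta)\bigr)^{2}}{I(\theta)}
```

Note how it reduces to the usual 1/I(θ) when b ≡ 0, which is the unbiased case MB describes.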

Some major changes


I've made some significant changes to the article. I hope you like them. The main difference is that I changed the order so that the statement of the bound, in its various forms, appears before examples and proofs. This seems to me the correct order for an encyclopedia. I also added a version of the bound for biased estimators, as requested by User:86.144.38.235. Finally, I tried to simplify the mathematical expressions a little bit, especially in the vector case. --Zvika 17:32, 19 May 2007 (UTC)

Aitken and Silverstone had previously discovered this bound. I am thinking of adding a reference as soon as I figure out how to cite it properly. "On the Estimation of Statistical Parameters", Proceedings of the Royal Society of Edinburgh, 1942, vol. 61, pp. 186-194. —Preceding unsigned comment added by 218.101.112.13 (talk) 20:54, 7 November 2008 (UTC)

Error and Clarity Suggestion:

"This in this case, V = derivative of Log PDF..."
A) It is unclear whether d/d sigma^2 means the second derivative with respect to sigma or the first derivative with respect to sigma squared.
B) Taking logs and the derivative in one step is hard for a reader to follow, so break the calculation up into multiple steps.
C) (X-mu)^2/2*sigma^2 implies that sigma^2 is in the numerator, but sigma^2 belongs in the denominator.
CreateW (talk) 23:51, 12 February 2011 (UTC)
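Regarding (B), the calculation can be broken into explicit steps; the following is my own sympy sketch (s2 standing for sigma^2 treated as a single variable, which also sidesteps the ambiguity in (A)):

```python
# Sketch: derive V = d(log f)/d(sigma^2) in separate steps, treating
# sigma^2 as one symbol s2 (hypothetical naming, not the article's).
import sympy as sp

x, mu = sp.symbols('x mu', real=True)
s2 = sp.symbols('s2', positive=True)   # s2 = sigma^2

# Step 1: the density, with sigma^2 clearly in the denominator
f = sp.exp(-(x - mu)**2 / (2 * s2)) / sp.sqrt(2 * sp.pi * s2)

# Step 2: take the logarithm first
logf = sp.log(f)

# Step 3: only then differentiate with respect to sigma^2
V = sp.simplify(sp.diff(logf, s2))
print(V)  # equals (x - mu)**2/(2*s2**2) - 1/(2*s2)
```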

redirect from Information inequality


I find the redirect from Information inequality a bit misleading; I was looking for Inequalities in information theory... - Saibod (talk) 17:37, 20 October 2011 (UTC)

I have replaced the redirect with a disambig page. However, note that presently no articles use a wikilink to Information inequality. Melcombe (talk) 08:47, 21 October 2011 (UTC)
Thanks! I didn't feel "bold" enough to do it myself, so it encourages me to see that someone did it. Is it a problem that no articles use a wikilink to information inequality? - Saibod (talk) 08:46, 26 October 2011 (UTC)
Not particularly, as it provides a route in for anyone searching for the term "information inequality". But it may be worth checking articles for the phrase "information inequality" to see if there needs to be a wikilink either to one of the articles or to the disambig page. Melcombe (talk) 08:53, 27 October 2011 (UTC)

Definition of function ψ(θ) is confusing.


θ — Preceding unsigned comment added by Ttenraba (talkcontribs) 12:28, 29 May 2017 (UTC)

External links modified

Hello fellow Wikipedians,

I have just modified one external link on Cramér–Rao bound. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FAQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 04:26, 14 August 2017 (UTC)

General scalar case


In the general scalar case, in my opinion, it is not a matter of T being a biased estimator of θ, but of T being an unbiased estimator of ψ(θ). Madyno (talk) 12:14, 18 February 2021 (UTC)
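If it helps the discussion, the general scalar bound as I understand it treats T as an unbiased estimator of ψ(θ); this is my transcription, so please check it against the article's notation:

```latex
% General scalar case: T estimates \psi(\theta) with E[T] = \psi(\theta)
\operatorname{var}(T) \ge \frac{\bigl(\psi'(\theta)\bigr)^{2}}{I(\theta)}
```

With ψ(θ) = θ this reduces to the familiar unbiased bound 1/I(θ).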

Very interesting connection to the uncertainty principle and Fourier transform


https://link.springer.com/article/10.1007/s11760-019-01571-9 Generalized Cramér–Rao inequality and uncertainty relation for fisher information on Fractional Fourier Transform (2019) Biggerj1 (talk) 21:25, 25 May 2023 (UTC)

Biased estimators


Can somebody find a source for the equations on this website? https://theoryandpractice.org/stats-ds-book/statistics/cramer-rao-bound.html If the formula is correct, it would be worth adding it to the article. Biggerj1 (talk) 07:00, 8 June 2023 (UTC)

Condition for efficient estimator


The Kay book mentions that an efficiency of one can only be achieved when the score can be factored as

∂ ln p/∂θ = I(θ) · (g(x) − θ),

for some functions I and g.

He also gives a counter-example with phase estimation.

Can someone confirm? It is just slightly above my expertise. 128.243.2.30 (talk) 17:20, 19 June 2024 (UTC)
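Not a confirmation from Kay, but the factorization is easy to see in the textbook Gaussian-mean case; here is a Monte Carlo sketch of my own (all names hypothetical):

```python
# Sketch: for n i.i.d. N(theta, sigma^2) samples the score factors as
# I(theta) * (g(x) - theta) with g(x) = sample mean, so the sample mean
# attains the CRB (efficiency one). Phase estimation admits no such
# factorization, per the counter-example mentioned above.
import numpy as np

rng = np.random.default_rng(1)
theta, sigma, n, trials = 1.5, 2.0, 50, 20_000

x = rng.normal(theta, sigma, size=(trials, n))
g = x.mean(axis=1)                       # g(x): the sample mean
I = n / sigma**2                         # Fisher information of the sample

# Score of the whole sample: sum_i (x_i - theta) / sigma^2
score = (x - theta).sum(axis=1) / sigma**2
assert np.allclose(score, I * (g - theta))   # the factorized form

print(g.var(), 1 / I)  # estimator variance vs. 1/I(theta): about equal
```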