Talk:Informant (statistics)

Insight


Could someone write a few words on what the significance of the score is?--Adoniscik (talk) 03:21, 10 January 2008 (UTC)

Statistic?


There is an apparent contradiction between the present article and cited article "Sufficiency (statistics)". The present article says, "Note that V is a function of θ and the observation X. The score V is a sufficient statistic for θ." However, "Sufficiency (statistics)" states, "A quantity T(X) that depends on the (observable) random variable X but not on the (unobservable) parameter θ is called a statistic." This implies that V (being a function of both X and θ) is not a statistic. Thus, it also follows that V cannot be a sufficient statistic for θ. Is there a resolution to this apparent contradiction? PLP Haire 22:13, 4 August 2006 (UTC)

You're right. It's really not clearly explained here. I'll be back soon... Michael Hardy 22:22, 4 August 2006 (UTC)
I agree, you're right. Having calculated a few scores recently, the score is clearly not necessarily a statistic, and so cannot be a sufficient statistic. The sentence ought to be removed (or at least replaced with one saying that if it is a statistic, then it is sufficient if anyone can prove that). — Preceding unsigned comment added by 89.240.198.146 (talk) 17:43, 23 May 2007 (UTC)

You guys are right. I have removed this clearly erroneous statement. --Zvika 08:12, 24 May 2007 (UTC)
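
To illustrate the point settled above with a standard textbook example (notation mine, not taken from the article): for a single observation $X \sim N(\theta, \sigma^2)$ with $\sigma^2$ known,

\[
V(\theta; X) \;=\; \frac{\partial}{\partial\theta}\log f(X;\theta)
\;=\; \frac{\partial}{\partial\theta}\!\left(-\frac{(X-\theta)^2}{2\sigma^2} + \text{const}\right)
\;=\; \frac{X-\theta}{\sigma^2},
\]

which depends on the unknown $\theta$ as well as on $X$, and so is not a statistic in the sense used at Sufficiency (statistics).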

Likelihood maximization?


It seems like the score is the derivative of the cost function for a likelihood maximization, e.g., if you are applying a nonlinear optimization algorithm to find an MLE. Is that right? Should it be said? 71.184.37.150 (talk) 00:57, 8 May 2009 (UTC)

Yes, it is. This is called Fisher scoring. I will add a link. --Zvika (talk) 07:06, 8 May 2009 (UTC)
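
For readers landing here, a minimal sketch of the idea (the Bernoulli model and the function below are my own illustration, not from the article or any particular library): the score supplies the search direction and the Fisher information the step size in a Newton-type update.

```python
import numpy as np

def fisher_scoring_bernoulli(x, theta=0.5, n_iter=25, tol=1e-10):
    """Estimate a Bernoulli parameter theta by Fisher scoring.

    Update rule: theta <- theta + score / information, i.e. the score
    (gradient of the log-likelihood in theta) gives the direction and
    the Fisher information sets the step size. Boundary cases (theta
    reaching 0 or 1) are ignored in this sketch.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    for _ in range(n_iter):
        # Score of n i.i.d. Bernoulli(theta) observations:
        # d/dtheta of sum_i [x_i log(theta) + (1 - x_i) log(1 - theta)]
        score = np.sum(x) / theta - (n - np.sum(x)) / (1 - theta)
        # Expected (Fisher) information of the same sample
        info = n / (theta * (1 - theta))
        step = score / info
        theta += step
        if abs(step) < tol:
            break
    return theta

# For this model the first update already lands on the sample mean, the MLE:
print(fisher_scoring_bernoulli([1, 0, 1, 1, 0]))  # ~0.6
```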

Division by zero?


Maybe I'm missing something here but should it be stated what happens if $L(\theta; x) = 0$? I assume the score is defined to be zero in such cases? Saraedum (talk) 01:33, 12 July 2009 (UTC)

Regularity conditions?


Does the property that the expected score is zero hold only under the regularity conditions of the Cramér–Rao bound? —Preceding unsigned comment added by 74.205.127.225 (talk) 05:14, 20 October 2009 (UTC)

I think it requires similar regularity conditions but I don't know if they're exactly the same. --Zvika (talk) 10:58, 20 October 2009 (UTC)
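
For reference, the usual derivation shows exactly where regularity enters (a sketch, assuming a density $f(x;\theta)$ whose support does not depend on $\theta$ and for which differentiation under the integral sign is permitted):

\[
\operatorname{E}_\theta[V]
= \int \frac{\partial \log f(x;\theta)}{\partial \theta}\, f(x;\theta)\, dx
= \int \frac{\partial f(x;\theta)}{\partial \theta}\, dx
= \frac{\partial}{\partial \theta} \int f(x;\theta)\, dx
= \frac{\partial}{\partial \theta}\, 1
= 0 .
\]

The third equality is the interchange of differentiation and integration, which is the same kind of condition used in deriving the Cramér–Rao bound.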

Bernoulli Example


Can someone please double check the Bernoulli example? In particular the second equality. I feel like it may need additional explanation. Thanks. 82.51.68.234 (talk) 15:58, 20 September 2011 (UTC)
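
For anyone checking: a standard derivation of the Bernoulli score (my notation, not compared line by line with the article's version) is

\[
\log L(\theta; X) = X\log\theta + (1-X)\log(1-\theta),
\qquad
V(\theta; X) = \frac{X}{\theta} - \frac{1-X}{1-\theta} = \frac{X-\theta}{\theta(1-\theta)},
\]

so the equality in question should reduce to a common-denominator step of this kind.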

Parameter vector


Bender235 has been revising some of the statistics articles. I admit to learning things in following some of his/her work. However, I want to ask about the first sentence in the lede (which is related to a sentence in the body of the article):

"In statistics, the score (or informant) is the gradient of the log-likelihood function with respect to the parameter vector."

Is this use of "with respect to the parameter vector" standard? It kind of threw me at first, so I wanted to ask. Attic Salt (talk) 19:07, 19 June 2019 (UTC)
One might ask, what else would the gradient be with respect to? Attic Salt (talk) 19:19, 19 June 2019 (UTC)

The way I understand your question is whether the "w.r.t. the parameters" is superfluous. In some sense it is: the likelihood function is taken as a function of the parameters only. And some textbooks aren't that explicit with their definition of the score, for instance Hayashi (2000, p. 50), while others are, for instance Cramer (1986, p. 16). I'm in favor of mentioning it, since the definition of the likelihood function is stated in another article, not in the paragraph above (like in a textbook), so being precise doesn't hurt. --bender235 (talk) 19:37, 19 June 2019 (UTC)
Okay, I'm okay with leaving it, but I've never seen the words "gradient with respect to parameter vector". I could grow to like it, however. Attic Salt (talk) 19:39, 19 June 2019 (UTC)
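
For concreteness (my notation, not a quote from the article), the sentence under discussion amounts to

\[
s(\theta) \;=\; \nabla_{\theta} \log L(\theta; x)
\;=\; \left(\frac{\partial \log L}{\partial\theta_1}, \ldots, \frac{\partial \log L}{\partial\theta_k}\right)^{\!\top},
\]

i.e. the gradient is taken in the $k$ components of $\theta$ while the data $x$ are held fixed.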

Machine Learning subsection


The machine learning subsection under Applications does not reference modern, score-based machine learning methods or literature. Given recent growth in the area, I'd like to expand the section. The current content seems to be only an explanation of why the score function is named as it is; if so, I think it would make sense to move that content into another section (maybe Definition?) rather than appending or prepending new content. tiral (talk) 15:40, 12 July 2023 (UTC)

I propose that the section be removed altogether. This article is about the Fisher score function, where the gradient of the potential is taken with respect to the parameter vector, whereas score-based generative modeling uses the Stein score, where the gradient of the potential is taken with respect to the spatial input. It is confusing to include information on applications of the Stein score to machine learning in an article about the Fisher score. Bigsnoopyguyhuh (talk) 13:10, 6 September 2024 (UTC)
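
To spell out the contrast drawn above (notation mine):

\[
s_{\text{Fisher}}(\theta) = \nabla_{\theta} \log p(x;\theta)
\qquad\text{vs.}\qquad
s_{\text{Stein}}(x) = \nabla_{x} \log p(x),
\]

where the first differentiates in the parameters with the data fixed, and the second differentiates in the data (spatial) argument with no parameter derivative involved.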