Talk:Mean signed deviation
This article is rated Stub-class on Wikipedia's content assessment scale.
Comment
This is the most clumsily written article I've seen in a long time, on lots of levels. Michael Hardy (talk) 01:51, 3 May 2010 (UTC)
- "the difference between the estimator $\hat{\theta}$ and the estimated variable $\theta$" presupposes that the reader has been told that something is to be estimated. But that's not there.
- "divided by the size of the vectors $n$." presupposes that there are some vectors involved, but the article hasn't said anything about that.
- $\sum^{n}_{i=0}$
- Should that have said $\sum^{n}_{i=1}$?
- Here's a guess (but we shouldn't have to guess; we should have been told): these are the n components of a vector that is being estimated, as opposed to n observations in a sample.
- This also neglects Wikipedia's conventions in a lot of obvious ways and doesn't correctly use TeX.
Michael Hardy (talk) 01:58, 3 May 2010 (UTC)
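To make the indexing question above concrete, here is a minimal sketch of the formula under the guessed interpretation (a sum over i = 1..n paired estimated/true values, i.e. n terms, not n + 1). The function name and the sample numbers are hypothetical, chosen only for illustration.

```python
# Hypothetical sketch: mean signed deviation of n estimated values
# against n true values, summing i = 1..n (exactly n terms).
def mean_signed_deviation(estimated, true_values):
    n = len(estimated)
    # Sum of signed differences (estimate minus true value), divided by n.
    return sum(e - t for e, t in zip(estimated, true_values)) / n

# Illustrative data: three estimates vs. three true values.
msd = mean_signed_deviation([2.0, 3.0, 6.0], [1.0, 4.0, 5.0])
print(msd)
```

With these numbers the signed differences are +1, -1, and +1, so the mean signed deviation is 1/3: the positive and negative errors partially cancel, which is exactly what distinguishes this quantity from the mean absolute deviation.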
- I have tried to improve on some of the above. There still remains the question of an appropriate citation. A quick Google search shows that the term is used quite extensively, but I didn't spot a suitable basic source. Melcombe (talk) 16:37, 5 July 2010 (UTC)
"definition"
If the mean signed difference is a mean of differences between estimators and things to be estimated, then how can it be a sample statistic? It would be unobservable; thus not a statistic. Michael Hardy (talk) 15:31, 6 May 2015 (UTC)
Hmmmm, not sure about this
I did a quick check of Google for the term "mean signed deviation" before the date of the article's creation (April 2010). Total hits: 30. Then I searched for the term in the five years since the article was created: 1,436 hits. Without getting into the statistical theory, I wonder if this is a coincidence. Maybe Google is indexing lots more and this has always been an important statistic, or maybe "mean signed deviation" is a hot new topic among statisticians. OR, maybe this article should be pulled until there is an authoritative source. The term is not in any of the statistics books on my shelves (admittedly not advanced). I checked Wolfram, and the term is not a defined one; they simply note the obvious: the mean of signed deviations from the mean will always be zero. How useful is that? Every-leaf-that-trembles (talk) 19:29, 25 January 2016 (UTC)
Unraveling deviation, difference and error
User:Uwe Lange, 2017-08-15
The statement "mean signed difference, deviation, or error" sounds weird. I wonder whether the statistical definitions in Wikipedia generally should or could follow a certain categorisation. Using distance as an umbrella term covering all three terms to be defined, just to avoid confusion, I propose:
- Error corresponds to a distance between a value and a value which represents the "true" value. If an independent variable $x$ is assumed to be the (ground) truth, it is $y_i - x_i$.
- Deviation corresponds to a distance between a value and a defined reference value, e.g. $x_i - \bar{x}$ (the mean) or the analogous distance from the median.
- Difference is a distance between any two values, e.g. $y_i - x_i$.
In other words:
- If it's an error, call it error. If it's a deviation, call it deviation. Otherwise call it difference.
- A deviation is also a difference. An error is also a difference. But not vice versa.
- Consequently, mean signed difference ≠ mean signed deviation ≠ mean signed error.
Interestingly, the three definitions proposed above are confirmed by the links under "See also".
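The proposed categorisation above can be illustrated with a small numeric sketch. The data here are hypothetical, and the variable names (`x` as the assumed ground truth, `y` as a second variable such as predictions) are chosen only to match the notation in the bullet points:

```python
# Hypothetical data following the notation above:
x = [1.0, 2.0, 3.0, 4.0]   # assumed (ground) truth
y = [1.5, 1.8, 3.2, 4.1]   # any second variable, e.g. predictions

mean_x = sum(x) / len(x)

# Error: distance to the "true" value.
errors = [yi - xi for yi, xi in zip(y, x)]
# Deviation: distance to a defined value (here: the mean of x).
deviations = [xi - mean_x for xi in x]
# Difference: distance between any two values.
differences = [yi - xi for yi, xi in zip(y, x)]

# Every error is also a difference (the two lists coincide here),
# but a deviation from the mean is generally neither.
print(errors == differences)   # the error list is one kind of difference
print(sum(deviations))         # signed deviations from the mean sum to zero
```

Note that the deviations from the mean sum to exactly zero, which is the point raised in the section above about why a "mean signed deviation" from the mean is trivially zero, whereas a mean signed error generally is not.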
This article, as it stands, does not provide significant information about statistical measures, but rather provokes misunderstandings.
— Preceding unsigned comment added by Uwe Lange (talk • contribs) 08:54, 15 August 2017 (UTC)
Error definition discrepancy
In this article, the error is defined as the estimated/predicted value minus the actual value. In the article about the tracking signal, the prediction error is defined as the actual value minus the estimated/predicted value: [[1]]. This seems contradictory.
In the article "Errors and residuals", an error is defined as the amount by which an observation differs from its expected value (observation − expected value): [[2]]. In the case of prediction, however, what is the observation and what is the expected value? I predict a certain value, so that is what I expect, but I observe another value. Or, I expect the value to be the actual value and observe my prediction. Is this properly defined somewhere? — Preceding unsigned comment added by 31Noah (talk • contribs) 19:50, 23 January 2019 (UTC)
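To make the discrepancy concrete, here is a small sketch of the two sign conventions side by side. The data are hypothetical; the point is only that the two definitions differ by a sign, so the mean signed error flips sign depending on which convention an article adopts:

```python
# Hypothetical actual and predicted series.
actual    = [10.0, 12.0, 9.0]
predicted = [11.0, 11.0, 10.0]

# Convention in this article: predicted minus actual.
errors_pred_minus_actual = [p - a for p, a in zip(predicted, actual)]
# Convention in the tracking-signal article: actual minus predicted.
errors_actual_minus_pred = [a - p for a, p in zip(actual, predicted)]

# The two conventions are exact negatives of each other;
# magnitudes agree, only the sign of each error (and hence of
# the mean signed error) is flipped.
assert all(e1 == -e2 for e1, e2 in
           zip(errors_pred_minus_actual, errors_actual_minus_pred))
```

So neither convention is "wrong" in isolation, but as the comment above notes, the two articles should state their convention explicitly rather than leaving the reader to infer the sign.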