Talk:Weighted arithmetic mean
This article is rated C-class on Wikipedia's content assessment scale.
Merge Convex combination into Weighted mean?
Convex combinations and weighted means are, on a fundamental level, precisely the same things. The major difference between them seems to be the way they are written. In a weighted mean, the normalizing factor is written outside the sum sign; in a convex combination, it is absorbed into the coefficients, which then sum to 1. Is this notational distinction a big enough deal to warrant separate articles? Melchoir 02:12, 19 April 2006 (UTC)
While I wait, I'll add these articles to each others' categories. Melchoir 02:13, 19 April 2006 (UTC)
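For illustration, a short identity in my own notation (not taken from either article): normalizing the weights turns the weighted-mean formula into a convex combination, and vice versa.

```latex
\bar{x} \;=\; \frac{\sum_{i=1}^n w_i x_i}{\sum_{i=1}^n w_i}
        \;=\; \sum_{i=1}^n \lambda_i x_i,
\qquad \lambda_i = \frac{w_i}{\sum_{j=1}^n w_j},
\qquad \lambda_i \ge 0,\quad \sum_{i=1}^n \lambda_i = 1 .
```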
- It sounds to me like a "convex combination" is an application of the weighted mean. If this is true, it should contain a link to the weighted mean article for examples on how to do the calculations, but should remain a separate article (since the average person reading about the weighted mean will have absolutely no interest in the "convex combination" application). StuRat 03:51, 19 April 2006 (UTC)
- Actually, the more I think about it, the more I think they should stay separate. Partly because of the reasons you bring up, and partly because the formalism can actually be important at times. Well, I'll remove the tags in a bit if no one else speaks up. Thanks for commenting! Melchoir 06:14, 19 April 2006 (UTC)
- You're welcome. StuRat 21:15, 19 April 2006 (UTC)
- ...done. Melchoir 08:44, 30 April 2006 (UTC)
- Also, there is a case where you don't want to normalize the weights (when you use "repeat"-type weights): you want to be able to perform a Bessel correction for an unbiased computation, and if you normalize the weights you can no longer recover the total sample size N. Lrq3000 (talk) 11:11, 13 June 2013 (UTC)
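A small sketch of that point, in Python with made-up numbers (variable names are mine): with raw "repeat"-type weights the total sample size N = sum(w) is still available for a Bessel-style correction, but once the weights are normalized to sum to 1, that count information is gone.

```python
# Illustrative sketch with made-up data; "repeat" weights count occurrences.
x = [2.0, 3.0, 5.0]
w = [3, 1, 2]          # observation 2.0 occurred 3 times, etc.

N = sum(w)             # total sample size, recoverable only from raw weights
mean = sum(wi * xi for wi, xi in zip(w, x)) / N

# Biased (population-style) weighted variance.
var_biased = sum(wi * (xi - mean) ** 2 for wi, xi in zip(w, x)) / N

# Bessel-corrected (unbiased) variance for repeat-type weights uses N - 1.
var_unbiased = sum(wi * (xi - mean) ** 2 for wi, xi in zip(w, x)) / (N - 1)

# If the weights are normalized first, N is no longer recoverable:
w_norm = [wi / N for wi in w]   # sums to 1; the count information is lost
```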
Confusing Lead Sentence?
edit"The weighted mean, or weighted average, of a non-empty list of data
with corresponding non-negative weights
at least one of which is positive, is..."
So "at least one of which is positive"? I assume it can't be the weights, since they are stated to always be non-negative. Is it the data that always has to be positive, and why is that the case? CallipygianSchoolGirl (talk) 15:15, 21 January 2008 (UTC)
- It is the weights. There is a small difference between nonnegative and positive: zero is nonnegative but it is not positive. So what the text says is that not all the weights can be zero. However, it is a bit confusing so I reformulated it. -- Jitse Niesen (talk) 17:13, 21 January 2008 (UTC)
Covariance
The covariance section is confusing to me. It seems like the covariance of the mean of two independent random variables, X and Y, should be
\operatorname{cov}\left(\frac{X+Y}{2}\right) = \frac{\sigma_X^2 + \sigma_Y^2}{4}
Generalizing this, it seems like the covariance of a mean should be
- \sigma_{\bar{x}}^2 = \sum_{i=1}^n w_i^2 \sigma_i^2 .
Is this right? It seems different than the result of
\sigma_{\bar{x}}^2 = \mathbf{w}^\top \boldsymbol{\Sigma} \mathbf{w}
listed on this page. —Ben FrantzDale (talk) 15:02, 17 September 2008 (UTC)
- That is the variance of the mean. Since cov(X, X) is the same as var(X), we have
- \operatorname{var}\left(\frac{X+Y}{2}\right) = \operatorname{cov}\left(\frac{X+Y}{2}, \frac{X+Y}{2}\right)
- and
- \operatorname{var}\left(\frac{X+Y}{2}\right) = \frac{\operatorname{var}(X) + 2\operatorname{cov}(X,Y) + \operatorname{var}(Y)}{4}
- So what you've shown is the standard result that
- \operatorname{var}\left(\frac{X+Y}{2}\right) = \frac{\sigma_X^2 + \sigma_Y^2}{4} \quad \text{when } \operatorname{cov}(X,Y) = 0.
- Michael Hardy (talk) 18:05, 17 September 2008 (UTC)
- .... oh: you assumed independence, but of course the matrix result says clearly that that's a case where they're NOT independent. You relied on cov(X, Y) = 0. But obviously cov(X, Y) would be the corresponding entry in the given covariance matrix. Michael Hardy (talk) 18:06, 17 September 2008 (UTC)
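A numerical sketch of the matrix result being discussed (my own NumPy code, with made-up numbers): the variance of a weighted mean is w^T Σ w, which collapses to Σ w_i² σ_i² only when the off-diagonal covariances are zero.

```python
import numpy as np

# Made-up weights (summing to 1) and a made-up covariance matrix.
w = np.array([0.5, 0.3, 0.2])
Sigma = np.array([[4.0, 1.0, 0.0],
                  [1.0, 9.0, 2.0],
                  [0.0, 2.0, 1.0]])

# General result: Var(sum_i w_i X_i) = w^T Sigma w
var_general = w @ Sigma @ w

# If the X_i were uncorrelated, only the diagonal would contribute:
var_uncorrelated = np.sum(w**2 * np.diag(Sigma))

print(var_general, var_uncorrelated)  # they differ because the cov terms are nonzero
```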
"Dealing with variance" and "Weighted sample variance"
Thank you for these and the preceding very useful sections. I have a few suggestions and a question. First, the starting sentence under "Weighted sample variance" is misleading, as this section is not about the uncertainty or error in the weighted mean, but is giving the formula for the weighted variance of the sample. Second, the formula for the weights at the start of "Dealing with variance" assumes that the weights are normalized already. I suggest making this plainer. Third, the reference at http://pygsl.sourceforge.net/reference/pygsl/node36.html is not formatted correctly -- I suspect it is old and should be http://pygsl.sourceforge.net/reference/pygsl/node52.html . Finally, my question: the last formula in "Dealing with variance" has a factor in it that seems at odds with the notation used elsewhere on the page; should it be modified to match? Stgaser45 (talk) 13:36, 21 June 2009 (UTC)
- I also found the "weighted sample variance" section misleading. The equation:
\sigma^2_\mathrm{weighted} = \sum_{i=1}^n w_i (x_i - \mu^*)^2
assumes normalized weights. It should be replaced with:
\sigma^2_\mathrm{weighted} = \frac{\sum_{i=1}^n w_i (x_i - \mu^*)^2}{\sum_{i=1}^n w_i}
Drdan14 (talk) 19:18, 30 October 2009 (UTC)
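A quick numerical check of this point (my own code, arbitrary numbers): the formula without the Σw_i denominator only agrees with the general formula once the weights already sum to 1.

```python
def weighted_mean(x, w):
    return sum(wi * xi for wi, xi in zip(w, x)) / sum(w)

def weighted_var_general(x, w):
    """Biased weighted variance, valid for any positive weights."""
    m = weighted_mean(x, w)
    return sum(wi * (xi - m) ** 2 for wi, xi in zip(w, x)) / sum(w)

def weighted_var_normalized_only(x, w):
    """Same numerator without dividing by sum(w): correct only if sum(w) == 1."""
    m = weighted_mean(x, w)
    return sum(wi * (xi - m) ** 2 for wi, xi in zip(w, x))

x = [1.0, 2.0, 4.0]
w = [2.0, 1.0, 1.0]                 # sums to 4, not 1
w_norm = [wi / sum(w) for wi in w]  # sums to 1

# The two formulas agree once the weights are normalized:
assert abs(weighted_var_general(x, w) - weighted_var_normalized_only(x, w_norm)) < 1e-12
```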
"Correcting for Over/Under Dispersion"
Does anyone have a reference for this section, or a derivation of why the scaling factor it introduces is correct? —Preceding unsigned comment added by 64.22.160.1 (talk) 13:08, 7 April 2010 (UTC)
References, literature, further reading
I think this article is brilliant; it addresses an important topic in much detail. However, it neither supplies (citable) references nor pointers to further literature. Does anybody know a good book on this topic? (I am specifically interested in dealing with variance etc.) --Hokanomono ✉ 17:07, 24 September 2009 (UTC)
I have revamped the weighted variance section, added some references and added a new section about weighted covariance, you might take a look at it. And feel free to correct it if you find better equations! --89.83.73.89 (talk) 18:19, 10 June 2013 (UTC)
Exponentially decreasing weights
This is shaping up to be a good article with treatment of a wide range of related problems. However, I can't get my head around this sentence in the last section: "at step , the weight approximately equals , the tail area the value , the head area ". It seems to simply be missing some verbs, but I don't know enough on the topic to confidently correct it myself. Anyone know what's going on? Tomatoman (talk) 22:56, 30 September 2009 (UTC)
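I can't reconstruct the quoted sentence either, but for anyone else puzzling over that section, here is a small sketch (my own code and notation, not the article's) of how exponentially decreasing weights typically behave: with w_i proportional to (1 − Δ)^i the weights form a geometric series, so the "tail" mass beyond the first n terms is (1 − Δ)^n of the total, and the "head" mass is the complement.

```python
# Sketch of exponentially decreasing weights; Delta and the indexing are my
# own notation and may not match the article's.
Delta = 0.1
n_terms = 50

w = [(1 - Delta) ** i for i in range(n_terms)]   # w_i proportional to (1 - Delta)^i
total = 1 / Delta                                # sum of the infinite geometric series

# Fraction of total weight in the first n terms ("head") and beyond ("tail"):
n = 20
head = sum(w[:n]) / total          # = 1 - (1 - Delta)**n
tail = 1 - head                    # = (1 - Delta)**n
```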
Statistical properties
In the middle of that paragraph: "For uncorrelated observations with standard deviations σ_i, the weighted sample mean has standard deviation \sigma(\bar x) = \sqrt{\sum_{i=1}^n w_i^2 \sigma_i^2}"
This would imply the standard deviation of the mean going up with n (simply assume w_i = σ_i = 1)! Is it missing a 1/(\sum_i w_i)^2 under the square root? —Preceding unsigned comment added by 87.174.74.56 (talk) 11:58, 23 January 2010 (UTC)
- The weights here wi sum up to 1. Dmcq (talk) 16:02, 23 January 2010 (UTC)
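A quick numerical illustration of Dmcq's point (my own code): once the weights are normalized to sum to 1, the quoted formula gives σ/√n for equal weights, so it shrinks, rather than grows, with n.

```python
import math

def stdev_of_weighted_mean(sigmas, weights):
    """sigma(xbar) = sqrt(sum_i w_i^2 sigma_i^2), with weights summing to 1."""
    return math.sqrt(sum(w**2 * s**2 for w, s in zip(weights, sigmas)))

for n in (4, 16, 64):
    sigmas = [1.0] * n
    weights = [1.0 / n] * n          # normalized equal weights
    print(n, stdev_of_weighted_mean(sigmas, weights))  # prints 1/sqrt(n): 0.5, 0.25, 0.125
```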
Notation for multiplication of specific numbers
As far as I can recall, I have never seen the notation (a)b before for multiplying specific numbers. I think the example at the top of the article would be more accessible to our readers if we instead used the more standard notation a × b, when a and b are specific numbers. (I've used Unicode there, which might not display properly on all browsers, because I don't know LaTeX very well. However, the LaTeX version should work on all browsers.)--greenrd (talk) 08:36, 1 October 2010 (UTC)
- Good point. I've changed the LaTeX accordingly. StuRat (talk) 04:25, 7 June 2012 (UTC)
Unreadable for the Everyman
If you're a mathematician then I'm sure this Wiki page is great. If you're not, then this Wiki page leaves much to be desired. It gives absolutely no lay answer or examples, ergo, you need a math degree for it to be relevant. How about one of you Good Will Huntings start off the article by adding a dumbed-down section, so that those who aren't math gods can actually understand what "weighted mean" means, because this article in its current state doesn't help me, nor I'm sure most people who find it. Thanks. :) — Preceding unsigned comment added by Deepintexas (talk • contribs) 07:24, 27 December 2010 (UTC)
- That's why I added the Weighted_mean#Examples section, which requires no math skills beyond division, and is enough material for the math newbie. Are you saying you can't understand that ? (As for the rest, it really does get complex, so probably shouldn't be attempted by non-mathematicians.) StuRat (talk) 03:37, 7 June 2012 (UTC)
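For readers landing here, a minimal worked example in the spirit of that Examples section (numbers made up): two classes take the same test, one of 20 students averaging 80 and one of 30 students averaging 90; the overall average weights each class mean by its size.

```python
# Made-up example: class sizes act as the weights on the class means.
sizes = [20, 30]
means = [80.0, 90.0]

weighted_mean = sum(n * m for n, m in zip(sizes, means)) / sum(sizes)
print(weighted_mean)  # 86.0, closer to 90 because the larger class counts more
```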
Reverse calculate
So, if you know the original data points, and you know the data points produced by some method of weighted averaging, can you easily discover the method by which the weighted average data points were made?
I'm sure my question doesn't make sense, so I'll give an example. I have the following original data points: 98.2, 97.8, 97.7, 97.1, 97.5, 97.4, 97.6, 97.3, 97.0.
The weighted average data points are as follows: 98.93276, 98.78197, 98.63793, 98.4332, 98.30897, 98.187965, 98.109695, 98.00191, 97.86853.
Is it possible to discover what weight I gave to each original data point to produce the weighted average data point? If so, how? And if it is possible, let's include it in the article.
- Am I missing something ? Just divide the weighted data points by the unweighted ones to get the weights. StuRat (talk) 03:33, 7 June 2012 (UTC)
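A sketch of StuRat's suggestion (my own code): if each output really is a single input times a weight, element-wise division recovers the weights; if the outputs came from something like a moving or exponential average over several inputs, the ratios will not be constant weights and a different model would have to be fitted.

```python
original = [98.2, 97.8, 97.7, 97.1, 97.5, 97.4, 97.6, 97.3, 97.0]
weighted = [98.93276, 98.78197, 98.63793, 98.4332, 98.30897,
            98.187965, 98.109695, 98.00191, 97.86853]

# Element-wise ratios; only meaningful if each weighted point depends on
# exactly one original point.
ratios = [wv / ov for wv, ov in zip(weighted, original)]
print(ratios)
```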
Biased vs. Unbiased
The Weighted Sample Variance section emphasizes that the variance is "biased" without explaining. While the 'unbiased estimator' is linked and briefly mentioned, it seems like the concept of bias should be addressed explicitly. --All Clues Key (talk) 23:41, 6 June 2012 (UTC)
- I had the same feeling as you and thus I revamped the section a bit. You will now find the two unbiased estimators (one for "reliability"-like weights, and one for "repeats"-like weights). — Preceding unsigned comment added by 89.83.73.89 (talk) 18:21, 10 June 2013 (UTC)
Explicitly describe the different types of weights
The article does not account for the different types of weights out there, which substantially change the equations one must use in order to take those weights into account.
For example, there are the "repeats"-type weights, which are integer values representing the number of times an observation was observed. They can be normalized (in which case they represent frequencies) or not, but basically the idea is that the bigger the weight, the more importance is given to that observation.
There are also the "reliability"-type weights, which are float values representing the variance of each measurement for an observation. This is important in some control applications to better estimate which observations are more reliable and which aren't. Here, basically the idea is that the smaller the weight (i.e., the smaller the variance), the more importance is given to that observation.
As we can see, the type of the weights used in the dataset will drastically change the equation used to compute the mean, variance and covariance.
Maybe this should be better described in the introduction of this article?
--Lrq3000 (talk) 18:28, 10 June 2013 (UTC)
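To make the distinction concrete, a small sketch (my own code; the correction factors are the usual ones for the two weight types): frequency ("repeats") weights use a Bessel-style Σw − 1 denominator, while reliability weights use the Σw − Σw²/Σw correction.

```python
def wmean(x, w):
    return sum(wi * xi for wi, xi in zip(w, x)) / sum(w)

def unbiased_var_frequency(x, w):
    """w_i = number of times observation x_i was seen (integer 'repeats')."""
    m, V1 = wmean(x, w), sum(w)
    return sum(wi * (xi - m) ** 2 for wi, xi in zip(w, x)) / (V1 - 1)

def unbiased_var_reliability(x, w):
    """w_i = relative reliability of x_i (e.g., proportional to 1/variance)."""
    m, V1 = wmean(x, w), sum(w)
    V2 = sum(wi ** 2 for wi in w)
    return sum(wi * (xi - m) ** 2 for wi, xi in zip(w, x)) / (V1 - V2 / V1)

x = [2.0, 3.0, 5.0, 7.0]
print(unbiased_var_frequency(x, [2, 1, 1, 1]))    # treats 2.0 as observed twice
print(unbiased_var_reliability(x, [2, 1, 1, 1]))  # treats 2.0 as twice as reliable
```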
I reinstated frequency (aka repeats) type weights for covariance matrix calculation as another way to calculate an unbiased weighted estimator. Please refrain from removing it in the future before reading this discussion, and discussing it here (or using good references), thanks. Lrq3000 (talk) 03:42, 8 January 2017 (UTC)
Unclear passage
The section Weighted sample covariance currently says
- the biased weighted covariance matrix is
- \Sigma = \frac{\sum_{i=1}^N w_i (x_i - \mu^*)(x_i - \mu^*)}{\sum_{i=1}^N w_i}
- Note: (x_i - \mu^*)(x_i - \mu^*) is not an element-wise multiplication, but a matrix multiplication.
First, shouldn't the x_i be boldfaced to conform with the previous vector notation, so it is conformable for subtraction with the vector μ*?
Second, I think the notation is non-standard given that the expression in parentheses is a vector, and the "Note" following it is unclear and would not be needed anyway if standard notation were used. Is this meant to contain something like (\mathbf{x}_i - \mu^*)(\mathbf{x}_i - \mu^*)^\top (with x bolded)? Duoduoduo (talk) 22:44, 10 June 2013 (UTC)
- I am the one who added this part of the article.
- Yes, exactly, Duoduoduo, although I'm not sure whether the transpose should be put on the first or the second term (it depends on whether the x and μ vectors are shaped as column or row vectors, and I don't know if there's a standard for this).
- Feel free to edit the equations to standardize them, I'm quite new to LaTeX so that would be better than what I could do anyway :)
- Lrq3000 (talk) 11:02, 13 June 2013 (UTC)
- Thanks, Duoduoduo, I saw your changes and they are perfect.
- Lrq3000 (talk) 23:22, 18 June 2013 (UTC)
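For reference, a small NumPy sketch of the resolved formula (my own code and variable names): each term is the outer product (x_i − μ*)(x_i − μ*)^T, i.e., a matrix product of a column vector with its transpose, not an element-wise square.

```python
import numpy as np

X = np.array([[1.0, 2.0],    # each row is one vector-valued observation x_i
              [2.0, 1.0],
              [4.0, 3.0]])
w = np.array([1.0, 2.0, 1.0])

mu = (w[:, None] * X).sum(axis=0) / w.sum()        # weighted mean vector mu*

# Biased weighted covariance: sum_i w_i (x_i - mu)(x_i - mu)^T / sum_i w_i
Sigma = sum(wi * np.outer(xi - mu, xi - mu) for wi, xi in zip(w, X)) / w.sum()
print(Sigma)
```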
Correcting for over- or under-dispersion
I have a question about your second formula in the paragraph "Correcting for over- or under-dispersion". You weight the formula with 1/(n−1), which I find incompatible with the following statement that the case of equal weights leads to σ/sqrt(n).
Should the 1/(n−1) be dropped, or replaced by something like n/(n−1)?
Also, a textbook reference would help here.
Thanks!
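Not a reference, but a short check under one assumption: if the formula in question is the commonly used one below (this may or may not match the article's current "second formula"), then the 1/(n−1) is exactly what makes the equal-weight case collapse to the familiar s/√n, which estimates σ/√n.

```latex
% Assumed estimator:
%   \sigma_{\bar{x}}^2 = \frac{\sum_{i=1}^n w_i (x_i - \bar{x}^*)^2}{(n-1)\sum_{i=1}^n w_i}
% With equal weights w_i = w, the w cancels:
\sigma_{\bar{x}}^2
  = \frac{w \sum_{i=1}^n (x_i - \bar{x})^2}{(n-1)\, n\, w}
  = \frac{1}{n} \cdot \frac{\sum_{i=1}^n (x_i - \bar{x})^2}{n-1}
  = \frac{s^2}{n},
% so \sigma_{\bar{x}} = s/\sqrt{n}.  Without the 1/(n-1), the equal-weight case
% would give the biased sample variance itself, not anything like \sigma^2/n.
```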
Vector-valued estimates
editThe "Accounting for correlations" section should be outside the "Vector-valued estimates" section, since it doesn't address vector-valued estimates.
Normalized Weights vs Reliability Weights Merge?
editThe section "Mathematical definition" introduces the idea of a normalized weights, w'. The section "Weighted sample variance" introduces an equivalent idea but uses the term "reliability weights". The same variable, w, is used for both frequency weights and reliability weights. At present, the presentation is confusing. I recommending some editing to make these sections consistent. I also suggest using separate variables, with w indicating arbitrary weights (e.g., frequency weights, variance-based weighting), and w' to indicate normalized weights (aka reliability weights). Also it might be useful to find some agreed on terminology for the types of weighting. I found the following source, http://www.cpc.unc.edu/research/tools/data_analysis/statatutorial/sample_surveys/weight_syntax, useful in this regard. ----MuTau (talk) 16:55, 26 May 2018 (UTC)
Title: Weighted arithmetic mean --> Weighted Estimators
The content of this wiki page is much broader than the title "Weighted arithmetic mean". A more appropriate title might be "Weighted Estimators". ----MuTau (talk) 17:51, 26 May 2018 (UTC)
Consensus
@Dave.p.hoffman: The paper you're citing states that "Several expressions for SEM, have been proposed and used in varying degrees in the atmospheric chemistry literature. (...) There is no general consensus among precipitation chemistry researchers on which expression is preferable..." Wouldn't you think that statisticians might know better? You can't come to a statistics article saying everybody is wrong while waving a non-specialist paper. fgnievinski (talk) 16:00, 2 October 2018 (UTC)
- @Fgnievinski: If the "specialists" have citations to support their claims I'd be happy to accept it, but the previous equations were offered without proof of any kind.
Now, on a more constructive note: I've tried to merge your contribution for what is new in it, which is the bootstrapping validation. I kept the notation consistent with the rest of the article. At the end you give a simplified expression. Could you try to relate it to one of the previous formulations, please? fgnievinski (talk) 16:00, 2 October 2018 (UTC)
Is the formula wrong for the variance?
Given data x_i with weights w_i, I'm pretty sure that an error estimate for the weighted mean of the x_i is going to be a function of the x_i, whereas in the text it is some undefined \sigma. Maybe \sigma means \sigma(x)?
But then it is still wrong. For example, suppose that all of the w_i are zero except for the first, because it is known that the experiment went screwy after that first measurement; then why would the error estimate of the weighted mean depend on all the discarded data points?
So maybe \sigma means the \sigma of the subset of the x_i corresponding to nonzero w_i. Dunno. Either way, something is odd. Microsiliconinc (talk) 06:04, 20 August 2022 (UTC)
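One observation on the discarded-points part, assuming the relevant formula is the one quoted in the "Statistical properties" thread above, σ(x̄) = sqrt(Σ w_i² σ_i²) with per-observation σ_i: zero-weight terms contribute nothing to the sum, so discarded points do not affect the error estimate under that reading. A tiny sketch (my code, made-up numbers):

```python
import math

sigmas = [0.5, 2.0, 3.0, 9.0]     # per-observation standard deviations sigma_i
w = [1.0, 0.0, 0.0, 0.0]          # everything after the first point discarded

# sigma(xbar) = sqrt(sum_i w_i^2 sigma_i^2); zero-weight terms vanish.
err = math.sqrt(sum(wi**2 * si**2 for wi, si in zip(w, sigmas)))
print(err)  # 0.5: only the kept observation matters
```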