Talk:Unbiased estimation of standard deviation

Only linear functions commute with taking expectations...

I use and teach statistics in psychological science and have a PhD, and despite that I cannot understand the sentence, "only linear functions commute with taking expectations..." Could someone simplify this sentence so that it's easy to comprehend, perhaps splitting it into multiple sentences? Chris (talk) 19:18, 16 September 2022 (UTC)

In my opinion, this sentence is as simple as it can get and does not require any changes. Of course, some mathematical background is required to understand what "commute" means and to follow the logic behind the proof, but this article is not the place to explain it. Perhaps a verified link to a textbook with a page number would be the most suitable amendment. AVM2019 (talk) 17:30, 7 August 2023 (UTC)
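
For readers puzzled by the sentence, here is a minimal numerical sketch of the point (my own illustration, not from the article): expectation commutes with linear functions such as aX + b, but not with nonlinear ones such as the square root, which is exactly why an unbiased estimator of the variance does not give an unbiased estimator of the standard deviation.

    # Sketch only: normal samples of size n = 5; E[s^2] is on target but E[s] is not.
    import numpy as np

    rng = np.random.default_rng(0)
    sigma, n, reps = 1.0, 5, 200_000

    samples = rng.normal(0.0, sigma, size=(reps, n))
    s2 = samples.var(axis=1, ddof=1)    # unbiased sample variance
    s = np.sqrt(s2)                     # sample standard deviation

    print(np.mean(s2))            # ~1.00: E[s^2] = sigma^2 (unbiased)
    print(np.mean(s))             # ~0.94: E[s] < sigma (biased low; this is c4(5)*sigma)
    print(np.sqrt(np.mean(s2)))   # ~1.00: sqrt(E[s^2]) is not the same as E[sqrt(s^2)]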

Added section on autocorrelated data

I have a pretty good background in applied stat, and lots of reference books, and I'm a member of ASA and can do online ASA journal searches, but with all of that I have never seen these bias equations anywhere other than in Law and Kelton. To derive them from Anderson isn't that tough, but finding which expressions in Anderson to use isn't so simple (which is why I put the equation numbers in the references).

What I'm not clear about, and deliberately slid over in this effort, is what is the effect of taking the square root of these bias expressions. Is the resulting PDF still chi? I've been doing some sims lately in support of some ANSI/IEEE nuclear standards development, and there is still a bit of bias that these expressions don't take out. I was already aware of the chi PDF and the small-N correction, but I could use some help in seeing how to apply that sort of transformation to the autocorr case. Any info would be appreciated, and of course should be added to this article.

Given that intro texts don't deal at all with autocorr data, and that such data is common, there needs to be some treatment of the subject somewhere in Wikipedia. Rb88guy (talk) 20:46, 12 January 2009 (UTC)
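
A hedged simulation sketch of the bias being discussed (my own, assuming a stationary AR(1) process with lag-one correlation phi; it is not taken from Law and Kelton): the ordinary 1/(n-1) sample variance averages well below sigma^2, and the shortfall matches E[s^2] = sigma^2 * [1 - (2/(n-1)) * sum_{k=1}^{n-1} (1 - k/n) * rho_k] with rho_k = phi^k.

    # Sketch only: bias of the usual sample variance for AR(1) data.
    import numpy as np

    rng = np.random.default_rng(1)
    sigma, phi, n, reps = 1.0, 0.6, 10, 200_000

    x = np.empty((reps, n))
    x[:, 0] = rng.normal(0.0, sigma, reps)        # stationary start
    innov_sd = sigma * np.sqrt(1.0 - phi**2)
    for t in range(1, n):
        x[:, t] = phi * x[:, t - 1] + rng.normal(0.0, innov_sd, reps)

    s2 = x.var(axis=1, ddof=1)

    k = np.arange(1, n)
    expected = sigma**2 * (1.0 - (2.0 / (n - 1)) * np.sum((1.0 - k / n) * phi**k))

    print(np.mean(s2), expected)   # both around 0.75, i.e. well below sigma^2 = 1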

There are a number of points here:
  • This stuff might eventually be better located in a separate article that would be more obviously relevant to time series, particularly as the problem starts at the stage of estimating the variance, not the standard deviation.
  • The material presently here does not rely on the assumption of a normal distribution, and none is stated. If a chi-squared distribution were appropriate to the autocorrelated case, this would be a necessary requirement, so care would be needed in specifying assumptions. However, I believe the chi-squared distribution does not hold, even if the true correlations were known, and certainly not if they are estimated from the data. It looks possible to get a formula for the variance of the estimated variance (at least if a normal distribution is assumed), which would be some guide to whether a chi-squared works.
  • There are other ways of estimating the variance of the sample mean which don't start with the ordinary sample variance; see, for example, Moran, P. A. P. (1975), "The estimation of standard errors in Monte Carlo simulation experiments", Biometrika, 62:1–4 (a rough sketch of one such alternative follows below). Also, I think an estimate can be found via spectral analysis, by estimating the spectral density at a frequency of zero.
Melcombe (talk) 16:16, 14 January 2009 (UTC)
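
On the alternative estimators mentioned in the last bullet, here is the rough sketch referred to there (my own; it is the generic batch-means estimator used in simulation output analysis, and I have not checked whether it coincides with Moran's 1975 method): it estimates Var(x-bar) directly from batch averages instead of starting from the ordinary sample variance.

    # Sketch only: batch-means estimate of Var(mean(x)) for an autocorrelated series.
    import numpy as np

    def batch_means_var_of_mean(x, n_batches=20):
        """Estimate Var(mean(x)) from the variance of non-overlapping batch means."""
        x = np.asarray(x)
        m = len(x) // n_batches                       # batch length
        means = x[: m * n_batches].reshape(n_batches, m).mean(axis=1)
        return means.var(ddof=1) / n_batches

    # Example with an AR(1) series (phi = 0.6, unit marginal variance).
    rng = np.random.default_rng(2)
    phi, n = 0.6, 20_000
    x = np.empty(n)
    x[0] = rng.normal()
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0.0, np.sqrt(1.0 - phi**2))

    print(x.var(ddof=1) / n)                  # naive s^2/n: badly understates Var(x-bar)
    print(batch_means_var_of_mean(x))         # accounts for the autocorrelation
    print((1.0 / n) * (1 + phi) / (1 - phi))  # asymptotic true value for AR(1)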

A few minor tweaks

Sorry, forgot to mark a couple of these as minor. Adjusted equation spacing, indents. Changed my "N" to "n" as used in previous material. Rb88guy (talk) 15:49, 13 January 2009 (UTC)

Added plot of c4 vs n

And added a caption to the earlier graph. Rb88guy (talk) 18:23, 13 January 2009 (UTC)

Added calcs for variance of mean

Using the observed (sample) variance. Thanks to Melcombe for catching my error in the Var[x-bar] expression, and fixing it. I have some nice R graphics that show how this stuff works only in the mean, that is, that the expected-behavior curve(s) pass through the mean values of many thousands of replicates of the various calculations. (There's a lot of scatter.) But it takes a lot of words to describe what's going on in the graphs, so I didn't think that was appropriate. On the other hand, when I do the same sims using the std dev, not the variance, there is still a bit of bias left. As a practical matter for someone trying to calibrate an instrument, removing almost all the bias in the std dev is presumably better than being off by a factor of two... Anyway, that part of this needs more work, and when something sensible is available it should be added here, to complete this section.

Also, the only thing about moving this to a TSA section is that lots of folks who need to be aware of this autocorr bias problem wouldn't think to look at TSA. They might think "How is calibrating this instrument a time series problem? It's just a pile of numbers." Assuming they even know what TSA is or what it can do for (or to) them. Rb88guy (talk) 20:24, 16 January 2009 (UTC)

This is somewhat in danger of becoming original research, which is not allowed here. However, you have put in refs for the results quoted, so that should be OK. To go further, one would need to consider that estimating the standard deviation unbiasedly is not central to the usual run of statistical theory, and that there may well be good reason for this. Would a better way of treating your "trying to calibrate an instrument" example be to say that what is wanted is a good interval estimate for the mean (i.e. a confidence interval, or whatever terminology is appropriate)? Looking directly at how to define the limits for the CI would combine the idea of getting the "right" estimate for the variance or standard deviation with making an adjustment to the "adjustment for sample size" entailed in the use of limits derived from the Student-t distribution. Indeed, in the uncorrelated case, the use of the Student-t distribution, instead of the normal distribution, might itself be thought of as making a correction for bias in the estimated standard deviation. If the real use of the standard deviation is to construct such CIs, you may be better off aiming your simulations at the properties of the CIs rather than the estimated standard deviations. If the CI is the real use, then a better home for this stuff might be in an article about CIs for the mean. Don't forget it is possible to put in links from several other articles to point to the right place, wherever it is. Melcombe (talk) 11:34, 19 January 2009 (UTC)
I agree about the research thing; I was hoping someone would know of a reference that takes this from variance to std dev, in the presence of autocorr. If that doesn't exist, I have to say that deriving it is, most likely, beyond my capabilities in stat, but if I did come up with something, then certainly that would be publishable (in a journal, not here). The measurement context that I'm thinking of is calibration of monitoring instruments, particularly for detection limits. There, the std dev of a mean isn't the issue; it's the std dev of the population of filtered, hence autocorrelated, measurements themselves. That std dev is used in Min Detectable Conc calcs. If autocorr isn't accounted for, then the calculated MDC will appear to be way smaller (better) than it really is. So, the remaining issue is that E[s] is not equal to SQRT( E[s^2] ); otherwise we could just take the square root of the "s^2" (and "Var[x-bar]") expressions in the article and everyone could live happily ever after...;) Rb88guy (talk) 16:17, 19 January 2009 (UTC)
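
A small sketch of the sticking point in this exchange (mine, again under an assumed AR(1) model): even after dividing s^2 by its autocorrelation bias factor, so that it is unbiased for sigma^2, the square root of the corrected estimate still averages below sigma. That residual shortfall is the E[s] not equal to SQRT( E[s^2] ) problem in concrete form.

    # Sketch only: correcting s^2 for autocorrelation does not make sqrt(s^2) unbiased.
    import numpy as np

    rng = np.random.default_rng(3)
    sigma, phi, n, reps = 1.0, 0.6, 10, 200_000

    x = np.empty((reps, n))
    x[:, 0] = rng.normal(0.0, sigma, reps)
    for t in range(1, n):
        x[:, t] = phi * x[:, t - 1] + rng.normal(0.0, sigma * np.sqrt(1.0 - phi**2), reps)

    k = np.arange(1, n)
    bias_factor = 1.0 - (2.0 / (n - 1)) * np.sum((1.0 - k / n) * phi**k)  # E[s^2]/sigma^2

    s2_corrected = x.var(axis=1, ddof=1) / bias_factor
    print(np.mean(s2_corrected))           # ~1.00: unbiased for sigma^2
    print(np.mean(np.sqrt(s2_corrected)))  # still noticeably below sigma = 1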

Added material on estimating std devs

Well, I just felt that something needed to be added to bring this stuff back to the std dev from the variance, and it also ties back into the first (original) section (c4). Yes, I suppose some of this is "OR", but it must exist somewhere in the stat literature; this cannot possibly be novel. I'm hoping someone will know a reference for what I called   here, or maybe, if it actually doesn't already exist, someone will research it and publish it, and then that can be referenced here. In other words, I won't struggle over the exact stuff I put here, but I do think there needs to be some discussion of the issue (that E[s] <> sqrt( E[s^2] )). Consider my addition a strawman... Rb88guy (talk) 20:57, 30 January 2009 (UTC)

Small stuff changed, however

I think the tone of the intro is too negative: why not just delete the entire article, which, apparently, is full of stuff no one even uses? I could see that, maybe, for the small c4 (and c2, which should also be put in here, along with some material on the Helmert PDF) correction, but the autocorr correction is significant. Anyway, as time permits I may try to come up with a more hopeful intro that might even encourage someone to read this article... PS: I've been dabbling with an article on where the c2, c4 factors come from; you might want to take a peek at the raw, far-from-finished stuff I have in my sandbox. Rb88guy (talk) 20:33, 25 February 2009 (UTC)

Your sandbox article looks great, though I would suggest that you make it explicitly clear that (using your notation)  , rather than having the reader have to deduce this. In fact, why not do away with the subscript entirely and just use  ? Btyner (talk) 01:17, 26 February 2009 (UTC)
Thanks, yep, that needs fixin' along with lots of other stuff. I was thinking of making this an article with a title including in some manner "Helmert's distribution of s", following Deming. Incidentally, there is a TON of useful stuff in that book! I don't usually do anything with sampling (as in survey sampling) so I hadn't even looked at it until recently. Rb88guy (talk) 02:22, 26 February 2009 (UTC)
You may be right about it being too negative, but it does say that it is an important theoretical problem, which makes it of interest to a moderately large group of individuals. However, if you can find some useful citations to real applications, then do include them later in the article, with a brief mention in the intro, remembering that it is meant to be short and readable. In your sandbox you are citing the first edition of Johnson & Kotz ... have you seen the second edition, as referenced in this article presently? It may contain material you haven't seen. Also, regarding your sandbox, you may want to make use of the existing article chi distribution (not presently mentioned), both because it is related and because you may be able to abbreviate some of what you want to say. Melcombe (talk) 10:22, 26 February 2009 (UTC)
I remember I noticed the bias of the standard deviation for small samples a couple of years ago and was quite disappointed not to find anything on Wikipedia. This article, which was added not long after, would have prevented me from wasting my time reinventing the wheel ;-). So I agree with the intro being too negative. This is true for the autocorrelation stuff, but c2 and c4 are relevant as well. I was processing test results with sample sizes ranging from 2 to 8, so the correction factors were significantly different from 1. Also, not to sound too skinflint, but sometimes even a 1% point of margin can represent a lot of money in some industries. -- Ryk V (talk) 00:23, 18 October 2009 (UTC)
Agreed "a 1% point of margin can represent a lot of money in some industries". But do they rely on having an unbiased estimate of standard deviation? Seems unlikely to me. Perhaps they need an estimate of a percentage point, but these problems are not equivalent unless they are prepared to accept some strong assumptions about distributional form. They would certainly be better off trying to solve the problem they actually face rather than some over-simplistic version chosen because it looks mathematically nice or because it looks similar to problems whose solution is known in simple form. Melcombe (talk) 16:39, 28 October 2009 (UTC)Reply

Added a table of values for c4

This is one of my first edits, so don't hesitate to modify the table in any way you see fit. I just thought adding it was relevant, because calculating c4 with the main formula is not straightforward (you need to go to Particular values of the Gamma function to get a correct value, and you can't go very far). Adding some external sources from the web could also be a good idea, but I don't know which ones are acceptable. -- Ryk V (talk) 00:28, 18 October 2009 (UTC)

An alternative correction formula has been set down on Talk:Standard deviation but it has not made its way into this article yet. Melcombe (talk) 16:42, 28 October 2009 (UTC)
I corrected the values for c4(4), c4(6) and c4(100), for which the last digit was off by one. I can post the program I used, but I don't know if this is the place... Ver Greeneyes (talk) 23:38, 27 September 2010 (UTC)
A first question is: what are your calculations calculating? A first reading of the present text is that the table contains values calculated using the first few terms of the series expansion immediately above, but it would be possible to calculate "exact" values using routines for the (log) gamma function. Clearly these would be numerically different. I think we started off with values with fewer decimal places, taken from published sources, that may have corresponded to the first interpretation. The article needs to be clearer about exactly what the values represent. Melcombe (talk) 08:52, 28 September 2010 (UTC)
mpfr (an arbitrary precision floating point library) has a version of the Gamma function built into it, so I used the formula itself, but also compared it to the even/odd functions at the bottom of the table. They appear to be identical, but I didn't do the math to compare them. Ver Greeneyes (talk) 09:27, 28 September 2010 (UTC)
I have reworded the text to hopefully reflect the fact that the table contains "exact" values. This leaves open the question of whether an additional column containing values from the 4-term approximation is needed. Melcombe (talk) 10:51, 28 September 2010 (UTC)
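
For anyone wanting to reproduce the "exact" values without mpfr, here is a sketch of one way to do it (my own, using SciPy's log-gamma routine; the function name c4 is just illustrative):

    # Sketch only: "exact" c4(n) = sqrt(2/(n-1)) * Gamma(n/2) / Gamma((n-1)/2),
    # computed via log-gamma for numerical stability at large n.
    import numpy as np
    from scipy.special import gammaln

    def c4(n):
        n = float(n)
        return np.sqrt(2.0 / (n - 1.0)) * np.exp(gammaln(n / 2.0) - gammaln((n - 1.0) / 2.0))

    for n in (2, 4, 6, 10, 100):
        print(n, round(c4(n), 6))   # n = 2 gives sqrt(2/pi), about 0.797885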

Readers aren't expected to be educated in your field

Seems too many articles on Wikipedia suffer from the obscurity-of-knowledge problem, where whiz-kids who "know how things work" plop down "the perfect" analysis of the topic, using language and jargon that succinctly and professionally speaks to the ideas. Unfortunately, the end result is NOBODY ELSE CAN UNDERSTAND WHAT YOU JUST WROTE. You need to step back and ask yourself: does this topic read in any way as understandable to the uninitiated? If I had to explain this topic to my 8-year-old niece, what words would I use to keep it simple and still make my point? Please add this much, at least to the introduction. Thanks! 99.125.92.30 (talk) —Preceding undated comment added 04:58, 21 January 2014 (UTC)

Correction

The conclusion from Cochran's theorem is that the square of the expression appearing at the beginning of "bias correction" has a chi-SQUARED distribution, not a chi distribution.

The expression itself has a chi distribution (which is what's relevant here).

13:55, 24 February 2014 (UTC) — Preceding unsigned comment added by 134.191.232.70 (talk)

Right, this is also stated in Cochran's theorem. Corrected. --Wiso (talk) 14:14, 7 January 2016 (UTC)
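
A quick numerical check of this distinction (my own sketch, for i.i.d. normal samples): with q = sqrt(n-1) * s / sigma, the squared quantity q^2 behaves like a chi-squared variable with n-1 degrees of freedom, while q itself behaves like a chi variable, and E[q]/sqrt(n-1) reproduces the c4 factor.

    # Sketch only: chi vs chi-squared behaviour of the scaled sample std dev.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    sigma, n, reps = 2.0, 8, 200_000

    s = rng.normal(0.0, sigma, size=(reps, n)).std(axis=1, ddof=1)
    q = np.sqrt(n - 1) * s / sigma

    print(np.mean(q**2), n - 1)                    # ~7: matches E[chi-squared_{n-1}]
    print(np.mean(q), stats.chi(df=n - 1).mean())  # ~2.55: matches E[chi_{n-1}]
    print(np.mean(q) / np.sqrt(n - 1))             # ~0.965: this is c4(8) = E[s]/sigma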

Incorrect citation

Citation [1] does not contain the equation for which it is cited. The page may be found here: https://www.tandfonline.com/doi/abs/10.1080/00031305.1968.10480476?journalCode=utas20 2601:246:0:1F70:D5FE:DCF3:2C37:906B (talk) 05:54, 13 January 2021 (UTC)

Helmert's distribution

The only article on the c2 and c4 coefficients I found is in User:Rb88guy/sandbox, but it is just a draft. The c4 coefficient is used in statistical process control (coefficients B3/B4 and B5/B6 on X and S control charts), so it would be interesting and important to have a complete article on them. In the same way, the d2 and d3 coefficients (R control charts) should be explained. So, if someone can do that, it will be very useful for a lot of SPC users. 2A01:E0A:5F2:A790:FD89:E87E:FB2E:FF82 (talk) 16:25, 14 February 2024 (UTC)
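
As a pointer for whoever takes this on, here is a hedged sketch (my own; these are the standard textbook formulas for the S-chart constants in terms of c4, with B3 and B5 floored at zero) of how B3, B4, B5 and B6 follow from c4. The d2 and d3 constants for R charts come from the distribution of the sample range and need numerical integration, so they are not shown here.

    # Sketch only: S-chart constants from c4, using Var(s) = sigma^2 * (1 - c4^2).
    import numpy as np
    from scipy.special import gammaln

    def c4(n):
        n = float(n)
        return np.sqrt(2.0 / (n - 1.0)) * np.exp(gammaln(n / 2.0) - gammaln((n - 1.0) / 2.0))

    def s_chart_constants(n):
        c = c4(n)
        spread = 3.0 * np.sqrt(1.0 - c**2)   # three standard deviations of s, in sigma units
        return {"B3": max(0.0, 1.0 - spread / c), "B4": 1.0 + spread / c,
                "B5": max(0.0, c - spread), "B6": c + spread}

    print(s_chart_constants(5))   # B4 comes out near the tabulated 2.089 for n = 5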