Comment by David A. Spitzley

Hey, I just added the blurb about calculating the F-test statistic. I don't have my textbook handy, or I'd have noted what conditions need to hold for this particular technique to be usable. I'm not entirely happy with the layout, so hopefully somebody else will nudge the fraction around until things look nicer. - David A. Spitzley, 12/1/04 10:52 am EST

Image in the One-way ANOVA example is misleading!

Fcrit(2, 15) = 3.68 at α = 0.05 is a statement about the F-cdf: F-cdf(3.68, 2, 15) = 1 - 0.05 and F-cdf(9.27, 2, 15) = 1 - 0.00239, whereas the density values are F-pdf(3.68, 2, 15) = 0.0336 and F-pdf(9.27, 2, 15) = 0.0011. The difference between the plotted density values is small, and the note that the shaded area is what is meant is easy to overlook. A better figure would display the cdf.
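
For anyone who wants to check these numbers, here is a minimal sketch (assuming SciPy is available) that reproduces them:

<syntaxhighlight lang="python">
# Reproduce the quoted cdf/pdf values for an F distribution with (2, 15) df.
from scipy.stats import f

dfn, dfd = 2, 15

print(f.ppf(0.95, dfn, dfd))   # critical value at alpha = 0.05: ~3.68
print(f.cdf(3.68, dfn, dfd))   # ~0.95   (= 1 - 0.05)
print(f.cdf(9.27, dfn, dfd))   # ~0.9976 (= 1 - 0.0024)
print(f.pdf(3.68, dfn, dfd))   # ~0.034  (density at the critical value)
print(f.pdf(9.27, dfn, dfd))   # ~0.001  (density at the observed F)
</syntaxhighlight>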

Helpful example, but is it all necessary?

Yeah, working through an example is great, but I'm not sure why all the calculations are needed. In particular, calculating the column means (\overline{Y}) left me confused about the calculation of [Y], since those were the only Y's introduced before that point. Clearly I wasn't paying enough attention to the overline, but even so I don't see why they are needed at all; \overline{Y}_T also seems unnecessary. Please correct me if I'm wrong.

I also added a few sentences near the beginning in response to the many requests for a simpler explanation. I hope they help a bit and, more importantly, are accurate. Regrettably, they also rely on the "factor-group" assumption, which Melcombe has already criticized (below, in "Improvement required"). Does this need to be generalized, or is the categorical assumption correct for ANOVAs? Obviously, my own understanding is still not strong enough here.

Also, perhaps these lines are too redundant with other attempts at simple clarity further below. It would probably be nice to combine them and move them to the best spot, but I'm trying to keep my modifications minimal given my own amateur standing.


—Preceding unsigned comment added by Potamites (talk · contribs) 19:01, 25 April 2009 (UTC)
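
For what it's worth, here is a minimal sketch of the one-way ANOVA arithmetic, with made-up data chosen so that F comes out near 9.3 on (2, 15) degrees of freedom, like the example under discussion. It shows that the column means and the grand mean are exactly the quantities that feed the between- and within-group sums of squares:

<syntaxhighlight lang="python">
# Made-up data: three groups of six observations each.
groups = [
    [6, 8, 4, 5, 3, 4],
    [8, 12, 9, 11, 6, 8],
    [13, 9, 11, 8, 7, 12],
]
k = len(groups)                                   # number of groups
n = sum(len(g) for g in groups)                   # total number of observations
group_means = [sum(g) / len(g) for g in groups]   # the column means
grand_mean = sum(sum(g) for g in groups) / n      # the overall (grand) mean

# Between-group and within-group sums of squares, built from those means.
ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, group_means))
ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, group_means))

f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
print(group_means, grand_mean, f_stat)            # [5.0, 9.0, 10.0] 8.0 ~9.26
</syntaxhighlight>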

History: F stands for what?

I think it stands for Fisher, so named in his honor by Snedecor. Don't quote me; check it out. The first ANOVAs were done by Fisher.

[A suggestion]

I think it would be nice if one could provide the standard form of reporting the F score: F(between-group df, within-group df) = ...

Not clear. Dfarrar 12:53, 18 April 2007 (UTC)
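
For what it's worth, the usual convention for reporting an F score puts the between-groups (numerator) degrees of freedom first and the within-groups (denominator) degrees of freedom second, followed by the p-value; using the numbers quoted elsewhere on this page for the article's one-way ANOVA example:

<math display="block">
F(2,\,15) = 9.27, \qquad p \approx 0.0024 .
</math>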

This article is almost completely impenetrable to the average reader

Needs a simpler explanation, including examples of use. GraemeLeggett 08:13, 7 September 2007 (UTC)

[Error in Formula?]

I think the RSS in the numerator should be RSS_2, not RSS_1; see e.g. http://www.graphpad.com/help/Prism5/prism5help.html?howtheftestworks.htm -- user without account (sorry)

I also think it is wrong. I am getting negative F values. — Preceding unsigned comment added by 165.123.243.100 (talk) 17:22, 11 October 2013 (UTC)
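
For reference, here is a minimal sketch (in generic notation, not necessarily the article's) of the usual nested-model F statistic. For nested least-squares fits RSS_1 >= RSS_2, so the statistic cannot be negative; a negative value usually means the two residual sums of squares (or the two models) have been swapped:

<syntaxhighlight lang="python">
def nested_f(rss1, p1, rss2, p2, n):
    """F statistic comparing a simpler model 1 (p1 parameters, residual sum of
    squares rss1) with a richer model 2 (p2 > p1 parameters, rss2 <= rss1),
    both fitted by least squares to the same n observations."""
    numerator = (rss1 - rss2) / (p2 - p1)   # improvement in fit per extra parameter
    denominator = rss2 / (n - p2)           # residual variance of the richer model
    return numerator / denominator

# Hypothetical numbers, for illustration only.
print(nested_f(rss1=120.0, p1=2, rss2=80.0, p2=4, n=30))   # ~6.5
</syntaxhighlight>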

Improvement required

I am unhappy about the statement

"The general formula for an F- test is: F = (between-group variability) / (within-group variability)"

... since F-tests are used when there are no obvious "groups". Melcombe (talk) 09:24, 10 April 2008 (UTC)


This sentence is totally unclear:

"The value of the test statistic used in an F-test consists of the ratio two different estimates of quantities which are the same according to the null hypothesis being tested" —Preceding unsigned comment added by 128.40.231.243 (talk) 11:59, 21 April 2009 (UTC)Reply

Call for Help

Can someone put out a call for help to the stat email lists or the statistics teaching email lists? Surely this does not have to be painful. WikiStat (talk) 18:56, 14 April 2008 (UTC) WikiStat

General Formula

I had the chance to look up the general formula again, and I'd stand by it for two reasons: 1) SPSS output uses this notation for one-way ANOVA; 2) I looked it up in my stats book from college and it was as I remembered it. 143.109.134.157 (talk) 04:23, 22 April 2008 (UTC)

What is the F test for?

I am very disappointed by this article for one reason: I ran my regression and got an F statistic equal to 1500 and a probability of the F statistic equal to 0. So I came here to Wikipedia to find out whether that is a good or a bad sign. But all I can find here is some blah blah about the F distribution, the normal distribution, and so on. I agree this is important, but it is still not very helpful for someone with little knowledge of statistics.

PLEASE: CAN SOMEONE WHO KNOWS ADD A SIMPLE PARAGRAPH TO THE ARTICLE ABOUT WHAT THE F STATISTIC SHOULD LOOK LIKE, AND WHEN IT IS A GOOD SIGN AND WHEN IT IS A BAD SIGN?

(This article is like a manual for driving a car that, instead of explaining how to drive, explains how the engine is constructed.)
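
In case it helps a future editor writing that paragraph, here is a hedged sketch (assuming SciPy; the regressor count and sample size below are invented for illustration) of how an overall regression F statistic like the one described above is turned into a p-value and read:

<syntaxhighlight lang="python">
# Convert an overall regression F statistic into a p-value and interpret it.
from scipy.stats import f

f_stat = 1500.0        # reported F statistic (value quoted above)
k = 3                  # number of regressors (hypothetical)
n = 200                # number of observations (hypothetical)
dfn, dfd = k, n - k - 1

p_value = f.sf(f_stat, dfn, dfd)   # P(F >= f_stat) under H0: all slopes are zero
print(p_value)                     # essentially 0, so H0 is rejected

# A large F (tiny p-value) means the regressors jointly improve on an
# intercept-only model; by itself it says nothing about predictive quality.
</syntaxhighlight>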

Sensitivity to non-normality

The F-test for equality of variances is now characterized as being "extremely sensitive to non-normality". I'm generally opposed to such characterizations, as it is hard for a casual reader to judge what "extreme" means. It could easily be misinterpreted as meaning something stronger than is actually warranted. But I don't have any direct experience with this rather obscure F-test (which is not the much more familiar ANOVA F-test). Skbkekas (talk) 04:55, 16 December 2010 (UTC)


Linearity of models

In the text it says:

"In order for the statistic to follow the F-distribution under the null hypothesis, the sums of squares should be statistically independent, and each should follow a scaled chi-squared distribution. The latter condition is guaranteed if the data values are independent and normally distributed with a common variance."

That statement is incorrect in that generality. Normality and common variance guarantee the chi-squared distribution if and only if the model either has no free parameters at all or is purely linear. If the model has nonlinear parameters that have been fitted by least squares, then the chi-squared distribution no longer applies. (You can easily construct nonlinear models with 3 free parameters that fit any data set perfectly, i.e., they always produce a chi-square of zero, so the chi-square "distribution" is a Dirac delta function centred at zero.) In fact, the F distribution requires knowledge of the degrees of freedom. However, if nonlinear models are fitted to data, the common concept of "degrees of freedom" breaks down. (If you interpret degrees of freedom as the free parameter of the chi-squared distribution, this breakdown is obvious, since the chi-squared distribution does not apply to nonlinear models.)
That restriction to linear models should also be mentioned in the text, alongside the restriction to nested models. It is very important to name the limitations of applicability explicitly. Otherwise, people will use the F-test in inappropriate situations, producing results (and maybe publishing them in scientific journals) that are not trustworthy.
-- R. Andrae, 10:49, 21 August 2011 (CEST) — Preceding unsigned comment added by 87.160.192.246 (talk)
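
For reference, here is a sketch (in generic notation, not the article's) of the linear nested-model case in which those chi-squared claims do hold: two nested linear models with p_1 < p_2 parameters are fitted by least squares to the same n observations with i.i.d. normal errors of common variance σ²; under the null hypothesis that the smaller model is adequate,

<math display="block">
\frac{\mathrm{RSS}_1 - \mathrm{RSS}_2}{\sigma^2} \sim \chi^2_{p_2 - p_1},
\qquad
\frac{\mathrm{RSS}_2}{\sigma^2} \sim \chi^2_{n - p_2},
\qquad
F = \frac{(\mathrm{RSS}_1 - \mathrm{RSS}_2)/(p_2 - p_1)}{\mathrm{RSS}_2/(n - p_2)} \sim F_{p_2 - p_1,\; n - p_2},
</math>

with the two chi-squared quantities independent of each other. For nonlinear least squares these distributional claims indeed fail in general, which is the point made above.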

Error in Nested F-test section?

  Resolved

I think the 1-rss2 in the denominator of the test statistic doesn't make sense because it can produce negative values. I think it should simply be rss2. — Preceding unsigned comment added by 68.149.167.250 (talk) 16:05, 2 September 2011 (UTC)

Agreed. The "1-" was added on 31 August by an IP editor. I've just undone that edit. Qwfp (talk) 07:48, 3 September 2011 (UTC)Reply

One-way ANOVA example standard error

At the end of the section on one-way ANOVA, in the "post-hoc" analysis, it is not clear where the standard error formula comes from; should it not be something like:   (where   is the mean)? — Preceding unsigned comment added by 93.172.96.24 (talk) 15:33, 21 September 2011 (UTC)
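
For what it's worth, the standard errors usually quoted at that post-hoc step (I can't reconstruct the article's exact expression from this comment, so this is only the conventional form) are built from the pooled within-group mean square MS_W; for a single group mean and for a difference of two group means they are, respectively:

<math display="block">
\operatorname{SE}(\bar{Y}_{i\cdot}) = \sqrt{\frac{\mathrm{MS}_W}{n_i}},
\qquad
\operatorname{SE}(\bar{Y}_{i\cdot} - \bar{Y}_{j\cdot}) = \sqrt{\mathrm{MS}_W \left( \frac{1}{n_i} + \frac{1}{n_j} \right)} .
</math>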

One-way ANOVA example - copy to One-way ANOVA page?

Does anyone think that the One-way ANOVA example should be copied to the One-way ANOVA page? --IcyEd (talk) 12:07, 3 June 2013 (UTC)

Missing: a test of the null hypothesis that all betas are equal to zero

Seems kind of important but appears to be missing from the page... — Preceding unsigned comment added by 74.72.229.113 (talk) 03:04, 17 January 2014 (UTC)

Possible error in the F-test formula for the regression

The formula for the F-test in the regression section was inconsistent with the degrees of freedom used for the F-test after the formula. Could someone with a statistics background check the validity of the formula in its current state? 79.60.157.114 (talk) 03:47, 7 March 2018 (UTC)