Talk:G factor (psychometrics)
This article is rated C-class on Wikipedia's content assessment scale.
Factor Indeterminacy
This page is not discussing g
There is a fundamental difference between g (as Spearman, who had coined the term, had defined it), and a first principal component (PC1) of a positive correlation matrix. Spearman's g was defined as a latent (implied) 1-dimensional variable which accounts for all correlations among any intelligence tests. His tetrad difference equation states a necessary condition for such a g to exist.
The important proviso for Spearman's claim that such a g qualifies as an "objective definition" of "intelligence" is that all correlation matrices of "intelligence tests" must satisfy this necessary condition, not just one or two, because they are all samples of a universe of tests subject to the same g. It is now generally acknowledged (and easily verified empirically) (Guttman, 1992; Schonemann, 1997; Kempthorne, 1997; Garnett, 1919) that this condition is routinely violated by all correlation matrices of reasonable size. Hence, such a g does not exist any more than odd numbers divisible by 4 exist.
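For readers who have not seen it, the vanishing-tetrad condition is easy to illustrate numerically. Under a one-factor model the off-diagonal correlations satisfy r_ij = a_i * a_j, so every tetrad difference r13*r24 - r14*r23 is exactly zero. A sketch with invented loadings (not real test data):

```python
import numpy as np

# Invented loadings of four tests on a single latent factor g
# (illustrative numbers only, not real test data).
loadings = np.array([0.9, 0.8, 0.7, 0.6])

# Under Spearman's one-factor model, r_ij = a_i * a_j for i != j.
R = np.outer(loadings, loadings)
np.fill_diagonal(R, 1.0)

# Tetrad difference for tests (1, 2, 3, 4): r13*r24 - r14*r23.
tetrad = R[0, 2] * R[1, 3] - R[0, 3] * R[1, 2]
print(abs(tetrad) < 1e-12)  # True: the tetrad vanishes under a one-factor model
```

An empirical correlation matrix fails this check whenever more than one common factor is present, which is the condition violation described above.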
I recommend that somebody address this problem. I cannot understand why this page is called “General Intelligence factor” if it does not discuss g? There is not even one reference to Charles Spearman (the inventor) or factor analysis! I suspect this page was put together by somebody with no technical experience whatsoever. —Preceding unsigned comment added by 70.68.179.142 (talk • contribs)
- Looking at the history of this discussion page, it is clear that the above unsigned comment was written on August 27, 2007 by the late Professor Peter Schonemann, since at one point ([1]) while editing his comment he refers to page 194 of his own article ("Psychometrics of Intelligence" in Encyclopedia of Social Measurement, K. Kemp-Leonard (ed.), 3, 193-201: [2]). Probably not familiar with Wikipedia standards, he inserted this at the top of the Discussion page rather than at the bottom. Eric Kvaalen (talk) 13:24, 7 November 2010 (UTC)
- If you are suggesting that Spearman's definition of g, the "tetrad difference equation," is not supported by correlation matrices, that's fine. However, if you mean that the g factor does not exist, you should probably state your case a little better. The existence of g is an empirical question. Is the question of "odd numbers divisible by 4" also an empirical question?
Neutrality
- I think the preceding information (with appropriate citations) needs to be fleshed out and added to the article as a part of Challenges to g, perhaps with a subheading Factor indeterminacy. I have removed the neutrality template pending addition of this information to the article, or further discussion here. Ward3001 21:33, 2 September 2007 (UTC)
Channel capacity
The material on channel capacity seems distinctly out of place, or at least a disproportionate part of the article. It is not even technically g theory.
The article in general has little to recommend it. I propose the following structure to both point out shortcomings and recommend how they may be filled:
- Introduction (what is g)
- History (discovery of g by Spearman, development of the field, present-day understanding)
- Mental testing and g (cognitive tests all reflect g and derive most of their validity from g)
- Biological correlates of g
- Challenges to g
By necessity, much of this material will overlap with Intelligence (trait). My point of view is that g-specific material should be located here (definition of g loading; crystallized and fluid g; importance of g in explaining cognitive ability test results), while intelligence-related findings (not restricted to g, though perhaps best explained by g) should be located at Intelligence (trait).
Thoughts? --DAD 00:52, 18 Dec 2004 (UTC)
I've rewritten the article based on your suggested headings, plus one for the social correlates of g. I'll try to remember to add references and things next time I'm flipping through my Jensen books. I'm not entirely sure what the best way to go about handling the overlap between here, Intelligence (trait), and IQ is. I'll have to give it a little more thought. Oh, and welcome to Wikipedia. It's nice to see another person with an interest in psychometrics. -- Schaefer 03:42, 18 Dec 2004 (UTC)
I really like what you've done. Thanks for the welcome. Looks like there's plenty for us to do. -- DAD 02:05, 19 Dec 2004 (UTC)
Broad-sense and narrow-sense heritability?
The article talks about the "broad-sense" and "narrow-sense" heritability of g. But these terms are not defined, and heritability contains no explanation of what the difference is. Can anyone with a knowledge of psychometrics clear this one up? grendel|khan 21:35, 2005 Mar 4 (UTC)
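A short answer, in case it helps: broad-sense heritability (H²) is the share of trait variance due to all genetic effects, while narrow-sense heritability (h²) counts only the additive part. A toy variance partition (all numbers invented for illustration):

```python
# Invented variance components for a hypothetical trait.
V_additive  = 0.40   # additive genetic variance
V_dominance = 0.10   # non-additive (dominance/epistasis) genetic variance
V_environ   = 0.50   # environmental variance
V_total = V_additive + V_dominance + V_environ

h2_narrow = V_additive / V_total                  # narrow-sense: additive only
H2_broad  = (V_additive + V_dominance) / V_total  # broad-sense: all genetic
print(h2_narrow, H2_broad)  # 0.4 0.5
```

Broad-sense is always at least as large as narrow-sense; the two coincide only when all genetic variance is additive.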
Whoa..."general factor"?
Please comment on why this page has been renamed "general factor." I cannot find any scholarly references that introduce g as "the general factor". I'm holding a review from Scientific American called "The General Intelligence Factor" (Gottfredson 1998), and that seems to me a far better (clearer and more accurate) title. --DAD 30 June 2005 16:42 (UTC)
- Somebody on this talk page called it a "general factor" (see below :-) Uncle Ed July 6, 2005 02:58 (UTC)
Is it really real?
The factor was:
- identified
- defined
Choice #1 implies that it's real, and somebody discovered it. Choice #2 leaves it as a theory. Uncle Ed July 5, 2005 23:56 (UTC)
- g -- that is, a single, dominant general factor underlying cognitive ability test scores -- was discovered by Spearman. As the Scientific American review points out, a general factor is not inevitable (or even necessarily likely). How "real" g is depends on whether you're an adherent of Gould and others who argue against g as a "reified thing", or most researchers, who regard g as an abstract quantity like energy and gravity. It's really an epistemological question. As the biological basis of g becomes clearer, it seems likely that g will be fragmented into as many biological components as are necessary to describe it. --DAD T 6 July 2005 02:32 (UTC)
- So I would answer identified, as we might say Newton identified the laws of motion. --Rikurzhen July 8, 2005 07:03 (UTC)
The factor and the theory
Would someone please make the article distinguish between the real, observed "general factor" (g) and the "g Theory" I keep hearing so much about. I made a stub article for Jensen's hard-to-get book, The g Factor, but I still don't get it, and that's embarrassing for me. I'm way over to the right (not politically, I mean on the bell curve), so why is this so hard for me? Uncle Ed July 6, 2005 03:02 (UTC)
- Done. It's basically the same relationship that evolution has to "evolutionary theory". The latter is used to encompass the former routinely (much to the dismay of scientists, when various of the other kind of "way over to the right" people jump on it and say, "Aha! Theory!") --DAD T 6 July 2005 03:15 (UTC)
- I'm glad we can maintain a light-hearted attitude toward this, and that we can cooperate even though I may have come across as a bit heavy-handed at first. I have no desire to impose my own POV on any of the articles. All I want is for the readers to know what (a) scientists did and do think about these matters; and (b) how opinion leaders in the lay public feel about it all.
- We need to make the "science" accessible to the layman. We also need to distinguish between (1) what scientists know, (2) what some scientists suppose, and (3) what all the other major opinions are.
- For example, a mere 200 years ago, some writers opined that black slaves in the US were no more intelligent than apes, and I suppose they used the argument that they can't even read to bolster this claim. (Check me if I'm wrong, this is off the top of my head.) But there was also a law against teaching slaves to read, so abolitionists asserted that this was a circular argument.
- I'm interested in the lingering effects of slavery and racism. Assuming (for the sake of argument) that all people are pretty much born with equal potential, how long would it take for an oppressed group to catch up with the others once the others start saying "we shouldn't oppress these guys any more"? One generation, two, or what?
- And have you heard about the "blue eyes brown eyes" experiment? Or the anecdote about the teacher who confused locker numbers with IQ scores? I'm not trying to prove any points; just wondering aloud what ought to go in the articles. Let's work together on this, and also try to get ZM to cooperate, too. Uncle Ed July 6, 2005 11:06 (UTC)
- I feel similarly about cooperation and making the science accessible. Regarding "known" versus "supposed," it's a rare hypothesis in science that's purely supposed. Almost all hypotheses have some base of support, and it is the size and quality of that base, not the hypothesis itself, that determines credibility. For example, it may be a hypothesis that genes significantly determine intelligence, but the results supporting that hypothesis are overwhelming; that's why the scientific consensus rests there. The hypothesis that specific racially segregating genes shape intelligence has varying bases of support, from small but suggestive (Cochran's analysis of Ashkenazim gene clusters) to circumstantial (Asian, Hispanic and Black groups). I hope our content reflects the degree of support attending particular hypotheses.
- As you may have gathered, my approach, given the intense controversy, is to try to suppress my own thoughts and hypotheses, and attempt to avoid saying anything I can't back up with a citation. This applies to results, to hypotheses, and to characterizations of the consensus. WP favors that approach in general, and it helps keep the discussion focused on content, so I have tried to hold others to the same standards, with little success in certain cases. Still, I'll keep trying.
- Regarding the "legacy of slavery" question, I don't know. Love to see some real results on this, as it's important.
- Remind me about the blue-eyes/brown-eyes experiment?
- Regarding ZM, I've tried almost every approach I can think of (kindness, browbeating, humor, taking him seriously, WP "law enforcement", walking the talk, losing it); nothing I've done, or anyone else has done, has had any noticeable (to me) effect on the quality of his contributions, with one exception: he's responded to direct disciplinary action, and removed "racist" and "Nazi" from his vocabulary. It's beyond exhausting. --DAD T 6 July 2005 17:55 (UTC)
Added to the article:
- The phrase "g theory" refers to hypotheses and results regarding g's biological nature, stability/malleability, relevance to real-world tasks, and other inquiries.
I'm wondering, will the present article have enough info regarding these hypotheses, et al., to warrant a second article called g theory? If not, we can just add another section or two to g. Uncle Ed July 7, 2005 14:54 (UTC)
Finding versus proposal
Spearman found, or discovered, or noted, or identified, the influence of a general factor. He then proposed his model. The continued insistence by some editors that Spearman simply hypothesized or proposed the existence of a factor alters history: while he may have hypothesized it, he also found that it was true, and that was his major contribution. The article must reflect this. --DAD T 18:38, 14 July 2005 (UTC)
- This entire article needs a major NPOV revision. Spearman and others believed that it was true, but that does not mean it necessarily is. This article fails to note the controversy inherent in these claims. Jokestress 18:51, 14 July 2005 (UTC)
- The article now notes the controversy. The claim that belief, rather than empirical reality, can somehow insert a general factor into correlation matrices has no support of which I am aware; cite your sources. Spearman's data showed a general factor regardless of what he believed, and he was the first to note (and develop a quantitative test, vanishing tetrad differences, for) a general factor. The g-based factor hierarchy is, as the article notes based on the APA's consensus statement, the most widely accepted model for cognitive abilities. Claims of NPOV must be accompanied by citations. --DAD T 01:24, 15 July 2005 (UTC)
- From the same source you cite: "Some theorists regard it as misleading (Ceci, 1990). Moreover, as noted in Section 1, a wide range of human abilities, including many that seem to have intellectual components, are outside the domain of standard psychometric tests." [3]. And remember that we are talking only about consensus within a field of study, and one that has been known to be wrong once or twice before... When I have time, I will write up the various authors like Chris Brand, who exemplify the unvarnished sentiment behind this worldview. Jokestress 05:55, 15 July 2005 (UTC)
- Yes, I (and the article) acknowledge the controversy; "some theorists" are outside the mainstream, as the quoted sentence says. Chris Brand (who I've just Googled) is an abuser of the science, not a scientist, and even the most hated actual scientists (Jensen, who I respect, and Rushton, who I'm still fairly ambivalent on) don't partake of Brand & ilk's venom. Even guilt by association should require someone to snuggle up to Brand, not the other way around. --DAD T 06:15, 15 July 2005 (UTC)
- When I have some time, I will lay out the reasons why Brand's de-published "G Factor" book sums up the problems with the concept nicely (he is the most egregious abuser of the science; other "g factor" proponents are just more cagey), and I will add various criticisms of factor analysis by Goodall, etc. and more on Thurstone. Jokestress 07:06, 15 July 2005 (UTC)
- Great. Incidentally, I've yet to see a critique of factor analysis that doesn't collapse before ever really getting off the ground. In its stripped-down form, it's PCA; what could be simpler? --DAD T 07:16, 15 July 2005 (UTC)
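For concreteness, here is what the stripped-down PCA version looks like; the correlation matrix is invented (a positive manifold, as batteries of cognitive subtests typically show), not real data:

```python
import numpy as np

# Invented correlation matrix of five subtests (all correlations positive).
R = np.array([
    [1.00, 0.60, 0.55, 0.50, 0.45],
    [0.60, 1.00, 0.58, 0.52, 0.48],
    [0.55, 0.58, 1.00, 0.54, 0.46],
    [0.50, 0.52, 0.54, 1.00, 0.50],
    [0.45, 0.48, 0.46, 0.50, 1.00],
])

eigvals, eigvecs = np.linalg.eigh(R)   # eigenvalues in ascending order
pc1 = eigvecs[:, -1]                   # eigenvector of the largest eigenvalue
pc1 = pc1 * np.sign(pc1.sum())         # fix sign so loadings come out positive
g_loadings = pc1 * np.sqrt(eigvals[-1])

print((g_loadings > 0).all())  # True: every subtest loads positively on PC1
```

Whether PC1 of a single matrix counts as Spearman's g is, of course, exactly what the factor-indeterminacy thread at the top of this page disputes.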
Biological correlates of g - size
"g correlates less strongly, but significantly, with overall body size." That's an unfortunate choice of phrasing given the example earlier in the article about measurement of body size. Is the assertion that g correlates with height? With cubital length? With body mass (are the obese more intelligent?)? --Nclean 25th of August 2006
I agree. Obese people often have larger heads, and probably larger brains. Does this mean people who are more obese are more intelligent? I've never found that to be the case.
- Cranium size is the important thing. Being fat does not increase the cranium size, does it? --Deleet (talk) 23:46, 31 May 2009 (UTC)
Disambiguation Needed!
Can someone please disambiguate "G factor" as it can also refer to the G-factor in physics. I don't know how to do disambiguation, sorry. See also Talk:G_factor. Thanks! Rotiro 10:48, 14 December 2006 (UTC)
Widely accepted but controversial?
In the lead, it is stated that the g-factor is "widely accepted but controversial". Honestly, I haven't read the rest of the article yet, but that seems to contradict itself.--Niels Ø (noe) 11:06, 23 February 2007 (UTC)
Some of these studies
"Brain size has long been known to be correlated with g (Jensen, 1998). Recently, an MRI study on twins (Thompson et al., 2001) showed that frontal gray matter volume was highly significantly correlated with g and highly heritable. A related study has reported that the correlation between brain size (reported to have a heritability of 0.85) and g is 0.4, and that correlation is mediated entirely by genetic factors (Posthuma et al., 2002). g has been observed in mice as well as humans (Matzel et al., 2003)."
The references to these studies should be listed, and I'd like to comment on them- did anyone ever notice how the Thompson study, which found grey matter to be so "heavily determined by genetic factors" examined TWINS RAISED TOGETHER?
I'm sorry, but that's a disgusting, profound amount of intellectual dishonesty and ignorance. To measure the heritability of a trait, you have to have the twins SEPARATED, NOT RAISED TOGETHER. Yet this study took twins that lived their entire lives together, exposed to the same environmental influences, causing their intellect and personality to develop along the same patterns... yet, they just grabbed some random twins, saw "how similar they were", completely ignoring the dynamics of heritability studies, and, in some huge media blitz, the study was spread and reported in countries across the globe and even put on the cover of Nature. Fucking disgusting. How could a study with such inane criteria EVER make it through peer-review?
- Um, dizygotic twins share 50% of their genetic code, while monozygotic twins share 100%. Thus, if certain sections of the brain are shared more strongly by monozygotic than dizygotic twins, that indicates those sections of the brain have some underlying heritable component to them. I'll grant that I haven't seen the Thompson study, but the fact that it has made it through peer review suggests that its authors know what they're doing; conversely, writing angry posts in all caps and punctuating them with profanity does little to help your case. Harkenbane (talk) 04:59, 12 March 2008 (UTC)
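The MZ/DZ comparison described above is the basis of Falconer's formula, which estimates heritability from the two twin correlations without requiring separated twins. A sketch with invented correlations (not figures from the Thompson study):

```python
# Invented twin correlations for some trait (illustration only).
r_mz = 0.85   # monozygotic twins, who share ~100% of their genes
r_dz = 0.55   # dizygotic twins, who share ~50% on average

h2 = 2 * (r_mz - r_dz)   # Falconer's heritability estimate
c2 = r_mz - h2           # shared-environment component
e2 = 1 - r_mz            # non-shared environment plus measurement error

print(round(h2, 2), round(c2, 2), round(e2, 2))  # 0.6 0.25 0.15
```

The design does assume that MZ and DZ pairs experience comparably similar environments (the "equal environments assumption"), which is essentially what the angry comment above is questioning.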
Flawed Reasoning
Yet no single measurement of a human body is obviously preferred to measure its "size" (although obviously the volume is).
This statement uses "obviously" twice, contradicting itself. This is unfortunate because the entire analogy/paragraph only makes sense if the parenthetical is false. I would just delete the paragraph outright, but perhaps someone has a better solution (or a better analogy). I'll change it to "(excluding the volume, admittedly)" so it isn't quite so blatantly idiotic. Thehotelambush 06:31, 26 May 2007 (UTC)
anonymous comment
This page fails to acknowledge that the Flynn effect is seen by many as causing significant damage to the value of "g", because it affects individual skills on IQ tests disproportionately. For example, there are huge IQ gains in Raven's and Similarities tests, but relatively small gains in learned skills such as Arithmetic. Someone please update the page to reflect that the Flynn effect serves to challenge the meaningfulness of "g." —Preceding unsigned comment added by 156.56.154.85 (talk) 22:54, 24 September 2007 (UTC)
- I really don't think the Flynn Effect provides any challenge to the existence of g. Yes, the Flynn Effect has affected some tests more than others, but scores on any one subtest are still positively correlated with scores on all other subtests. Now, James Flynn may argue that the Flynn Effect somehow disproves g, and that might be worth adding to the controversy section, but only because he made that claim; I don't think that the claim itself has any weight. Harkenbane (talk) 04:46, 12 March 2008 (UTC)
Page Title
It might be useful to decide if you want to talk about the "General factor of psychometric intelligence," which has been exhaustively discussed in the technical literature, or a "General Intelligence factor" which I don't really know anything about. Most of this article is about the general factor of psychometric intelligence. The general factor is derived by performing hierarchical factor analyses on a correlation matrix of performance on mental ability tests. The process for determining it has been exhaustively described by Jensen (The g Factor). John B. Carroll (Structure of Cognitive Abilities) describes a complete algorithm for determining the common factor in such a matrix. Spearman called it g. By this, he meant that it was a general factor. He did not mean it was a factor of general intelligence. He used g, and contrasted it to s, which he described as a specific factor. This was his two factor theory. The modern understanding of g, typified by Jensen’s and Carroll’s description, is in terms of factor analysis. It is sometimes called the "g factor," or "Spearman's g" (in deference to Spearman). In other words, it is the "general factor" of cognitive ability - not a "General Intelligence Factor." —Preceding unsigned comment added by 24.60.239.250 (talk) 01:34, 20 October 2007 (UTC)
Savants and people on the ASD and brain damage.
I believe that something in the challenges section should be added concerning savants and people on the autism spectrum, because savants excel highly in only one thing, and people with autism spectrum disorder have a very uneven profile of abilities on an IQ test. Also, if one part of the brain gets damaged, then the other parts don't necessarily become defunct (neurological isolation).
superyuval10 (talk) 21:24, 24 August 2008 (UTC)
- You have the wrong article. I believe you need to address this on the talk page for the general article on Intelligence. General intelligence factor is a psychometric and statistical concept and does not directly pertain to the general issues in intelligence. Ward3001 (talk) 00:49, 25 August 2008 (UTC)
Title Controversy of "General intelligence factor"
Proper english states that each 'important' word in a title or phrase should be capitalized to represent it's value.
Hence, 'General intelligence factor' should be 'General Intelligence Factor".
Consider Revision.
Thanks.
74.184.100.154 (talk) 14:58, 21 March 2009 (UTC)
- There's no such thing as "proper English" for encyclopedia article titles. It's a matter of the particular writing style being used. Wikipedia style states: "The initial letter of a title is capitalized (except in very rare cases, such as eBay). Otherwise, capital letters are used only where implied by normal capitalization rules (Funding of UNESCO projects, not Funding of UNESCO Projects)".
- And by the way "proper" English capitalizes the word "English", not "english". Ward3001 (talk) 16:13, 21 March 2009 (UTC)
- Way to be helpful! Thanks jack@@@! :D
74.184.100.154 (talk) 22:10, 21 April 2009 (UTC)
I'm fairly certain that the current title is wrong anyway. The g factor is a measurement of "general mental ability" as formulated by Galton and as used by its most notable proponents such as Jensen. Numerous citations could be provided to support this, but really are entirely unnecessary. If it is unclear why it's "general mental ability" and not "general intelligence", you only need to read Jensen to find out. Do we have to go through some huge drama to get this changed? --Aryaman (talk) 18:46, 28 October 2009 (UTC)
Flynn effect?
"In addition, there is recent evidence that the tendency for intelligence scores to rise has ended in some first world countries."
There are three citations for this. Two of them are dead. The last links to an article that doesn't mention it. Does anyone have a cite? --Deleet (talk) 23:49, 31 May 2009 (UTC)
Intelligence Citations Bibliography for Articles Related to IQ Testing
I see that this article discussion page has been quiet for a while. You may find it helpful while reading or editing articles to look at a bibliography of Intelligence Citations, posted for the use of all Wikipedians who have occasion to edit articles on human intelligence and related issues. I happen to have circulating access to a huge academic research library at a university with an active research program in these issues (and to another library that is one of the ten largest public library systems in the United States) and have been researching these issues since 1989. You are welcome to use these citations for your own research. You can help other Wikipedians by suggesting new sources through comments on that page. It will be extremely helpful for articles on human intelligence to edit them according to the Wikipedia standards for reliable sources for medicine-related articles, as it is important to get these issues as well verified as possible. -- WeijiBaikeBianji (talk) 17:33, 30 June 2010 (UTC)
- There are still books directly related to the topic of this article that aren't mentioned in the article at all. You are all encouraged to check the source list and to suggest new sources for it as editing of this article continues. -- WeijiBaikeBianji (talk) 14:30, 24 August 2010 (UTC)
Good recent edits.
I particularly like the new edit to article text referring to "g" as a "statistic," which is a correct use of the word "statistic" and a good picture of what g is in current psychometrics. I see from the edit summaries on the recent edits that there is still a call for more reliable secondary sourcing of this article. I hope to further update the Intelligence Citations source list linked to from this talk page over the weekend, and I welcome other editors who are familiar with the literature to suggest new sources for the source list, which you, other editors, and I can use for further updates of article text. -- WeijiBaikeBianji (talk) 17:18, 14 October 2010 (UTC)
g undefined in article, better sourcing needed
There are many statistical terms used in this article without definition or reference. It appears that there are multiple models used to define g, yet none of them are discussed in the article. Terms like g-loading are vague and imprecise, and there appears to be no discussion of secondary or tertiary factors and how they relate to intelligence. Without a discussion of how factor analysis is applied to intelligence tests (like the WAIS), this article makes g sound like touchy-feely woo instead of what it really is: an application of a statistical method. aprock (talk) 17:41, 14 October 2010 (UTC)
Renaming the article?
General intelligence factor is not the most common synonym of g. A Google Scholar search suggests that General mental ability and General cognitive ability are much more common. I think we should use one of those names. Just General intelligence would be better as well.--Victor Chmara (talk) 14:19, 20 October 2010 (UTC)
- I agree, although I think the generally accepted term is g-factor, which would make a fine title, with the appropriate disambiguation. aprock (talk) 16:04, 20 October 2010 (UTC)
- Not sure from a Google Scholar search that the words "general intelligence" refer to g in all cases. The same with the other two suggestions. Also, "General intelligence" sounds like something the Central Intelligence Agency could be doing. "G-factor" would be fine except that it is an abbreviation and Wikipedia avoids that in titles. Having "factor" in the term is good since it indicates that it is not a proven fact but a statistical result. The 1996 APA report seems to prefer "general factor" or "general intelligence factor". I think the current title is OK. Miradre (talk) 04:49, 21 October 2010 (UTC)
- The name g-factor is already reserved for another topic. General mental ability, or GMA, seems to be a term that is increasingly used in scholarly articles. It has been used by prominent researchers such as Jensen[4], Deary[5] , Schmidt and Hunter[6], Rushton[7], Johnson and Bouchard[8], Roth et al.[9], Gottfredson [10], and many others. Some others, Robert Plomin for example, seem to prefer 'general cognitive ability'.
- The APA report talks about general intelligence, general intelligence factor, and general cognitive ability, so I don't think it can be used to support the use of any one term. Miradre, can you give me examples of academic sources where the term 'general mental/cognitive ability' refers to something other than g? My readings suggest that GMA is overwhelmingly and probably exclusively used as a synonym for g.
- There are rules for choosing article names, and "it supports my POV" ("not a proven fact but a statistical result") is not one of them. For example, Gardner's theory of multiple intelligences cannot be called a scientific theory (something which Gardner admits in his later writings, referring to it as a "philosophy"), but it is the term most commonly used, so that's what Wikipedia uses, too. We should prefer a term that is more commonly used to one that is less common, and 'general mental (or cognitive) ability' is more widespread than 'general intelligence factor'. Moreover, the latter is usually used to refer specifically to the results of factor analysis, whereas GMA is a more general term. As this article is clearly not just about factor analysis, but rather about the relation of a theoretical construct usually called g, GMA, GCA, or general intelligence, to biological, social, and other variables, general mental ability would be a good article name.--Victor Chmara (talk) 11:54, 21 October 2010 (UTC)
- Here is an article that uses GMA or general mental ability as a synonym for intelligence. It is, in this study, measured with g, IQ tests, grades, teachers' estimates, officers' ratings... See Appendix 1-2.[11] Miradre (talk) 12:51, 21 October 2010 (UTC)
- I further disagree. If you want to talk about intelligence broadly there are other articles for that. This is the article for g.Miradre (talk) 12:54, 21 October 2010 (UTC)
- Rushton indeed seems to use GMA inconsistently, because in that article its meaning is "intelligence in general", whereas it's a synonym for g in [12] and [13]. However, in most sources GMA and g mean the same thing. I have nowhere suggested that this article should be about intelligence more broadly.--Victor Chmara (talk) 15:00, 21 October 2010 (UTC)
As far as I know, there would be no problem naming the article something like ''g-factor'' (intelligence). aprock (talk) 14:33, 21 October 2010 (UTC)
- That would be preferable to the current situation. Were we to do that, I think the other article should be moved to [[g-factor (physics)]], while searching for "g-factor" would take you to a disambig page with links to both. BTW, is it possible to use italics in article names?--Victor Chmara (talk) 15:14, 21 October 2010 (UTC)
- Re: italics, the article Cars (film) does. Italics might be reserved for creative works though. aprock (talk) 17:01, 21 October 2010 (UTC)
I'm thinking of renaming this article to g factor (intelligence). I would think this would be an uncontroversial move, as g factor is a more common term than general intelligence factor so there would be no need to do this more formally. Or is someone opposed to this move?--Victor Chmara (talk) 14:00, 27 October 2010 (UTC)
- Fine with me. Or alternatively g factor (psychometrics).Miradre (talk) 14:55, 27 October 2010 (UTC)
- I think g-factor (psychometrics) makes more sense. aprock (talk) 01:01, 28 October 2010 (UTC)
- Thanks for the interesting discussion of various rationales from all the editors. I too like g-factor (psychometrics) as an article title. -- WeijiBaikeBianji (talk, how I edit) 01:56, 28 October 2010 (UTC)
Okay, as everybody seemed to agree, I renamed the article.--Victor Chmara (talk) 06:43, 28 October 2010 (UTC)
"Controversial"
editThe lead section says that g is "a controversial statistic". But is that really true, or rather is it not POV-pushing to make such a claim in the lead? To denote something as "controversial" suggests that it is a fringe view or at least not a mainstream one. However, my reading suggests that just about no one thinks that g does not exist. In fact, I don't see how they could think that it does not exist, when its existence is an empirical question, and innumerable studies have replicated it.
The lead talks about g as a statistic, i.e. something that results from statistical analysis. It is an empirical fact that if we give a battery of mental tests to a (reasonably big and cognitively diverse) sample of people and then plug their intercorrelations into a matrix, all correlations will be positive, and one can extract g from the matrix using various methods. To say that this is not possible is a fringe view, and I don't see why we should privilege it by claiming that g is "controversial". If g is controversial, then just about everything in psychology is, and we would have to change all sorts of articles accordingly. If the lead was about some specific aspects of what may be called "g theory" (e.g. that g is "mental energy" or "mental speed", as Spearman and Jensen, respectively, have suggested) we could say that it's controversial, but it isn't about such.
So, I suggest we remove the claim that g is controversial. There's a separate section for Challenges to g.--Victor Chmara (talk) 22:36, 27 October 2010 (UTC)
- There are two aspects here. The computation of g is not at all controversial. While it isn't spelled out clearly in the article, computing a g-factor for a given set of tests is rather straightforward (although there is a wide degree of variation and subjectivity in defining the various subtests). However, the interpretation of that value has been quite controversial. Some think it represents something fundamental, others note that it is a statistical construct. So when it comes to whether controversy should be in the lead, I don't think it's required in any strict sense, as long as it's clear that there is no consensus on what a g-factor represents with respect to intelligence. Computationally, the g-factor is analogous to the eigenface with the highest eigenvalue. aprock (talk) 01:00, 28 October 2010 (UTC)
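To make the computation aprock describes concrete, here is a minimal sketch in Python with NumPy: extracting the first principal component of a positive correlation matrix, the analogue of the top eigenface. The matrix values are made up for illustration, not real test data.

```python
import numpy as np

# Illustrative correlation matrix for four hypothetical mental tests
# (made-up values; all off-diagonal entries are positive).
R = np.array([
    [1.0, 0.6, 0.5, 0.4],
    [0.6, 1.0, 0.5, 0.4],
    [0.5, 0.5, 1.0, 0.3],
    [0.4, 0.4, 0.3, 1.0],
])

# The first principal component: the eigenvector of R with the
# largest eigenvalue (the analogue of the top eigenface).
eigenvalues, eigenvectors = np.linalg.eigh(R)  # eigenvalues ascending
first_pc = eigenvectors[:, -1]

# Fix the arbitrary sign of the eigenvector; with an all-positive
# correlation matrix, every test then loads in the same direction.
loadings = first_pc * np.sign(first_pc.sum())
print(loadings)  # four positive loadings, one per test
```

This shows only the uncontroversial linear-algebra step; everything in the thread above about what such a component *means* is untouched by it.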
- Yes, what Aprock said. The numerous sources I have read on the subject suggest no disagreement at all about the linear algebra involved in calculating a g factor, but whole books' worth of disagreement (some of which are still not cited in this article, alas) about the import of that calculated result. There are a lot of ways to reflect that controversy in article text, and I am not insistent on any one approach to fairly reflecting the sources, as long as we discuss that collegially. -- WeijiBaikeBianji (talk, how I edit) 02:00, 28 October 2010 (UTC)
- Well, Schonemann does criticize the g-factor as calculated by Arthur Jensen as being simply a principal component rather than the result of a true factor analysis ("Psychometrics of Intelligence" [14]). There are differences between these. Eric Kvaalen (talk) 13:24, 7 November 2010 (UTC)
- Schönemann's papers are always a hoot due to his Tourette-ish inability to refrain from heaping abuse on Arthur Jensen. In The g Factor Jensen advises against using PCA (even though, according to him, the specific method used in extracting g is not a big issue), so I'm not sure what Schönemann is talking about. Schönemann's repeated criticism that g as used by Jensen and others is invalid because it differs from Spearman's definition is a weird opinion. The inadequacy of Spearman's two-factor model was acknowledged a long time ago, even by Spearman himself, but this does not in any way undermine the fact of the positive manifold of correlations between all intelligence tests.--Victor Chmara (talk) 16:30, 7 November 2010 (UTC)
- I'll hear out the editors who are more familiar with linear algebra than I am (that may be most editors here) on this specific issue. There are other sources that appear to agree with the substance of Schönemann's criticism, perhaps in different language, and the pure mathematics of linear algebra has certainly developed since Spearman's era a century ago. -- WeijiBaikeBianji (talk, how I edit) 17:21, 7 November 2010 (UTC)
I removed 'controversial' from the first sentence of the article, but at the end of the lead, it still says that "significant controversy attends g and its alternatives", and there's a link from the word controversy to a discussion about multiple intelligences. I don't think Gardner's scheme is the most serious alternative to g, so this could be worded in some other way, with references to other criticisms of g.--Victor Chmara (talk) 07:13, 28 October 2010 (UTC)
- Here is a better criticism: [15]Miradre (talk) 11:29, 28 October 2010 (UTC)
- Another is here [16]. See the reply to Rindermann's study titled "Little g: Prospects and Constraints" on page 716. Lots of other interesting replies also. Miradre (talk) 11:35, 28 October 2010 (UTC)
- For more peer-reviewed criticisms of g see
- Guttman, L. (1992). The irrelevance of factor analysis for the study of group differences. Multivariate Behavioral Research, 27(2), 175-204.
- Dolan, C. V., Roorda, W., & Wicherts, J. M. (2004). Two failures of Spearman's hypothesis: The GATB in Holland and the JAT in South Africa. Intelligence, 32, 155-173.
- Dolan, C. V., & Hamaker, E. L. (2001). Investigating black-white differences in psychometric IQ: Multi-group confirmatory factor analyses of the WISC-R and K-ABC, and a critique of the method of correlated vectors. In F. Columbus (Ed.), Advances in Psychological Research, vol. VI (pp. 31-60). Huntington, NY: Nova Science Publishers.
See also Dolan and Lubke's critique of Schönemann (Viewing Spearman's hypothesis from the perspective of multigroup PCA: A comment on Schönemann's criticism. Intelligence 29 (2001): 231-245), where they still conclude that g is a "suboptimal test" of the b-w intelligence difference. Timjim7 (talk) 23:37, 31 July 2011 (UTC)
The first sentence of the article now reads: "The g factor, where g stands for general intelligence, is a statistic used in psychometrics to quantify the variation of intelligence test scores.[citation needed]" Who inserted the citation needed tag, and what is it that is suspect about the sentence?--Victor Chmara (talk) 23:08, 28 October 2010 (UTC)
- I don't know, but I removed it. I also got rid of the word "variation", because it's not a measure of variation (like a standard deviation) but of the intelligence itself (supposedly). Eric Kvaalen (talk) 13:24, 7 November 2010 (UTC)
- I can't remember if I added the tag, but if I did, it was probably with the issue that Eric mentions in mind. The definition can be further refined by subsequent edits now that several editors are active here. -- WeijiBaikeBianji (talk, how I edit) 16:13, 7 November 2010 (UTC)
- Reviewing the article history, I see the dated citation tag was added by a bot to an older citation tag in the abbreviated form (which I never use). I didn't trace far enough back to see who first added the tag, and with what edit summary (if any), but again I agree with Eric on what the most salient issue about that sentence was before his rewrite. I'll check the sources I have at hand to see what they say about definitions. -- WeijiBaikeBianji (talk, how I edit) 16:24, 7 November 2010 (UTC)
- I added the citation needed tag because there is currently no properly sourced definition of the concept in the article. The current definition is not the correct definition, and should probably be changed to the mathematical definition, with a separate paragraph for interpretation. aprock (talk) 18:55, 7 November 2010 (UTC)
g versus broader measures of intelligence
editHi - just to avoid accusation of drive-by tagging (I got called away after adding the tag), I think we need the statement "IQ tests that measure a wide range of abilities do not predict much better than g" to be sourced because it's clearly a report of research findings. I've had a quick look on google scholar ("g factor" "broader measures"), but not come up with much either way. Could someone more familiar with the literature turn something up? VsevolodKrolikov (talk) 06:52, 1 December 2010 (UTC)
- Thanks to whomever added the ref. I've found another one that describes the importance of general knowledge and crystallised g as opposed to biological g as age increases. As I'm not up to speed in this topic area and fear making a mistake in summarising it, could someone take a look? It summarises a fair bit of research as well as reporting on its own findings. Here it is cite formatted: "Ability and personality correlates of general knowledge" (PDF). Personality and Individual Differences. 41: 419–429. 2006.
VsevolodKrolikov (talk) 03:55, 9 December 2010 (UTC)
POV in denigrating S. J. Gould's view on the topic
edit"Stephen Jay Gould, who was not a psychologist, statistician, or psychometrician, voiced his objections..."
Why does it matter what Gould was not? He was not an infinite number of things, including an astronaut, soybean breeder, and a spotted pig. This statement may constitute a subtle case of POV, as it implies that Gould was not in a position to comment on the topic or that his opinion was less informed. Not so. Without commenting on the content of Gould's argument at all, I maintain that he is a legitimate contributor to the discussion as a renowned biologist. You may not like what he says, but denigrating him is POV and thus should be erased.
TippTopp (talk) 01:49, 4 March 2011 (UTC)
- Gould's criticisms of g were silly, but I agree that there's no need to say what he was not (unless there is a source directly arguing that).--Victor Chmara (talk) 09:46, 4 March 2011 (UTC)
Summarizing Heritability of IQ in "Biological and genetic correlates of g"
editThe section "Biological and genetic correlates of g" is a mess. It relies heavily on a primary sources, unverifiable sources, and synthesis. I suggest that the section be rewritten to be a straightforward summary of the main article linked. aprock (talk) 19:44, 13 September 2011 (UTC)
Spearman and partial correlations
editIt seems that Spearman indeed used partial correlations in his 1904 paper. However, he used the tetrad method in his later work, and considered the method central to his research on g. Most accounts of his work specifically discuss the tetrad method, which makes it misleading to state in the lead section that he used partial correlations. However, I don't think there's any reason to mention the particular method in the lead section (the topic is not even discussed elsewhere in the article, although it should).--Victor Chmara (talk) 15:12, 26 September 2011 (UTC)
- Thanks for checking up on that. I agree that something more concrete about the various models should be in the article. Unfortunately, there is a lot of conflation of g as general intelligence and g as a model parameter in a lot of secondary and tertiary sources. It might be useful to have separate sections dealing with those separate uses of the term. aprock (talk) 16:37, 26 September 2011 (UTC)
Positive manifold
editWhile discussion of the mathematical models used to compute g are good to have in the article, they need to either be properly defined, or expressed in lay terms. Repeatedly using "positive manifold" appears to be shorthand for positive correlation matrices, which is probably better shortened to positive correlation, as it is clearer and more correct. aprock (talk) 02:20, 23 February 2012 (UTC)
- Positive manifold is briefly defined in the section "Mental testing and g" (first paragraph). It's a term that is used constantly in the literature, and it's much shorter than "universally positive correlation matrices/correlations" or any other expression. "Positive correlation" is not exact enough. It's much easier to discuss various theories of g when you can use the short and unambiguous term positive manifold. If needed, the term can be defined more clearly in the article. A pic of a positive correlation matrix would be useful.--Victor Chmara (talk) 12:10, 23 February 2012 (UTC)
- If used, the term should be more clearly defined. While it's shorter than "universally positive correlation matrices/correlations", it's also less clear. Introducing new jargon that looks to be unique to intelligence research without it being well defined is confusing. As best as I can tell, a positive manifold is a Riemannian manifold, but none of the sources I have define it mathematically. Linking to Estimation of covariance matrices#Intrinsic covariance matrix estimation may or may not make sense depending on what the actual definition is. aprock (talk) 16:04, 23 February 2012 (UTC)
- The positive manifold is not a very difficult concept, so I don't think it's problematic here. I added a correlation matrix from Spearman's original study to the section "Mental testing and g".--Victor Chmara (talk) 13:34, 25 February 2012 (UTC)
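As a footnote to the definitional question discussed above: the positive-manifold property is simply the condition that every off-diagonal entry of the correlation matrix is positive. A minimal sketch (the matrix values are illustrative, in the spirit of Spearman's school-subject correlations, not his actual figures):

```python
import numpy as np

def has_positive_manifold(R):
    """True if every off-diagonal correlation in R is positive."""
    R = np.asarray(R, dtype=float)
    off_diagonal = R[~np.eye(R.shape[0], dtype=bool)]
    return bool((off_diagonal > 0).all())

# Illustrative three-test correlation matrix (made-up values).
R = [[1.00, 0.83, 0.78],
     [0.83, 1.00, 0.70],
     [0.78, 0.70, 1.00]]
print(has_positive_manifold(R))  # True
```

No Riemannian geometry is involved; the term is psychometric jargon for this all-positive pattern.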
Defining Factor analysis
editThis undo: [17] with the edit summary "no need to define factor analysis in general terms in this article; tautology (variability, variable); and factors aren't necessarily uncorrelated" is quite confusing. Why would we not define factor analysis? Likewise, the statement that factor analysis doesn't yield uncorrelated factors is incorrect. aprock (talk) 19:07, 14 March 2012 (UTC)
- This article is about the g factor. There's a separate article about factor analysis. In this article we only need to describe what the purpose of factor analysis is when applied to test data. In general, Wikipedia articles about scientific subjects do not contain detailed descriptions of the statistical methods used in that field. The paragraph on factor analysis is based on what Mackintosh says about its use in IQ research.
- It is not true that factors are always uncorrelated. Factors do not have to be orthogonal in oblique factor analysis, and hierarchical factor analysis is by definition based on correlated factors.--Victor Chmara (talk) 19:30, 14 March 2012 (UTC)
- You're getting confused here. No one suggested a detailed description of the statistical method. Just defining what the term is. Likewise, your understanding of hierarchical factor analysis is incorrect. Each set of branches from a given node in the hierarchy is computed to be independent of each other. aprock (talk) 19:33, 14 March 2012 (UTC)
- Factor analysis is defined quite clearly in the article. Your suggested modification just defined it in general terms without reference to IQ research. The definition itself was the same (explaining a bunch of variables in terms of a smaller number of variables). I see no reason to define it in more general terms.
- Here's[18] a short description of the use of correlated factors. In hierarchical FA you factor analyze correlated first-order factors to get uncorrelated second- (or even higher) order factors.--Victor Chmara (talk) 19:56, 14 March 2012 (UTC)
- Factor analysis exists independent of the g-factor, and should be defined clearly and correctly. What is in the article now is incorrect and imprecise. From the link you cite: "In this strategy, STATISTICA first identifies clusters of items and rotates axes through those clusters; next the correlations between those (oblique) factors are computed, and that correlation matrix of oblique factors is further factor-analyzed to yield a set of orthogonal factors that divide the variability in the items", emphasis added. Said for the layman, clustering is not factor analysis. HFA is just using factor analysis within a hierarchy. The factor analysis process yields orthogonal factors. aprock (talk) 20:25, 14 March 2012 (UTC)
- This is getting silly. What is incorrect and imprecise in the current formulation, and based on which sources? In hierarchical FA, you first factor-analyze, say, IQ test data, allowing for non-orthogonality. Then you factor-analyze the correlation matrix of those factors you got, which yields one or more second-order factors. Nothing is "said for the layman" in the STATISTICA description; it describes factor analysis performed on two hierarchical levels. I'll quote from the statistician David Bartholomew, who is an expert on factor analysis:
- An apparent resolution of the conflict between Spearman and Thurstone was provided by what came to be called hierarchical factor analysis. This was done by treating the factors in exactly the same way as the original indicators. After all, if one can interpret the positive correlation between indicators as evidence for a common underlying factor, then one can treat a set of correlated factors in precisely the same way. Thus, if human performance on a varied set of test items could be described by a cluster of correlated abilities, then there is no reason why one should not analyse the correlations between those abilities in the same manner as the original variables. If one arrived at, say, five such abilities from the primary factor analysis which were positively correlated among themselves, that fact could be regarded as indicative of some common source on which they all depended; that is, a more fundamental underlying variable at a deeper level. That, indeed, turned out to be the case and so Spearman’s g duly re-emerged as the ultimate explanation of the original correlations. It was thus possible for the multiple-factor model of Thurstone and the two-factor model of Spearman to co-exist.
- "In hierarchical FA, you first factor-analyze, say, IQ test data, allowing for non-orthogonality." No, you don't. You're confusing two different things. There is an analysis of factors, which may be arrived at through various heuristic methods (like clustering, expert opinion, intuition etc), and there is factor analysis which yields orthogonal factors. Please reread your quote by Bartholomew. In it he does not say that the hierarchical factor analysis uses factor analysis to determine the initial set of factors. You need to take care when discussing these things because you are not only confusing yourself, but the reader as well. aprock (talk) 22:24, 14 March 2012 (UTC)
Let me clarify here. There are a few distinct topics here. There is factor analysis as a "family of mathematical techniques used to describe the variability of correlated variables in terms of a smaller number of uncorrelated variables". There is the factor analysis that Spearman did, which is a specific algorithm. And there is analysis of factors (also called factor analysis), where we talk about the general problem of coming up with models of factors which explain outcomes. They are all related, but distinct, meanings of the phrase factor analysis. The problem is that the text of the article treats all these different concepts as the same thing. This is what I mean when I say that the presentation is incorrect and imprecise. Specifically, it muddles these three different aspects into a single concept without being clear about what is going on. I suspect that this is in no small part due to trying to treat mathematical topics colloquially. aprock (talk) 22:54, 14 March 2012 (UTC)
- I have stated my views above, with sources. If you disagree, please present reliable sources that support your views.--Victor Chmara (talk) 23:16, 14 March 2012 (UTC)
- Agree with Victor Chmara regarding hierarchical FA using the same method. "This was done by treating the factors in exactly the same way as the original indicators. After all, if one can interpret the positive correlation between indicators as evidence for a common underlying factor, then one can treat a set of correlated factors in precisely the same way... ...If one arrived at, say, five such abilities from the primary factor analysis...Acadēmica Orientālis (talk) 02:25, 15 March 2012 (UTC)
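For concreteness, the second-order step Bartholomew describes (treating the correlations among first-order factors exactly like test correlations) can be sketched as follows. This is a simplified illustration: the factor labels and correlation values are hypothetical, and the first principal component is used as a stand-in for a full factor extraction.

```python
import numpy as np

# Hypothetical correlations among three oblique first-order factors
# (labels and values are illustrative only).
Phi = np.array([
    [1.00, 0.55, 0.45],   # "verbal"
    [0.55, 1.00, 0.40],   # "spatial"
    [0.45, 0.40, 1.00],   # "memory"
])

# Second-order step: factor the correlation matrix of the first-order
# factors just as one would factor a matrix of test correlations.
vals, vecs = np.linalg.eigh(Phi)           # eigenvalues ascending
g_vector = vecs[:, -1]                     # dominant eigenvector
g_loadings = g_vector * np.sqrt(vals[-1])  # scale to loading metric
g_loadings *= np.sign(g_loadings.sum())    # fix arbitrary sign
print(g_loadings)  # each first-order factor loads positively on g
```

Because Phi has all-positive entries, the dominant eigenvector has same-sign elements, so a single general factor at the second level emerges, which is the re-emergence of g that Bartholomew's passage describes.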
general intelligence vs. G
editThe first sentence of the first paragraph clearly discusses the original conception from the 1904 paper of what the g-factor was hoped to measure. From the introduction of the paper:
Our particular topic will be that cardinal function which we can provisionally term "General Intelligence;" first, there will be an inquiry into its exact relation to the Sensory Discrimination of which we hear so much in laboratory work; and then -- by the aid of information thus coming to light -- it is hoped to determine this Intelligence in a definite objective manner, and to discover means of precisely measuring it. Should this ambitious programme be achieved even in small degree, Experimental Psychology would thereby appear to be supplied with the missing link in its theoretical justification, and at the same time to have produced a practical fruit of almost illimitable promise.
That factor analysis was used, and that one of the factors came to be known as G does not change the fact that Spearman was investigating his notion of "general intelligence". aprock (talk) 20:22, 14 March 2012 (UTC)
- While rewriting this article in recent weeks, I have been very careful to distinguish between the statistical facts about g and the psychological and physical interpretations that have been given to these statistical findings. You are now muddying up this distinction in the lead section with your edits that are based on your interpretations of primary sources. Based on secondary sources, I maintain that it is important not to confuse the statistical facts about g with any causal interpretations that have been given to them. For example, Deary 2001, p. 12 writes:
- The first person to describe the general factor in human intelligence was an English army major turned psychologist, Charles Spearman, in a famous research paper in 1904. He examined schoolchildren's scores on different academic subjects. The scores were all positively correlated and he put this down to a general mental ability.
- Deary clearly distinguishes between the fact that the scores were positively correlated (=the general factor) from the particular interpretation that Spearman gave to it (=general mental ability). Similarly, in my version of the lead, g as a statistical finding is distinguished from Spearman's interpretation of g as a general mental ability. I wrote as follows:
- The existence of the g factor was originally proposed ["was discovered by" could be a better phrasing] by the English psychologist Charles Spearman in the early years of the 20th century. He observed that schoolchildren's grades across seemingly unrelated subjects were positively correlated, and reasoned that these correlations reflected the influence of an underlying general mental ability that entered into performance on all kinds of mental tests.
- In the current version of the lead it is claimed that Spearman's great original discovery was general intelligence, i.e. the psychological theory that all mental performance reflects a single underlying ability. However, Galton and Spencer had suggested that there was a general intelligence long before Spearman, so Spearman was not the first to propose it. The lead should make clear the difference between Spearman's important and novel empirical results based on his innovative methods on the one hand, and his interpretations thereof on the other hand.--Victor Chmara (talk) 23:00, 14 March 2012 (UTC)
- Support Victor Chmara. Obviously secondary sources are preferable to a primary source from 1904. Acadēmica Orientālis (talk) 01:09, 15 March 2012 (UTC)
evidence against law of diminishing returns
editI believe that the ongoing study called mathematically precocious youth suggests that the law of diminishing returns is not as likely to be true. http://en.wikipedia.org/wiki/Study_of_Mathematically_Precocious_Youth ONoNotThisGuyAgain (talk) 10:59, 16 June 2012 (UTC)
- The results of the SMPY are consistent with the law of diminishing returns. The study is discussed in the Creativity section of this article, where it is said that according to the SMPY "the level of g acts as a predictor of the level of achievement, while specific cognitive ability patterns predict the realm of achievement". Spearman's "law" predicts that at high levels of g specific cognitive abilities become more salient, which is exactly what the results of the SMPY suggest.--Victor Chmara (talk) 20:59, 13 June 2012 (UTC)
- Yes, after rereading the law of diminishing returns I realize that isn't what I was talking about. I need to be more careful. What I meant was that some people have suggested that there is not much difference in the predictive power of IQ for school and work achievement between, say, the top 5%, top 1%, and top 0.5%. A different law of diminishing returns, I guess. Anyway, there is this article which negates this assertion. I will quote some relevant text. http://www.vanderbilt.edu/Peabody/SMPY/WaiJEP2005.pdf
- Without question, early SAT assessments measure much more than book learning potential and predictive validity for first-year college grades. They differentiate important systematic sources of individuality that subsequently factor into individual differences in occupational performance and creative expression. With respect to the importance of assessing ability differences within the top 1%, it has long been assumed by some that beyond a certain point an ability threshold is reached, and more ability does not matter. For example, in Howe's (2001) recent book, IQ in Question: The Truth About Intelligence, the concluding chapter is titled "Twelve Well-Known Facts About Intelligence Which Are Not True." Howe's 11th point supposes, "At the highest levels of creative achievement, having an exceptionally high IQ makes little or no difference. Other factors, including being strongly committed and highly motivated, are much more important" (p. 163). Similarly, in a recent letter published in Science (Muller et al., 2005, p. 1043), 79 authors stated, "There is little evidence that those scoring at the very top of the range in standardized tests are likely to have more successful careers in the sciences. Too many other factors are involved." Other factors are indeed important, and we agree that being strongly committed and highly motivated is critical for high achievement (Lubinski, 2004; Lubinski & Benbow, 2000; Lubinski, Benbow, Shea, Eftekhari-Sanjani, & Halvorson, 2001). Yet, the data reported here on secured doctorates, math–science PhDs, income, patents, and tenure track positions at top U.S. universities collectively falsify the idea that after a certain point more ability does not matter.
Indeed, our criterion variables constitute only a subset of the important markers of achievement and creativity (moreover, each requires an appreciable commitment, and their normative base rates are small); nevertheless, despite these constraints, across all four comparisons, the top versus bottom quartiles of the top 1% revealed statistically significant effect sizes favoring the top quartile. When sample sizes are sufficient to establish statistical confidence and criteria with high ceilings are employed, measures that validly assess individual differences within the top 1% of ability reveal important outcome differences between the able and the exceptionally able (even on outcomes that are exceedingly rare). A recent 20-year longitudinal study of 380 profoundly gifted participants (Lubinski et al., in press), the top 1 in 10,000 on quantitative or verbal reasoning (viz., SAT–M ≥ 700 or SAT–V ≥ 630, before age 13), reinforces this idea. These participants, by their mid-30s, secured tenure track positions at top U.S. universities at the same rate as a comparison group of 586 1st- and 2nd-year graduate students attending top-15 math–science training programs and tracked for 10 years.
- I don't have the time to inject this where relevant, but I thought I would point it out. ONoNotThisGuyAgain (talk) 10:59, 16 June 2012 (UTC)
- The idea that more IQ does not matter above some threshold is usually called the threshold hypothesis. All the evidence suggests it is false. Here's another study that disconfirms it: [19]. The IQ article might be a better place to discuss it than this article.--Victor Chmara (talk) 11:42, 17 June 2012 (UTC)
- Oh, so it is called the threshold hypothesis? That is good to know, thank you. I will probably mosey over to IQ soon enough and I will mention it then. Thanks. ONoNotThisGuyAgain (talk) 07:57, 19 June 2012 (UTC)
Improve citations
editI decided to make an account since I may make some minor contributions. I made the topic about the law of diminishing returns as well. I think a good direction for this article would be to expand the citations to include more than just authors name and year of publication. This should make it much easier for users who would like to reference these articles. There are a few I am interested in and will adjust it accordingly. ONoNotThisGuyAgain (talk) 10:07, 16 June 2012 (UTC)
- I have improved two citations from "indifference of the indicator": the one citing Jensen and the one citing Wendy Johnson. For Jensen, the only thing he seemed to publish in 1998 was a book, so I assume that is right, though I haven't read it. The odd thing is that I would expect that to be citation 41, not 42. 41, which supposedly quotes Jensen, shows up as Mackintosh, and then 42 was Jensen. Well, I haven't moved those citations around because I don't know what Mackintosh is referring to. For citations 47 and 48, both refer to Johnson et al., which I assume means Wendy Johnson from Edinburgh. She doesn't have a paper listed on her page from 2008. Are these two different Johnsons? I have emailed her about this and will update once I know. ONoNotThisGuyAgain (talk) 10:56, 16 June 2012 (UTC)
The way the citations work in this article is that there are author name(s), year, and (for larger works) page numbers in the Notes section, while the full source information is available in the References section. That's a standard way to reference sources.--Victor Chmara (talk) 11:23, 17 June 2012 (UTC)
- Sorry about that. I didn't realize you split it up; I should have scrolled down a little bit. At least it was only a minor thing anyway. I am somewhat new to Wikipedia, so I will try to be more careful next time. Wendy got back to me, and the second paper is from 2007, so I went ahead and fixed that in the References section. ONoNotThisGuyAgain (talk) 08:00, 19 June 2012 (UTC)
- I am reading through the article and it asks to please cite the second Johnson paper for 2007, but I just noticed it also says "article in press." She may have sent me an old version in this case. I specifically asked her what the proper date was and she just emailed me this in response. My guess is that it is from 2007 since what she sent is from June of that year. Surely publication wouldn't be delayed until 2008? If this paper she sent me is just old, then just revert the last two edits. ONoNotThisGuyAgain (talk) 08:19, 19 June 2012 (UTC)
- The paper was published in the first issue of the 2008 volume of Intelligence. See here[20].--Victor Chmara (talk) 13:06, 19 June 2012 (UTC)
The 'single most biased article' on all of Wiki
The editors show nothing in regards to the evidence of the poor correlation between IQ and real-world outcomes. The article takes the stance that IQ is the "single biggest predictor ...", but blatantly fails to acknowledge that the vague '.55' correlation between IQ and job performance leaves a large percentage of the variation in job performance unexplained. Meaning that while IQ may be the 'single best predictor', it is not, by any means, a good predictor of real-world performance. Of course, there are sources to cite which support this view, but none of the editors have taken the effort to include any reasonable objections against IQ. — Preceding unsigned comment added by 209.16.113.3 (talk) 16:54, 2 September 2013 (UTC)
- What sources do you recommend for improving the article? Do you know of others besides those in the source list linked to from many of the article talk pages of articles on related topics? I'm always glad to hear about new reliable sources. -- WeijiBaikeBianji (talk, how I edit) 01:23, 3 September 2013 (UTC)
General intelligence is the best single predictor of job performance. This is simply a fact reported in many reliable sources. No one claims that it explains all the variation in job performance. 0.55 is a numerical value, so I don't see how it can be characterized as vague if you understand what a correlation is. 0.55 is a medium-to-large effect size according to various guidelines, so it's not correct to say that g is not a good predictor.--Victor Chmara (talk) 17:02, 3 September 2013 (UTC)
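For readers weighing these figures, the two positions above are numerically compatible: a validity coefficient converts to shared variance by squaring, so r = 0.55 corresponds to roughly 30% of variance explained. A minimal sketch, assuming nothing beyond the 0.55 figure quoted in this thread:

```python
# Convert a validity correlation into the share of variance explained.
# 0.55 is the figure quoted in this discussion; squaring it is the
# standard r-squared relationship.
r = 0.55
explained = r ** 2           # ~0.30
unexplained = 1 - explained  # ~0.70

print(f"explained: {explained:.0%}, unexplained: {unexplained:.0%}")
```

So g can simultaneously be "the single best predictor" of job performance and still leave most of the variation unexplained; the two claims are not in conflict.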
Journal of Intelligence — Open Access Journal
Journal of Intelligence — Open Access Journal is a new, open-access, "peer-reviewed scientific journal that publishes original empirical and theoretical articles, state-of-the-art articles and critical reviews, case studies, original short notes, commentaries" intended to be "an open access journal that moves forward the study of human intelligence: the basis and development of intelligence, its nature in terms of structure and processes, and its correlates and consequences, also including the measurement and modeling of intelligence." The content of the first issue is posted, and includes interesting review articles, one by Earl Hunt and Susanne M. Jaeggi and one by Wendy Johnson. The editorial board[21] of this new journal should be able to draw in a steady stream of good article submissions. It looks like the journal aims to continue to publish review articles of the kind that would meet Wikipedia guidelines for articles on medical topics, an appropriate source guideline to apply to Wikipedia articles about intelligence. -- WeijiBaikeBianji (talk, how I edit) 21:11, 5 December 2013 (UTC)
- The Journal of Intelligence — Open Access Journal website has just been updated with the new articles for the latest edition of the journal, by eminent scholars on human intelligence. -- WeijiBaikeBianji (talk, how I edit) 21:35, 16 February 2014 (UTC)
Comment
This page has big problems. Proponents of the g theory have clearly locked down its edits. Terminology is strongly skewed towards supporting the validity of g, and criticisms are muted. For instance, other theory pages (such as Multiple intelligences) have a "criticism" section; here it is "challenges". Proponents are more likely to get definitive wording for their accomplishments, i.e., Jensen "proved", while critics get less definitive wording, i.e., Sternberg "argued". I've noticed this trend on most of the human intelligence wiki articles. I highly recommend real editors come in and clean up the formatting and language to make it less patently biased.--162.226.6.148
- The coverage of the various viewpoints in this article reflects their prominence in reliable sources. The hierarchical g theory is the mainstream, consensus view of intelligence supported by leading researchers and used in test development. In contrast, non-g theories are mutually contradictory critiques of the mainstream theory, each supported by only a few people. Gardner's theory, for example, has no supporters among empirically oriented researchers and there are no tests of his multiple intelligences.
- As to your claim that there's a WP:SAY-related bias in the article, here are most of the instances where Jensen and like-minded researchers are mentioned in the article text along with the substitutes for 'say' used:
- Jensen maintained, according to Jensen, Jensen argued, Jensen pointed out, Wechsler contended, Jensen hypothesized, Jensen suggested, Jensen argued, according to Jensen's view, researchers have argued, critics have argued, researchers have suggested and argued, analyses have shown, has been much criticized by researchers, researchers have argued, critics have pointed out, researchers have criticized, researchers have rejected, researchers maintain, critics have suggested.
- And here's how the arguments of g critics are referenced in the article:
- Thorndike and Thomson proposed, researchers have argued, mutualism model proposes, Thurstone argued, Horn argued, Cattell rejected, Cattell and Horn maintained and argued, in Cattell's thinking, Guilford proposed, Guilford claimed, Guilford presented as evidence, Gardner posits, according to Gardner, Gardner contends, Gardner has argued, Sternberg and colleagues have claimed, Sternberg claims, Gould's critique is presented, Gould argued, according to Gould, Gould criticized, Gould argued.
- There is no great difference between how the views of the two camps are described, even though the "g men" represent the mainstream view, while the critics are a heterogeneous bunch supporting a wide range of mutually contradictory views, each of which has generally been espoused by only a few people, none of whom are alive in some cases. The fact that Brody is described as 'showing' rather than 'arguing' a point is not controversial because Sternberg admitted that point, although he gave it a different interpretation than Brody.--Victor Chmara (talk) 09:51, 4 August 2014 (UTC)
NPOV tag
It's been three years since Fractionating Human Intelligence [22][23] and this page still reads like a wiki hoax.--TDJankins (talk) 21:59, 26 June 2015 (UTC)
- The conclusions of the fractionating intelligence paper were rejected by experts in several published commentaries in Intelligence and Personality and Individual Differences. There are plausible ways to challenge the g model, but that paper isn't one of them.--Victor Chmara (talk) 12:40, 27 June 2015 (UTC)
- Do you have any credible sources to support your opinion?--TDJankins (talk) 19:10, 27 June 2015 (UTC)
- See [24] and [25].--Victor Chmara (talk) 21:21, 27 June 2015 (UTC)
Those odd commentaries were quickly refuted and Ashton and colleagues relented in most of their claims. See [26] and [27]. Further, while those people may disagree with or not like the results of FHI, without evidence, it's just opinion. FHI physically disproved the theory that the g-factor represents general intelligence. Further, FHI proved that human intelligence can be reduced to no fewer than three factors (reasoning, short term memory, and verbal agility). Conversely, there isn't a single study that proved the theory that the g-factor represents general intelligence. FHI represents the current science, so should this page.--TDJankins (talk) 22:30, 28 June 2015 (UTC)
- Ashton et al. never relented on their arguments[28]. Wikipedia articles should primarily be based on reliable secondary sources and represent all significant viewpoints in proportion to their prominence in such sources. The omission of one N=16 primary source whose conclusions have been rejected by experts certainly does not warrant an NPOV tag.
- And, while it's irrelevant from the perspective of WP policy, Hampshire et al. certainly did not "physically disprove" anything. They forced an orthogonal three-factor solution on their neuroimaging data. Orthogonality is an assumption of their model, not something that they test against other models. Their neurodata weren't available for reanalysis, but Ashton et al. showed that the three-factor model fits poorly to the IQ data of Hampshire et al., while a higher-order g model shows good fit to those data. Moreover, the assumption that intraindividual differences should have the same factor structure as interindividual differences is obviously untenable.--Victor Chmara (talk) 18:15, 29 June 2015 (UTC)
- Victor here is right, of course, with his analysis of a very disjointed and badly put together "paper" which is little more than an attack on the field of psychometrics. One thing to point out here, is that Victor's argument against FHI is actually relevant for inclusion in the article, I think. If we are to include FHI and the responses by other academics, these criticisms of FHI which knock it down completely are definitely relevant and must be included to improve the article.
- Victor, do you have any sources to start us off on that front? Wajajad (talk) 12:37, 30 June 2015 (UTC)
- I don't think the fractionating intelligence paper should be discussed in this article. It's one small, deeply flawed study and the debate around it was just a flash in the pan within the wider IQ/g literature.--Victor Chmara (talk) 16:13, 1 July 2015 (UTC)
You're free to believe whatever you want to. Meanwhile, the rest of the world is proceeding with FHI. See [29]. Nobody cares about the whining of those who Neuron didn't think were experts and whose research has been deemed irrelevant.--TDJankins (talk) 00:35, 2 July 2015 (UTC)
- You seem to be forgetting the Wikipedia content guideline about preferring secondary sources to primary sources. -- WeijiBaikeBianji (talk, how I edit) 03:28, 2 July 2015 (UTC)
- Ashton may have had an argument if FHI were based solely upon the IQ data, but it's not, so he doesn't. FHI was based upon the COMBINING of IQ data with neuroimaging data. One can only surmise that Ashton's commentary confused FHI with some other paper he read and that Personality and Individual Differences only published it because FHI physically proves that half of the things they and their individual editors published in the past are pseudoscience (same goes for Intelligence). I guess that's what happens when you pretend a far-fetched theory is a fact. As such, neither Personality and Individual Differences nor Intelligence are independent sources.--TDJankins (talk) 01:28, 3 July 2015 (UTC)
What is the difference between correlation and explained variance between individuals?
The article's opening states: "The g factor typically accounts for 40 to 50 percent of the between-individual variance in IQ test performance, and IQ scores are frequently regarded as estimates of individuals' standing on the g factor". Explained variance is usually used synonymously with R^2. However, later in the article, correlations seem to be much higher, such as "the correlations between g factor scores and full-scale IQ scores from David Wechsler's tests have been found to be greater than .95." and "Raven's Progressive Matrices is among the tests with the highest g loadings, around .80."
How are these numbers consistent with one another?
- If you factor-analyze an IQ test battery, the subtests will have varying g loadings. The average loading may be around 0.6 or 0.7. This means that g's contribution to the total variance of the IQ battery is the square of that average loading, which is usually between 40 and 50 percent. In contrast, full-scale IQ scores are (weighted) averages of all subtest scores in a test battery. Such composite scores will mostly reflect what is common to all the subtests, that is, g. The g loadings of composite scores are higher than those of individual subtests, typically above 0.9, because they have been mostly "purged" of those sources of variance that are not common to all tests.
- Individual differences in full-scale/composite IQ scores are mostly on the g factor. But individual differences on a randomly chosen subtest may be mostly due to other sources of variance, depending on the subtest's g loading.
- I'll try to make this distinction clearer in the article.--Victor Chmara (talk) 07:52, 24 August 2015 (UTC)
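The arithmetic in the explanation above can be checked with a quick simulation. The sketch below is illustrative only: the loading of 0.65 and the ten-subtest battery are assumed values in the typical ranges mentioned in this thread, not figures from any particular test.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000     # simulated test-takers
k = 10          # assumed number of subtests in the battery
loading = 0.65  # assumed g loading, in the typical 0.6-0.7 range

# Latent g and k subtests; each subtest mixes g with unique variance
# so that its total variance is 1.
g = rng.standard_normal(n)
unique = rng.standard_normal((k, n))
subtests = loading * g + np.sqrt(1 - loading**2) * unique

# A single subtest correlates roughly 0.65 with g, so g explains only
# about 0.65**2 = 42% of its variance (the "40 to 50 percent" figure).
r_subtest = np.corrcoef(subtests[0], g)[0, 1]

# The composite (sum of all subtests) correlates far more strongly
# with g, because the unique variances average out in the sum.
composite = subtests.sum(axis=0)
r_composite = np.corrcoef(composite, g)[0, 1]

print(f"subtest r = {r_subtest:.2f}, composite r = {r_composite:.2f}")
```

A single subtest's correlation with the latent factor stays near its loading, while the composite's correlation rises above 0.9, matching the pattern of full-scale IQ scores being mostly "purged" of non-g variance.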
Change to the lead
@Victor Chmara You reverted my edits to the article lead [30]. I have not reverted, and want to address your concerns in a civil way and resolve the issue here on the talk page. I completely disagree with your revert. Could you please explain further why you did it? Thank you.Charlotte135 (talk) 21:24, 30 January 2016 (UTC)
- Looking at this again, and your brief comments, I should say I partly disagree as I can see *some* logic here. I made the edit to the lead and was going to revise the other redundant section I deleted by placing it in another section of the article. Let's talk about it.Charlotte135 (talk) 21:29, 30 January 2016 (UTC)
Like I said in my edit summary, the term g factor refers to a population-level variable and it is typically represented in terms of factor loadings. You can't say that "Peter's g factor is 120." In contrast, terms like IQ, intelligence, etc. usually refer to test scores of individuals ("Peter's IQ is 120"), not to a population-level variable. Furthermore, 'g factor' can be regarded as a theoretically neutral term (=the largest factor in a factor analysis of cognitive data, regardless of what sort of interpretation one gives to it), whereas terms like general mental ability are clearly wedded to a particular theoretical position, namely the Spearman-Jensen one.--Victor Chmara (talk) 23:45, 31 January 2016 (UTC)
- I think we are on the same page with a lot of this. The problem I see immediately with this article is that general intelligence redirects to this g factor article and general intelligence is discussed throughout. As you would be aware the concept of general intelligence is controversial and no consensus exists, as to what it is, and how it can be defined. The objective fact remains that the reliable sources do refer to g factor as general intelligence (and the other terms I introduced in the lead) and we need to reflect this from the get go, IMO at least. I'd like to approach this discussion here in small chunks if that's ok. What do you think?Charlotte135 (talk) 00:35, 1 February 2016 (UTC)
- Will restore the change to the lead I made and restore the other paragraph which is what I think your concern was.Charlotte135 (talk) 19:48, 1 February 2016 (UTC)
External links modified
Hello fellow Wikipedians,
I have just modified one external link on G factor (psychometrics). Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
- Added archive https://web.archive.org/web/20140407100036/http://www.psych.umn.edu/faculty/waller/classes/FA2010/Readings/Spearman1904.pdf to http://www.psych.umn.edu/faculty/waller/classes/FA2010/Readings/Spearman1904.pdf
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}}
(last update: 5 June 2024).
- If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
- If you found an error with any archives or the URLs themselves, you can fix them with this tool.
Cheers.—InternetArchiveBot (Report bug) 19:53, 9 October 2017 (UTC)
Using preprints to make claims
Can we do that in this article? Walidou47 (talk) 14:43, 29 December 2019 (UTC)
Should the subsection "Social exchange and sexual selection" be cut / renamed?
Apart from the first sentence, this subsection doesn't even mention g factor. And neither that sentence nor the paragraph it belongs to refers to the ostensible subject of the subsection: "Social exchange and sexual selection". Seems like a big muddle to me. Perhaps some of this material belongs in our article Evolution of human intelligence? I'm going to reserve judgment on that. But, apart from that first paragraph, it does not appear to be at all germane here. And if we are to keep that first paragraph I'd suggest we need to find a more apt name for the subsection. Generalrelative (talk) 18:10, 29 December 2020 (UTC)
- @Generalrelative: If you want to move the content from the fourth paragraph to the Evolution of human intelligence and social selection articles and shorten the title to "Social exchange", that's fine with me but I'm not sure what you mean when you say that "neither that sentence nor the paragraph it belongs to refers to the ostensible subject of the subsection" and "apart from that first paragraph, [the subsection's material] does not appear to be at all germane here". Kaufman and his co-authors found that performance on modified versions of the Wason selection task in the context of social exchange as proposed by Tooby and Cosmides is more strongly related to g than decontextualized versions of the task. The second paragraph (minus the last sentence) summarizes the research Tooby and Cosmides conducted to formulate a theoretical explanation of the disparities in the different contexts.
- I'd still argue that it would be better to have a brief summary of the selection processes of evolution that shaped human intelligence in the section about theories about what causes the subtest intercorrelations but perhaps that's just my opinion (and as an aside, is why I created the subsection in the first place). As for the content in the third paragraph, a later subsection in the article discusses the practical validity of the g factor in predicting academic achievement, and Geary's, Pinker's, and the other educational psychologists' criticism of the ineffectiveness of constructivist pedagogical techniques in math and science education because it assumes formal logical reasoning to be biologically primary rather than being a by-product is cited in correspondence to the research Tooby and Cosmides and Kaufman and his co-authors conducted about the Wason selection task. -- CommonKnowledgeCreator (talk) 22:25, 31 December 2020 (UTC)
- @CommonKnowledgeCreator: Thanks for engaging. I had typed up a point-by-point response but then reconsidered what I wanted to say. Instead of leading with a discussion of how to improve this subsection I'm going to make the case that it doesn't belong here at all.
- Upon reflection, it seems clear to me that even paragraph 1, which does discuss g factor, does not address the topic indicated in the "Theories" section lead, i.e. "what causes the positive intercorrelations" which constitute g. Kaufman et al.'s finding does relate explicitly to g and therefore might belong somewhere in this article, but as described it is just a correlation, not a theory of causation. If these authors do propose a causal theory, that would certainly belong here, but as written no such theory is indicated.
- I'm not sure what I need to say with regard to paragraphs 2-4 other than to repeat my observation that they do not relate directly to g factor, the topic of the article. No amount of synthetic reasoning on our part will make them appropriate content for this article. Only if the reliable sources upon which these paragraphs are based explicitly discuss g factor do they belong here. And if so they would need to be substantially rewritten to reflect that. Otherwise these discussions of constructivist pedagogy and sexual selection are at best off-topic diversions incompatible with WP:SUMMARY and at worst implicit WP:SYNTH.
- If I've missed something I'm definitely open to hearing why I'm wrong. I also don't want to diminish the effort you've clearly put into creating and developing this subsection. If you can find a better place for some of this material, again possibly at Evolution of human intelligence, that would be great. Wishing you a happy and productive new year. Generalrelative (talk) 18:03, 2 January 2021 (UTC)
- @Generalrelative: Thanks for providing a more considerate response and sorry for the delay in mine. At this point, I would only argue that you allow for keeping at least the content summarizing the Kanazawa and Kaufman articles since those articles explicitly discuss g. I suppose if I wished to continue defending keeping the section in its entirety, I would argue that John Tooby stated in an August 2010 interview with Reason that the "only force that organizes any biological system functionally is natural selection", and that because human intelligence is ultimately a product of the physical activity of the human brain, the ultimate cause of the intercorrelations of the subtests would be the selection processes of evolution (at least for Spearman's subtests since they involved the use of language and music), but I would have to concede that such an assertion would violate WP:SYNTH since none of the sources explicitly say that and thus cannot be a justification for keeping the section (and as Kaufman notes in the Psychology Today article, Tooby himself quipped in 2006 that he didn't know what to make of general intelligence). Also, the more I think about it, I am probably also thinking at the wrong level of analysis (i.e. proximate vs. ultimate causation).
- I'll just copy the second paragraph (minus the last sentence) over to the Evolution of human intelligence article and we can just cut the other two since most of the Miller, Nesse, and West-Eberhard content is already summarized in the Evolution of human intelligence and the Social selection articles (minus the 1989 Buss study which I'll copy over to the Social selection article). The content on constructivism from Pinker and Simon is already included in their articles while the Clark et. al article is already summarized on the Constructivism (philosophy of education) article and in the Psychology article subsection on Replication; Geary's can be copied over to his but I'm not sure how necessary that would be. I still think it would be worthwhile for there to be a brief section summarizing the content from the Evolution of human intelligence article somewhere in this article since the lede does link to that article, but I would have to concede that I have no idea where that content should be included if it is not included in the Theories section.
- "Thanks for engaging." Do most people not engage? In addition to what I said before, as someone who has been in and out of college and who was studying and tutoring mathematics (as well as taking courses to become a math teacher), my motivation to create and curate the section as extensively as I have is because I have often encountered other students studying mathematics and computer science who seem to believe some variation of what Kanazawa says and who believe that their ability to use the rules of logical inference is due to general intelligence. When these topics have come up in conversation, I couldn't remember where I found Kaufman's research and I was unaware of the disputed research Tooby and Cosmides cite about the rules of logical inference. At least to me anyway, the latter assumption that the students make sounds about as evolutionarily improbable as assuming that literacy and numeracy are evolved psychological mechanisms but I digress. I just hope that a summary of it all remains somewhere on Wikipedia so that I and anyone else who encounters such people don't have to look that hard to find it when it comes up in conversation. Wishing you a happy and productive year as well. -- CommonKnowledgeCreator (talk) 02:26, 12 January 2021 (UTC)
- No worries. I've moved the Kanazawa and Kaufman et al. findings to the section on correlations ("Practical Validity") and cut the rest. Good to see that you've found a place for much of this material at Evolution of human intelligence. Best, Generalrelative (talk) 07:52, 12 January 2021 (UTC)
Misinterpreted sources in practical validity
A recent change was introduced to the article which misinterprets the claims made in the source paper. Consider the very first paragraph on the section about practical validity:
The practical validity of g as a predictor of educational, economic, and social outcomes is the subject of ongoing debate.[1]
However, the cited journal states:
For as long as psychometric tests have been used to chart the basic structure of intelligence and predict criteria outside the laboratory (e.g., grades, job performance), there has been tension between emphasizing general and specific abilities [19,20,21]. Insofar as the basic structure of individual differences in cognitive abilities, these tensions have largely been resolved by integrating specific and general abilities into hierarchical models.
Alternatives to models of intelligence rooted in Spearman’s original theory have existed almost since the inception of that theory (e.g., [64,65,66,67,68]), but have arisen with seemingly increasing regularity in the last 15 years (e.g., [69,70,71,72,73,74]). Unlike some other alternatives (e.g., [75,76,77,78,79]), most of these models do not cast doubt on the very existence of a general psychometric factor, but they do differ in its interpretation. These theories intrinsically offer differing outlooks on how g relates to specific abilities and, by extension, how to model relationships among g, specific abilities and practical outcomes. We illustrate this point by briefly outlining how the two hierarchical factor-analytic models most widely used for studying abilities at different strata [73] demand different analytic strategies to appropriately examine how those abilities relate to external criteria.
(Relevant parts in bold by me)
As you can see, there is not any controversy or "debate" going on about the g factor, just different interpretations of the observed effect. For comparison, a similar situation exists in quantum mechanics as well: there are at least five mutually exclusive interpretations of quantum mechanics, yet nobody calls it a debate. Doing so is sensationalist fear, uncertainty & doubt-style propaganda aimed at discrediting the underlying observations by stressing differing views instead of reinforcing the agreed-upon facts.
Also, when you read the whole section from beginning to end, it gives off a wacky feel, because the first paragraph tries to paint a controversial picture, yet the rest of the section gives a half-dozen proofs of how it is not actually controversial at all. That's why the first paragraph should be left as it was before the (obviously ideologically motivated) edit. 83.102.62.84 (talk) 22:13, 23 January 2021 (UTC)
- 1) First sentence of this reference:
The relative value of specific versus general cognitive abilities for the prediction of practical outcomes has been debated since the inception of modern intelligence theorizing and testing. This editorial introduces a special issue dedicated to exploring this ongoing “great debate”.
While the article notes that "most of these models do not cast doubt on the very existence of a general psychometric factor", it goes on to state that "In the applied realm, however, debate remains." Seems pretty straightforward to me that this source supports the statement that there is "ongoing debate".
- 2) Other sources were cited to support the other sentence you've been edit warring over:
Others have argued that tests of specific abilities outperform g factor in analyses fitted to real-world situations.
Specifically [31], [32] and [33].
- 3) The absence of a detailed discussion of these more critical sources in the status quo article reflects the fact that since discovering them I have had limited time to improve the article. More robust and expanded consideration of these sources would be WP:DUE
- 4) Continuing to claim that my editing is "obviously ideologically motivated", especially after being warned about WP:NPA on your user talk page, is highly inappropriate. In this case both MrOllie and Megaman en m, two highly experienced editors, have also reverted you. Generalrelative (talk) 22:35, 23 January 2021 (UTC)
- 1) "In the applied realm, however, debate remains." There's no proof for a claim that such debate exists, apart from this one article which claims that.
- 2) Stop misinterpreting your own sources.
g is one of the best predictors of school and work performance (for a review, see [7], pp. 270–305; see also, [8,9]). Moreover, a test’s g loading (i.e., its correlation with g) is directly related to its predictive power. In general, tests with strong g loadings correlate strongly with school and work criteria, whereas tests with weak g loadings correlate weakly with such criteria.
Consistent with these findings, Thorndike [10] found that g explained most of the predictable variance in academic achievement (80–90%), whereas non-g factors (obtained after removing g from tests) explained a much smaller portion of variance (10–20%). Similar results have been found for job training and productivity, which are robustly related to g but negligibly related to non-g factors of tests (e.g., rnon-g < 0.10, [7], pp. 283–285; see also, [9,11]).
As you can see, the article does not claim there's a debate; it repeats the consensus fact that the predictive power of g is 80-90%, with the remaining 10-20% attributable to other factors. So, contrary to what you claim, other factors do not outperform g.
Here, as in many other fields [9,10,11,12,13], general mental ability or the g-factor has often been singled out as the best predictor of scholastic performance [14,15,16] with specific abilities purportedly adding little or no explained variance [17]. However, this focus on the so-called general factor seems to not take full advantage of the structure of intelligence [18,19], which postulates a hierarchical structure with a multitude of specific abilities located at lower levels beneath a g-factor.
Table 2 contains the results for the linear regressions using the g-factor score as independent variable. It can be seen that the models yield moderate (German and English) to strong relations (math) with the exception of sports. Accordingly, the regression weights were significant with the exception of sports. Moreover, it can be seen that, while the model with linear terms only fit best whenever the g-factor score was used as a predictor, the models with specific ability test scores as predictor yielded the best results when assuming curvilinear relations.
Again, a strong correlation between g and academic subjects was found. Not surprisingly, g does not correlate with sports, something no one ever claimed, which further supports g as a real, existing variable rather than a made-up concept. The paper then fits g with a linear model and arrives at the exact same conclusion as every other article: g's predictive power cannot be matched.
The paper then goes on to model the specific abilities with curvilinear relations, which unsurprisingly fit their assumptions better because of the mathematical nature of such models. But that has nothing to do with g itself; they present an entirely different model, which should have a wiki-article of its own.
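On the point about curvilinear models: with least squares, adding a quadratic term to a linear model can never worsen the in-sample fit, so a curvilinear specification will always look at least as good on the data it was fitted to. A short sketch with simulated data (purely illustrative, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data (illustrative only): a predictor with a mildly curved
# relation to an outcome, plus noise.
x = rng.normal(size=200)
y = 0.5 * x + 0.2 * x**2 + rng.normal(scale=1.0, size=200)

def r_squared(y, y_hat):
    """In-sample coefficient of determination."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Linear fit vs. the same fit with an added quadratic term.
lin = np.polyval(np.polyfit(x, y, 1), x)    # y ~ a*x + b
quad = np.polyval(np.polyfit(x, y, 2), x)   # y ~ a*x^2 + b*x + c

print(r_squared(y, lin), r_squared(y, quad))
```

Because the linear model is nested inside the quadratic one, the quadratic R² is guaranteed to be at least as large in-sample; whether the extra term generalizes is a separate question that requires out-of-sample validation.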
The article is actually an aggregate of three different studies.
The first study concludes:
g explained 19 percent of the variance in job performance, with the Primary Mental Abilities accounting for only an additional 4.4
The second study was conducted by the US Army and found that g is the most predictive of leadership among the factors examined.
The third, again from the military, studies verbal abilities. There is hardly anything new in these studies, if you actually read them.
- 3) Why make half-effort edits then? There is no point in publishing quick edits resulting in misleading articles unless...
- 4) You are actually on a mission here, declaring your own subjective world view as a fact. WP:NPA states that:
Editors are allowed to have personal political POV, as long as it does not negatively affect their editing and discussions.
On that basis, and looking at your edit history, I claim that you are clearly far too biased to write about this subject objectively.
Continuing to claim that my editing is
obviously ideologically motivated
Because it is, why do you even try to deny it when it's plainly visible for everyone to see? Why can't you just admit it? Very intellectually dishonest of you, just like every edit in your edit history.
two highly experienced editors
Argumentum_ab_auctoritate 83.102.62.84 (talk) 23:34, 23 January 2021 (UTC)
- 1) One source is sufficient for the claim that there is an
ongoing debate
when it is the lead article in a recent special issue about this debate from a respected peer-reviewed journal. The other three references under discussion here could also have been cited, but this was unnecessary.
- 2) I haven't, and frankly it's not at all clear to me that you've presented an argument to the contrary.
- 3) Again, I haven't. I've simply stated that a more robust discussion of these sources would be WP:DUE.
- 4) Article talk pages are not an appropriate forum to speculate on the imagined motivations of other editors, no matter how strongly you may believe these motivations to be real. And while arguments from authority are certainly not binding, it's still wise to respect experience, especially on a project like Wikipedia where there are many norms and guidelines which may not be obvious to a brand new editor (assuming that is what you are, given your contribution history). The fact that multiple editors, including two who have been with the project for upwards of 12 years each, have flagged your edits as either edit warring or as
disruptive
should signal to you that it's time to take a step back and examine your conduct. Generalrelative (talk) 01:46, 24 January 2021 (UTC)
References
- ^ Kell, Harrison J.; Lang, Jonas W. B. (September 2018). "The Great Debate: General Ability and Specific Abilities in the Prediction of Important Outcomes". Journal of Intelligence. 6(3): 39.