Talk:Representativeness heuristic
This article is rated C-class on Wikipedia's content assessment scale.
Someone who understands representativeness, please look at this
This article does not actually explain what the representativeness heuristic is. For reference, I have published multiple scientific papers about heuristics and biases, have read dozens of papers and books about biases including a lot of Kahneman's work, and I still don't understand representativeness. And this article hasn't helped. Tversky and Kahneman simply didn't explain it clearly. Please, someone who gets this, sort it out. Paul Ralph (University of Auckland) (talk) 04:35, 23 September 2018 (UTC)
Goodfellow paper
The citation below reportedly claims that subjects avoid answering test questions in a "yes-no-yes-no-yes" pattern (presumably because it doesn't look like the distribution of answers they would expect to occur naturally). I would love it if someone with access to an academic abstracts database could confirm this for me.
- Goodfellow, Louis D. (1940). "Three-fourths will call heads, etc." Journal of General Psychology, 23, 201–205. Cf. Flesch, Rudolf (1951). The Art of Clear Thinking, p. 132. New York: Harper & Brothers.
--Taak 23:23, 12 Jun 2004 (UTC)
Taxicab problem
The example given as the "Taxicab" problem is a poor example. There is a 100% chance that the man says the car is blue (because that is the color the problem says he reported), and there is a 20% chance he got it wrong. Therefore, by Bayes' theorem, there is an 80% chance that a blue car was involved in the accident: 100% × 80% = 80%. This needs to be clarified, with the misleading phrases and words removed. Instead of saying that he testified that the car was blue, simply state his percentages of getting the color right.
- If your logic is right, then let's extend it: we take the same man and have him look at the sky. He says it is green. Does this mean that there is an 80 per cent chance that the sky is green? Of course not. The reason is that the prior probability of the sky being green is much lower. We all know the sky is blue, and there is a small chance we are all wrong. But it is very small.
Or imagine that we have two people with the same 80/20 error rate look at the sky. One says it is blue, and the other says it is green. By your logic, this means the sky has an 80 per cent chance of being green, and an 80 per cent chance of being blue. These probabilities do not add up to one, which indicates a flaw in your reasoning.
The point of Tversky's experiment was that probability is more complicated than we make it out to be.
- Essentially the question in debate comes down to this: Does the actual color of the taxi affect the probability that the man was correct in his identification? The answer is stated in the experiment itself; he is accurate in his color identification 80% of the time (not 80% of the blue and 80% of the green, but 80% overall). Therefore, the information about the ratio of green to blue taxis is spurious. The 41% figure is erroneous, because that would mean that out of all the possible cases, the correct color would only be identified 59% of the time, by a person with an 80% rate of accuracy. Perhaps this should be clarified within the article.
- Actually, the question in debate comes down to: are we looking at the probability of one event happening, or the probability of two independent events happening? Consider: if we run the experiment hundreds of times, crashing random cabs into objects and having the witness watch each time, we get the following results: 12% of the time he says he saw a blue cab and it actually was a blue cab, 17% of the time he says he saw a blue cab and it actually was a green cab, 68% of the time he says he saw a green cab and it actually was a green cab, and 3% of the time he says he saw a green cab and it actually was a blue cab.

Now, if we ask the question "of all the accidents, how often did he identify the cab correctly?" we get 80%. However, what if we asked the question "of all the times he said he saw a blue cab, how often was a blue cab actually there?" He said he saw a blue cab 12% of the time when he actually did see a blue cab, and he said he saw a blue cab 17% of the time when he saw a green cab and was wrong. Therefore, of the times he says he saw a blue cab (29% of the time), he actually did see a blue cab 41% of those times, and 59% of the time he says he saw a blue cab, he actually saw a green one. He's still right 80% of the time; it's just that there are so many more green cabs that his chances of wrongly identifying a green cab are higher than his chances of rightly identifying a blue one. That being said, the reasoning behind the probability calculations could be explained more clearly. —Preceding unsigned comment added by 76.25.21.174 (talk) 20:25, 24 July 2008 (UTC)
- Yeah, the post above is correct; the math in the article is correct and is the only reasonable way to interpret the problem. There really is no debate here. What the first poster is confused about is that he thinks "the witness correctly identified each one of the two colors 80% of the time" implies "if the witness says it's a car of color x, the probability it's color x is 80%", while it only means "if the witness sees a car of color x, there is an 80% chance he will say it's color x". These are fundamentally different. Put it this way: let's say a person correctly identifies flying objects 51% of the time. This person says he saw a flying pig (as an example of something extremely unlikely). Is it more likely that it was a flying pig, or that it was something else and he identified it wrong? In the example in the article, blue cars are flying pigs (but not as unlikely). —Preceding unsigned comment added by 93.136.53.49 (talk) 20:11, 28 June 2010 (UTC)
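For anyone who wants to verify the arithmetic in the two posts above, here is a minimal Python sketch of the Bayes calculation, assuming the standard statement of the problem (15% blue cabs, 85% green, 80% witness accuracy); it reproduces the 12/17/68/3 split and the roughly 41% posterior:

p_blue, p_green = 0.15, 0.85   # base rates of cab colors
accuracy = 0.80                # witness is right 80% of the time, for either color

# Joint probabilities of (actual color, reported color)
blue_says_blue   = p_blue * accuracy          # 0.12: was blue, says blue
green_says_blue  = p_green * (1 - accuracy)   # 0.17: was green, says blue
green_says_green = p_green * accuracy         # 0.68: was green, says green
blue_says_green  = p_blue * (1 - accuracy)    # 0.03: was blue, says green

# Bayes' theorem: P(actually blue | witness says blue)
says_blue = blue_says_blue + green_says_blue  # 0.29
print(f"P(blue | says blue) = {blue_says_blue / says_blue:.1%}")  # 41.4%

The 80% figure is a likelihood (how the witness behaves given the true color), not a posterior (how likely the true color is given his report); the gap between the two is exactly the base-rate neglect the article describes.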
Real money, so what?
Regarding Bar-Hillel and Neter's studies on the Disjunction Fallacy:
These incorrect appraisals remained even in the face of losing real money in bets on probabilities.
The significance of this statement is not clear. Surely in a correctly controlled study the subjects, whose decisions were the point of the experiment, would already have had some incentive to give a reasonable answer? If so, what is the point of saying that the judgments remained faulty even when 'real' money was at stake? That is what one might expect, given that the appraisals have already been said to be wrong.
It's an offline reference, so I'm unable to check the source. Centrepull (talk) 09:28, 17 December 2009 (UTC)
- It gets round the potential objection that "okay, subjects didn't make correct judgements in this experiment. So what? They had no incentive to." It's a common line of evidence explored in research on decision making. Without the money, there might have been some desire to be correct, but it might have been negligible and if so the experiment would have been irrelevant to everyday decisions. MartinPoulter (talk) 16:29, 17 December 2009 (UTC)
alternate explanation for taxicab
I can't imagine that in any normal sample of people more than 1 or 2%, at most, would even know how, much less bother, to think about the math. This example shows only that social scientists create artificial situations where ordinary people are expected to have extraordinary knowledge. One would expect that an ordinary person, on being given the taxicab problem, would at least subconsciously realize that they did not know how to do the math. Cinnamon colbert (talk) 15:59, 22 February 2010 (UTC)
What artificial situations? — Preceding unsigned comment added by 12.53.232.148 (talk) 21:25, 12 March 2020 (UTC)
Brief definition of Bayesian needed
For the many readers who don't know the meaning of the term Bayesian, a short clause defining it should be added. __meco (talk) 09:59, 7 October 2011 (UTC)
Text needing to be sourced
I have removed the following section from the article since it had a reference request dated January 2010. When properly referenced, it can be added back into the article.
==Alternative Explanations== Jon Krosnick, a professor of communication at Stanford, has proposed in his work that the effects that Kahneman and Tversky saw may be partially attributed to information order effects. When the order of information was reversed, with the probability figures coming later, many of the effects were mitigated.
Tversky worship
editThis and several related articles appear to have been written by Tversky or a huge fan of his, and focus rather obsessively on citing his work. — SMcCandlish Talk⇒ ɖ∘¿¤þ Contrib. 16:55, 15 June 2012 (UTC)
Gambler's fallacy as example of local representativeness, under Randomness
I do not see how the stated fallacy is a specific result of a thinking pattern governed by the assumption of local representativeness. If a coin were found to produce very skewed results over a few flips, local representativeness should, contrary to what the fallacy implies, dictate that the coin be seen as biased toward the result largely produced. Perhaps what is meant is that local representativeness dictates that a small sample is expected to reflect the population, given certain preexisting beliefs about the population? However, this is not what 'local representativeness' implies as a determinant of randomness. I have thus removed mention of the Gambler's fallacy and corrected the application of local representativeness to the example of coin tosses. 183.90.103.144 (talk) 13:08, 22 March 2013 (UTC)
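As an illustration of the corrected point (my own sketch, not drawn from any cited source): under a fair coin, lopsided short runs are common, so a skewed small sample is weak evidence of bias.

import random

# How often does a run of 6 fair-coin flips come out split 5-1 or 6-0?
# Local representativeness would predict such lopsided runs are rare;
# in fact they are routine.
random.seed(0)
n_runs = 100_000
skewed = 0
for _ in range(n_runs):
    heads = sum(random.randint(0, 1) for _ in range(6))
    if heads <= 1 or heads >= 5:
        skewed += 1
print(f"{skewed / n_runs:.1%} of 6-flip runs are split 5-1 or worse")
# Exact probability: 14/64, about 21.9%, even though the coin is fair.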
Dead link in external links section
edit"Powerpoint presentation on the representativeness heuristic (with further links to presentations of classical experiments)"
The actual URL is not working for me: posbase.uib.no/posbase/Presentasjoner/K_Representativeness.ppt
Edit: I have found an archived copy. The PPT contains two pages and five references to Kahneman/Tversky papers. I propose deleting this section because it offers no additional value for Wikipedia readers.
https://web.archive.org/web/20160303172802/http://posbase.uib.no/posbase/Presentasjoner/K_Representativeness.ppt - Jhaeling (talk) 21:48, 17 January 2018 (UTC)
Pareidolia related?
Per this example of the heuristic:
An example would be perceiving meaningful patterns in information that is in fact random.
From https://en.wikipedia.org/wiki/Nudge_(book)
Zezen (talk) 18:02, 7 July 2020 (UTC)