The ugly duckling theorem is an argument showing that classification is not really possible without some sort of bias. More particularly, it assumes finitely many properties combinable by logical connectives, and finitely many objects; it asserts that any two different objects share the same number of (extensional) properties. The theorem is named after Hans Christian Andersen's 1843 story "The Ugly Duckling", because it shows that a duckling is just as similar to a swan as two swans are to each other. It was derived by Satosi Watanabe in 1969.[1]: 376–377 

Mathematical formula

Watanabe's example, using objects A, B, C, and properties F ("first") and W ("white"). "0", "1", "¬", "∧", "∨", and "⊕" denote "false", "true", "not", "and", "or", and "exclusive or", respectively. Since F happens to imply W, each predicate that can be formed from F and W coincides extensionally with another one, hence there are only 8 extensionally distinct possible predicates, each shown on its own line. The white ducklings A and B agree on 4 of them (lines 2, 3, 4, 8), but so do A and C (lines 3, 5, 7, 8), and so do B and C (lines 1, 3, 6, 8).[1]: 368 [2]
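The agreement counts in this caption can be checked by brute force. Below is a minimal sketch; the encoding of A, B, and C as (F, W) pairs is my own assumption consistent with the caption (A first and white, B white only, C neither). It enumerates all sixteen Boolean functions of F and W, collapses them to the 8 extensionally distinct predicates on {A, B, C}, and confirms that every pair of objects agrees on exactly 4.

    from itertools import product

    # Hypothetical encoding: each object is a pair of property values
    # (F, W).  A is the first, white duckling; B is white but not first;
    # C is neither, so F implies W.
    objects = {"A": (1, 1), "B": (0, 1), "C": (0, 0)}
    values = list(objects.values())

    # Enumerate all 16 Boolean functions of two variables as truth tables
    # indexed by (F, W), and keep the extensionally distinct ones, i.e.
    # those that are distinct as predicates on {A, B, C}.
    extensions = set()
    for table in product([0, 1], repeat=4):
        f = dict(zip(product([0, 1], repeat=2), table))
        extensions.add(tuple(f[fw] for fw in values))
    print(len(extensions))  # 8 extensionally distinct predicates

    # Count, for each pair of objects, how many of the 8 predicates agree.
    names = list(objects)
    for i in range(3):
        for j in range(i + 1, 3):
            agree = sum(e[i] == e[j] for e in extensions)
            print(names[i], names[j], agree)  # every pair agrees on exactly 4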

Suppose there are $n$ things in the universe, and one wants to put them into classes or categories. One has no preconceived ideas or biases about what sorts of categories are "natural" or "normal" and what are not. So one has to consider all the possible classes that could be, all the possible ways of making a set out of the $n$ objects. There are $2^n$ such ways, the size of the power set of $n$ objects. One might use that to measure the similarity between two objects, counting how many sets they have in common. However, this does not work: if any possible class may be formed, any two objects have exactly the same number of classes in common, namely $2^{n-1}$ (half the total number of classes there are). To see that this is so, one may imagine each class represented by an $n$-bit string (or binary encoded integer), with a zero for each element not in the class and a one for each element in the class. As one finds, there are $2^n$ such strings.

As all possible choices of zeros and ones are there, any two bit-positions will agree exactly half the time. One may pick two elements and reorder the bits so they are the first two, and imagine the numbers sorted lexicographically. The first $2^{n-1}$ numbers will have bit #1 set to zero, and the second $2^{n-1}$ will have it set to one. Within each of those blocks, the top $2^{n-2}$ will have bit #2 set to zero and the other $2^{n-2}$ will have it set to one, so they agree on two blocks of $2^{n-2}$, or on half of all the cases, no matter which two elements one picks. So if we have no preconceived bias about which categories are better, everything is then equally similar (or equally dissimilar): the number of predicates simultaneously satisfied by two non-identical elements is constant over all such pairs. Thus, some kind of inductive bias is needed to make judgments preferring certain categories over others.
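For a concrete check of this argument, here is a small brute-force sketch, assuming a made-up universe of $n = 4$ objects: it lists all $2^n$ classes as bit strings and confirms that every pair of objects agrees on exactly $2^{n-1}$ of them.

    from itertools import combinations, product

    # Each class is an n-bit membership string; two objects "agree" on a
    # class when their membership bits are equal.
    n = 4
    classes = list(product([0, 1], repeat=n))  # all 2**n = 16 classes

    for i, j in combinations(range(n), 2):
        agreements = sum(c[i] == c[j] for c in classes)
        print(i, j, agreements)  # always 2**(n-1) = 8, for every pair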

Boolean functions

Let $x_1, \ldots, x_n$ be a set of vectors of $k$ booleans each. The ugly duckling is the vector which is least like the others. Given the booleans, this can be computed using Hamming distance.
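As a minimal illustration, assuming three made-up feature vectors, the following sketch picks out the vector with the greatest total Hamming distance to the others:

    # Under a fixed choice of raw boolean features, the "ugly duckling"
    # is the vector farthest, in total Hamming distance, from the rest.
    def hamming(u, v):
        return sum(a != b for a, b in zip(u, v))

    vectors = [(1, 1, 0), (1, 1, 1), (0, 0, 1)]  # hypothetical data
    totals = [sum(hamming(u, v) for v in vectors) for u in vectors]
    print(vectors[totals.index(max(totals))])  # (0, 0, 1) is least like the rest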

However, the choice of boolean features to consider could have been somewhat arbitrary. Perhaps there were features derivable from the original features that were important for identifying the ugly duckling. The set of booleans in the vector can be extended with new features computed as boolean functions of the $k$ original features. The only canonical way to do this is to extend it with all possible Boolean functions. The resulting completed vectors have $2^{2^k}$ features. The ugly duckling theorem states that there is no ugly duckling because any two completed vectors will either be equal or differ in exactly half of the features.
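The following sketch illustrates the completion step for $k = 2$ (the data and helper names are mine, not from the source): it treats each Boolean function as a truth table with $2^k$ entries, builds all $2^{2^k}$ of them, and confirms that any two distinct completed vectors differ in exactly half of the features.

    from itertools import combinations, product

    # A Boolean function of k variables is a truth table with 2**k
    # entries, so there are 2**(2**k) such functions; the completed
    # vector records the value every one of them assigns to the object.
    k = 2
    inputs = list(product([0, 1], repeat=k))       # all 2**k raw vectors
    tables = list(product([0, 1], repeat=2 ** k))  # all 2**(2**k) functions

    def complete(x):
        pos = inputs.index(tuple(x))
        return tuple(t[pos] for t in tables)       # one feature per function

    for x, y in combinations(inputs, 2):
        diff = sum(a != b for a, b in zip(complete(x), complete(y)))
        print(x, y, diff)  # distinct vectors differ in 2**(2**k - 1) = 8 features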

Proof. Let $x$ and $y$ be two vectors. If they are the same, then their completed vectors must also be the same, because any Boolean function of $x$ will agree with the same Boolean function of $y$. If $x$ and $y$ are different, then there exists a coordinate $i$ where the $i$-th coordinate of $x$ differs from the $i$-th coordinate of $y$. Now the completed features contain every Boolean function on $k$ Boolean variables, each exactly once. Viewing these Boolean functions as polynomials in $k$ variables over GF(2), segregate the functions into pairs $(f, f')$ where $f'$ contains the $i$-th coordinate as a linear term and $f$ is $f'$ without that linear term. Then for every such pair $(f, f')$, $x$ and $y$ will agree on exactly one of the two functions: since $f'(x) = f(x) \oplus x_i$ and $f'(y) = f(y) \oplus y_i$ with $x_i \neq y_i$, if $x$ and $y$ agree on $f$ they must disagree on $f'$, and vice versa. (This proof is believed to be due to Watanabe.)
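The pairing step can also be verified mechanically. The sketch below is my reconstruction of the argument for $k = 2$, not code from the source: it pairs each function $f$ with $f' = f \oplus z_i$, where $z_i$ is the linear term in the differing coordinate, and checks that the two vectors agree on exactly one member of every pair.

    from itertools import product

    k, i = 2, 0                      # x and y differ in coordinate i
    x, y = (0, 1), (1, 1)
    inputs = list(product([0, 1], repeat=k))

    for table in product([0, 1], repeat=2 ** k):
        f = dict(zip(inputs, table))
        f_prime = {v: f[v] ^ v[i] for v in inputs}  # f' = f + z_i over GF(2)
        # x and y agree on exactly one member of each pair (f, f'):
        assert (f[x] == f[y]) + (f_prime[x] == f_prime[y]) == 1
    print("every pair contributes exactly one agreement")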

Discussion

A possible way around the ugly duckling theorem would be to introduce a constraint on how similarity is measured, by limiting the properties involved in classification, for instance, between A and B. However, Medin et al. (1993) point out that this does not actually resolve the arbitrariness or bias problem, since the respect in which A is similar to B "varies with the stimulus context and task, so that there is no unique answer to the question of how similar is one object to another".[3][5] For example, "a barberpole and a zebra would be more similar than a horse and a zebra if the feature striped had sufficient weight. Of course, if these feature weights were fixed, then these similarity relations would be constrained". Yet the property "striped" as a weight 'fix' or constraint is arbitrary itself, meaning: "unless one can specify such criteria, then the claim that categorization is based on attribute matching is almost entirely vacuous".

Stamos (2003) remarked that some judgments of overall similarity are non-arbitrary in the sense that they are useful:

"Presumably, people's perceptual and conceptual processes have evolved that information that matters to human needs and goals can be roughly approximated by a similarity heuristic... If you are in the jungle and you see a tiger but you decide not to stereotype (perhaps because you believe that similarity is a false friend), then you will probably be eaten. In other words, in the biological world stereotyping based on veridical judgments of overall similarity statistically results in greater survival and reproductive success."[6]

Unless some properties are considered more salient, or 'weighted' more important than others, everything will appear equally similar; hence Watanabe (1986) wrote: "any objects, in so far as they are distinguishable, are equally similar".[7]

In a weaker setting that assumes infinitely many properties, Murphy and Medin (1985) give an example comparing two things one might wish to classify, plums and lawnmowers:

"Suppose that one is to list the attributes that plums and lawnmowers have in common in order to judge their similarity. It is easy to see that the list could be infinite: Both weigh less than 10,000 kg (and less than 10,001 kg), both did not exist 10,000,000 years ago (and 10,000,001 years ago), both cannot hear well, both can be dropped, both take up space, and so on. Likewise, the list of differences could be infinite… any two entities can be arbitrarily similar or dissimilar by changing the criterion of what counts as a relevant attribute."[8]

According to Woodward,[9] the ugly duckling theorem is related to Schaffer's Conservation Law for Generalization Performance, which states that all algorithms for learning boolean functions from input/output examples have the same overall generalization performance as random guessing.[10] The latter result is generalized by Woodward to functions on countably infinite domains.[11]

Notes

  1. ^ a b Satosi Watanabe (1969). Knowing and Guessing: A Quantitative Study of Inference and Information. New York: Wiley. ISBN 0-471-92130-0. LCCN 68-56165.
  2. ^ Watanabe's x1, x2, x3, y1, and y2 correspond to C, B, A, F, and W, respectively.
  3. ^ Douglas L. Medin, R. L. Goldstone, and Dedre Gentner (1993). "Respects for similarity". Psychological Review. 100 (2): 254–278. doi:10.1037/0033-295x.100.2.254.
  4. ^ Nelson Goodman (1972). "Seven Strictures on Similarity". In Nelson Goodman (ed.). Problems and Projects. New York: Bobbs-Merrill. pp. 437–446.
  5. ^ The philosopher Nelson Goodman[4] came to the same conclusion: "But importance is a highly volatile matter, varying with every shift of context and interest, and quite incapable of supporting the fixed distinctions that philosophers so often seek to rest upon it".
  6. ^ Stamos, D. N. (2003). The Species Problem. Lexington Books. p. 344.
  7. ^ Satosi Watanabe (1986). "Epistemological Relativity". Annals of the Japan Association for Philosophy of Science. 7 (1): 1–14. doi:10.4288/jafpos1956.7.1.
  8. ^ Gregory L. Murphy and Douglas L. Medin (Jul 1985). "The Role of Theories in Conceptual Coherence" (PDF). Psychological Review. 92 (3): 289–316. doi:10.1037/0033-295x.92.3.289. PMID 4023146.
  9. ^ John R. Woodward (Nov 2009). "Computable and Incomputable Functions and Search Algorithms" (PDF). International Conference on Intelligent Computing and Intelligent Systems. IEEE. pp. 871–875. doi:10.1109/ICICISYS.2009.5358045. ISBN 978-1-4244-4754-1. S2CID 14473304. Here: p. 874 lf
  10. ^ Cullen Schaffer (1994). "A conservation law for generalization performance" (PDF). In Cohen, W. W.; Hirsh, H. (eds.). Proceedings of the 1994 International Conference on Machine Learning (San Mateo/CA). Morgan Kaufmann. pp. 259–265. Archived from the original (PDF) on 2021-04-12. Retrieved 2021-04-25. Here p. 260 lf
  11. ^ Woodward (2009), p. 875 lf