In computer science, polymorphic recursion (also referred to as Milner–Mycroft typability or the Milner–Mycroft calculus) refers to a recursive parametrically polymorphic function where the type parameter changes with each recursive invocation, instead of staying constant. Type inference for polymorphic recursion is equivalent to semi-unification and is therefore undecidable, requiring the use of a semi-algorithm or programmer-supplied type annotations.[1]

Example

Nested datatypes

Consider the following nested datatype in Haskell:

data Nested a = a :<: (Nested [a]) | Epsilon
infixr 5 :<:

nested = 1 :<: [2,3,4] :<: [[5,6],[7],[8,9]] :<: Epsilon

A length function defined over this datatype will be polymorphically recursive, as the type of the argument changes from Nested a to Nested [a] in the recursive call:

length :: Nested a -> Int
length Epsilon    = 0
length (_ :<: xs) = 1 + length xs

Note that Haskell normally infers the type of a function as simple-looking as this, but here the signature cannot be omitted without triggering a type error. For the value nested above, length nested evaluates to 3, one for each :<: constructor. (If this definition is compiled as part of a module, the Prelude's own length must also be hidden, e.g. with import Prelude hiding (length), to avoid an ambiguous name.)

Higher-ranked types

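One way higher-ranked types can be used here is to state the required polymorphism inside the program itself: the polymorphic function is wrapped in a value whose field has a rank-2 type, so that the recursive binding is monomorphic. The following is a minimal sketch, assuming GHC's RankNTypes extension; the names LenNested, runLen, lenNested and lengthNested are illustrative:

{-# LANGUAGE RankNTypes #-}

-- The newtype field has a higher-ranked type: a function that works at
-- every nesting depth.
newtype LenNested = LenNested { runLen :: forall b. Nested b -> Int }

-- The recursion is on the monomorphic value lenNested; the polymorphism
-- needed at each depth is recovered from the newtype field.
lenNested :: LenNested
lenNested = LenNested go
  where
    go :: Nested b -> Int
    go Epsilon    = 0
    go (_ :<: xs) = 1 + runLen lenNested xs

lengthNested :: Nested a -> Int
lengthNested = runLen lenNested

In this sketch, the signature on the newtype field plays the role of the type annotation that the polymorphically recursive definition would otherwise require.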

Applications

Program analysis

In type-based program analysis, polymorphic recursion is often essential for achieving high precision. Notable examples of systems employing polymorphic recursion include Dussart, Henglein and Mossin's binding-time analysis[2] and the Tofte–Talpin region-based memory management system.[3] Because these systems assume that expressions have already been typed in an underlying type system (not necessarily one employing polymorphic recursion), inference can be made decidable again.

Data structures, error detection, graph solutions

Functional programming data structures often use polymorphic recursion to make type-level error checking simpler and to avoid the memory-hungry intermediate ("middle") structures that arise in more traditional data structures such as trees. In the two citations that follow, Okasaki (pp. 144–146) gives a cons-based example in Haskell in which the polymorphic type system automatically flags programmer errors.[4] The recursion lies in the type definition itself, which guarantees that the outermost constructor holds a single element, the next a pair, the next a pair of pairs, and so on; any structure that breaks this pattern is rejected by the type checker (a sketch of such a type follows below). Roberts (p. 171) gives a related example in Java, using a class to represent a stack frame. The example is a solution to the Tower of Hanoi problem in which a stack simulates polymorphic recursion through a sequence of source, temporary and destination stack substitutions.[5]
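
The following is a minimal sketch of the kind of sequence type described above; the identifiers Seq and size are illustrative and are not quoted from Okasaki. Each Cons level pairs up the element type, so a value whose levels do not follow the one-element, pair, pair-of-pairs pattern is rejected at compile time, and counting the elements requires a polymorphically recursive function:

data Seq a = Nil | Cons a (Seq (a, a))

-- The recursive call is made at type Seq (a, a), so, as with length above,
-- the type signature cannot be omitted.
size :: Seq a -> Int
size Nil         = 0
size (Cons _ ps) = 1 + 2 * size ps

For example, Cons 0 (Cons (1, 2) (Cons ((3, 4), (5, 6)) Nil)) is well typed and has size 7, whereas pairing the elements incorrectly at any level is a compile-time type error.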

Notes

  1. ^ Henglein, Fritz (1993). "Type Inference with Polymorphic Recursion". ACM Transactions on Programming Languages and Systems. 15 (2): 253–289.
  2. ^ Dussart, Dirk; Henglein, Fritz; Mossin, Christian. "Polymorphic Recursion and Subtype Qualifications: Polymorphic Binding-Time Analysis in Polynomial Time". Proceedings of the 2nd International Static Analysis Symposium (SAS). CiteSeerX 10.1.1.646.5884.
  3. ^ Tofte, Mads; Talpin, Jean-Pierre (1994). "Implementation of the Typed Call-by-Value λ-calculus using a Stack of Regions". POPL '94: Proceedings of the 21st ACM SIGPLAN-SIGACT symposium on Principles of programming languages. New York, NY, USA: ACM. pp. 188–201. doi:10.1145/174675.177855. ISBN 0-89791-636-0.
  4. ^ Chris Okasaki (1999). Purely Functional Data Structures. New York: Cambridge. p. 144. ISBN 978-0521663502.
  5. ^ Eric Roberts (2006). Thinking Recursively with Java. New York: Wiley. p. 171. ISBN 978-0471701460.
