Note: this web article is an older version of a paper which has now been published as an ICFP Pearl. You can find a preprint of that paper here. It is probably better to read the paper rather than this post, as the paper contains more material, is more up to date, and is better explained!

Erratum: A previous version of this article contained a buggy implementation of extrude, which looped when extruding cycling bounds. This was already fixed in the published paper, and the fix has now been ported to the web article.


Algebraic subtyping is a new approach to global type inference in the presence of subtyping. It extends traditional Hindley-Milner type inference while preserving the principal type property — that is, it can always infer the most general type for any given expression. This approach was developed by Stephen Dolan, together with Alan Mycroft, as part of Dolan’s PhD thesis.

Algebraic subtyping was implemented in the MLsub type inference engine. However, the design of MLsub seems significantly more complex than that of the simple unification-based algorithms used for traditional ML languages. MLsub has proven harder to grasp, even for people already familiar with compilers and type systems, such as myself.

Dissatisfied with this state of affairs, I wanted to get to the bottom of the algebraic subtyping approach. What really is special about it, beyond the formalism? What are the simple concepts that hide behind the strange notions of biunification and polar types?

This article is an answer to those questions. I propose an alternative algorithm for algebraic subtyping, called Simple-sub. Simple-sub can be implemented efficiently in under 500 lines of code (including parsing, simplification, and pretty-printing), and I think it is much more familiar-looking and easier to understand than MLsub.

⇨ ⇨ ⇨ You can try Simple-sub online here! ⇦ ⇦ ⇦

This article is meant to be light in formalisms and easy to consume for prospective designers of new type systems and programming languages.

The complete source code of Simple-sub is available on Github.

Summary

  1. Introduction

  2. Algebraic Subtyping in MLsub

  3. Algebraic Subtyping in Simple-sub

  4. Simplifying Types

  5. Conclusions


1. Introduction

The ML family of languages, which encompasses Standard ML, OCaml, and Haskell, has been designed around a powerful “global” approach to type inference, rooted in the work of Hindley and Milner, later formalized by Damas. In this approach, the type system is designed to be simple enough that types can be unambiguously inferred from terms without the help of any type annotations. That is, for any unannotated term, it is always possible to infer a principal type which subsumes all other types that can be assigned to this term. For instance, the term $\lambda{x}. {x}$ can be assigned types $\mathsf{bool} \to \mathsf{bool}$ and $\mathsf{int} \to \mathsf{int}$, but both of these are subsumed by the polymorphic type $\forall a.\ a \to a$, also written 'a -> 'a, which is the principal type of this term.

Hindley-Milner (HM) type inference contrasts with more restricted “local” approaches to type inference, found in languages like Scala and C#, which often require the types of variables to be annotated explicitly by programmers. On the flip side, abandoning the principal type property allows these type systems to be more expressive and to support features such as object orientation and first-class polymorphism. Note that in practice, even ML languages like OCaml and Haskell have adopted expressive type system features which, when used, break the principal type property.1

Subtyping is an expressive feature allowing types to be structured into hierarchies (usually a subtyping lattice) with the property that types can be refined or widened transparently following this hierarchy. This lets us express the fact that some types are more precise (contain more information) than others, but still have a compatible runtime representation, so that no coercions between them are needed. For instance, in a system where the type nat is a subtype of int, one can transparently use a nat list where an int list is expected, without having to apply a coercion function on all the elements of the list. Subtyping can be emulated using somewhat heavy type system machinery (which both OCaml and Haskell occasionally do2), but first-class support for subtyping gives the benefit of simpler type signatures and better type inference.

For a long time, it was widely believed that pervasive implicit subtyping got in the way of satisfactory global type inference. Indeed, previous approaches to inferring subtypes failed to support principal types or resulted in the inference of large, unwieldy typing schemes which included sets of complex constraints, making their understanding by programmers difficult.

Algebraic subtyping was introduced by Dolan and Mycroft as an ML-style type system supporting subtyping and global type inference, while producing compact principal types. Here, compact refers to the fact that the inferred types are relatively simple type expressions without any visible constraints, making them easy for programmers to understand. This was achieved by carefully designing the syntax and semantics of the underlying subtyping lattice, allowing simplifying assumptions to be made in the constraint resolution process of the type inference algorithm.

However, biunification, the algorithm proposed by Dolan in order to implement algebraic subtyping, has been quite difficult to understand for many experts and non-experts alike. Indeed, on the surface it looks quite different from the usual Algorithm J traditionally used for HM type systems: it requires several additional concepts, such as bisubstitution and polar types, making it look more complicated.

Thankfully, it turns out that algebraic subtyping admits a type inference algorithm, which I call Simple-sub, that is very similar to the familiar Algorithm J, and also much more efficient than biunification (or at least, than the basic syntax-driven form of biunification).3 In this article, I show that inferring algebraic subtypes is actually extremely easy, and can be done in under 300 lines of Scala code. Most of the complexity actually comes from simplifying the inferred type representations, which we will get to later.

While the implementation we present is written in Scala, it is straightforward to translate into any other functional programming language.

The goal of this article is to recast algebraic subtyping into a simpler mold, allowing more prospective designers of type systems and programming languages to benefit from the great insights that this approach offers.


2. Algebraic Subtyping in MLsub

Let’s start by briefly reviewing what algebraic subtyping and MLsub are and what they are not, and by trying to assess their expressiveness.

Term Language

The term syntax of MLsub is given below. I have omitted boolean literals and if-then-else, as they can easily be typed as primitive combinators. I also used only one form of variables $x$, which will be sufficient for our use (Dolan’s MLsub formalism distinguishes lambda-bound variables from let-bound variables for technical reasons).

name | formal syntax | ML pseudo-syntax
variable | $x$ | x
lambda | $\lambda{x}. {e}$ | fun x -> e
application | $e\ e$ | e e
record creation | $\{\ l_0 = e \,;\; … \,;\; l_n = e\ \}$ | { l_0 = e; ...; l_n = e }
field selection | $e.l$ | e.l
let bindings | $\mathsf{let}\ x = e\ \mathsf{in}\ e$ | let x = e in e

Type Language

The type syntax of MLsub, summarized below, consists in booleans, function types, record types, type variables, top $\top$ (the type of all values — supertype of all types), bottom $\bot$ (the type of no values — subtype of all types), type union $\sqcup$, type intersection $\sqcap$, and recursive types $\mu{\alpha}. {\tau}$.

name | formal syntax | ML pseudo-syntax
primitives | $\mathsf{bool}$, $\mathsf{int}$, … | bool, int, …
function | $\tau \to \tau$ | t -> t
record | $\set{\ l_0: \tau \,,\; … \,,\; l_n: \tau\ }$ | { l_0: t, ..., l_n: t }
variable | $\alpha$ | 'a
top | $\top$ |
bottom | $\bot$ |
union | $\tau \sqcup \tau$ |
intersection | $\tau \sqcap \tau$ |
recursive | $\mu\alpha. \tau$ | t as 'a

Informal Semantics of Types

While most MLsub type forms are usual and unsurprising, two kinds of types require our special attention: set-theoretic types (more specifically union and intersection types), and recursive types.

Set-Theoretic Types

To a first approximation, union and intersection types can be understood in set-theoretic terms: the type term $\tau_0 \sqcup \tau_1$ (resp. $\tau_0 \sqcap \tau_1$) represents the type of values that are either (resp. both) of type $\tau_0 $ or (resp. and) of type $\tau_1$.

MLsub uses these types to indirectly constrain inferred type variables; for instance, one type inferred for term $\lambda{x}. {\set{l = x - 1 \,;\; r = x}}$ could be $\alpha \sqcap \mathsf{int} \to \set{l: \mathsf{int} \,,\; r: \alpha}$. This type reflects the fact that the original argument, of type $\alpha$, is returned in the $r$ field of the result record, as the input type $\alpha$ ends up in that position, but also that this argument should be able to be treated as an $\mathsf{int}$, expressed via the type intersection $\alpha \sqcap \mathsf{int}$ on the left-hand side of the function type. Keeping track of the precise argument type $\alpha$ is important: it could be later substituted with a more specific type than $\mathsf{int}$, such as $\alpha = \mathsf{nat}$, which would give us $\mathsf{nat} \to \set{l: \mathsf{int} \,;\; r: \mathsf{nat}}$.

On the other hand, there may be type signatures where some $\alpha$ becomes indistinguishable from $\mathsf{int}$. For instance, consider the term $\lambda{x}. {\mathsf{if}\ \mathsf{true}\ \mathsf{then}\ {x - 1}\ \mathsf{else}\ {x}}$, whose simplified inferred type would be just $\mathsf{int} \to \mathsf{int}$, as the seemingly-more precise type $\alpha \sqcap \mathsf{int} \to \alpha \sqcup \mathsf{int}$ does not actually contain more information (see the MLsub paper for details).

The beauty of algebraic subtyping is that this sort of reasoning scales to arbitrary flows of variables and higher-order functions; for instance, the previous example can be generalized by passing in a function $f$ to stand for the $\cdot - 1$ operation, as in $\lambda{f}. {\lambda{x}. {\set{l = f x \,;\; r = x}}}$ whose type could be inferred as $(\beta \to \gamma) \to \alpha \sqcap \beta \to \set{l: \gamma \,,\; r: \alpha}$. Applying this function to argument $(\lambda{x}. {x - 1})$ yields the same type (after simplification) as in the original example.

Recursive types

A recursive type $\mu\alpha. \tau$ represents a type that we can unroll as many times as we want; for instance, $\mu\alpha. {\top \to \alpha}$ is equivalent to $\top \to (\mu\alpha. {\top \to \alpha})$, which is equivalent to $\top \to (\top \to (\mu\alpha. {\top \to \alpha}))$, etc., and is the type of a function that can be applied to any arguments indefinitely (since everything subtypes $\top$, and since the type unrolls to functions taking $\top$ arguments). So a recursive type is conceptually infinite — if we unrolled it fully, it would unfold as an infinitely-deep tree $\top \to (\top \to (\top \to …))$. If this sounds confusing, that’s perfectly fine. We will see how recursive types are to be understood and get an intuition why we need them later on in this article.

Expressiveness

There is a big “gotcha” in the definition of types that we have given above: these types cannot actually be used freely within type expressions. The true syntax of MLsub types is segregated between positive and negative types. In particular, unions and bottom are positive types (and may not appear in negative position) while intersections and top are negative types (and may not appear in positive position).

Polarity of Type Positions

Positive positions correspond to the types that a term outputs, while negative positions correspond to the types that a term takes in as input. For instance, in the type $(\tau_0 \to \tau_1) \to \tau_2$, type $\tau_2$ is in positive position since it is the output of the main function, and the function type $(\tau_0 \to \tau_1)$ is in negative position, as it is taken as an input to the main function. On the other hand, $\tau_1$, which is returned by the function taken as input, is in negative position (since it is provided by callers via the argument function), and $\tau_0$ is in positive position (since it is provided by the main function when calling the argument function).

Consequence of the Polarity Restriction

These polarity restrictions mean that the full syntax of types we saw above cannot actually be used as is; programmers cannot write, in their own type annotations, types that violate the polarity distinction. In fact, in MLsub, one cannot express the type of a function that takes either an integer or a string: this type would have been $\mathsf{int} \sqcup \mathsf{string} \to \tau$, but $\mathsf{int} \sqcup \mathsf{string}$ is illegal in negative position. On the other hand, MLsub may assign the legal type $\tau \to \mathsf{int} \sqcup \mathsf{string}$ to functions which may return either an integer or a string… not a very useful thing, since we cannot consume such values!

What this all comes down to, surprisingly, is that the MLsub language is fundamentally no more expressive than a structurally-typed Java! By structurally-typed Java, we mean a hypothetical Java dialect with structural records and upper/lower bounds for type variables. (MLsub is in fact strictly less expressive than that, since it does not have class hierarchies and polymorphic methods.) To understand this, consider the following function:

$\lambda x.\ \set{L=x-1 \,;\; R=\mathsf{\textbf{if}}\ x<0\ \mathsf{\textbf{then}}\ 0\ \mathsf{\textbf{else}}\ x}$

which could be given the following unsimplified type by MLsub or Simple-sub:

$\alpha \sqcap \mathsf{int} ~\to~ \set{L: \mathsf{int} \,;\; R: \beta \sqcup \mathsf{nat} \sqcup \alpha }$

or equivalently, after simplification:

$\alpha \sqcap \mathsf{int} ~\to~ \set{L: \mathsf{int} \,;\; R: \mathsf{nat} \sqcup \alpha }$

The $\beta$ type variable is introduced to represent the result type of the $\mathsf{\textbf{if}}$ expression. During type inference, both constraints $\mathsf{nat} \leq \beta$ and $\alpha \leq \beta$ are registered, which is handled by replacing all occurrences of $\beta$ in positive positions (there is only one here) by $\beta \sqcup \mathsf{nat} \sqcup \alpha$. In this simple example, the $\beta$ type variable turns out to be redundant, and is later removed during simplification.

Now, recall that Java allows users to quantify types using type variables, and also allows bounding these type variables with subtypes and supertypes. (In fact, in real Java, lower-bound specifications are inexplicably only supported for wildcard type arguments – an apparent oversight. For more details on type bounds in Java, see docs.oracle.com/javase/tutorial/java/generics/bounded.html and docs.oracle.com/javase/tutorial/java/generics/lowerBounded.html.) The insight is that unions and intersections, when used at the appropriate polarities, are only a way of indirectly bounding type variables.

For instance, the unsimplified MLsub type seen above is equivalent to the following Java-esque type, where type parameters are written between ‘$\langle$’ and ‘$\rangle$’:

$\langle \alpha\ \ \mathsf{\textbf{extends}}\,\ \mathsf{int},\ \beta\ \ \mathsf{\textbf{super}}\,\ \mathsf{nat}\;|\;\alpha \rangle (\alpha) \to \set{L: \mathsf{int} \,;\; R: \beta}$

meaning that $\alpha$ should be a subtype of $\mathsf{int}$ and that $\beta$ should be a supertype of both $\mathsf{nat}$ and $\alpha$.

Moreover, the simplified type is equivalent to:

$\langle \alpha\ \ \mathsf{\textbf{super}}\,\ \mathsf{nat}\ \ \mathsf{\textbf{extends}}\,\ \mathsf{int} \rangle (\alpha) \to \set{L: \mathsf{int} \,;\; R: \alpha}$

meaning that $\alpha$ should be a supertype of $\mathsf{nat}$ and also a subtype of $\mathsf{int}$.

Interestingly, MLsub’s recursive types are essentially expressible via F-bounded polymorphism, which Java also supports (though only for upper bounds), allowing one to bound a type variable with a type that contains occurrences of the type variable itself.

Getting to The Bottom of MLsub

Reading Dolan’s paper and thesis, one could be led to think that algebraic subtyping is all about:

  • supporting unions and intersections in the type language;

  • separating the syntax of types between positive and negative types;

  • a new algorithm called biunification as an alternative to traditional unification.

However, I think that’s not the most helpful way of seeing what is happening here.

In fact, I argue that algebraic subtyping really is about:

  • making the semantics of types simple enough that all inferred subtyping constraints can be reduced to constraints on type variables;

  • recording the resulting constraints on all concerned type variables, which can be done in a traditional way (as we shall see later);

  • using intersection, union, and recursive types to express types in which the type variables are indirectly constrained (avoiding the need for separate constraint specifications).

In my humble opinion, the latter point is more like a “neat trick” and does not really offer any fundamental advantage over traditional approaches. On the contrary, if a type variable appears many times, the MLsub-style constraints on it will be duplicated, which could result in bigger type expressions.

For instance, compare the MLsub type:

$\forall \alpha.\ \alpha \sqcap \{\ x: \mathsf{int} \,;\; y: \mathsf{bool} \ \} \to \alpha \sqcap \{\ x: \mathsf{int} \,;\; y: \mathsf{bool} \ \} \to \alpha \sqcap \{\ x: \mathsf{int} \,;\; y: \mathsf{bool} \ \} \to \alpha$

to the equivalent type with explicit type variable constraints:

$\forall \alpha \leq \{\ x: \mathsf{int} \,;\; y: \mathsf{bool} \ \}.\ \alpha \to \alpha \to \alpha \to \alpha$

Now that we have got an intuition of what is really going on in MLsub, let us design a new algorithm that can leverage that intuition.


3. Algebraic Subtyping in Simple-sub

Here is the Simple-sub algorithm. For simplicity of the presentation, I will start by describing a version without let polymorphism, and add let polymorphism later on.

Simple-sub Term Syntax

The Scala implementation of the term syntax, which we will use in the rest of this article, is derived directly from the previous table, except that I have added a construct for integer literals:

enum Term {
  case Lit (value: Int)                         //  42
  case Var (name: String)                       //  x
  case Lam (name: String, rhs: Term)            //  fun x -> t
  case App (lhs: Term, rhs: Term)               //  s t
  case Rcd (fields: List[(String, Term)])       //  { a: 0, b: true, ... }
  case Sel (receiver: Term, fieldName: String)  //  t.a
  case Let (isRec: Boolean, name: String, rhs: Term, body: Term)
}

The algebraic data type is given in the new Scala 3 enum syntax, but can be expressed equivalently using a Scala 2 sealed class hierarchy.

As I have mentioned before, there is no need for an if-then-else feature, since we can just add one as a combinator: my parser actually parses code of the form $\mathsf{if}\ {e_0}\ \mathsf{then}\ {e_1}\ \mathsf{else}\ {e_2}$ as if it were written $\mathsf{if}\ e_0\ e_1\ e_2$, and I do type checking starting from a context which assigns to the ‘$\mathsf{if}$’ identifier the type $\forall \alpha.\ \mathsf{bool} \to \alpha \to \alpha \to \alpha$.4
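For illustration, an initial typing context along these lines could look as follows. This is only a sketch: the builtins value and the chosen primitives are mine, and it uses the Ctx alias and the PolymorphicType wrapper that are only introduced further down in this article:

// a hypothetical initial context, using definitions introduced later on:
val builtins: Ctx = {
  val v = freshVar(1)  // the 'a in:  bool -> 'a -> 'a -> 'a
  Map(
    "true"  -> Primitive("bool"),
    "false" -> Primitive("bool"),
    "if"    -> PolymorphicType(0, Function(Primitive("bool"), Function(v, Function(v, v))))
  )
}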

Simple-sub Type Syntax

The Simple-sub algorithm starts from the realization that union, intersection, top, bottom, and recursive types are not really core to MLsub’s type structure; rather, they are closer to emergent properties of its treatment of constraints.

Therefore, we will focus on type variables, primitive type constructors, function types, and record types as the cornerstone of the algorithm; their Scala syntax is given below:

enum SimpleType {
  case Variable  (st: VariableState)
  case Primitive (name: String)
  case Function  (lhs: SimpleType, rhs: SimpleType)
  case Record    (fields: List[(String, SimpleType)])
}

The state of a type variable is simply given by all the bounds that have been recorded for it:

class VariableState(var lowerBounds: List[SimpleType],
                    var upperBounds: List[SimpleType])

Notice that we use mutable variables in order to hold the current state of the algorithm — these lists will be mutated as the algorithm proceeds.

Type Inference

All we need to do in order to perform type inference now is to accumulate and propagate constraints that appear between different types, decomposing them until they boil down to constraints on type variables, which we can constrain by mutating their recorded bounds.

Core Type Inference Algorithm

The core function for type inference is typeTerm, which assigns a type to a given term in some context of type Ctx. It will be complemented by a constrain function to imperatively constrain one type to be a subtype of another, raising an error if that is not possible. We have made the ctx parameter in typeTerm implicit so it does not have to be passed explicitly into each recursive call when it does not change:

type Ctx = Map[String, SimpleType]

def typeTerm(term: Term)(implicit ctx: Ctx): SimpleType = ...

def constrain(lhs: SimpleType, rhs: SimpleType) = ...

We will make use of the two small helper functions freshVar and err, to generate new type variables and raise errors, respectively:

def freshVar(): Variable =
  Variable(new VariableState(Nil, Nil))

def err(msg: String): Nothing =
  throw new Exception("type error: " + msg)

Now that we have established all the premises, we can start writing the core of the basic type inference algorithm:

def typeTerm(term: Term)(implicit ctx: Ctx): SimpleType = term match {
  
  // integer literals:
  case Lit(n) => Primitive("int")

The .getOrElse method is used to access the ctx map at a given key, specifying a thunk to execute in case that key is not found:

  // variable references:
  case Var(name) => ctx.getOrElse(name, err("not found: " + name))
  
  // record creations:
  case Rcd(fs) => Record(fs.map { case (n, t) => (n, typeTerm(t)) })

In order to type lambda abstractions, we create a fresh variable to represent the parameter type, and type the body of the lambda in the current context extended with a binding from the parameter name to this fresh variable:

  // lambda abstractions:
  case Lam(name, body) =>
    val param = freshVar()
    Function(param, typeTerm(body)(ctx + (name -> param)))

To type applications, we similarly introduce a fresh variable standing for the result type of the function that we are applying:

  // applications:
  case App(f, a) =>
    val res = freshVar()
    constrain(typeTerm(f), Function(typeTerm(a), res))
    res

Finally, record accesses result in a constraint that the receiver (the expression on the left-hand side of the selection) is a record type with the appropriate field name:

  // record field selections:
  case Sel(obj, name) =>
    val res = freshVar()
    constrain(typeTerm(obj), Record((name, res) :: Nil))
    res
}

As one can observe, this basic type inference algorithm is actually quite similar to the traditional Algorithm J for HM-style type inference, and is not complicated at all!
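To make this concrete, here is what the algorithm produces for a tiny example term (a hypothetical test; the variable names are for illustration only):

// infer a type for the term:  fun x -> x.f
val ty = typeTerm(Lam("x", Sel(Var("x"), "f")))(Map.empty)
// ty is now Function(Variable(a), Variable(b)), where the parameter's state `a`
// has recorded the upper bound Record(("f", Variable(b)) :: Nil); after
// expansion and simplification, this is displayed as  {f: 'b} -> 'b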

Constraining Algorithm

First, note that type variable bounds, which are updated using mutable variables, may very well form cycles as type inference progresses. We have to account for this by using a cache variable (initially empty) which remembers all the type comparisons that have been or are being made, avoiding repeated work as well as infinite recursions:

def constrain(lhs: SimpleType, rhs: SimpleType)
      (implicit cache: MutSet[(SimpleType, SimpleType)] = MutSet.empty): Unit = {
        
  if (cache.contains((lhs, rhs))) return () else cache += ((lhs, rhs))

The next step is to match each possible pair of basic types that can be constrained successfully, and propagate the constraints:

  (lhs, rhs) match {
    
    case (Primitive(n0), Primitive(n1)) if n0 == n1 =>
      () // nothing to do
      
    case (Function(l0, r0), Function(l1, r1)) =>
      constrain(l1, l0)
      constrain(r0, r1)
      
    case (Record(fs0), Record(fs1)) =>
      fs1.foreach { case (n1, t1) =>
        fs0.find(_._1 == n1) match {
          case None => err("missing field: " + n1 + " in " + lhs.show)
          case Some((_, t0)) => constrain(t0, t1) }}

The cases for type variables appearing on the left- or right-hand side are interesting, as this is when we mutate the bounds of these variables. Crucially, we need to iterate over the existing opposite bounds of the variable being constrained, in order to make sure that they are consistent with the new bound:

    case (Variable(lhs), rhs) =>
      lhs.upperBounds ::= rhs
      lhs.lowerBounds.foreach(constrain(_, rhs))
      
    case (lhs, Variable(rhs)) =>
      rhs.lowerBounds ::= lhs
      rhs.upperBounds.foreach(constrain(lhs, _))

Scala syntax notes: foo(a, _, c) is shorthand for the lambda abstraction x => foo(a, x, c), and a ::= b is shorthand for the reassignment a = b :: a, which prepends element b to the list held in mutable variable a.

Finally, if all other options have failed, we report an error that the two types cannot be constrained:

    case _ => err("cannot constrain " + lhs.show + " <: " + rhs.show)
  }}

The reason we need to make sure the bounds of type variables remain consistent is that otherwise, we may end up inferring type variables with bounds that cannot be satisfied, which is a type error.
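For example, here is a hypothetical interaction showing this consistency checking in action:

val a = freshVar()
constrain(a, Primitive("int"))   // records the upper bound  'a <: int
constrain(Primitive("bool"), a)  // records the lower bound  bool <: 'a, then
                                 // checks it against the existing upper bound,
                                 // i.e.  bool <: int  -- which is a type error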

And that is mostly it: the simple essence of algebraic subtyping!

However, we are not quite done yet. We still need to generate user-readable type expressions, and to support recursive functions and let polymorphism.

User-Facing Type Representations

It may seem to the reader that we have gone too fast; that we have forgotten all about the other part of the MLsub type syntax: where did union, intersection, top, bottom, and recursive types go?

It turns out that these are not really core to MLsub, and I see them more as byproducts of its type inference and type pretty-printing processes. They only come up once we try to display friendly type expressions to users, after type inference has done the bulk of its job with the typeTerm and constrain functions described above.

Expressing Type Variable Constraints Indirectly

Remember that we have been constraining type variables, but that type variable constraints are not part of the simple type syntax that MLsub type inference outputs. The “trick” is to indirectly encode these constraints through the use of union and intersection types (recall the examples given in the previous section).

Targeted Type Syntax

In order to produce user-friendly type representations in the tradition of MLsub, we will target the type syntax tree presented below:

enum Type {
  case Top
  case Bot
  case Union         (lhs: Type, rhs: Type)
  case Inter         (lhs: Type, rhs: Type)
  case FunctionType  (lhs: Type, rhs: Type)
  case RecordType    (fields: List[(String, Type)])
  case RecursiveType (name: String, body: Type)
  case PrimitiveType (name: String)
  case TypeVariable  (name: String)
}

Type Expansion Algorithm

In order to produce simple immutable Type values from our internal mutable SimpleType representation, we need to go through a process that I call type expansion: its goal is to replace the positive occurrences of type variables with a union of their lower bounds, and their negative occurrences with an intersection of their upper bounds.

The algorithm starts by defining a worker function go which calls itself recursively in a straightforward manner, but makes sure to keep track of the type variables that are currently being expanded and of the current polarity — whether we are expanding a positive (polarity == true) or negative (polarity == false) type position:

def expandType(ty: SimpleType): Type = {
  
  def go(ty: SimpleType, polarity: Boolean)(inProcess: Set[VariableState]): Type =
    ty match {
      
      case Primitive(n) => PrimitiveType(n)
      
      case Function(l, r) =>
        FunctionType(go(l, !polarity)(inProcess), go(r, polarity)(inProcess))
        
      case Record(fs) =>
        RecordType(fs.map(nt => nt._1 -> go(nt._2, polarity)(inProcess)))

The interesting case here is that of variables. Depending on the polarity, we recurse into the lower or upper bounds, but we also make sure that we are not currently expanding that variable (otherwise we simply return a reference to it, to avoid an infinite recursion). Moreover, after having expanded a variable’s bounds, we make sure to wrap the result in the appropriate RecursiveType wrapper if necessary:

      case Variable(ts) =>
        val v = TypeVariable(ts.freshName)
        if (inProcess(ts)) v
        else {
          val bounds = if (polarity) ts.lowerBounds else ts.upperBounds
          val boundTypes = bounds.map(go(_, polarity)(inProcess + ts))
          val isRecursive = boundTypes.exists(_.typeVars.contains(v))
          val mrg: (Type, Type) => Type = if (polarity) Union.apply else Inter.apply
          if (isRecursive) RecursiveType(v.name,
            boundTypes.reduceOption(mrg).getOrElse(if (polarity) Bot else Top))
          else boundTypes.foldLeft[Type](v)(mrg)
        }
    }
  
  go(ty, true)(Set.empty)
}

Here, freshName is a value defined in VariableState which assigns a unique name to each variable, and typeVars is a helper in Type which contains all the variables that are mentioned by the type.

Insights On Recursive Types

The reason we need recursive types in the user-facing type syntax has now become quite obvious: we need them in order to “tie the knot” when we are trying to expand variables which appear in the expansion of their own bounds.

For instance, consider the case where we infer the type representation Function(Variable(s), Variable(t)), where s = new VariableState(Nil, Nil) and t = new VariableState(Function(Variable(s), Variable(t)) :: Nil, Nil). Notice that there is a cycle in the lower bounds of t (which occurs positively, as the function’s result); therefore, the expansion algorithm will turn this SimpleType representation into the user-facing type FunctionType(TypeVariable("s"), RecursiveType("t", FunctionType(TypeVariable("s"), TypeVariable("t")))). Since s has no bounds and occurs only negatively, it will later be simplified away to $\top$, so this type corresponds to $\top \to (\mu\alpha. {\top \to \alpha})$.

Limitations

This algorithm does produce some unnecessary variables (variables that could be removed to simplify the type expression). Moreover, it sometimes duplicates some inner structures of recursive types — for instance, in the previous example, we could have just produced the type $\mu\alpha. {\top \to \alpha}$ instead of $\top \to (\mu\alpha. {\top \to \alpha})$, which is equivalent. We will see later how to simplify type representations.

Supporting Let Polymorphism

In traditional ML languages, let bindings (including local ones) may be assigned a polymorphic type. This requires keeping track of generalized typing schemes that are to be instantiated with fresh variables on every use, and making sure that we are not over-generalizing some type variables which are shared with parts of the environment outside of the let binding.
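As a concrete example (a hypothetical test term), the following program only type-checks thanks to let polymorphism, since id is used at two different types:

// let id = fun x -> x in { fst = id 42; snd = id id }
val tm = Let(false, "id", Lam("x", Var("x")),
  Rcd(List("fst" -> App(Var("id"), Lit(42)),
           "snd" -> App(Var("id"), Var("id")))))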

Efficient Generalization

One way of determining which variables to generalize is to scan the current environment looking for references to type variables, but that is quite inefficient (it adds a linear-time operation in an important part of the algorithm). A better approach is to use levels: the idea is that all the fresh type variables created inside the right-hand side of a let binding should be assigned a higher level, unless they “escape”, through a constraint, into the enclosing environment, in which case we lower their level so that they are not generalized as part of this let binding.

For an excellent resource on the subject, see the article by Oleg Kiselyov.

Type Inference with Levels

We can use the same levels idea in order to achieve let polymorphism in Simple-sub. However, we have to be a little more careful because we do not merely unify type variables, but instead keep track of subtyping inequalities via their bounds.

The idea is to make sure that lower-level type variables never refer to higher-level ones, and to enforce that property manually when it would otherwise be violated by the addition of a bound.

We first need to add a lvl field to type variables:

class VariableState(val level: Int,
                    var lowerBounds: List[SimpleType],
                    var upperBounds: List[SimpleType])

and to update freshVar correspondingly:

def freshVar(implicit lvl: Int): Variable =
  Variable(new VariableState(lvl, Nil, Nil))

Next, we add an implicit lvl parameter to the typeTerm function, and we make sure to type the right-hand sides of let bindings with a higher level than the current one:

def typeTerm(term: Term)(implicit ctx: Ctx, lvl: Int): SimpleType = term match {
  ...
  
  // non-recursive let bindings:
  case Let(false, nme, rhs, bod) =>
    val rhs_ty = typeTerm(rhs)(ctx, lvl + 1) // use a higher level here!
    typeTerm(bod)(ctx + (nme -> PolymorphicType(lvl, rhs_ty)), lvl)
  
  ...
}

Notice that in the context used to type the body of the let binding, we wrap the right-hand side type into a PolymorphicType wrapper, defined as below:

case class PolymorphicType(level: Int, body: SimpleType) extends TypeScheme {
  def instantiate(implicit lvl: Int) = body.freshenAbove(level)
}

A polymorphic type wraps a simple type body, but additionally remembers the level at which the type variables that appear in body are to be considered universally quantified. Its instantiate(lvl) method copies body, replacing the type variables whose level is strictly above level with fresh variables at level lvl (which is performed by freshenAbove, whose implementation is fairly boring).
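For completeness, here is a sketch of what freshenAbove could look like, written as a member of SimpleType. This is my own reconstruction, not necessarily the exact code from the repository:

def freshenAbove(lim: Int)(implicit lvl: Int): SimpleType = {
  // maps already-freshened variable states to their replacements,
  // preserving sharing and handling potentially cyclic bounds:
  val freshened = MutMap.empty[VariableState, Variable]
  def freshen(ty: SimpleType): SimpleType =
    if (ty.level <= lim) ty  // nothing to freshen at or below the limit
    else ty match {
      case Variable(vs) => freshened.getOrElse(vs, {
        val v = freshVar  // a fresh variable at the current level lvl
        freshened += vs -> v
        v.st.lowerBounds = vs.lowerBounds.map(freshen)
        v.st.upperBounds = vs.upperBounds.map(freshen)
        v
      })
      case Function(l, r) => Function(freshen(l), freshen(r))
      case Record(fs)     => Record(fs.map(nt => nt._1 -> freshen(nt._2)))
      case Primitive(_)   => ty
    }
  freshen(this)
}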

In order to make PolymorphicType and SimpleType type-compatible, we create a TypeScheme common base trait (in Scala, a sealed trait is like an interface which can only be implemented by types defined in the same file), and we use TypeScheme as the type of the values bound in Ctx:

type Ctx = Map[String, TypeScheme]

sealed trait TypeScheme {
  // to be implemented in SimpleType and PolymorphicType:
  def instantiate(implicit lvl: Int): SimpleType
  def level: Int
}

enum SimpleType extends TypeScheme {
  case Function (lhs: SimpleType, rhs: SimpleType)
  ... // as before
  
  // the following members are available in all SimpleType values, and implement TypeScheme:
  def instantiate(implicit lvl: Int) = this
  lazy val level = this match {
    case Function (lhs, rhs) => max(lhs.level, rhs.level)
    case Record (fields)     => fields.map(_._2.level).maxOption.getOrElse(0)
    case Variable (vs)       => vs.level
    case Primitive (_)       => 0
  }
}

The level value in SimpleType is lazily evaluated to avoid needless recomputation, caching the maximum level of any type variables contained in a type.

Finally, we adapt the case for variables in typeTerm so that we instantiate the associated type scheme found in the current context:

def typeTerm(term: Term)(implicit ctx: Ctx, lvl: Int): SimpleType = term match {
  ...
  
  case Var(name) => ctx.getOrElse(name, err("not found: " + name)).instantiate
  
  ...
}

Constraining with Levels

The next step is to make sure that variables of higher level do not escape into the bounds of variables of lower level. We prevent that from happening by adding guards in the constraining algorithm:

def constrain(lhs: SimpleType, rhs: SimpleType)
    ...
  
    case (Variable(lhs), rhs0) if rhs0.level <= lhs.level =>
      lhs.upperBounds ::= rhs0
      lhs.lowerBounds.foreach(constrain(_, rhs0))
      
    case (lhs0, Variable(rhs)) if lhs0.level <= rhs.level =>
      rhs.lowerBounds ::= lhs0
      rhs.upperBounds.foreach(constrain(lhs0, _))

We also need to handle the case where there is a level violation. What we do in this case is to copy the type constraining the variable up to its subtrees of acceptable level, using the extrude function:

    case (lhs @ Variable(_), rhs0) =>
      val rhs = extrude(rhs0, false)(lhs.level, MutMap.empty)
      constrain(lhs, rhs)
      
    case (lhs0, rhs @ Variable(_)) =>
      val lhs = extrude(lhs0, true)(rhs.level, MutMap.empty)
      constrain(lhs, rhs)
      
    ...

The extrude function is defined below. Its goal is to make a copy of the problematic type such that the copy has the requested level and soundly approximates the original type. If a type variable vs needs to be copied as part of an extruded type, two new variables should be created, one for each of vs’s approximating bounds (unless of course the variable occurs strictly positively or strictly negatively, in which case one of the two bounds can be discarded). This way, we essentially create a conservative approximation of vs in the result of the extrusion, and any later instantiation of vs (created at every point the nested let binding is used) will be able to receive additional constraints independently, as long as these constraints are within the extruded approximating bounds of vs.

// a type variable, paired with the polarity at which it is being extruded:
type PolarVariable = (Variable, Boolean)

def extrude(ty: SimpleType, pol: Boolean)
      (implicit lvl: Int, cache: MutMap[PolarVariable, Variable]): SimpleType =
  if (ty.level <= lvl) ty else ty match {
    case Function(l, r) => Function(extrude(l, !pol), extrude(r, pol))
    case Record(fs) => Record(fs.map(nt => nt._1 -> extrude(nt._2, pol)))
    case tv: Variable => cache.getOrElse(tv -> pol, {
      val nvs = freshVar
      cache += tv -> pol -> nvs
      if (pol) {
        tv.st.upperBounds ::= nvs
        nvs.st.lowerBounds = tv.st.lowerBounds.map(extrude(_, pol))
      } else {
        tv.st.lowerBounds ::= nvs
        nvs.st.upperBounds = tv.st.upperBounds.map(extrude(_, pol))
      }
      nvs
    })
    case Primitive(_) => ty
  }

extrude recursively traverses its argument type tree up to its subtrees of acceptable levels. When it finds a type variable vs with the wrong level, it creates a copy nvs of the faulty type variable at the requested level lvl and registers the necessary constraints. This works because nvs has a level lower than vs, satisfying the invariant on levels. We have to recursively extrude the bounds of vs to place them in the nvs copy, but these bounds may form cycles. To avoid going into an infinite extrusion loop, we keep a cache of the variables already being extruded, along with the polarity of that extrusion.

In other words, extrude copies not only type trees, but also the potentially-cyclic subgraphs of type variable bounds which are rooted in these type trees.

Note on Let Polymorphism in MLsub

In contrast to the approach presented here for Simple-sub, Dolan’s biunification algorithm uses an approach which binds entire type environments to polymorphic variables. I found that approach slightly harder to understand, and it creates many more useless type variables that will need to be simplified later.

I also suspect that Dolan’s approach incurs more copying and is likely to be less efficient. As a wild guess, this is perhaps why the online demo of MLsub seems to scale in a worse way than my Simple-sub demo for inputs of the form let f = fun x -> x x x ... x for many xes — for instance, try:

let f = fun x -> x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x x

Supporting Recursive Let Bindings

Finally, supporting recursive let bindings is done in the usual way: we type the right-hand side in a context where the name is bound to a fresh type variable v, and we later constrain v to be a supertype of the actual right-hand-side type, as sketched below.
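Here is what that case of typeTerm could look like (a sketch based on the description above; the actual code in the repository may differ slightly):

  // recursive let bindings:
  case Let(true, nme, rhs, bod) =>
    val v = freshVar(lvl + 1)
    val rhs_ty = typeTerm(rhs)(ctx + (nme -> v), lvl + 1)
    constrain(rhs_ty, v)
    typeTerm(bod)(ctx + (nme -> PolymorphicType(lvl, v)), lvl)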

For all the details, the complete source code is available on Github.


4. Simplifying Types

I would venture to say that the most complex part of Simple-sub is actually its simplification algorithm. I will briefly describe how it proceeds in this section.

Simplification Approach

The simplification approach proposed by Dolan is to encode the problem as that of finite-state automata minimization, for which many algorithms are already known.

I took a simpler and more ad-hoc approach, which nevertheless seems to be working just as well most of the time. I do not know of instances where MLsub yields simpler types than the current Simple-sub implementation, though I would not be surprised if there were.

I do know some cases where Simple-sub infers simpler types than MLsub. For instance, for the term let rec r = { next = r }, MLsub infers $\{next: \mu{\tau}.\{next: \tau\}\}$, while Simple-sub infers $\mu{\tau}.\{next: \tau\}$.

Also, while the MLsub documentation says that MLsub infers type $\top \to (\mu\alpha. {\top \to \alpha})$ for a function taking an arbitrary number of arguments (which is also what Simple-sub infers), the actual online demo of MLsub infers instead:

((Top) -> ((Top) -> ((Top) -> (rec b = ((Top) -> (rec a = ((Top) -> (a | b)) | b)) | rec d = ((Top) -> (rec c = ((Top) -> (c | d)) | d))))))


Simple-sub’s type simplification essentially hinges on two main ideas:

Co-occurrence Analysis

Co-occurrence analysis looks at every variable that appears in a type in both positive and negative positions, and records along which other variables and types it always occurs. A variable $v$ occurs along a type $\tau$ if it is part of the same type union $… \sqcup v \sqcup … \sqcup \tau \sqcup …$ or part of the same type intersection $… \sqcap v \sqcap … \sqcap \tau \sqcap …$
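For instance, the bookkeeping for this analysis could look something like the following sketch. The names are hypothetical, and the actual implementation performs this analysis on the CompactType representation introduced at the end of this section:

// for each variable and polarity, remember the set of types that it has
// co-occurred with in *every* union (or intersection) seen so far:
val coOccurrences = MutMap.empty[(Boolean, Variable), Set[SimpleType]]
def recordCoOccs(pol: Boolean, tv: Variable, among: Set[SimpleType]): Unit =
  coOccurrences(pol -> tv) = coOccurrences.get(pol -> tv) match {
    case Some(tys) => tys intersect among  // keep only the common co-occurrers
    case None      => among
  }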

Based on this information, we can perform three kinds of simplification:

Polar Variable Removal

The first thing we want to do is to remove type variables that only appear positively (or negatively) in a type expression.

For instance, consider the type of $\lambda{x}. {x + 1}$, which would normally be inferred as $\alpha \sqcap \mathsf{int} \to \mathsf{int}$. The variable $\alpha$ in this type is redundant since it only occurs in negative position — whichever $\alpha$ the caller may pick, the function will still require the argument to be an $\mathsf{int}$, and it will still produce an $\mathsf{int}$ as a result. So we can simply remove $\alpha$ and obtain the simplified type $\mathsf{int} \to \mathsf{int}$.

As another example, the type of a function which uses its argument as an $\mathsf{int}$ but never terminates, $\mathsf{int} \to \alpha$, can be simplified to $\mathsf{int} \to \bot$.

Indistinguishable Variables Unification

We have previously mentioned that a type like $\mathsf{bool} \to \alpha \to \beta \to \alpha \sqcup \beta$ (the naive type of if-then-else) is equivalent to the simpler type $\mathsf{bool} \to \alpha \to \alpha \to \alpha$. This is true because the positive occurrences of the variables $\alpha$ and $\beta$ are “indistinguishable” — whenever an $\alpha$ is produced, a $\beta$ is also produced. Therefore, we cannot distinguish the two variables, and they can be unified.

Based on the result of the co-occurrence analysis, we can unify all those variables that always occur together either in positive or in negative position (or both).

Flattening Variable Sandwiches

What I call a “variable sandwich” is an inferred type variable $v$ which contains an upper bound $\tau$ that is also a lower bound of $v$, i.e., $v \leq \tau$ and $v \geq \tau$. This means that $v$ is essentially equivalent to $\tau$.

In an expanded type, this will show up as $v$ co-occurring with $\tau$ both positively and negatively. So we can use the result of co-occurrence analysis to remove variables that are sandwiched between two identical bounds. As an example, we can simplify the type $\alpha \sqcap \mathsf{int} \to \alpha \sqcup \mathsf{int}$ to just $\mathsf{int} \to \mathsf{int}$.

Conceptually, this idea subsumes polar variable removal (explained above). Indeed, if a variable never occurs negatively, it is conceptually as if it always occurs both positively and negatively along with the type $\bot$, so we can replace it with $\bot$, which will have the effect of removing it from the type unions it was a part of.

Hash Consing

Simple-sub’s other simplification approach, hash consing, deals with removing duplicated structures in expanded type expressions.

Consider the following recursive term:

$\mathsf{let}\ \mathsf{rec}\ f = {\lambda{x}.\{l = x \,;\; r = f x \}}\ \mathsf{in}\ {f}$

The expanded type inferred for this term would be:

$\alpha \to \{l: \alpha \,,\; r: \mu\beta. \{l: \alpha \,,\; r: \beta\}\}$

Notice that there is an outer record layer that is redundant. We would like to instead infer:

$\alpha \to \mu\beta. \{l: \alpha \,,\; r: \beta\}$

This can be done by performing some hash consing on the types being expanded in the expandType function: instead of simply remembering which variables are in the process of being expanded, we can remember whole type expressions; when we reach a type expression that is already being expanded, we introduce a recursive type variable in this position, removing the redundant outer layer of types like the above.

An Intermediate Representation for Simplification

The above two approaches do not work very well out of the box.

First, we cannot perform them on non-expanded types, since co-occurrence analysis will miss much information which is contained in bounds that have not been flattened yet. For instance, if we inferred a type variable $\alpha$ with upper bounds $\tau_0 \to \tau_1$ and $\tau_2 \to \tau_3$, only when we flatten these bounds and merge the function types into $\tau_0 \sqcup \tau_2 \to \tau_1 \sqcap \tau_3$ will we notice the co-occurrence of $\tau_0, \tau_2$ and $\tau_1, \tau_3$.

Second, it is awkward to perform the normalization steps necessary for this sort of function type merging on the expanded Type representation, which is too loosely structured for this purpose.

The Compact Type Representation

For this reason, we introduce an intermediate CompactType representation between SimpleType and Type, in which to perform simplifications more easily.

The syntax of CompactType corresponds to a normalized form of type representations, where all the non-recursive variable bounds have been expanded:

case class CompactType(
  vars: Set[TypeVariable],
  prims: Set[PrimType],
  rec: Option[SortedMap[String, CompactType]],
  fun: Option[(CompactType, CompactType)])

case class CompactTypeScheme(term: CompactType, recVars: Map[TypeVariable, CompactType])

The recVars field of CompactTypeScheme records the bounds of recursive type variables (which we cannot expand, as they are cyclic).
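For instance (an illustration with hypothetical values, where a stands for the internal representation of the type variable 'a), the simplified type $\alpha \sqcap \mathsf{int} \to \mathsf{nat} \sqcup \alpha$ would be represented as:

CompactType(
  vars  = Set.empty, prims = Set.empty, rec = None,
  fun   = Some((
    CompactType(Set(a), Set(PrimType("int")), None, None),    // read negatively: 'a ⊓ int
    CompactType(Set(a), Set(PrimType("nat")), None, None))))  // read positively: nat ⊔ 'a

Note that the components of a CompactType are interpreted according to the polarity of the position in which they occur: as a union in positive position, and as an intersection in negative position.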

The compactType function to convert a SimpleType into a CompactTypeScheme is straightforward and looks like the expandType function shown earlier. The simplifyType function is more complicated, and I will not show it here (you can see it in the source code, where it is implemented in a rather low-level way for performance reasons). Finally, hash consing is done as part of the expandCompactType function, and is more straightforward.


5. Conclusions

Algebraic subtyping is a promising new technique which infers principal types in the presence of pervasive, implicit subtyping. In this blog article, I have argued that algebraic subtyping is not, at its core, about bisubstitution or polar types (which are the terms in which it was introduced by its inventor, Stephen Dolan); rather, I think it can be understood using a traditional approach similar to Algorithm J, but recording type variable constraints instead of performing unification on the fly. Type simplification is the trickier part of the exercise. Yet it is crucial, as inferred types usually contain many useless variables and some duplicated structure. I argued that satisfactory type simplification can be obtained using a combination of “co-occurrence analysis” and hash consing.


  1. OCaml has been quite conservative with such principality-breaking features, notable exceptions being GADTs and overloaded record fields, but Haskell has been adopting them more liberally in order to increase expressiveness. 

  2. For instance, OCaml uses row types to make object types and polymorphic variants more flexible, avoiding the need for explicit coercions in the same way that subtyping would; and Haskell can use type class polymorphism to emulate subtyping, as exemplified by the lens and optics libraries, which are both explained using subtyping. 

  3. Simple-sub is operationally not very far from the graph-based implementation presented by Dolan, but it is framed in a much more familiar setting, and differs considerably from the system presented in the paper by Dolan and Mycroft. 

  4. As explained by Dolan in his MLsub paper, this type is just as general as the more natural $\forall \alpha, \beta.\ \mathsf{bool} \to \alpha \to \beta \to \alpha \sqcup \beta$.