Junior Hindley-Milner I


Finding the Type of an Expression

Ideally, if an expression has a type, we would like to find that type and prove that the expression has it; and if the expression does not have a type, we would like to prove that it does not. There are different approaches to doing this. In more advanced work, with more advanced type systems, there is type inference, and even type inference by algorithm. The standard approach, more or less, is Hindley-Milner type inference. But, in turn, Hindley-Milner can be carried out by different algorithms. Usually these interleave assigning values to variables with extracting constraints that need to be satisfied. But there is something to be said for an approach that extracts all the constraints first, then solves them (apparently Hindley was in favor of this). This is clearer at a pedagogical level and it generalizes better to other areas. So that is what we will try. [The Hindley-Milner type systems and algorithms are often alternatively known as Hindley-Milner-Damas systems, because Luis Damas made important contributions.]

Junior Hindley-Milner

Hindley-Milner would never be used on STLC. That would be like killing a fly with a shotgun. However, maybe there are some insights to be had here. Also, the H-M approach is typically used where there are no annotations, whereas STLC is characterized by a type annotation on each lambda variable. Being able to see a type annotation does help in understanding the algorithms. So STLC, with H-M algorithms, should provide a gentle introduction to type inference.

Variables and 'Algebraic' variables

In our STLC, we have variables, such as x, y, z, etc., and they occur in formulas like λx:I.λy:B.λz:I.(x (y z)). These variables can be called 'term variables'. But in the types for STLC, we do not have any variables. The types are Int, Bool, Int→Bool, Bool→Bool→Bool, etc., and nowhere in there are there any variables. However, in this context we regularly use expressions like "If t1 is a type and t2 is a type then t1→t2 is a type." but 't1' and 't2' are not types (they are not Ints, Bools, etc.). What is going on here? Well, t1 and t2 are like variables: they stand for types, or have values that are types, although they are not themselves types in STLC. They are the sort of variables we might use if we were trying to figure everything out on a sheet of paper or the back of an envelope. A label we might use here is 'algebraic variables'.

Our STLC has no type variables, but we will be using t1, t2, t3 and t4 etc. as algebraic variables.

Move to unique variables in the STLC Expression

STLC can have expressions like this

(λx:I.(x x)   λx:B.(x x))

It is an application where there are two separate bindings of the lambda variable x within a larger overall expression. Now, there is absolutely nothing wrong with this. But it will be clearer going forward if the binding variables are different from one another within an expression, e.g.

(λx:I.(x x)   λy:B.(y y))

We know that the Rule of Alpha Rewrite allows us to rewrite a binding variable to a new variable. We also know that the number of variables in an expression is finite, and the number of variables available in lambda calculus is infinite. So we can always Alpha Rewrite an expression so there are no duplicate binding variables in it. We will imagine that this has been done, if it needs doing, and so all binding variables are unique.
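The renaming step can be sketched as a short recursive walk. The tuple encoding of terms here is our own convention, not part of STLC, and for simplicity the sketch assumes the numbered names it generates (x0, x1, ...) are not already in use in the expression:

```python
# A sketch of making binding variables unique by alpha-renaming.
# Terms are tuples (a convention of ours, not from the text):
#   ("var", name), ("lam", name, annot, body), ("app", fun, arg), ("lit", value)
import itertools

def uniquify(term, env=None, counter=None):
    """Alpha-rename so every lambda binds a distinct variable."""
    env = env or {}                          # maps old names to fresh names
    counter = counter or itertools.count()
    tag = term[0]
    if tag == "var":
        return ("var", env.get(term[1], term[1]))
    if tag == "lit":
        return term
    if tag == "app":
        return ("app", uniquify(term[1], env, counter),
                       uniquify(term[2], env, counter))
    if tag == "lam":
        _, name, annot, body = term
        fresh = f"{name}{next(counter)}"     # e.g. x -> x0, x1, ...
        return ("lam", fresh, annot,
                uniquify(body, {**env, name: fresh}, counter))

# (λx:I.(x x)  λx:B.(x x)) becomes an expression with two distinct binders:
expr = ("app",
        ("lam", "x", "I", ("app", ("var", "x"), ("var", "x"))),
        ("lam", "x", "B", ("app", ("var", "x"), ("var", "x"))))
print(uniquify(expr))
```

After the walk, the two lambdas bind x0 and x1 respectively, matching the renaming done by hand above.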

Fresh algebraic type variables

There are infinitely many algebraic variables t1, t2, t3, t4, ... . Typically, when starting to use one, we will take a new one. We can refer to these algebraic variables as type variables, as long as we keep in mind that they are not part of STLC and STLC's type system; rather, they are something outside it that we are using to analyze the types and produce algorithms about them.
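A supply of fresh type variables is easy to sketch. The name `fresh` and the t-numbering scheme are our own conventions:

```python
# A minimal sketch of an inexhaustible supply of fresh algebraic type variables.
import itertools

_supply = itertools.count(1)

def fresh():
    """Return a new, never-before-used algebraic type variable: t1, t2, t3, ..."""
    return f"t{next(_supply)}"

print(fresh(), fresh(), fresh())   # t1 t2 t3
```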

Assumptions and the Assumption Set

During our analysis of an expression in the STLC language, it is likely that we will meet variables (term variables). On first encounter with a specific variable, we often will not know a type for that variable, so what we do is to assume that the variable, say x, has type t3, where the type t3 that we use is new (i.e. fresh). That pairing of a variable in the STLC language with a fresh type variable outside the STLC language is an assumption, and the collection of several assumptions amounts to the assumptions or the assumption set. Typically, a successful type analysis will start and end with no assumptions, but assumptions will come and go during the course of the analysis.


Constraints

Constraints are relations among the types (or, more strictly, the type variables). In more advanced work, there can be different kinds of relations, but at our level, for STLC, the only constraint relation is a kind of equality. So, for example,

t1 = t2 and t3 = t4

are constraints. The first example means that whatever type t1 stands for, that type needs to be the same as whatever type t2 stands for (so they could both be Ints, or both Bools, or, indeed, both (Int→Bool)s, etc.). Similarly, the second example constraint means that t3 must have a type that matches t4. To be a little more explicit, if there is a substitution in which t1, t2, t3 and t4 are given actual type values, the resulting equations must be true, e.g. the substitution [t1:Int, t2:Int, t3:Bool, t4:Bool] would do it (for that would give Int=Int and Bool=Bool).
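As a sketch of what 'satisfying' means, here is a small checker. The tuple encoding ("->", dom, cod) for arrow types and the function names are our own conventions:

```python
# A sketch of checking that a substitution satisfies equality constraints.
# Types: base types "Int"/"Bool", algebraic variables "t1"..., arrows ("->", dom, cod).

def apply_subst(subst, ty):
    """Replace algebraic variables in a type by their values in subst."""
    if isinstance(ty, tuple):                      # arrow type: substitute inside
        return ("->", apply_subst(subst, ty[1]), apply_subst(subst, ty[2]))
    return subst.get(ty, ty)                       # variable or base type

def satisfies(subst, constraints):
    """True iff every equation lhs = rhs becomes literally true under subst."""
    return all(apply_subst(subst, l) == apply_subst(subst, r)
               for l, r in constraints)

constraints = [("t1", "t2"), ("t3", "t4")]
print(satisfies({"t1": "Int", "t2": "Int", "t3": "Bool", "t4": "Bool"}, constraints))   # True
print(satisfies({"t1": "Int", "t2": "Bool", "t3": "Bool", "t4": "Bool"}, constraints))  # False
```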

Generating the constraints

The aim is to type a given expression. We will start by assuming our given expression has type t0. That is the unknown that we are going to solve for shortly. Then we can generate constraints by typing its subexpressions, then their subexpressions, and so on, until there is nothing left.

i) An example

For example, consider

(λx:Int.x 7)

Initially, we do not know what its type is; we will assume that it is t0.

Application expression

But the expression is an application of the form (<function expression> <argument expression>). We do not yet know what the type of the function expression is, so let it be the new t2. We also do not know what the type of the argument expression is, so let it be the new t3. So we are thinking that our application (assumed to have type t0) is

(t2 t3)

We do know some things. For the overall expression to be well-typed, the function expression must have the type (t3→t0). Schematically

((t3→t0) t3)

Now we have some constraints, namely

Constraints from overall application [t2=t3→t0].

The Function sub-expression

This function expression, with type t2, is actually

λx:Int.x

This is a lambda expression with the form λx:Int.<scope>. In the general case, we do not know what the type of the lambda variable is. And this is where we resort to assumptions. We assume that it has the fresh type t4

Assumptions = [ x: t4 ]

As we scan further, we see from the annotation that the type of x is an Int

x:Int

that means that t4 is an Int

Assumptions = [ x: t4 ]    Constraints = [ t4 = Int ]

Function expression scope

The scope in this case is the single variable

x

What we do with a variable is to look up its type in the assumptions; x's type is t4

Any lambda expression is a function, and it is a function from the type of its variable to the type of its scope, i.e.

<type of variable> → <type of scope>

In the example, both of these are t4, so

t2 = t4→t4

When we finish analyzing any lambda expression, we discharge (i.e. drop) assumptions regarding its variable. So the assumptions are now empty. However, the constraints remain and they are

Constraints from function subexpression [t2 = t4→t4, t4 = Int]

Argument expression

Earlier we had taken a fresh type variable t3 as being the type of the argument expression i.e. of

7
The number 7 has type Int, and this introduces a constraint

Constraints from argument subexpression [ t3 = Int]

Collecting up

The constraint analysis is now finished (we have broken the initial expression down into its subexpressions and their subexpressions, etc.). We collect up the constraints. They are

[t2 = t3→t0, t2 = t4→t4, t4 = Int, t3 = Int]
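The walk just carried out can be sketched as a recursive constraint generator. The tuple encodings of terms and types are our own conventions, and note one small difference: the sketch introduces an extra variable t5 for the lambda's scope, so it emits t2 = t4→t5 together with t5 = t4, which says the same as t2 = t4→t4. The supply starts at t2 to line up with the numbering used above (t0 is the whole expression).

```python
# Sketch of constraint generation for (λx:Int.x 7), under our own tuple
# encodings: terms ("var",n), ("lam",n,annot,body), ("app",f,a), ("lit",v);
# arrow types ("->", dom, cod); constraints are (lhs, rhs) equations.
import itertools

supply = itertools.count(2)        # t0 is the whole expression; fresh vars start at t2

def fresh():
    return f"t{next(supply)}"

def generate(term, ty, assumptions, constraints):
    """Record equality constraints; ty is the type assumed for this term."""
    tag = term[0]
    if tag == "var":
        constraints.append((ty, assumptions[term[1]]))   # look up the assumption
    elif tag == "lit":
        constraints.append((ty, "Int"))                  # numerals have type Int
    elif tag == "app":
        tf, ta = fresh(), fresh()
        constraints.append((tf, ("->", ta, ty)))         # function : <arg type> -> <whole type>
        generate(term[1], tf, assumptions, constraints)
        generate(term[2], ta, assumptions, constraints)
    elif tag == "lam":
        _, name, annot, body = term
        tv, ts = fresh(), fresh()                        # variable type, scope type
        constraints.append((tv, annot))                  # the annotation fixes tv
        constraints.append((ty, ("->", tv, ts)))         # lam : tv -> ts
        generate(body, ts, {**assumptions, name: tv}, constraints)
        # returning here discharges the assumption about `name`

expr = ("app", ("lam", "x", "Int", ("var", "x")), ("lit", 7))
constraints = []
generate(expr, "t0", {}, constraints)
print(constraints)
```

Running this produces the equations t2 = t3→t0, t4 = Int, t2 = t4→t5, t5 = t4, t3 = Int, equivalent to the hand-derived list above.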

Solving the constraints, in particular solving for t0

We want to solve the constraints. What does this mean? The ts are algebraic variables. We want to satisfy the constraints, i.e. we want to find type values (Ints, Int→Bools, etc.) under which all the constraints come out true. We want to find values, or a substitution, that satisfies the constraints.

This type of constraint set has the mathematical property that it does not matter in which order the constraints are addressed when attempting to solve them (they are 'confluent'). Right now we are going to use our insight to spot constraints to work with. But later, when we come to producing an algorithm to do this, we will likely put the constraints into a list and work through the list in the order that the list gives us.

Giving our intuitions free rein, you can see one way to start

[t2 = t3→t0, t2 = t4→t4, t4 = Int, t3 = Int]

apply the substitution [t4:Int], which gives us

[t2 = t3→t0, t2 = Int→Int, Int = Int, t3 = Int]

The Int=Int constraint does not tell us anything and can be dropped. But we can now extend our substitution to include t3:Int

[t2 = t3→t0, t2 = Int→Int, t3 = Int]  applying [t3:Int, t4:Int] gives

[t2 = Int→t0, t2 = Int→Int, Int = Int]

extend the substitution to include t2:Int→Int

What we are left with is this

[Int→t0 = Int→Int]
This can be satisfied. The only way to do it is if t0 is an Int. Here is the final substitution

[t0:Int, t2: (Int→Int), t3:Int, t4:Int]

The original expression (λx:Int.x 7), i.e. t0, is well-typed and its type is Int.
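The hand-solving above can be sketched as a small unification loop over the constraint list. The tuple encoding for arrow types and the convention that algebraic variables are strings starting with 't' are our own; this sketch also omits the occurs check that a full unifier would need:

```python
# A sketch of solving equality constraints by unification.
# Types: base types "Int"/"Bool", variables "t0"..., arrows ("->", dom, cod).

def apply_subst(subst, ty):
    """Resolve a type fully against the substitution (following chains)."""
    if isinstance(ty, tuple):
        return ("->", apply_subst(subst, ty[1]), apply_subst(subst, ty[2]))
    return apply_subst(subst, subst[ty]) if ty in subst else ty

def unify(constraints):
    """Solve equality constraints; returns a satisfying substitution."""
    subst = {}
    work = list(constraints)
    while work:
        l, r = work.pop(0)
        l, r = apply_subst(subst, l), apply_subst(subst, r)
        if l == r:
            continue                                  # e.g. Int = Int: drop it
        if isinstance(l, tuple) and isinstance(r, tuple):
            work += [(l[1], r[1]), (l[2], r[2])]      # match arrows component-wise
        elif isinstance(l, str) and l.startswith("t"):
            subst[l] = r                              # extend the substitution
        elif isinstance(r, str) and r.startswith("t"):
            subst[r] = l
        else:
            raise TypeError(f"cannot solve {l} = {r}")  # e.g. Int = Bool
    return subst

constraints = [("t2", ("->", "t3", "t0")),
               ("t2", ("->", "t4", "t4")),
               ("t4", "Int"),
               ("t3", "Int")]
s = unify(constraints)
print(apply_subst(s, "t0"))   # Int
```

Resolving every variable through the resulting substitution reproduces the final substitution found by hand: t0 is Int, t2 is Int→Int, t3 and t4 are Int.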


To be continued ...