###### 7/8/21

DRAFT

### Let polymorphism

We would like to have polymorphism in certain circumstances. For example, we would like the same identity function λx.x to be usable on both Booleans and Integers, without having to write two separate identity functions, one of type B->B and the other of type I->I.

The usual way to do this, in this setting, is via a 'let' construction. Most functional programming languages have an expression form similar to

**let** <variable> = <expression1> **in** <expression2>

For example, the following is a well-formed expression in the computer language Haskell

let identity = \x->x in (identity 7, identity False)

and a Haskell interpreter will return (7,False) as the value for this.

What we are seeing here is a local definition of 'identity'. The expression \x->x is Haskell's version of λx.x. Haskell has a notation for ordered pairs, namely ( , ). The expression being evaluated is an ordered pair whose first component is identity applied to 7 and whose second component is identity applied to False. For Haskell this is well-typed and has the value (7,False). This means that the identity function that has been defined is polymorphic: it has been applied both to Integers and to Booleans without any problem or issue.

Haskell, of course, has a much more sophisticated type system than mini-ML. But a local-definition let construction is a way of bringing polymorphism into mini-ML.

In a general hand-waving way, local definitions are not hard to do in lambda calculus. Say we wanted to define variableX as being expressionY (i.e. variableX=expressionY) and substitute expressionY for the occurrences of variableX throughout expressionZ. All we need to do is wrap a lambda binding variableX around expressionZ and apply that to expressionY, i.e.

(λvariableX.expressionZ) expressionY ... on reduction, that will substitute expressionY for the occurrences of variableX in expressionZ
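In code, this desugaring is a one-case rewrite over the syntax tree. The sketch below (Python, with hypothetical AST class names — the text itself does not fix a representation) turns every let into the corresponding application of a lambda:

```python
from dataclasses import dataclass

# Hypothetical AST node classes for the extended lambda calculus.
@dataclass
class Var:
    name: str

@dataclass
class Lam:
    param: str
    body: object

@dataclass
class App:
    fn: object
    arg: object

@dataclass
class Let:
    name: str
    value: object
    body: object

def desugar(e):
    """Rewrite every  let x = e1 in e2  into  (λx. e2) e1."""
    if isinstance(e, Let):
        return App(Lam(e.name, desugar(e.body)), desugar(e.value))
    if isinstance(e, Lam):
        return Lam(e.param, desugar(e.body))
    if isinstance(e, App):
        return App(desugar(e.fn), desugar(e.arg))
    return e  # variables (and constants) are left unchanged
```

For example, `desugar(Let("x", Var("y"), Var("x")))` yields `App(Lam("x", Var("x")), Var("y"))`, i.e. (λx.x) y.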

However, there can be details and complications. For example, can variableX appear within expressionY (as might be the case with recursion)? What about capture and problems with substitution (where, perhaps, several lets are nested one within another)? And will this be able to provide the polymorphism we are looking for?

For this extension to lambda calculus, the construction

**let** <variable> = <expression1> **in** <expression2>

will be accepted as creating well-formed formulas in the extended lambda calculus. [This does introduce the question of how that construction is to be reduced to normal form; however, since at present the interest is with types, we will set that aside.] Also, for simplicity, we will not let the <variable> appear within <expression1>.

As mentioned, the let addition is similar to a plain lambda calculus expression of the form

(λvarX. e₂) e₁

How might that entire expression form be typed? What might be a typing rule for it? One obvious approach is to work out the type of e₁, then calculate the type of e₂ using the type of e₁ as the type of all the occurrences of varX in e₂. That would give a typing rule like Possible Typing Rule 0:

    Γ ⊢ e₁:τ        Γ∪{x:τ} ⊢ e₂:σ
    ───────────────────────────────
    Γ ⊢ (let x=e₁ in e₂) : σ

Unfortunately this will not produce the effect we are looking for. The reason is that every occurrence of x in e₂ will have the same type, and there are expressions where different occurrences of x should have different types.

The Haskell example above uses an ordered pair, which is not available in our lambda calculus. But there are suitable expressions to illustrate the problem.

let identity = λx. x in

let ignore = identity True in

identity 2

We would expect this expression to have a value of (or to reduce to) 2, which is an Integer, i.e. of type I. Using Possible Typing Rule 0:

Here Γ, the outermost environment, is empty; x is identity; e₁ is λx. x; and e₂ is (let ignore = identity True in identity 2).

    ⊢ (λx. x):τ        {identity:τ} ⊢ (let ignore = identity True in identity 2):σ
    ──────────────────────────────────────────────────────────────────────────────
    ⊢ (let identity = λx. x in (let ignore = identity True in identity 2)) : σ

Then, on the right-hand branch, for {identity:τ} ⊢ (let ignore = identity True in identity 2):σ, the environment Γ is {identity:τ}; x is ignore; e₁′ is (identity True); and e₂′ is (identity 2).

    {identity:τ} ⊢ (identity True):τ′        {identity:τ}∪{ignore:τ′} ⊢ (identity 2):σ
    ──────────────────────────────────────────────────────────────────────────────────
    {identity:τ} ⊢ (let ignore = identity True in identity 2) : σ

But we are in trouble. (identity True) and (identity 2) are applications, and the Application rule cannot type identity in both cases: for (identity True), identity needs a type of the form B->…, while for (identity 2) it needs a type of the form I->…, and the single type τ cannot be both.

This is exactly the problem that let polymorphism addresses.

### An 'easy' explanation of Let polymorphism.

1. All lambda expressions are going to be monomorphic. That means that were we to annotate the lambda variable to get a view of it, all lambda expressions would look similar to this.

λx:<type>. <scope>

The important point here is that <type> is a plain type and not a universally quantified type-scheme. So the <type> is monomorphic. It may have type variables in it, but if those are valued (perhaps by unification) they will be valued only once (and uniquely).

2. The variable in a let expression is polymorphic. That means that were we to annotate the let variable to get a view of it, all let expressions would look similar to this.

let x:<type-scheme>= e_{1} in e_{2}

The type scheme here will likely be a full-blown type scheme with a universal quantifier and a list of quantified type variables. Which type variables, you may ask? Well, a little excursion is needed here into *free type variables (ftvs)*.

The type of e₁, were it to be evaluated on its own, might have free type variables in it; denote these by ftv(e₁). And, in the general case for finding types, there will be an environment Γ. Now, Γ is a collection of pairs of term variables and type schemes. Those type schemes may have free type variables in them, and all of those in Γ can be collected up; denote these by ftv(Γ).

Our interest is in the free type variables of e₁'s type which are not free in Γ, i.e. ftv(e₁)−ftv(Γ). Those are the ones that should be used as the quantified variables in the corresponding type scheme. So, for example, if the type of e₁ is typeVar1->typeVar2 and neither of these is free in the environment, the scheme is ∀typeVar1, typeVar2 (typeVar1->typeVar2). Were we to annotate the opening segment of the let, it would look, schematically, like this:

let x:∀typeVar1, typeVar2 (typeVar1->typeVar2)= e_{1} in e_{2}

3. The next step is to use instantiations of the type scheme, instantiations that have all new type variables. The scheme ∀typeVar1, typeVar2 (typeVar1->typeVar2), for example, has instantiations (typeVar100->typeVar101), (typeVar102->typeVar103), and many more.

For each occurrence of x in e₂, when considering its type, a new instantiation will be used. This means, to hark back to the earlier example, that when considering the types of the sub-expressions (identity True) and (identity 2), they will have entirely different type variables from one another (and this will allow them to be solved simultaneously).

4. This can all be rolled into a single rule

    Γ ⊢ e₁:τ        Γ∪{x:∀A(τ)} ⊢ e₂:σ
    ───────────────────────────────────
    Γ ⊢ (let x=e₁ in e₂) : σ

where i) the term variable x must not have a prior valuation in Γ, and ii) A is the list of free type variables ftv(e₁)−ftv(Γ).
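The generalization and instantiation steps described above can be sketched in code. In this sketch (Python), types are represented as strings like "I" and "B", type variables as strings beginning with "t", and function types as tuples ("->", dom, cod); a scheme is a pair (quantified variables, type). These representations and names are illustrative assumptions, not the text's own notation.

```python
import itertools

# Central source of brand-new type variables.
_fresh_counter = itertools.count(100)

def fresh():
    return f"t{next(_fresh_counter)}"

def is_tyvar(t):
    return isinstance(t, str) and t.startswith("t")

def ftv(t):
    """Free type variables of a type."""
    if is_tyvar(t):
        return {t}
    if isinstance(t, tuple):            # ("->", dom, cod)
        return ftv(t[1]) | ftv(t[2])
    return set()                        # base types "I", "B"

def ftv_env(env):
    """Free type variables of every scheme in the environment."""
    out = set()
    for (qvars, t) in env.values():
        out |= ftv(t) - set(qvars)
    return out

def generalize(env, t):
    """Quantify the type variables of t that are not free in env."""
    return (sorted(ftv(t) - ftv_env(env)), t)

def subst(t, s):
    """Apply a substitution (dict from type variables to types) to t."""
    if is_tyvar(t):
        return s.get(t, t)
    if isinstance(t, tuple):
        return ("->", subst(t[1], s), subst(t[2], s))
    return t

def instantiate(scheme):
    """Replace each quantified variable with a brand-new one."""
    qvars, t = scheme
    return subst(t, {q: fresh() for q in qvars})

scheme = generalize({}, ("->", "t1", "t2"))   # ∀t1,t2. t1->t2
a = instantiate(scheme)
b = instantiate(scheme)   # a and b share no type variables
```

Each call to `instantiate` consumes fresh variables from the central source, so the two instantiations `a` and `b` can later be constrained independently, which is precisely what the (identity True) / (identity 2) example needs.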

### Using constraints

As alluded to earlier, constraints make the problem of finding types much easier. Some of the expression forms impose constraints. These constraints can be collected up and the collection solved (mgu, most general unification, will produce a suitable substitution).

In the Junior Hindley Milner pages, all the constraints were equalities, e.g. that τ₁ = τ₂->σ. Going forward there may be some constraints of a different form, e.g. that one type is an instance of a type scheme.
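For the equality constraints, a most general unifier can be computed by the standard worklist algorithm. The sketch below (Python) uses an assumed representation: "I" and "B" for base types, strings beginning with "t" for type variables, and ("->", dom, cod) for function types. It is illustrative, not production code.

```python
def is_tyvar(t):
    return isinstance(t, str) and t.startswith("t")

def ftv(t):
    """Free type variables of a type."""
    if is_tyvar(t):
        return {t}
    if isinstance(t, tuple):              # ("->", dom, cod)
        return ftv(t[1]) | ftv(t[2])
    return set()                          # base types "I", "B"

def subst(t, s):
    """Apply substitution s to type t, chasing chains of bindings."""
    if is_tyvar(t):
        return subst(s[t], s) if t in s else t
    if isinstance(t, tuple):
        return ("->", subst(t[1], s), subst(t[2], s))
    return t

def unify(constraints):
    """Solve a list of equality constraints (pairs of types);
    return a most general substitution, or raise on failure."""
    s = {}
    work = list(constraints)
    while work:
        a, b = work.pop()
        a, b = subst(a, s), subst(b, s)
        if a == b:
            continue
        if is_tyvar(a):
            if a in ftv(b):               # occurs check
                raise TypeError("infinite type")
            s[a] = b
        elif is_tyvar(b):
            work.append((b, a))           # flip so the variable is first
        elif isinstance(a, tuple) and isinstance(b, tuple):
            work.append((a[1], b[1]))     # unify domains
            work.append((a[2], b[2]))     # unify codomains
        else:
            raise TypeError(f"cannot unify {a} and {b}")
    return s
```

For example, unifying the constraints t1 = I->t2 and t2 = B resolves t1 to I->B.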

### The Constraint rules

This determination can be made using 'constraint typing rules'. Each typing rule has some premises and a conclusion. The premises are written above a horizontal line and the conclusion below that line. Each rule permits one step of the derivation or inference.

Of vital importance are *new type variables*. Several of the rules introduce type variables. These need to be totally new to the whole reasoning or proof. Conceptually, we can imagine a central source of type variables and each time a type variable is introduced that central source supplies a new one.
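That central source is trivial to model in code: a single shared counter, consulted by every rule that needs a fresh type variable. The naming here ("t0", "t1", ...) is an illustrative convention, not something the text prescribes.

```python
import itertools

# A single shared counter plays the role of the central source.
_supply = itertools.count()

def fresh_tyvar():
    """Hand out a type variable name that has never been used before."""
    return f"t{next(_supply)}"

a = fresh_tyvar()
b = fresh_tyvar()   # guaranteed distinct from a
```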

**The general form**

    <premises> or <preconditions>
    ─────────────────────────────
    <environment> ⊢ <expression>:<its type>    <constraints>

**Variable** (i.e. the expression, e, whose type is to be determined, is a term-variable)

    (e:τ) ∈ Γ
    ─────────
    Γ ⊢ e:τ    {}

If the expression e to be typed is a term-variable, and it has a value in the environment, then its type is that value and there are no constraints.

Examples of the use of this rule include

    (e:I) ∈ Γ
    ─────────
    Γ ⊢ e:I    {}

    (e:I->B) ∈ Γ
    ────────────
    Γ ⊢ e:I->B    {}

**Constant** (i.e. the expression, e, whose type is to be determined is either a Boolean (B) or an Integer (I))

    ──────────────────
    Γ ⊢ <a Boolean>:B    {}

    ──────────────────
    Γ ⊢ <an Integer>:I    {}

If an expression is True or False, its type is a B. If an expression is one of 0,1,2,3... etc., its type is I. This produces no constraints.

**Lambda** (i.e. the expression, λx.e, whose type is to be determined is a function, without annotation)

This rule requires that the lambda variable (x in the case of λx.e) not be in the original or unaugmented environment. If it is, the lambda expression is alpha-rewritten to use another variable, say y, to give an equivalent expression (with y substituted for x in e) which satisfies the precondition.

    Γ ∪ {x:τ} ⊢ e:σ    C
    ────────────────────
    Γ ⊢ λx.e : τ->σ    C

If {x:τ} is added to the environment, and then the expression e (the potential scope) has the type σ requiring constraints C, then the lambda expression λx.e has the type τ->σ using the environment Γ without {x:τ}, and the constraints C are unchanged.

**Application** (i.e. the expression, (e_{1} e_{2}) whose type is to be determined is an application)

There is a new type variable, σ, that is going to be used here. It must not be a free type variable of any of the prior environments or constraints. An easy way to achieve this is to ensure that σ is completely new.

    Γ ⊢ e₁:τ₁    C₁        Γ ⊢ e₂:τ₂    C₂
    ──────────────────────────────────────    where C′ = C₁ ∪ C₂ ∪ {τ₁ = τ₂->σ}  /* σ new */
    Γ ⊢ (e₁ e₂) : σ    C′

If you are trying to type an application (e₁ e₂): if e₁ has the type τ₁ with constraints C₁ and e₂ has the type τ₂ with constraints C₂, then (e₁ e₂) has the type σ, where σ is a new type variable, under the combined constraints together with τ₁ = τ₂->σ.
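The four rules above can be tied together as a single constraint-generating function. The sketch below (Python) is the junior, scheme-free version: the environment maps term variables to plain types, expressions are tuple-tagged, and all names are illustrative assumptions rather than the text's notation.

```python
import itertools

_supply = itertools.count()

def fresh():
    return f"t{next(_supply)}"

def infer(env, e):
    """Return (type, constraints) for e under env, following the
    Variable, Constant, Lambda and Application rules."""
    kind = e[0]
    if kind == "var":                       # Variable: look up in env, no constraints
        return env[e[1]], set()
    if kind == "bool":                      # Constant: Booleans have type B
        return "B", set()
    if kind == "int":                       # Constant: Integers have type I
        return "I", set()
    if kind == "lam":                       # Lambda: bind x to a fresh variable
        _, x, body = e
        t = fresh()
        sigma, c = infer({**env, x: t}, body)
        return ("->", t, sigma), c
    if kind == "app":                       # Application: fresh sigma, add t1 = t2->sigma
        _, e1, e2 = e
        t1, c1 = infer(env, e1)
        t2, c2 = infer(env, e2)
        sigma = fresh()
        return sigma, c1 | c2 | {(t1, ("->", t2, sigma))}
    raise ValueError(f"unknown expression form: {kind}")

# (λx. x) 5 : result type t1, with the single constraint t0->t0 = I->t1
ty, cs = infer({}, ("app", ("lam", "x", ("var", "x")), ("int", 5)))
```

Feeding the collected constraints to mgu then values t0 and t1 at I, giving (λx. x) 5 the type I, as expected. Note that the Variable rule here returns a plain type; extending it to instantiate type schemes is exactly the change that let polymorphism calls for.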