# Polymorphism, Annotation, and Decidable Type Inference

###### 7/8/21

[In this section, we will sometimes use annotated abstractions, and sometimes use unannotated abstractions. This is just to be flexible in discussing their respective merits.]

The annotated STLC that we have developed thus far has an unusual feature. Consider two abstractions with annotations,

λx:I.x

and

λx:B.x

and also, this abstraction, without an annotation,

λx.x

λx.x is the identity function. When applied, it takes x as an argument, or input, and returns x as the value, or output. It will work perfectly with any x whatsoever, no matter what the type of x.

Going back to λx:I.x and λx:B.x: they too are identity functions, but the first is well-typed only in contexts where it is a function Int->Int (e.g. (λx:I.x 5)), and similarly for the second with Booleans (e.g. (λx:B.x True)). Each of these function expressions has a single type. They are monotyped, or monomorphic. In general, being more specific with types is a good thing (that is how we catch type errors), but in a case like this, with identity, the excessive type annotation is 'fussy'.
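The contrast can be sketched in Python's typing module (a rough analogue only; the names id_int, id_bool, and identity are mine):

```python
from typing import TypeVar

# Monomorphic identities: one per type, as with the annotated STLC.
def id_int(x: int) -> int:    # analogue of λx:I.x
    return x

def id_bool(x: bool) -> bool: # analogue of λx:B.x
    return x

# Polymorphic identity: one function for any type, as with λx.x.
T = TypeVar("T")

def identity(x: T) -> T:
    return x
```

A type checker will reject id_int(True), but identity accepts arguments of any type.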

#### Polymorphism

What we feel we might want for some abstractions is something like

λx:T.x where T is a type variable that can be instantiated to Int, or Bool, or Int->Int, etc. Remember, we do not have type variables in the STLC itself, so at present we have no way of expressing this as an annotation.

When an expression may have a type that involves type variables, its type is said to be polymorphic (i.e. it may have different instances of the same general form).

It is clear, with algorithms and code in general, that polymorphism is entirely appropriate in certain circumstances. Consider a data structure like lists: for example, lists of integers, lists of characters, lists of employee records, etc.; and consider a function which produces the first element of a non-empty list. That function does not 'need to know' what is in the list. It just has to be able to produce the contents of the first slot or cubby. The function should be polymorphic. (Some programming languages call polymorphic functions 'generics'.)
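A sketch of such a generic first-element function, here in Python (the name first is mine; Python's TypeVar plays the role of the type variable):

```python
from typing import Sequence, TypeVar

T = TypeVar("T")

def first(xs: Sequence[T]) -> T:
    # The function never inspects the element type; it only reads slot 0.
    if not xs:
        raise ValueError("first: empty sequence")
    return xs[0]
```

The same definition serves lists of integers, strings, records, or anything else; the result type simply follows the element type of the argument.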

#### Polymorphism and type inference

As we have seen, there are algorithms, such as the Hindley-Milner one, that can infer the types of expressions in STLC (basically without too much trouble). No annotations are needed. Generally, annotations are good in that they provide a check that what the programmer thinks she or he is doing is actually type-correct. But that check can be carried out in all cases only if the system can infer the types. The system checks what the programmer proposes.

Unfortunately, if STLC is extended to full polymorphism, type inference becomes undecidable. That is, there will be expressions whose types no type inferencer can determine.

#### Two ways out

It is possible to extend STLC to partial polymorphism which retains decidable type inferencing. That is the line taken by so-called Hindley-Milner (HM) type systems (or Let-polymorphism type systems). These can be seen, in part or in whole, in functional programming languages like ML and Haskell.

Alternatively, it is also possible to extend STLC to a fairly extensive polymorphism coupled with user-supplied annotations ('hints'). These systems can support reasonable type inference (how much may depend on the hints). Examples here include the polymorphic lambda calculus and the System F family of calculi (also seen, in part, in the functional programming language Haskell).

#### Some more detail

Let us build something up for discussion. Consider

λfirst.λsecond.second (that is a longer and clearer version of λf.λs.s). This expression is an abstraction whose body is itself an abstraction (in fact, identity). If this 'function' is applied to two arguments, it will drop the first and return the second. Say we apply it to two identity applications as arguments

((λfirst.λsecond.second (λx.x 2)) (λx.x True))

Here is the Normal Order Reduction for that expression

((λsecond.second) (λx.x True))

(λx.x True)

True
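The reduction steps above can be mechanized. Here is a minimal normal-order reducer, a sketch in Python with my own term representation (tuples tagged "var", "lam", "app", "const"); it reduces the leftmost-outermost redex first, substituting arguments unevaluated, just as in the steps shown:

```python
# Terms: ("var", name) | ("lam", name, body) | ("app", f, a) | ("const", v)

def subst(term, name, value):
    tag = term[0]
    if tag == "var":
        return value if term[1] == name else term
    if tag == "const":
        return term
    if tag == "lam":
        # assume no variable capture in these small examples
        return term if term[1] == name else ("lam", term[1], subst(term[2], name, value))
    return ("app", subst(term[1], name, value), subst(term[2], name, value))

def reduce_normal(term):
    while term[0] == "app":
        f, a = term[1], term[2]
        f = reduce_normal(f)          # leftmost-outermost: reduce the head first
        if f[0] == "lam":
            term = subst(f[2], f[1], a)   # beta-reduce with the raw argument
        else:
            return ("app", f, reduce_normal(a))
    return term

ident = ("lam", "x", ("var", "x"))
drop_first = ("lam", "first", ("lam", "second", ("var", "second")))
# ((λfirst.λsecond.second (λx.x 2)) (λx.x True))
expr = ("app",
        ("app", drop_first, ("app", ident, ("const", 2))),
        ("app", ident, ("const", True)))
```

Running reduce_normal on expr walks through the same intermediate forms as the hand reduction and ends at the constant True.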

Now, let us annotate the variables going in the reverse order

True, a constant, is a Bool

(λx:B.x True)

((λsecond:B.second) (λx:B.x True))

((λfirst:I.λsecond:B.second (λx:I.x 2)) (λx:B.x True)) and that whole expression has type Bool.

#### How were these variables annotated?

Well, we just looked at the applications and saw what made sense. For example, in the application (λx.x True) the abstraction has to be Bool->Bool so we annotate the abstraction, within the application, as (λx:B.x True). Similarly, in the application (λx.x 2) the abstraction has to be Int->Int so we annotate the abstraction, within the application, as (λx:I.x 2).
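This 'looking at the application and seeing what makes sense' is essentially unification. A toy unifier, sketched in Python with my own type representation (and no occurs check, so a sketch only), recovers the Bool->Bool annotation from (λx.x True):

```python
# Types: "Int" | "Bool" | ("->", dom, cod) | ("tv", name)

def walk(t, sub):
    # Follow type-variable bindings recorded in the substitution.
    while isinstance(t, tuple) and t[0] == "tv" and t[1] in sub:
        t = sub[t[1]]
    return t

def unify(t1, t2, sub):
    t1, t2 = walk(t1, sub), walk(t2, sub)
    if t1 == t2:
        return sub
    if isinstance(t1, tuple) and t1[0] == "tv":
        return {**sub, t1[1]: t2}
    if isinstance(t2, tuple) and t2[0] == "tv":
        return {**sub, t2[1]: t1}
    if isinstance(t1, tuple) and isinstance(t2, tuple) and t1[0] == t2[0] == "->":
        sub = unify(t1[1], t2[1], sub)
        return unify(t1[2], t2[2], sub)
    raise TypeError(f"cannot unify {t1} with {t2}")

# In (λx.x True): the identity has type a -> a, and the application
# forces  a -> a  ~  Bool -> r.  Unifying makes both a and r Bool.
sub = unify(("->", ("tv", "a"), ("tv", "a")), ("->", "Bool", ("tv", "r")), {})
```

The resulting substitution maps both a and r to Bool, i.e. the abstraction must be Bool->Bool, exactly the annotation chosen by eye above.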

#### Annotating these variables by algorithm: might full polymorphism and type variables be a good idea?

As we have seen, there are algorithms, such as the Hindley-Milner one, that can usually produce these annotations for us. And earlier we mentioned that having infinitely many identity functions (λx:I.x, λx:B.x, λx:I->I.x, etc.) seems a bit excessive. Why couldn't we have

λx:T.x where T is a type variable that can be instantiated to Int, or Bool, or Int->Int etc. ?

then we could probably drop the annotation and let an algorithm produce the type for us. The actual type in use would be an instantiation of the type variable.

#### An issue: passing functions as arguments

Let's make this example slightly more complex by passing the identity function in.

[A preliminary: passing functions/abstractions in is not particularly mystifying. Consider

(λfunction.(function True) λx.x)

this is an application, and it reduces

(λfunction.(function True) λx.x)

(λx.x True)

True

Similarly

(λfunction.(function 2) λx.x)

(λx.x 2)

2

]

Now we can build this into our more complex expression

(λfunction.((λfirst.λsecond.second (function 2)) (function True))     λx.x)

and this reduces to

((λfirst.λsecond.second (λx.x 2)) (λx.x True))

and then the same as before down to True.

But there is an issue here, and it arises with the first reduction step involving identity

(λfunction.((λfirst.λsecond.second (function 2)) (function True))     λx.x)

The identity function λx.x on the outer right presumably needs to be polymorphic with a type variable. The 'same' function when it is substituted into (function 2) needs to have type Int->Int. Yet the 'same' function when it is substituted into (function True) needs to have type Bool->Bool.

So, what we have here is a question, or a consideration. Can a polymorphic expression have two separate and distinct type instances, or instance occurrences, when substituted into the same expression?

Part of the problem arises as follows. In STLC, when we annotate a binding variable, say like this, λx:I.x, that annotation applies throughout the scope, or body, of the abstraction (in this case, x). So one would think that were we to use a variable as an annotation, say λx:TypeVar.x, all reasoning and instantiation involving that variable would depend solely on the scope of the abstraction. However, when we wish, for example, to use the argument 2 to help us decide the type of the function in (function 2), we are going outside the scope of the function and into the local environment. We start with, say,

(λfunction.((λfirst.λsecond.second (function 2)) (function True))     λx:TypeVar1.x)

and that reduces to

((λfirst.λsecond.second (λx:TypeVar1.x 2)) (λx:TypeVar1.x True))

and that isn't quite what we want. The reason is that TypeVar1 cannot have two contradictory values (i.e. Int->Int and Bool->Bool). Instead, we would like whatever is used to characterize the type of the identity function to be polymorphic, i.e. to permit multiple instances of the same general type. We can do that by having what is often called a 'type scheme'. We permit 'forall', or ∀ (the logical symbol for 'all'), in front of a type variable and type expression.
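The key move a type scheme enables can be sketched in Python: each use of the scheme ∀a. a -> a is instantiated with fresh type variables, so one use can later be forced to Int->Int and another to Bool->Bool without contradiction (the representation and names are mine):

```python
from itertools import count

# A scheme is (bound_vars, body), e.g. ∀a. a -> a is
# (["a"], ("->", ("tv", "a"), ("tv", "a"))).

fresh_ids = count()

def instantiate(scheme):
    # Replace each bound variable with a brand-new type variable.
    bound, body = scheme
    renaming = {v: ("tv", f"t{next(fresh_ids)}") for v in bound}
    def rename(t):
        if isinstance(t, tuple) and t[0] == "tv":
            return renaming.get(t[1], t)
        if isinstance(t, tuple) and t[0] == "->":
            return ("->", rename(t[1]), rename(t[2]))
        return t
    return rename(body)

id_scheme = (["a"], ("->", ("tv", "a"), ("tv", "a")))
use1 = instantiate(id_scheme)  # fresh copy, free to become Int -> Int
use2 = instantiate(id_scheme)  # a different fresh copy, free to become Bool -> Bool
```

Because use1 and use2 contain distinct type variables, constraining one to Int->Int puts no constraint at all on the other.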

Now there are, so to speak, no right or wrong answers here. A type system is being devised, and it can be better or worse for certain purposes. What Haskell, a modern functional programming language, does is allow you both schemes in these settings (multiple instances of the same type, and no multiple instances of the same type). If you write the Haskell equivalent of

(λfunction.((λfirst.λsecond.second (function 2)) (function True))     λx.x)

the compiler will produce an error message that the expression is ill-typed because you cannot have a function which is both Int->Int and Bool->Bool. But then Haskell has the keyword 'let' which can be used to introduce local definitions. If you write the Haskell equivalent of

let f = λx.x

in

((λfirst.λsecond.second (f 2)) (f True))

then that will type check and will be type correct. The 'let' keyword permits multiple separate instances of the same type variable, if they are required.

#### Type variables, instances, and schemes

What we would like for the type of an expression like the identity function λx.x is not a single type variable with a single value, say λx:TypeVar.x, because other factors in the context might on occasion force us to understand that TypeVar as having contradictory values (e.g. Int->Int and Bool->Bool). Instead, what we would like is for the type to be a universal, or a 'scheme', perhaps, notationally, λx:AllTypes or λx:∀Type. This universal, or type scheme, would be capable of having several instantiations, or instances. These instances might have different values one from another, but there would be nothing contradictory or awkward in that. Each single instance would have, or would have to have, its own single type.

#### How does STLC fare with this?

STLC has no type variables, and every abstraction has to have a type annotation on its binding variable. The effect of this is that, roughly speaking, most of the problematic cases that we have been looking at will fail to have a type (although many of them will reduce to normal forms that do have a type). The initial failure to have a type comes about largely because a type cannot be both an Int and a Bool at the same time. Such an expression can still reduce to a normal form with a type, because one of the problematic components is redundant and disappears in the reduction. Also, many normal forms have a type (for example, a normal form might be an Int, say 39, or a Bool, say True).

Neither of these expressions is going to have a type, no matter what other annotations are added.

(λfunction.((λfirst.λsecond.second (function 2)) (function True))     λx:B.x)

(λfunction.((λfirst.λsecond.second (function 2)) (function True))     λx:I.x)

But this can be annotated successfully

((λfirst:I.λsecond:B.second ( λx:I.x 2)) ( λx:B.x True))
