8/29/21
Introduction
We wish to extend what we have beyond STLC to what is sometimes called a Mini-ML type system. Clarity will be needed here. Imagine for a moment the lambda calculus without types. The well-formed expressions, or formulas, all have values; the word for this is that they are 'terms'. They may or may not have a normal form, a terminating reduction, etc., but they are terms. If types are added, they are there to provide a 'type' for the individual terms. They do not affect the values the terms have; they are a separate system. In a lambda calculus with types, there are expressions, or sub-expressions, for the types within the language. This means, for example, that there might be term-variables (variables whose values are terms) and type-variables (variables whose values are types).
Here are some of the features of our Mini-ML:
- it allows Type Variables within the expression language. Syntactically, these will be strings starting with a lower-case letter e.g. "a", "b", "aTypeVariable". So, types now consist of four kinds of things:
type-variables,
Integers (I),
Booleans (B),
and types constructed from other types using '->'.
- there will be Type Schemes. Syntactically, a type scheme consists of a universal quantifier ∀ followed by a list of type variables followed by a type, in parentheses. As examples,
∀a (a->a)
∀a,b (a->b)
∀x (I)
∀k (x->I)
are all type schemes. In the zero-case, where the list of type variables is empty, we just omit the quantifier and scoping parentheses, and count the type or scope-type itself as being a type scheme e.g.
∀a (a->a)
(a->a)
(I->b)
B
are all type schemes. So all Type expressions are Type Schemes, but not all Quantified Type Schemes (those with a non-empty variable list) are Types (i.e. plain type expressions). One consequence of this is that there is only ever a single occurrence of a universal quantifier in a quantified type scheme, and it can only occur at the 'outer level' of the expression. For example,
(a->∀a (a))
is not a type scheme because it does not start with a universal quantifier ∀; it is also not a plain type because when '->' is used to construct a type, both the left and right components have to be types, and ∀a (a) is not a type (it is a type scheme).
- Type-variables can have values which are type schemes (remember: all types are type schemes).
- annotation will be optional. The parser will now read lambda abstractions either with annotations or without annotations. Then the type inferencer will also be able to carry out its inferencing, to a definite result, with or without annotation. Annotation of lambda abstractions should be by plain types (not type schemes with a universal quantifier).
- there is going to be a new form of expression involving 'let ... in'. That will be introduced later, and it will have its own form of annotation. It will permit annotation by a type scheme with a universal quantifier.
- annotation now will be annotation by type scheme (some with, some without, the universal quantifier). So, for example, all annotations from STLC still count as annotations (for all types are type schemes). But there are additional annotations namely those involving either type-variables, which can be used on lambdas, or the universal quantifier ∀, which can be used only in the context of let ... in.
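The syntax above can be made concrete with a small Python sketch. The representation and function names here are our own choices, not part of Mini-ML: base types are the strings "I" and "B", any other string is a type variable, and tuples encode arrows and schemes.

```python
# Hypothetical representation (ours, not from the notes):
# base types: the strings "I" (Integer) and "B" (Boolean);
# type variables: any other lowercase string, e.g. "a", "aTypeVariable";
# arrow types: ("->", left, right);
# type schemes: ("forall", [vars...], type), with an empty variable
# list standing for the quantifier-free case.

def is_type(t):
    """True if t is a plain type (no quantifier anywhere inside it)."""
    if isinstance(t, str):
        return True                      # I, B, or a type variable
    if isinstance(t, tuple) and t[0] == "->":
        return is_type(t[1]) and is_type(t[2])
    return False

def is_scheme(s):
    """True if s is a type scheme: an optional outer forall over a type."""
    if isinstance(s, tuple) and s[0] == "forall":
        return is_type(s[2])             # quantifier only at the outer level
    return is_type(s)                    # every plain type is a scheme

# ∀a (a->a) is a scheme but not a type; (a->∀a (a)) is neither.
assert is_scheme(("forall", ["a"], ("->", "a", "a")))
assert not is_type(("forall", ["a"], ("->", "a", "a")))
assert not is_scheme(("->", "a", ("forall", ["a"], "a")))
```

Note how `is_scheme` enforces the single, outer-level quantifier: the body under a `forall` must pass `is_type`, which rejects any nested quantifier.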
Bound occurrences, Free occurrences, and Scope in Type Schemes
A quantified type scheme has a quantifier, a list of type-variables, and a type (which would usually contain type-variables) i.e. it has the following form
∀<variable list> (<type>)
In this context, the type is often referred to as being the scope of the quantifier. Occurrences of variables within the scope are either bound or free. A bound occurrence of a variable is an occurrence where the variable is both the same as one of the variables of quantification in the variable list, and it occurs within the scope of the quantified type scheme. A free occurrence is any occurrence of a variable in the scope which is not bound. For example,
∀a,b ((a->c)->(d->a)) has two bound occurrences of a, no bound or free occurrences of b, a single free occurrence of c, and a single free occurrence of d.
The type is analogous to the scope above, and variable occurrences in the type/scope would be free or bound, also as above. Any expression that has only bound variables in it (i.e. no free variables) is a closed expression.
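The definitions of bound occurrences, free occurrences, and closure can be sketched as code, using a hypothetical representation of our own: base types are the strings "I" and "B", any other string is a type variable, ("->", l, r) is an arrow type, and ("forall", [vars], type) is a quantified type scheme.

```python
def free_type_vars(s):
    """The set of type variables with free occurrences in s."""
    if isinstance(s, tuple) and s[0] == "forall":
        # occurrences of the quantified variables in the scope are bound
        return free_type_vars(s[2]) - set(s[1])
    if isinstance(s, str):
        return set() if s in ("I", "B") else {s}
    return free_type_vars(s[1]) | free_type_vars(s[2])   # arrow type

def is_closed(s):
    """A closed expression has no free variables."""
    return not free_type_vars(s)

# ∀a,b ((a->c)->(d->a)): a is bound, c and d are free, b is unused.
scheme = ("forall", ["a", "b"],
          ("->", ("->", "a", "c"), ("->", "d", "a")))
assert free_type_vars(scheme) == {"c", "d"}
assert is_closed(("forall", ["a"], ("->", "a", "a")))
```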
Here are some examples of expressions involving annotated lambda variables:-
λx:∀a (B->a).y in this, x and y are term-variables, and a is a type-variable
λx:∀x (B->x).x in this, the first and fourth occurrences of x are term-variables, and the second and third occurrences are type-variables. A typical parser can parse and read this to a correct internal representation. It is not so easy for humans, though. Lambda abstractions, and type schemes, are indifferent to the re-writing of bound variables. It is a lot smarter to present this formula in an equivalent form, say
λx:∀a (B->a).x
Instantiation and Generalization of Type Schemes
An instantiation, or specialization, of a scheme, is the removal of the quantifier and the type-variable list, and the simultaneous substitution of plain types (not quantified type schemes) for the bound variables. So, as examples,
∀a,b (a->b) has instantiations (B->I), (c->B), ((B->B)->B), etc. A new type-variable is used if the instantiation has type-variables in it (so 'c' here is new).
∀x,y ((I->x)->x->b) has instantiations ((I->B)->B->b), ((I->I)->I->b), ((I->e)->e->b), etc.
There is a notation to show that a type is an instantiation, or specialization, of a type scheme (unfortunately it is not standard). We will use <=. So, for example,
(B->I) <= ∀a,b (a->b)
The counterpart of specialization is generalization, and basically this is the relation the other way around. So, (B->I) has a generalization ∀a,b (a->b), and this can be written
∀a,b (a->b) => (B->I)
A type that has a generalization will usually have many generalizations. For example, ∀a (a->I) is a generalization of (B->I) i.e.
(B->I) <= ∀a (a->I)
Generalizations are going to be used, in our Mini-ML, as the type schemes identifying the type of 'variable' term sub-expressions within let expressions. If a closed expression, e, has a type which is a generalization (i.e. a type scheme), there will be a principal type scheme that it has as a type (roughly, a most general generalization). The principal type scheme need not be unique (for example, renaming its bound variables gives another).
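Instantiation and generalization can be sketched in Python, over the same hypothetical representation as before (strings for base types and variables, tuples for arrows and schemes). Drawing fresh names from a counter anticipates the capture problem discussed later.

```python
import itertools

# Fresh type-variable names t0, t1, t2, ... (our own convention).
_fresh = (f"t{i}" for i in itertools.count())

def instantiate(scheme):
    """Drop the quantifier, substituting a fresh type variable for
    each formerly bound variable (specialization to new variables)."""
    quantified, body = scheme[1], scheme[2]
    sub = {v: next(_fresh) for v in quantified}
    def walk(t):
        if isinstance(t, str):
            return sub.get(t, t)
        return ("->", walk(t[1]), walk(t[2]))
    return walk(body)

def generalize(ty, env_vars=frozenset()):
    """Quantify every type variable of ty not free in the environment."""
    def vars_of(t):
        if isinstance(t, str):
            return set() if t in ("I", "B") else {t}
        return vars_of(t[1]) | vars_of(t[2])
    return ("forall", sorted(vars_of(ty) - set(env_vars)), ty)

# ∀a (a->a) instantiates to (t->t) for some fresh t.
inst = instantiate(("forall", ["a"], ("->", "a", "a")))
assert inst[1] == inst[2] and inst[1] not in ("a", "I", "B")
# Generalizing (a->I) over an empty environment gives ∀a (a->I).
assert generalize(("->", "a", "I")) == ("forall", ["a"], ("->", "a", "I"))
```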
Environment Γ (or, the Assumptions)
We are aiming to find the types of terms or expressions in the expression language e.g. what is the type of (λx.x True)? When reasoning about this, there may be term-variables in the expression under consideration. We need to know what type the individual term-variables have. For example, consider an identity function applied to the term-variable y i.e. (λx.x y). This entire expression has the same type as the type of y. But, as it stands, we do not know the type of y. It is convenient to introduce the notion of an Environment, which is a dictionary, list, set, or collection, of pairs connecting term-variables and type-schemes (i.e. their type values). So with the environment {y:I} the expression (λx.x y) has type I.
An alternative word for the environment is the 'Assumptions'. Sometimes there is the need to add a value pair to the environment, remove a value pair from the environment, or combine environments. We can use a notation close to that of set theory for these operations. For example,
Γ ∪ {x:t0} adds the pair x:t0 to the environment
Γ \ x takes out the value pair for x
Γ1 ∪ Γ2 combines two environments
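These three operations have direct counterparts if the environment is sketched as a Python dict from term-variables to their type schemes. This is our own choice of representation; note that on a name clash in Γ1 ∪ Γ2 the sketch lets Γ2 win, which the notes do not specify.

```python
def env_add(env, x, scheme):            # Γ ∪ {x:t0}
    return {**env, x: scheme}

def env_remove(env, x):                 # Γ \ x
    return {k: v for k, v in env.items() if k != x}

def env_union(env1, env2):              # Γ1 ∪ Γ2 (env2 wins on clashes)
    return {**env1, **env2}

gamma = env_add({}, "y", "I")           # the environment {y:I}
assert gamma == {"y": "I"}
assert env_remove(gamma, "y") == {}
assert env_union({"x": "B"}, gamma) == {"x": "B", "y": "I"}
```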
Capturing, and 'free-for': a red flag
There is an issue that arises with removing a universal quantifier and substituting for (formerly) bound variables in the scope (i.e. specialization or instantiation). It is perhaps easiest to illustrate it in predicate logic rather than in type theory. Using a fairly casual syntax, here is a predicate logic formula using the universal quantifier
∀x(Tall(x))
and from this it follows that Tall(bert), Tall(ann), etc., and these are just substitution instances into the scope. Any instance can be substituted in here and the resulting inference is valid. But if we move up a level of complexity and nest quantifiers, perhaps
∀x (∃y(MotherOf(x,y))) i.e. 'Everyone has someone who is their mother'
∃y (MotherOf(b,y)) i.e. 'There is someone who is the mother of Bert'
∃y (MotherOf(a,y)) i.e. 'There is someone who is the mother of Ann'
∃y (MotherOf(y,y)) i.e. 'There is a y such that y is the mother of herself' ??? wrong ???
What has happened here is that the x in ∃y(MotherOf(x,y)) is a free occurrence in the scope of ∀x (∃y(MotherOf(x,y))), but if y is substituted in there the occurrence is 'captured' and becomes bound by the existential quantifier, as in ∃y(MotherOf(y,y)). The term y is not free for substitution for x.
To avoid anything similar occurring in Mini-ML systems, we can insist that instantiation of a type scheme to type variables must use new variables (new to the environment/assumptions, and new to any expressions under consideration).
Rules for Typing Judgements
We are trying to determine the type of certain expressions in certain environments. If we succeed with this, we have a typing judgement which is written
Γ ⊢ e:t
This is read "in environment Γ expression e has type t".
This determination can be made using 'typing rules'. Each typing rule has some premises and a conclusion. The premises are written above a horizontal line and the conclusion below that line. Each rule permits one step of the derivation or inference.
Variable (i.e. the expression, e, whose type is to be determined is a term-variable)
(e:τ) ∈ Γ    τ ⊇ σ
------------------- Var
     Γ ⊢ e:σ
The first place to look here is at the conclusion. Then what the rule says is that if you are trying to type a variable e, and a pair valuing that variable to a type τ is in the environment, and a type σ is an instance of τ, you may conclude that e has type σ. If τ is a simple type, not a type scheme, then τ ⊇ τ, trivially, and you would conclude Γ ⊢ e:τ.
Examples of the use of this rule include
(e:I) ∈ Γ
---------- Var
 Γ ⊢ e:I

(e:I->B) ∈ Γ
------------- Var
 Γ ⊢ e:I->B
Constant (i.e. the expression, e, whose type is to be determined is either a Boolean (B) or an Integer (I))
------------------ Con
Γ ⊢ <a Boolean>:B

------------------- Con
Γ ⊢ <an Integer>:I
This rule does not require any premises. If an expression is True or False, its type is B. If an expression is one of 0,1,2,3... etc., its type is I.
Lambda (i.e. the expression, λx.e, whose type is to be determined is a function, without annotation)
This rule requires that the lambda variable (x in the case of λx.e) not be in the original or unaugmented environment. If it is, the lambda expression is alpha-rewritten to use another variable, say y, to give an equivalent expression λy.e which satisfies the precondition.
Γ ∪ {x:τ} ⊢ e:σ
----------------- Lam
Γ ⊢ λx.e : τ->σ
Again, the place to look here is at the conclusion. Then what the rule says is that if you are trying to type a lambda expression λx.e, you assume that x has some type τ, add that assumption to the environment, and see what type the body of the lambda, e, has under that assumption, say σ. Then the lambda expression λx.e has type τ->σ.
Application (i.e. the expression, (e1 e2) whose type is to be determined is an application)
Γ ⊢ e1:τ->σ    Γ ⊢ e2:τ
------------------------ App
    Γ ⊢ (e1 e2) : σ
If you are trying to type an application (e1 e2): if e1 has the type τ->σ and e2 has the type τ, then (e1 e2) has the type σ.
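The four rules so far can be sketched as a small type checker. To make checking deterministic this sketch assumes annotated lambdas, written ("lam", x, tau, body); the representation and names are ours, not fixed by the notes. Types are "I", "B", or ("->", left, right), and the environment is a dict from term-variables to types.

```python
def typeof(env, e):
    """Check e against the Variable/Constant/Lambda/Application rules."""
    tag = e[0]
    if tag == "var":                       # Variable rule (no schemes yet)
        return env[e[1]]
    if tag == "bool":                      # Constant rule for Booleans
        return "B"
    if tag == "int":                       # Constant rule for Integers
        return "I"
    if tag == "lam":                       # Lambda rule, annotated form
        _, x, tau, body = e
        sigma = typeof({**env, x: tau}, body)
        return ("->", tau, sigma)
    if tag == "app":                       # Application rule
        f = typeof(env, e[1])
        arg = typeof(env, e[2])
        if not (isinstance(f, tuple) and f[0] == "->"):
            raise TypeError("applying a non-function")
        if f[1] != arg:
            raise TypeError("argument type mismatch")
        return f[2]
    raise ValueError(f"unknown expression {e!r}")

# (λx:B.x True) checks to type B.
assert typeof({}, ("app", ("lam", "x", "B", ("var", "x")),
                   ("bool", True))) == "B"
```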
There are some more rules to come, but at this point it is convenient to introduce type-derivation trees (using a reduced rule set).
Type-derivation trees
Consider two closely related problems. We have a lambda expression, say (λx.x True), and we would like to find its type and prove that it does indeed have the type that we have found. A related problem starts from the assumption that we have guessed, or conjectured, or believed that the type of (λx.x True) is B (i.e. Boolean) and that we want to prove that our conjecture is correct. Let us try the second problem i.e.
Prove:- (λx.x True):B
Results like this can be proved from our typing rules. We are helped by a feature that the rule set has. The rules permit syntax-directed proof. In this case that means that whatever we are trying to prove, only one rule applies at each step; moreover, the rule that applies can be identified from the syntax of the expression under consideration. In effect, we are going to problem-reduce, or work backwards from conclusion to premises, creating a tree proof.
What we are trying to prove is
⊢ (λx.x True):B
Only one typing rule applies to this: Application. With this, e1 is going to be λx.x, e2 is going to be True, and σ is B. The environment Γ is empty. So the prior step is
⊢ λx.x : τ->B     ⊢ True : τ
----------------------------- App
⊢ (λx.x True) : B
Notice here that the two premises of the rule are in the same context as each other. There is an algebraic variable τ for a yet-to-be-determined type. τ appears twice, once in each of the premises. When τ is determined, its value can be inserted into both premises.
The right-hand branch is available from the rule Constant (and no premises are required for that). This also tells us that the type unknown τ has the value B. So a prior step might be
                  ----------- Con
⊢ λx.x : B->B     ⊢ True : B
----------------------------- App
⊢ (λx.x True) : B
The value, B, for τ has been inserted into the first premise.
The left-hand branch may be grown using the Lambda rule.
{x:B->B} ⊢ x : B->B
-------------------- Lam     ----------- Con
⊢ λx.x : B->B                ⊢ True : B
--------------------------------------- App
⊢ (λx.x True) : B
and the left-branch leaf is available from the Variable rule
-------------------- Var
{x:B->B} ⊢ x : B->B
-------------------- Lam     ----------- Con
⊢ λx.x : B->B                ⊢ True : B
--------------------------------------- App
⊢ (λx.x True) : B
This is a type-derivation tree proof that (λx.x True) has type B. (The root of the tree is at the bottom, and the branches grow upwards.)
This tree proof can be linearized
1. {x:B->B} ⊢ x:B->B Var
2. ⊢ λx.x :B->B 1 Lambda
3. ⊢ True: B Const
4. ⊢ (λx.x True):B 2,3 App
Our second similar problem was that of determining the type of (λx.x True) in the case where we did not have any prior idea as to what it might be. An approach here is to use algebraic unknowns e.g. τ0, τ1, τ2 ... etc. for the unknown types, produce equations ('constraints') between the types, then solve the equations. 'Solving' here means 'find a substitution under which the constraints come out to be true'. See also Hindley-Milner III — Solving the Constraints
In this case,
let τ0 be the type of (λx.x True), τ1 be the type of λx.x, τ2 be the type of True, and τ3 be the type of x
App tells us that λx.x must have a function type whose argument type is the type of True and whose result type is the type of the whole application i.e. τ1 = τ2->τ0
Const requires that τ2 = B
Lambda gives τ1 = τ3->τ3
So the constraints are [τ1 = τ2->τ0, τ2 = B, τ1 = τ3->τ3], which solve to give τ3 = τ2 = B and τ0 = τ3, i.e. τ0 = B
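This generate-constraints-then-solve approach can be sketched end to end in Python. It is a toy sketch with our own representation (types are "I", "B", variable strings such as "t0", or ("->", left, right)); there is no occurs check and no handling of type schemes, just enough for examples like this one.

```python
import itertools

# Fresh type-variable names t0, t1, t2, ...
fresh = (f"t{i}" for i in itertools.count())

def constraints(env, e, t):
    """Emit equations forcing expression e to have type t under env."""
    tag = e[0]
    if tag == "var":
        return [(env[e[1]], t)]
    if tag == "bool":
        return [(t, "B")]
    if tag == "lam":                       # unannotated lambda
        a, b = next(fresh), next(fresh)
        return [(t, ("->", a, b))] + constraints({**env, e[1]: a}, e[2], b)
    if tag == "app":
        a = next(fresh)
        return (constraints(env, e[1], ("->", a, t))
                + constraints(env, e[2], a))

def unify(eqs):
    """Solve the equations by building a substitution {var: type}."""
    sub = {}
    def walk(t):
        while isinstance(t, str) and t in sub:
            t = sub[t]
        return t
    eqs = list(eqs)
    while eqs:
        s, t = map(walk, eqs.pop())
        if s == t:
            continue
        if isinstance(s, str) and s not in ("I", "B"):
            sub[s] = t                     # bind a type variable
        elif isinstance(t, str) and t not in ("I", "B"):
            sub[t] = s
        elif isinstance(s, tuple) and isinstance(t, tuple):
            eqs += [(s[1], t[1]), (s[2], t[2])]   # match the arrows
        else:
            raise TypeError(f"cannot unify {s} and {t}")
    return sub

def resolve(t, sub):
    while isinstance(t, str) and t in sub:
        t = sub[t]
    return t

# Infer the type of (λx.x True): it comes out B, as above.
t0 = next(fresh)
sub = unify(constraints({}, ("app", ("lam", "x", ("var", "x")),
                             ("bool", True)), t0))
assert resolve(t0, sub) == "B"
```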
Thus far, we have only added type variables, although we have not yet used them to any purpose. Nor have we used the type schemes.
To be continued...