Combinatory Logic II

9/18/19

As in the lambda calculus, there are the notions of redex, reduction path, normal form...

Combinatory logic does away with the abstractions of the lambda calculus. The process of removing them is known as bracket abstraction: a bracket abstraction algorithm removes all abstractions from a lambda expression and replaces them with standard combinator expressions.

Sketch of a bracket abstraction algorithm.

What we are going to do here is translate lambda calculus expressions into SKI combinator expressions. We'll allow S, K, and I into the lambda calculus, and variables, constants, and applications translate unchanged, as themselves. All we have to consider are abstractions.

Input: a variable x and an abstraction λx.(M) of a standard combinator expression M, i.e. M itself has already been translated into an SKI combinator expression.
Output: A standard combinator expression <prefix>M such that <prefix>M is equal to λx.(M).

Cases:

 

i) M is a constant c; then let <prefix>M be K M. (Let us just check that these reduce in the same way: λx.(M) is λx.(c), and beta reducing its application to any argument, λx.(c) arg, returns c; similarly, applying K c to any argument, K c arg, returns c.)

ii) M is a standard combinator C; then let <prefix>M be K M. (Let us just check that these reduce in the same way: λx.(M) is λx.(C), and beta reducing its application to any argument, λx.(C) arg, returns C; similarly, applying K C to any argument, K C arg, returns C.)

iii) M is a variable

a) it is x; then let <prefix>M be I (*note, no M here*). (Let us just check that these reduce in the same way: λx.(M) is λx.(x), and beta reducing its application to any argument, λx.(x) arg, returns arg; similarly, applying I to any argument, I arg, returns arg.)

b) it is a variable y other than x; then let <prefix>M be K M. (Let us just check that these reduce in the same way: λx.(M) is λx.(y), and beta reducing its application to any argument, λx.(y) arg, returns y; similarly, applying K y to any argument, K y arg, returns y.)

iv) M is an application P Q. Let <prefix1>P and <prefix2>Q be the standard combinator expression bracket abstractions of λx.P and λx.Q. Then what we need is S <prefix1>P <prefix2>Q. (Let us just check that these reduce in the same way: λx.(M) is λx.(P Q), and beta reducing its application to any argument, λx.(P Q) arg, gives (λx.(P) arg) (λx.(Q) arg) (you may have to do some renaming of variables); similarly, applying S <prefix1>P <prefix2>Q to any argument, S <prefix1>P <prefix2>Q arg, reduces to <prefix1>P arg (<prefix2>Q arg).)

We did use the identity combinator I, but that is fine: I is one of the standard combinators, and in any case it can be defined from S and K alone (I = S K K).

To sum up, we produce an abstraction function abs[x,E] such that abs[x,E][x] = E. It is defined by:

 

abs[x,x] = I
abs[x,y] = K y
abs[x,k] = K k
abs[x,E1[E2]] = S abs[x,E1] abs[x,E2]
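
For concreteness, here is a minimal sketch of the abstraction function in Haskell. It follows the four cases above exactly; the type Term and the name bracket are just inventions for this illustration, not anything standard.

data Term
  = Var String          -- variables
  | Con String          -- constants
  | S | K | I           -- the standard combinators
  | App Term Term       -- application
  deriving (Show, Eq)

-- bracket x e builds a standard combinator expression equal to λx.(e),
-- assuming e itself contains no abstractions.
bracket :: String -> Term -> Term
bracket x (Var y)
  | y == x          = I                -- case iii a):  abs[x,x] = I
  | otherwise       = App K (Var y)    -- case iii b):  abs[x,y] = K y
bracket _ c@(Con _) = App K c          -- case i):      abs[x,k] = K k
bracket x (App p q) = App (App S (bracket x p)) (bracket x q)
                                       -- case iv):     S abs[x,E1] abs[x,E2]
bracket _ m         = App K m          -- case ii):     a combinator, K M

For the example coming up below, bracket "x" (App (Var "x") (Var "x")) returns App (App S I) I, i.e. S I I.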

What we have done here is to show that bracket abstraction is possible. In a running system, though, it might be important to do the translation efficiently (you may have lambda calculus as your functional programming language and combinatory logic as the framework of your graph reducer), and the algorithm sketched here is not efficient.

As an example of bracket abstraction, consider

λx.(x x) λy.(y) arg

First, everything other than the abstractions remains the same. Then λx.(x x) translates to S I I and λy.(y) translates to I, so the whole lot becomes S I I I arg, and we can check the reductions:

S I I I arg ⇒ I I (I I) arg ⇒ I (I I) arg ⇒ I I arg ⇒ I arg ⇒ arg
λx.(x x) λy.(y) arg ⇒ (λy.(y) λy.(y)) arg ⇒ λy.(y) arg ⇒ arg
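
We can also mechanise this kind of check. Here is a rough Haskell sketch of a reducer that always takes the leftmost redex (normal order, discussed below); the infix constructor :@ for application, the Free constructor for free variables, and the names step and nf are again just inventions for this sketch.

data CL = S | K | I | W | Free String | CL :@ CL
  deriving Show

infixl 9 :@    -- application associates to the left, as in S I I I arg

-- One reduction step, taking the leftmost redex first.
step :: CL -> Maybe CL
step (I :@ x)           = Just x
step (K :@ x :@ _)      = Just x
step (S :@ f :@ g :@ x) = Just (f :@ x :@ (g :@ x))
step (W :@ f :@ x)      = Just (f :@ x :@ x)              -- the duplicator W
step (f :@ x)           = case step f of                  -- no redex at the top:
                            Just f' -> Just (f' :@ x)     -- try the head first,
                            Nothing -> (f :@) <$> step x  -- then the argument
step _                  = Nothing

-- Keep stepping until no redex is left. This loops forever on, e.g., W :@ W :@ W.
nf :: CL -> CL
nf t = maybe t nf (step t)

-- nf (S :@ I :@ I :@ I :@ Free "arg") returns Free "arg",
-- matching the reduction sequence written out above.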

Normal form

A CL formula is in normal form if no further reductions of it or any of its subformulas are possible.

If a CL formula has a normal form, that normal form is unique (the Church-Rosser theorem).

Not every CL formula has a normal form (because some contain reductions that can go on forever) e.g.

W W W ⇒ W W W and so on forever

Notion of reduction path

Sometimes it is necessary to indicate which expression is being reduced

I (I x) ⇒ I x (*there are two ways of doing this with a single I reduction *)

And, again like the lambda calculus, some terms can produce a normal form under one reduction path, and non-termination under another

 

K x (W W W) ⇒ x (* using normal order reduction *)

K x (W W W) ⇒ K x (W W W) ⇒ ... (* using applicative order reduction *)

What normal order and applicative order are is explained shortly.

Try to prove the following:-

 

a) (I x) ⇒ x
b) W W W  ⇒ dummy
c) x W W W W ⇒ x
d) x (W W W) ⇒ keep repeating  x (W W W)


 

Normal order reduction

Normal order reduction will produce the normal form if there is one, and so that is an argument for using normal order reduction.

In CL, normal order reduction is particularly simple

always do the leftmost redex first

This is also known as head reduction or head contraction; the sequence of such reductions is a head reduction path (always work from left to right). Normal order reduction can be inefficient because it may evaluate the same expression more than once. For example, consider a lambda or CL expression which in effect adds x to x; informally let us write this as (x + x), and then apply this to the argument (3 × 2), i.e. informally

(x + x)[(3 × 2)]

Normal order reduction would first reduce the function by substituting the (3 × 2) for the x, i.e.

(x + x)[(3 × 2)] ⇒ (3 × 2) + (3 × 2)

but then the (3 × 2) will get evaluated twice.
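
Haskell itself shares arguments, so to see this duplication we have to defeat sharing deliberately. A rough sketch (made-up names; run it unoptimised, e.g. in GHCi): the argument is passed as a not-yet-evaluated function, the way pure normal order (call-by-name) reduction substitutes the unevaluated (3 × 2).

import Debug.Trace (trace)

-- Informally, (x + x): each use of x re-runs the argument computation.
addToItself :: (() -> Int) -> Int
addToItself x = x () + x ()

main :: IO ()
main = print (addToItself (\() -> trace "evaluating 3 * 2" (3 * 2)))
-- prints "evaluating 3 * 2" twice, then 12: the work is duplicated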

Applicative order reduction

Applicative order reduction, roughly speaking, is to reduce the arguments first (and, of course, this can be done left-to-right, right-to-left, and other ways). So, for example, 

K (I a) (I b)

would be reduced to

K a (I b)

then

K a b

then

a

In this example, applicative order reduction is inefficient, in that the reduction of (I b) is not needed at all (and if the (I b) had been (W W W), i.e. a non-terminating reduction, the evaluation would never have reached the normal form).

Usually, or often, though, applicative order is more efficient. Consider the earlier example, i.e. a lambda or CL expression which in effect adds x to x; informally let us write this as (x + x) and then apply it to the argument (3 × 2), i.e. informally

(x + x)[(3 × 2)]

Applicative order reduction would first reduce the argument, evaluating the (3 × 2) to 6, i.e.

(x + x)[6] ⇒ 6 + 6

so the (3 × 2) gets evaluated only once.

Many functional programming languages use lazy evaluation, which roughly is normal order reduction together with sharing of expressions. So an expression like (x + x)[(3 × 2)], or (3 × 2) + (3 × 2), will be set up internally in such a way that the (3 × 2) is shared rather than there being two copies of it.
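
A tiny Haskell sketch of that sharing (made-up names again): a let-bound expression is built once as a thunk and evaluated at most once, however many times it is used.

import Debug.Trace (trace)

main :: IO ()
main = do
  let y = trace "evaluating 3 * 2" (3 * 2) :: Int
  print (y + y)          -- "evaluating 3 * 2" is printed only once;
                         -- the value 6 is computed once and shared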

Weak head normal form

As mentioned, some combinators 'expect' several arguments before they can be reduced. For example, the compositor B, whose reduction rule is

B x y z ⇒ x (y z)

expects 3. This means that if an expression starts with B but has only two arguments it cannot be reduced by a compositor reduction; e.g.

B x y

Now, this formula B x y is in normal form. But consider

B x (I y)

It is not in normal form, because the (I y) can be reduced by the identificator to y. But this formula is in weak head normal form (WHNF). The focus here is on the first well-formed CL expression in a formula, i.e. the head (whether or not that expression is a combinator with a reduction rule). If the head (with its 'arguments') cannot be reduced, then the whole expression is in weak head normal form. Any expression in normal form is in weak head normal form. But there are expressions which are in weak head normal form which are not in normal form, e.g. B x (I y).
 
A formula in WHNF either has a head which is not a combinator or it has a head which is a combinator but which does not have enough arguments for the combinator to reduce.
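
As a rough sketch, that test can be written down directly. Here the little CL term type from the reducer sketch earlier is redeclared with B added; the arities and the names spine and isWhnf are just this illustration's choices.

data CL = S | K | I | W | B | Free String | CL :@ CL
infixl 9 :@

-- Split a term into its head and the arguments applied to it.
spine :: CL -> (CL, [CL])
spine (f :@ x) = let (h, args) = spine f in (h, args ++ [x])
spine t        = (t, [])

-- How many arguments each combinator's reduction rule needs.
arity :: CL -> Maybe Int
arity I = Just 1
arity K = Just 2
arity W = Just 2
arity S = Just 3
arity B = Just 3
arity _ = Nothing                 -- the head is not a combinator

-- WHNF: the head is not a combinator, or it is a combinator with
-- fewer arguments than its reduction rule needs.
isWhnf :: CL -> Bool
isWhnf t = let (h, args) = spine t
           in maybe True (> length args) (arity h)

-- isWhnf (B :@ Free "x" :@ (I :@ Free "y"))   is True   (WHNF, but not NF)
-- isWhnf (I :@ Free "y")                      is False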
 

Some other notes

There are Church-Rosser theorems which tell us that if a formula has a normal form, that normal form is unique, and that normal order reduction will find it.

We can also do arithmetic, write recursive functions, build lists ... in pure combinatory logic (just as we could and did in pure lambda calculus).

However, as in the lambda calculus, this is not a very sensible thing to do. Instead we extend combinatory logic to include what we want, e.g.

 

ADD x y ⇒ x' + y' (*where x' and y' are the evaluated versions of x and y *)
HEAD x ⇒ i, if x is CONS i j

etc. etc.
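
Here is a sketch of what such an extension looks like, in the same Haskell style as the reducer earlier. The constructors Num, Add, Cons and Head, and the particular strictness choices, are just assumptions for the illustration: the pure combinators keep their rules, delta rules are added alongside them, and ADD is strict, so its arguments are reduced first.

data CL = S | K | I | Num Int | Add | Cons | Head | Free String | CL :@ CL
  deriving Show
infixl 9 :@

step :: CL -> Maybe CL
-- the pure combinators, as before
step (I :@ x)                   = Just x
step (K :@ x :@ _)              = Just x
step (S :@ f :@ g :@ x)         = Just (f :@ x :@ (g :@ x))
-- delta rules
step (Add :@ Num m :@ Num n)    = Just (Num (m + n))        -- ADD x y ⇒ x' + y'
step (Add :@ x :@ y)            = case step x of            -- ADD is strict:
                                    Just x' -> Just (Add :@ x' :@ y)
                                    Nothing -> case step y of
                                      Just y' -> Just (Add :@ x :@ y')
                                      Nothing -> Nothing
step (Head :@ (Cons :@ i :@ _)) = Just i                    -- HEAD (CONS i j) ⇒ i
step (Head :@ x)                = (Head :@) <$> step x
-- otherwise look for a redex inside the head, then inside the argument
step (f :@ x)                   = case step f of
                                    Just f' -> Just (f' :@ x)
                                    Nothing -> (f :@) <$> step x
step _                          = Nothing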

Eventually, then, our functional program will just become an expression in combinatory logic, and we will have several reduction rules. The problem will then be to reduce the expression to normal form.

For lazy evaluation we want to do head contraction only, except i) if the head is strict (this means that it needs the values of its arguments; for example, predefined operations like + need the actual numbers they are going to add, not expressions that evaluate to numbers), or ii) if a subexpression is shared.

It is usual to extend combinatory logic for functional programming by adding all the 'delta reductions'. Then it becomes the same as the extended lambda calculus.

Enjoy!


An aside on Raymond Smullyan, Birds, and Combinatory Logic

Wikipedia: Raymond Smullyan (*love the photo*)

Fifty years ago, yes it is that long, when we studied tableaux we used Smullyan's First-Order Logic as our text. It was excellent and I was somewhat astonished to learn that apparently Smullyan had been a professional magician until he was forty and then he became a logician. Sometime later I became aware of Smullyan's books of logical puzzles and conundrums. I was keen to use some of the individual puzzles as splash screens in my logic software. So I wrote to him, sending him my software, and asking him if I could do that. He handwrote me a most gracious reply, roughly to the effect that computers and software were a complete mystery to him but that I was welcome to use whatever I wished. Sometime after that I became interested in combinatory logic, and I became aware of Smullyan's [1985] To Mock a Mockingbird. Wikipedia: To Mock a Mockingbird. In this he explains combinatory logic via birds and the calls they make one to another. And there are sources for this on the web. You can try Chris Rathman Combinator Birds or Mockingbird Fun [this takes ages to load, over a minute on my broadband]. [I must say I never really understood the Mockingbirds, maybe I need to revisit them.]