###### 6/6/12

### Reading

[These might help, you need only scan them.]

K.H.Blasius et al. eds.[1990] *Sorts and Types in Artificial Intelligence*

Maria Manzano [1996] *Extensions of First-Order Logic*

John Sowa [2000] *Knowledge Representation*

### Introduction

We know from Completeness, Compactness, the Löwenheim-Skolem Theorem, and the Lindström Theorems [P. Lindström, On extensions of elementary logic, *Theoria* 35 (1969) 1-11.] that First Order Predicate Calculus (FOPC) is pretty well where we want to be. [The first three theorems establish good properties for FOPC, and then Lindström shows that no logic stronger than, or extending, FOPC has these good properties-- so FOPC is the strongest good logic.]

That FOPC is good does not mean that there are no advantages to dressing it up in certain ways to make it more amenable to humans, perhaps by adding what the computer scientists call 'syntactic sugar'.

One obvious opportunity concerns domains, ie what the variables range over. In FOPC we have the notions of an 'individual constant' and an 'individual variable', and the variables range over 'everything'. But this is a little at odds with how we think about many subject areas and how we reason much of the time. Beginning students are regularly baffled by universal statements like 'All men are mortal', in particular what the 'all' ranges over and how the statement is to be symbolized. We, and the beginners, think that the universal statement is about men; then we learn that actually it is about everything, and, in particular, it should be rendered 'whatever x you choose, if x is a man, etc.'. Then, once we start deriving with our symbolized statements, we make heavy weather of such inferences as ascending a hierarchy (such as 'All men are mortal', 'All mortals eventually die', therefore 'All men eventually die') or flat inferences on a common domain (such as 'All men are mortal', 'All men are kind', therefore 'All men are mortal and kind'). What slows us is dealing with the 'if-clause' all the time ('if x is a man').

Then, independently of this, many subject matters seem to have separate domains (or 'mixed domains'). For example, our trees are part of graph theory, and in the graphs of graph theory there are nodes, one domain, and there are links, a second domain. FOPC can deal with graph theory, using statements like 'if x is a node, etc.' or 'if x is a link, etc.', but this is not the most natural. And this phenomenon is commonplace:

- in geometry, there are points and lines;
- in an employment workplace, there are people and there are their salaries, ie different kinds of things;
- in database design, there regularly is analysis in terms of entities and their relationships, and this leads to different domains; for example, a simple database for a library will have one domain of patrons (borrowers/users ie people) and a second domain of books;
- everywhere in computer science; for example, any modern typed programming language might have 'booleans', 'integers', 'strings', etc., all of these being different domains.

There are two common approaches here: type labels or sorted logic. [The word 'types' gets used in different areas in logic-- here it is being used to identify what is known in computer science as 'data types'.]

Type labels are one way to approach what seem to be different domains in use in the same reasoning. What we are going to do here is to tag the variables with a type to indicate that they are subject to a particular monadic (one-place) predicate applying to what they pick out. An example will help: say 'All men die' is symbolized

∀x(M(x) →D(x))

and our domain of reasoning (our interest, our ontology) has a lot to do with men (eg All men this, Some men that, etc.); we will let the monadic predicate M(x) (x is a man) become a type tag on a quantified variable, thus

∀x:m(D(x))

The notation ∀x:m is read 'Whatever x you choose of type m'. **For our software system, we need here some convenient correspondence between the monadic predicates and the type labels. We have as monadic predicates A(x), B(x), C(x), ..., Z(x) ie we use the upper case letters A through Z. For types, we will just use the lower case letters a through z, and let the upper case predicate correspond to the lower case version of the same letter.** So, as examples of this notation, here are some equivalences between the short form and the long form

∀x:m(D(x))≡∀x(M(x)→D(x))

∀x:a(D(x))≡∀x(A(x)→D(x))

∀x:b(D(x))≡∀x(B(x)→D(x))
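This short-to-long expansion is entirely mechanical: capitalize the tag into its matching monadic predicate, and let that predicate guard the body with a conditional. A minimal sketch in Python (the function and its name are ours, not part of Deriver):

```python
def expand_universal(var: str, tag: str, body: str) -> str:
    """Rewrite the typed form 'forall var:tag (body)' into plain FOPC.

    The type tag is a single lower-case letter; the matching monadic
    predicate is its upper-case version, per the convention above.
    """
    predicate = tag.upper()
    return f"∀{var}({predicate}({var})→{body})"

# ∀x:m(D(x)) expands to the long form ∀x(M(x)→D(x))
print(expand_universal("x", "m", "D(x)"))  # → ∀x(M(x)→D(x))
print(expand_universal("x", "a", "D(x)"))  # → ∀x(A(x)→D(x))
```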

This then allows us to write inferences like 'Socrates is a man, all men are wise, therefore, Socrates is wise' as follows:

M(s), ∀x:m(W(x))∴W(s)

[The attentive reader might wonder why we seem to have an extra pair of parentheses, ∀x:m(W(x)) instead of ∀x:mW(x). The problem is one of our own making, at least within this system. We are allowing terms to have any length, so 'm' is a term and so too is 'mW'. Now with ∀x:mW(x) we do not want 'mW' to be read as a term, we want 'W' to be a predicate... hence the extra brackets. With other parsers, where terms automatically have length one, we would not do this.]
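The ambiguity is easy to see in code. A toy scanner (our own, purely illustrative) that reads a type tag by consuming letters greedily after the ':' will swallow the 'W' unless a bracket stops it:

```python
def read_type_tag(formula: str) -> str:
    """Return the run of letters immediately after ':' -- a toy model of
    a parser in which terms may have any length."""
    after = formula.split(":", 1)[1]
    tag = ""
    for ch in after:
        if ch.isalpha():
            tag += ch
        else:
            break
    return tag

print(read_type_tag("∀x:mW(x)"))    # greedy read: the 'term' is 'mW'
print(read_type_tag("∀x:m(W(x))"))  # the '(' stops the scan: tag is 'm'
```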

Much the same can be done with existentially quantified formulas. For example, say 'Some men die' is symbolized

∃x(M(x)&D(x))

and our domain of reasoning (our interest, our ontology) has a lot to do with men (eg All men this, Some men that, etc.); we will let the monadic predicate M(x) (x is a man) become a type tag on a quantified variable, thus

∃x:m(D(x))

The notation ∃x:m is read 'There is an x of type m such that ...'. Again, here are some equivalences between the short form and the long form

∃x:m(D(x))≡∃x(M(x)&D(x))

∃x:a(D(x))≡∃x(A(x)&D(x))

∃x:b(D(x))≡∃x(B(x)&D(x))
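Over any finite domain, both of these equivalences can be checked by brute force. A small Python check on an invented three-element domain (the domain and the extensions of M and D are made up for illustration):

```python
domain = {"socrates", "plato", "fido"}
M = {"socrates", "plato"}   # the men
D = {"socrates", "fido"}    # the things that die

# ∃x:m(D(x)) -- quantify over the men only
short_exists = any(x in D for x in M)
# ∃x(M(x) & D(x)) -- quantify over everything, guard with M
long_exists = any(x in M and x in D for x in domain)
print(short_exists, long_exists)  # the two forms agree

# The universal pair behaves the same way:
# ∀x:m(D(x)) versus ∀x(M(x) → D(x))
short_all = all(x in D for x in M)
long_all = all((x not in M) or (x in D) for x in domain)
print(short_all, long_all)  # plato is a man who is not in D
```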

So, conceptually at least, a database for a library might contain statements like

∀x:p∀y:b(B(x,y)) 'Any patron is permitted to borrow any book.'

[The type label expansions were shown here as trees.]

### Sorted Logic

(Many) sorted logic, with sorts or sort labels, is very similar in concept and execution to the types and type labels just discussed (in fact, many texts use the two terms interchangeably). There is, though, a conceptual difference: whereas both ordinary logic and logic with type labels use one (homogeneous) domain or universe, sorted logic uses a (usually stratified, heterogeneous) domain which consists of two or more sub-domains.

An example will help.

Say we had a library with people borrowing books. The people are one type or sort of thing. And the books are another. In ordinary first order logic, the universe would consist of 'everything' and we would write such formulas as

B(a,w) to mean 'Arthur borrowed *Wuthering Heights*'

Of course, Arthur is a person, and *Wuthering Heights* a book, but we probably would not bother about this, or if we did, we might write

P(a)&B(w)&B(a,w) to mean 'Arthur is a person and *Wuthering Heights* a book and Arthur borrowed *Wuthering Heights*'

We would probably be more careful if quantifiers were involved, perhaps writing

∃x(P(x)&B(w)&B(x,w)) to mean 'somebody has borrowed the book *Wuthering Heights*'

We could use type labels if we wished, typing this as

∃x:p(B(w)&B(x,w))

with the variable 'x' still ranging over the entire universe.

With sorts, we say that actually the universe consists of two separate universes: a universe of people and a universe of books. And, back in the formal language, each universe has its own set of variables that range over it, so the people universe has its variables and the books universe its own variables. We need to denote these somehow; one way would be to superscript them. In our plain vanilla first order logic, our variables are such terms as

w,x,y,z

we could introduce some new variables

w^{p}, x^{p}, y^{p}, z^{p}

which would range only over the people universe and

w^{b}, x^{b}, y^{b}, z^{b}

which are different variables which range over only the book universe. Then the earlier formula would become

∃x^{p}(B(w)&B(x^{p},w))

The same treatment could be given to the constants. The constant for Arthur, a, is a constant that can denote a person (and only a person), so we could write it a^{p}; similarly for w, which denotes a book, so a new version of it might be w^{b}. So we could end up with

∃x^{p}(B(w^{b})&B(x^{p},w^{b}))

Mathematics seems to be full of sorts. For example, a mathematician will tell you that a vector space is a structure consisting of a set of scalars S and an entirely separate set of vectors V, some special elements which are scalars, another special element which is a vector, operations among scalars, operations among vectors, and some operations across both ie

a vector space = {S, V, etc.}

If we were looking for a 'natural' logic for this, there would have to be the temptation to have two domains (scalars, and vectors) and suitable constants and functions.
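In a typed programming language the two sorts appear naturally as two distinct types, with the signatures of the operations recording which sort each argument comes from. A sketch in Python with type hints (the representation is our own choice: scalars as floats, vectors as tuples):

```python
from typing import Tuple

Scalar = float
Vector = Tuple[float, ...]

def vec_add(u: Vector, v: Vector) -> Vector:
    """Operation among vectors: Vector x Vector -> Vector."""
    return tuple(a + b for a, b in zip(u, v))

def scale(s: Scalar, v: Vector) -> Vector:
    """Operation across both sorts: Scalar x Vector -> Vector."""
    return tuple(s * a for a in v)

zero_vec: Vector = (0.0, 0.0)  # the special vector element
one: Scalar = 1.0              # a special scalar element

print(vec_add((1.0, 2.0), (3.0, 4.0)))  # → (4.0, 6.0)
print(scale(2.0, (1.0, 2.0)))           # → (2.0, 4.0)
```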

### Signatures

Thus far, with syntax, we have allowed in special variables w^{p},x^{p},y^{p},z^{p} and special constants a^{p},b^{p},c^{p}. But actually this needs to be carried through for all the predicates and functions. For example, we have used B(x,y) to mean '(person) x borrowed the (book) y'. So, really, B(x,y), conceptually or notationally, needs to be B^{p}^{b}(x,y) to mean that the predicate B(x,y) has a type or signature that requires that its first argument is a person and its second argument is a book. Then

B^{p}^{b}(a^{p},t^{b})

is a perfectly good formula which might mean 'The (person) Arthur borrowed the (book) *Tale of Two Cities*'

On the other hand a string of symbols like

B^{p}^{b}(t^{b},a^{p})

is not permitted or ill-typed or not well formed or is nonsense because the predicate 'borrowed' requires a person as its first argument and a book as its second, but this seems to have a book as its first argument and a person as its second. It is trying to say that *Tale of Two Cities* borrowed Arthur.
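Checking a signature like this is mechanical: look up the declared sort of each argument and compare it against the predicate's signature. A small sketch (the tables are invented for this example):

```python
# declared signatures: B's first argument must be a person (p),
# its second a book (b)
signatures = {"B": ("p", "b")}

# declared sorts of the constants: a is Arthur, t is Tale of Two
# Cities, w is Wuthering Heights
sorts = {"a": "p", "t": "b", "w": "b"}

def well_typed(predicate: str, args: list) -> bool:
    """True iff each argument's sort matches the predicate's signature."""
    sig = signatures[predicate]
    return len(args) == len(sig) and all(
        sorts[arg] == want for arg, want in zip(args, sig)
    )

print(well_typed("B", ["a", "t"]))  # True:  B^pb(a^p, t^b)
print(well_typed("B", ["t", "a"]))  # False: a book cannot borrow a person
```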

This typing can or perhaps should be carried out through all the syntax. A problem is that the formulas look very messy and proofs become obscure merely because of the notation. A somewhat more relaxed alternative is just to not write out the signatures explicitly all the time, but to understand that they are there in the background. [There are results from elsewhere that have some bearing here. There is the Hindley-Milner type inference algorithm, which can determine and check types automatically. There are ways of checking the types even if we do not always write them out.]

There is a more detailed explanation of types and sorts elsewhere.

If you decide to use the web application for the exercises, you can launch Deriver [default] from here (username 'logic', password 'logic'). Then either copy and paste the above formulas into the Journal, or use the Deriver File Menu to Open Web Page with this address https://softoption.us/test/treesStandAlone/CombinedTutorialsDefault.html . Then select the desired formula(s) and Start Tree off the Actions Menu.

**Preferences**

You will need to set some Preferences for this.

- set *identity* to true (and that will give you the identity rules);
- set *firstOrder* to true (to get first order theories);
- set *typeLabels* to true (for types);
- and you can check that the parser is set to default.