Recall that a goal or query Q is a possibly empty conjunction of ()-atoms and a -constraint, written as:
The purpose of a proof procedure is to show whether, for a given set of definite clauses , there is a satisfiable constraint , such that Q is valid. The proof procedure is a refutation procedure, which shows that the denial of some goal is inconsistent with the assumptions . The building blocks of proof procedures are inference rules. I will assume a top-down inference strategy. The initial goal is replaced by other goals via inference rules. If the empty goal is obtained, then a refutation has been discovered (where an empty goal is a goal without any atoms). The constraint associated with the empty goal is the desired answer.
In later chapters I will define proof procedures in which the inference rules are applied bottom-up. In such a proof procedure new assertions are derived from the assumptions until an assertion is derived that explicitly contradicts the denial of the goal.
The top-down refutation procedure is best introduced with an example. Consider the set of clauses in figure 2.5, which defines the state of a hypothetical library.
which should be read as `for which Book, Book's author is greene and Book can be lent'. However, in the refutation procedure such a goal will be denied, i.e. read as `for all Book it is not the case that Book's author is greene and Book can be lent'. The inference rule in a top-down refutation procedure is a generalization of `modus tollens'. From the fact that `owning' plus `availability' implies `lendability', and from the goal ` Book cannot be lent', we can infer that it cannot be the case that Book is both owned and available, i.e. our original goal is replaced by:
Next we can argue as follows. If Book is not both owned and available, and we know that a book with title `the captain and the enemy' and status `in' is owned, then we can infer that such a book is not available:
Finally, from this goal (reading: a book with title `the captain and the enemy', author `greene' and status `in' is not available) we can infer the empty clause, because the database states that all things with status `in' are in fact available:
Hence we obtain a contradiction; thus the negation of the original goal turns out not to be true, and therefore the original goal is shown to be true. Furthermore we obtain a constraint on Book that can be viewed as a counterexample to the negated goal, and hence constitutes an answer to the goal.
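The chain of reasoning above can be mimicked in a few lines of Python. The clause set of figure 2.5 is not reproduced in the text, so the predicate and attribute names below are assumptions; the sketch only illustrates how ownership and availability together yield lendability for the book in question.

```python
# A hypothetical rendering of (part of) the library clauses of figure 2.5.
# Predicate and attribute names are assumptions; the actual clauses are in
# the figure, which is not reproduced in the text.

def own(book):
    # assumed clause: a book titled 'the captain and the enemy'
    # with status 'in' is owned by the library
    return (book.get("title") == "the captain and the enemy"
            and book.get("status") == "in")

def available(book):
    # assumed clause: all things with status 'in' are available
    return book.get("status") == "in"

def lendable(book):
    # owning plus availability implies lendability
    return own(book) and available(book)

book = {"title": "the captain and the enemy",
        "author": "greene",
        "status": "in"}
print(lendable(book))  # the refutation succeeds, so the goal is provable
```

Refuting the denied goal amounts to exhibiting one such book as a counterexample.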
I now define the proof procedure sketched here more precisely. Note that I use the standard terminology as for example presented in . First I define the inference rule. Then I define the computation rule, which tells us on which atom the inference rule must be applied. Finally I define the search rule, which tells us against which clause the inference rule is to be applied.
The inference rule I will define is called goal reduction. It selects an atom from the goal and replaces this atom with the body of a clause defining this atom. Furthermore, the constraint associated with the new goal will be the conjunction of the constraints and , where is the constraint associated with the old goal and is the constraint associated with the body of the selected clause. Summarizing, for a goal consisting of atoms p1 ... pn, goal reduction replaces one of the atoms pi in the goal by the atoms in the body of a clause defining pi, as follows:
is a variant of a clause in , such that the variables in the clause do not occur in the original goal, except for the variables explicitly mentioned. As usual, a variant of a clause is obtained by consistently renaming its variables. For example, given the clauses in the library example above, consider the goal:
Suppose we want to apply the inference rule on the atom own(X3) against the third clause defining the predicate `own/1'. In order for own(X3) and own(B) to match, we first rename the variables in the clause, obtaining the variant:
and next the body of this clause replaces the selected atom in the goal, obtaining:
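The renaming and replacement steps can be sketched in Python, under the assumption that a clause is a (head, body) pair, an atom is a tuple, and a variable is a string starting with an uppercase letter (all representation choices are mine, not the book's):

```python
import itertools

_fresh = itertools.count(1)  # supply of fresh variable names

def variant(clause):
    """Return a variant of a clause: the same clause with its variables
    consistently renamed to fresh names, so that they cannot clash with
    the variables of the goal."""
    mapping = {}
    def rename(term):
        if isinstance(term, str) and term[:1].isupper():
            if term not in mapping:
                mapping[term] = "%s_%d" % (term, next(_fresh))
            return mapping[term]
        if isinstance(term, tuple):
            return tuple(rename(t) for t in term)
        return term
    head, body = clause
    return rename(head), [rename(atom) for atom in body]

def reduce_goal(goal, i, clause):
    """Goal reduction: replace the i-th atom of the goal by the body of a
    renamed variant of the clause.  Matching the selected atom against
    the clause head is the job of the constraint solver, omitted here."""
    head, body = variant(clause)
    return goal[:i] + body + goal[i + 1:]

# Reducing own(X3) against a clause own(B) :- p(B), q(B)
# (this clause is invented for illustration):
clause = (("own", "B"), [("p", "B"), ("q", "B")])
print(reduce_goal([("own", "X3"), ("r", "X3")], 0, clause))
```

Consistent renaming shows up in the result: both body atoms mention the same fresh variable.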
The inference rule is applied only if the resulting constraint , is satisfiable: clearly, if this constraint is not satisfiable then we will not be able to produce a correct answer, since a constraint remains unsatisfiable whatever information is added to it (i.e. whatever constraints we conjoin it with). Therefore, each time the inference rule is applied, it is tested whether the resulting constraint is still satisfiable. This satisfiability test uses the procedure described in the previous section. Thus the result of an inference rule will either be a goal in which the -constraint is solved, or `failure' if the current part of the search space does not contain any solutions. The operation that tests whether the conjunction of two constraints is satisfiable corresponds to the `unification' operation in Prolog and PATR II, or to `constraint-solving' in other formalisms.
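As a much simplified concrete illustration, suppose a constraint is a finite set of variable bindings; then the conjunction of two constraints is satisfiable exactly when their bindings do not conflict. This dictionary representation is an assumption of the sketch, not the constraint language of the text:

```python
def conjoin(c1, c2):
    """Conjoin two constraints and test satisfiability in one step.
    Here a constraint is a finite set of variable bindings (a dict);
    returns the combined constraint, or None ('failure') when the
    conjunction is unsatisfiable."""
    merged = dict(c1)
    for var, val in c2.items():
        if var in merged and merged[var] != val:
            return None          # conflicting bindings: unsatisfiable
        merged[var] = val
    return merged

# Unsatisfiability persists under conjunction: once a reduction step
# yields None, no later constraint can repair it, so the branch fails.
print(conjoin({"Status": "in"}, {"Author": "greene"}))  # still satisfiable
print(conjoin({"Status": "in"}, {"Status": "out"}))     # prints None: failure
```

This mirrors the role of unification in Prolog: the combined constraint is either a solved form or failure.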
Note that it is not strictly necessary to compute after each reduction
step whether the resulting constraint is satisfiable (as long as
proposed answers are checked to be satisfiable). This observation
gives rise to inference procedures where satisfiability checks are
`delayed' or performed less frequently. This may be useful when the
check itself is costly in comparison to the amount of search space it
would prune. For an application of this idea in a
linguistic setting, consider the work on Constraint Logic Grammars
The computation rule of the proof procedure determines which
atom in a given goal is to be reduced. As in Prolog, I assume that the
leftmost atom of a goal is always reduced; i.e. the computation rule
selects the leftmost atom in a goal. In later chapters I will argue
that for parsing and generation a computation rule which selects the
head may be much more suitable.
Finally, the search rule of a proof procedure determines the order in which the search space is traversed. For a given selected atom, several clauses may be available with which to reduce the atom. Each of the possible reductions may lead to the desired result; hence this defines a search space which has the shape of a tree. For our toy example above, this tree looks as in figure 2.6, assuming the leftmost selection rule.
I assume, as in Prolog, that search trees are traversed in a
depth-first left-to-right order. Backtracking occurs if branches are
encountered which contain no refutation. Note that the search rule is
quite independent of both the direction of the proof procedure
(bottom-up, top-down) and the computation rule. Apart from the usual
depth-first strategy I adopt here, it is often possible to augment
proof procedures with well-formed and ill-formed subgoal tables. In
section 4.6.5 I will come back to this subject.
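The three rules together can be sketched as a miniature top-down interpreter. The version below is propositional and omits constraints entirely; the clause base, with names of my own choosing, is a toy stand-in for the library example:

```python
def solve(goal, clauses):
    """Depth-first, left-to-right traversal of the search tree (a sketch;
    constraints are omitted).  'clauses' maps a predicate name to a list
    of alternative clause bodies; yields once per refutation found."""
    if not goal:
        yield True                      # empty goal: a refutation
        return
    selected, rest = goal[0], goal[1:]  # computation rule: leftmost atom
    for body in clauses.get(selected, []):   # search rule: textual order
        # backtracking is implicit: when the recursive call is exhausted,
        # the loop simply tries the next clause for the selected atom
        yield from solve(body + rest, clauses)

# A toy propositional version of the library example (names assumed):
clauses = {
    "lendable":  [["own", "available"]],
    "own":       [[]],                  # a unit clause: empty body
    "available": [[]],
}
refutations = list(solve(["lendable"], clauses))
print(len(refutations))
```

Each path through the generator corresponds to one branch of the search tree of figure 2.6; branches without a refutation simply yield nothing, which is where backtracking occurs.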
At this point it may be useful to introduce another type of tree,
which we may call `parse tree'. In such a tree a node represents an
atom in a goal; the daughters of a node are the atoms with which this
atom is reduced. A leaf in the parse tree thus represents a goal of which the
reduction yielded the empty goal. A search tree thus represents
all possible refutations; a parse-tree represents only one refutation.
In examples of parse trees I usually replace the variables in the atoms
with the constraints imposed on these variables (in matrix representation);
I also leave out the predicate symbol in the case of the predicate
sign/1. For example, for the goal ``which books by `greene'
can be lent?'' we have the following parse tree, leaving
out the constraints on the variable
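The distinction between the two kinds of tree can be made concrete with a small sketch: a parse tree for one refutation records, for each reduced atom, the body atoms with which it was reduced, whereas the search tree of the previous sketch enumerates all alternatives. Again the form is propositional and the names are assumptions:

```python
def prove(atom, clauses):
    """Build a parse tree for one refutation (a sketch): each node is an
    atom, its daughters are the body atoms of the clause used to reduce
    it.  Returns the tree, or None if the atom has no refutation."""
    for body in clauses.get(atom, []):
        daughters = []
        for sub in body:
            tree = prove(sub, clauses)
            if tree is None:
                break                  # this clause fails; try the next
            daughters.append(tree)
        else:
            return (atom, daughters)   # a leaf has an empty daughter list
    return None

clauses = {
    "lendable":  [["own", "available"]],
    "own":       [[]],
    "available": [[]],
}
print(prove("lendable", clauses))
```

Note that the function commits to the first refutation it finds, which is exactly the sense in which a parse tree represents only one refutation.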
The following goal is solved as in figure 2.9; the corresponding parse tree is presented in figure 2.10. Note that in the trace of the refutation I use the up-arrow to indicate that the constraints on the variable are identical to the constraints on that variable in the preceding goal.