

Reversible transfer

The formalism also allows transfer grammars that define transfer relations that are not reversible, as is clear from the proof in section 2.5. The objective, however, is to build a reversible machine translation system. In this section I define a simple condition on transfer rules such that a top-down interpreter is guaranteed to terminate for grammars whose rules satisfy the condition.

The constraint I am about to propose embodies the hypothesis that translation is defined compositionally; i.e., the translation of a structure is defined in terms of the translations of the parts of that structure. In section 5.6 I show that certain types of non-compositional translation can still be handled by a reversible transfer grammar. I therefore argue that the constraint to be proposed embodies an interesting compromise between expressive power and computability.

Assume the transfer rules define a relation between the paths p and q. I will require that for each rule the value of p of the mother node is strictly larger than the value of p of each of the daughters, and similarly that the value of q of the mother node is strictly larger than the value of q of each of the daughters. I define these sizes in terms of the underlying feature graph models: the size of a feature graph is simply the number of nodes of the graph. For a rule

\begin{displaymath}
\mbox{\it sign}(\mbox{\rm X}_{0}) \mbox{\tt :-} \mbox{\it sign}(\mbox{\rm X}_{1}), \dots, \mbox{\it sign}(\mbox{\rm X}_{n}),\phi.
\end{displaymath}
I require that for all assignments $\alpha \in \phi^{\cal I}$, and all $j$, $1 \leq j \leq n$,

\prn\pred
\head{\alpha(\mbox{\rm X}_{0})^p > \alpha(\mbox{\rm X}_{j})^p }
\head{\alpha(\mbox{\rm X}_{0})^q > \alpha(\mbox{\rm X}_{j})^q }
\epred\eprn
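The size condition can be sketched in executable form, assuming a toy encoding of feature graphs as a dictionary mapping each node to its attribute-to-successor map; the encoding and the names `size` and `rule_obeys_condition` are illustrative, not part of the formalism:

```python
# A minimal sketch of the size condition on transfer rules.
# Assumption: a feature graph is a dict {node: {attribute: successor_node}}.

def size(graph, root):
    """Size of a feature graph: the number of nodes reachable from root."""
    seen = set()
    stack = [root]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(graph.get(node, {}).values())
    return len(seen)

def rule_obeys_condition(graph, mother, daughters, p, q):
    """Check that the mother's values at paths p and q are strictly
    larger than those of every daughter (the termination condition)."""
    def at(node, path):
        for attr in path:
            node = graph[node][attr]
        return node
    return all(
        size(graph, at(mother, p)) > size(graph, at(d, p)) and
        size(graph, at(mother, q)) > size(graph, at(d, q))
        for d in daughters
    )
```

With this encoding, a mother that shares a proper subgraph of its p-value and q-value with each daughter passes the check, mirroring the condition above.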
The most straightforward way to satisfy this condition is for a mother node to share proper parts of its p and q values with the p and q values of each of its daughters (making the simplifying assumption that semantic structures are not cyclic). For example, all of the rules given so far satisfy the condition by sharing sub-parts of the values of the gb and nl attributes. A rule such as the following would fail to meet the condition:

\pr\pred
\head{sign(\avm[\mbox{\rm X}]{\mbox{\it gb}: \avm{\mbox{\it pred}: \mbo...
...t arg1}: \mbox{\rm N}_{1}\\
\mbox{\it arg2}: \mbox{\rm N}_{2}}}).}
\epred\epr
To see that this rule violates the condition, consider an assignment $\alpha$ which assigns to both X and Y a graph whose subgraph at path p is the graph

\begin{displaymath}
(\mbox{\rm Z},\{\mbox{\rm Z}~\mbox{\it pred}~\mbox{\rm kus},
\mbox{\rm Z}~\mbox{\it arg1}~\mbox{\rm c}_{1},
\mbox{\rm Z}~\mbox{\it arg2}~\mbox{\rm c}_{2}\})
\end{displaymath}
Another way to understand this is to see that the feature structure associated with the daughter node in fact unifies with the feature structure associated with the mother node. Thus, even though the daughter node does not `mention' the pred attribute, this does not mean that there cannot be a value for that attribute. The rules in the previous sections of this chapter all satisfy the condition, since in these rules the daughter nodes are associated with proper sub-parts of the feature structure associated with the mother node.
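The point about unifiability can be illustrated with a small sketch of open-record unification over nested dictionaries; `unify`, the encoding, and the atom values are my own illustration (no variables or reentrancies), not the thesis's definition:

```python
# Sketch: feature structures as open records. An unmentioned attribute is
# not excluded, so unification simply merges it in from the other side.

def unify(fs1, fs2):
    """Unify two feature structures encoded as nested dicts;
    returns None on an atomic clash. Illustrative only."""
    if isinstance(fs1, dict) and isinstance(fs2, dict):
        result = dict(fs1)
        for attr, val in fs2.items():
            if attr in result:
                sub = unify(result[attr], val)
                if sub is None:
                    return None
                result[attr] = sub
            else:
                result[attr] = val
        return result
    return fs1 if fs1 == fs2 else None

mother   = {'pred': 'kus', 'arg1': 'c1', 'arg2': 'c2'}
daughter = {'arg1': 'c1', 'arg2': 'c2'}   # does not mention pred

# The daughter unifies with the mother, so it is not strictly smaller.
print(unify(mother, daughter))
```

Because the daughter's structure unifies with (rather than being properly contained in) the mother's, the strict size inequality fails, which is exactly why such a rule is rejected.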

A top-down interpreter for transfer grammars can be defined as the meta-interpreter of chapter 2, repeated here for convenience in figure 5.6.

Figure 5.6: Meta interpreter for $ \cal {R}$($ \cal {L}$)-grammars
\begin{figure}
\prn\pred
\head{\mbox{\it refutation}(\mbox{\rm Goal}) \mbox{\tt :-}}
\body{ \mbox{\it rule}(\mbox{\rm Goal},\mbox{\rm Body}),}
\body{ \mbox{\it refutations}(\mbox{\rm Body}).}
\head{\mbox{\it refutations}(\mbox{\rm [\,]}).}
\head{\mbox{\it refutations}(\mbox{\rm [H$\mid$T]}) \mbox{\tt :-}}
\body{ \mbox{\it refutation}(\mbox{\rm H}),}
\body{ \mbox{\it refutations}(\mbox{\rm T}).}
\epred\eprn
\end{figure}

This algorithm thus simply performs a top-down expansion of the input sign. By the size condition, however, each recursive call constitutes a strictly smaller problem, and hence transfer terminates, given that the ordering on sizes is well-founded. Given the equivalence condition on the p-parsing problem, we also know that there is an upper limit to the possible size of either path p or path q (in principle the interpreter can check at each inference step whether or not it is hypothesizing a further instantiated value of p). Note that the current algorithm does not implement this coherence check; the algorithm can be modified to implement coherence and completeness, as discussed in the preceding chapter.
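The control structure of the meta-interpreter, and the way a well-founded ordering on goal sizes guarantees termination of the top-down expansion, can be rendered as a sketch; the rule encoding and the equality-based matching are toy stand-ins, not the actual machinery over signs:

```python
# Sketch of refutation/refutations as top-down expansion.
# Assumption: rules are (head, body) pairs, and each body goal is
# strictly smaller than the head, so recursion bottoms out.

def refutation(goal, rules):
    """Prove goal by selecting a rule whose head matches it and
    recursively refuting the body goals (top-down expansion)."""
    for head, body in rules:
        if head == goal:                      # toy matching: equality
            if refutations(body, rules):
                return True
    return False

def refutations(goals, rules):
    """Prove a list of goals left to right."""
    return all(refutation(g, rules) for g in goals)

# Example: the size annotation shrinks at every step, so the
# expansion terminates.
rules = [
    (('t', 3), [('t', 2), ('t', 1)]),
    (('t', 2), [('t', 1)]),
    (('t', 1), []),
]
print(refutation(('t', 3), rules))
```

In the real interpreter the goals are signs and matching is unification; the size condition plays exactly the role of the shrinking annotation above.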


Noord G.J.M. van
1998-09-30