Discontinuous Constituency and Reversibility

Although most constraint-based formalisms in computational linguistics assume that phrases are built by concatenation (e.g., PATR II, GPSG, LFG and most versions of Categorial Grammar), this assumption is sometimes challenged by allowing more powerful operations to construct strings. The linguistic motivation for such alternative conceptions of string combination is the analysis of so-called discontinuous constituency constructions. For example, [67] proposes several versions of `head wrapping'. In his analysis of the Australian free word-order language Guugu Yimidhirr, Mark Johnson uses a `combine' predicate in a DCG-like grammar that corresponds to the union of words [38]. Mike Reape uses an operation called `sequence union' to analyze Germanic semi-free word order constructions [71,72]. Other examples include Tree Adjoining Grammars [40,106] and versions of Categorial Grammar [8,114,15,31]. Apart from the motivation from the syntax of discontinuous constituency, non-concatenative grammatical formalisms may also be motivated from a semantic perspective, as such formalisms are expected to facilitate a systematic, compositional construction of semantic structures.
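To give a concrete impression of such an operation, the following is a minimal Prolog sketch of a `combine' relation in the spirit of the union of words: it interleaves the word lists of two daughter constituents while preserving the relative order of the words within each daughter. The definition and the example query are purely illustrative and are not taken from any of the cited analyses.

    % combine(?Xs, ?Ys, ?Zs)
    % Zs is an interleaving (`shuffle') of the word lists Xs and Ys:
    % the words of each daughter keep their relative order, but the two
    % daughters need not be concatenated.
    combine([], Ys, Ys).
    combine([X|Xs], Ys, [X|Zs]) :-
        combine(Xs, Ys, Zs).
    combine([X|Xs], [Y|Ys], [Y|Zs]) :-
        combine([X|Xs], Ys, Zs).

    % ?- combine([a,b], [c], Zs).
    % Zs = [a,b,c] ;  Zs = [a,c,b] ;  Zs = [c,a,b].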

The use of non-concatenative grammars is further motivated by the desire to obtain reversible grammars. This motivation is essentially twofold.

Motivation from generation.

It is expected that the extra power available in non-concatenative formalisms facilitates a systematic, compositional construction of semantic representations. Therefore, it should be easier to define generation algorithms. The semantic-head-driven generation strategy discussed in the previous chapter faces problems when semantic heads are `displaced' and this displacement is analyzed using threading. However, in this chapter I sketch a simple analysis of verb-second (an example of such a displacement of semantic heads) by means of an operation similar to head wrapping, which a head-driven generator processes without any problems (or extensions) at all.

Motivation from parsing.

It is expected that non-concatenative grammars are useful for parsing as well. The parsing problem for grammars written in concatenative formalisms such as PATR and DCG is undecidable in general. Thus, the restriction that phrases are built by concatenation is not a `real' restriction from a formal point of view. Often, it is possible to see whether such a grammar can in fact be parsed effectively. The `dangerous' parts of a grammar are rules with an empty right-hand side and non-branching rules. Inspection of the grammar, and most notably of its dangerous parts, may sometimes reveal that no problems arise. To analyze discontinuous constituency, however, the grammar writer is forced to use complicated `gap threading' mechanisms [63]. Gap threading makes heavy use of precisely these `dangerous' types of rule. For this reason, the more discontinuous constituency constructions are analyzed, the more difficult it becomes to see whether the resulting grammar can be used effectively for parsing. Furthermore, if at some point the addition of a particular threading mechanism (say, for extraposition) results in a grammar that is no longer effectively parsable, it is unclear whether to blame the proposed extension, one of the other threading mechanisms, or simply the interaction of the different threading mechanisms.
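For concreteness, the following is a minimal gap-threading fragment in DCG notation, in the style of [63]; the category and feature names (and the toy lexicon) are merely illustrative. Note the np/2 rule with an empty right-hand side: it is exactly this kind of rule that makes it hard to judge, by mere inspection, whether parsing remains effective once several such mechanisms interact.

    % Topicalization by gap threading: a gap introduced at the top is
    % threaded through the rules until it is `consumed' by the empty NP rule.
    top                --> np(nogap, nogap), s(gap(np), nogap).
    s(G0, G)           --> np(G0, G1), vp(G1, G).
    vp(G0, G)          --> v, np(G0, G).
    np(G, G)           --> [john].
    np(G, G)           --> [beans].
    np(gap(np), nogap) --> [].       % empty right-hand side: a `dangerous' rule
    v                  --> [eats].

    % ?- phrase(top, [beans, john, eats]).
    % succeeds: the topicalized `beans' fills the object gap of `eats'.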

For this reason, non-concatenative grammars are attractive: they offer more expressive power, and this extra expressive power may reduce the need for `dangerous' rules. Non-concatenative grammars are thus also useful from the point of view of extensibility.

