Evaluation

This section evaluates the NLP component with respect to efficiency and accuracy.

Test set

We present a number of results to indicate how well the NLP component currently performs. We used a corpus of more than 20K word-graphs, produced by a preliminary version of the speech recognizer and typical of the intended application. The first 3800 word-graphs of this corpus are semantically annotated; this annotated set is used in the experiments below. Some characteristics of this test set are given in Table 1. As can be seen from this table, the annotated test set is considerably easier than the remainder of the corpus. For this reason, we also present results (where applicable) for a set of 5000 arbitrarily selected word-graphs. At the time of the experiment, no further annotated corpus material was available to us.


Table 1: The number of word-graphs, the number of transitions, the number of words in the actual utterances, the average number of transitions per word (t/w), and the average number of words per utterance (w/g).

            graphs   transitions    words   t/w   w/g
  test        5000         54687    16020   3.4   3.2
  test        3800         36074    13312   2.7   3.5
  total      21288        242010    70872   3.4   3.3

Efficiency

We report on two different experiments. In the first experiment, the parser is given the utterance as it was actually spoken (simulating a situation in which speech recognition is perfect). In the second experiment, the parser takes the full word-graph as its input; the results are then passed on to the robustness component. We report on a version of the robustness component which incorporates bigram scores (other versions are substantially faster).

All experiments were performed on an HP-UX 9000/780 machine with ample core memory. Timings measure CPU-time and should be independent of the load on the machine. The timings include all phases of the NLP component (lexical lookup, syntactic and semantic analysis, robustness, and the compilation of semantic representations into updates). The parser is a head-corner parser implemented in SICStus Prolog with selective memoization and goal-weakening, as described in [10]. Table 2 summarizes the results of these two experiments.


Table 2: The first table lists, for each test set and input mode, the total CPU-time (in milliseconds) required for all word-graphs, the average number of milliseconds per word-graph, and the maximum number of milliseconds for a single word-graph. The final column lists the maximum space requirements (per word-graph, in Kbytes). For word-graphs the average CPU-times are actually quite misleading, because CPU-times vary enormously for different word-graphs. For this reason, the second table presents the proportion of word-graphs that can be treated by the NLP component within a given amount of CPU-time (in milliseconds).

                 mode             total msec   avg msec   max msec   max Kbytes
  3800 graphs:   user utterance       125290         32        330           86
                 word-graph           303550         80       8910         1461
  5000 graphs:   user utterance       152940         30        630          192
                 word-graph           477920         95      10980         4786

                    100    200    500   1000   2000   5000
  3800 graphs:     80.6   92.4   98.2   99.5   99.9   99.9
  5000 graphs:     81.3   91.2   96.9   98.7   99.5   99.9

From the experiments we can conclude that almost all input word-graphs can be treated fast enough for practical applications. In fact, we have found that the few word-graphs which cannot be treated efficiently almost exclusively represent cases where speech recognition completely fails and no useful combinations of edges can be found in the word-graph. As a result, ignoring these few cases does not seem to result in a degradation of practical system performance.

Accuracy

In order to evaluate the accuracy of the NLP component, we used the same test set of 3800 word-graphs. For each of these graphs we know the corresponding actual utterance and the update assigned by the annotators. We report on word and sentence accuracy, which indicate how well we are able to choose the best path from the given word-graph, and on concept accuracy, which indicates how often the analyses are correct.

The string comparison on which sentence accuracy and word accuracy are based is defined by the minimal number of substitutions, deletions and insertions required to turn the first string into the second (Levenshtein distance). The string that is compared with the actual utterance is the best path through the word-graph, given the best-first search procedure defined in the previous section. Word accuracy is defined as $1 - \frac{d}{n}$, where n is the length of the actual utterance and d is the distance defined above.
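
To make this concrete, the following sketch (Python, purely illustrative; the actual system is implemented in SICStus Prolog) computes the distance and the resulting word accuracy over word sequences:

    # Illustrative sketch only; not part of the system described here.
    def levenshtein(reference, hypothesis):
        """Minimal number of substitutions, deletions and insertions
        needed to turn hypothesis into reference (word level)."""
        n, m = len(reference), len(hypothesis)
        dist = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(n + 1):
            dist[i][0] = i
        for j in range(m + 1):
            dist[0][j] = j
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
                dist[i][j] = min(dist[i - 1][j] + 1,          # deletion
                                 dist[i][j - 1] + 1,          # insertion
                                 dist[i - 1][j - 1] + cost)   # substitution
        return dist[n][m]

    def word_accuracy(reference, hypothesis):
        """WA = 1 - d/n, with n the length of the actual utterance."""
        d = levenshtein(reference, hypothesis)
        return 1.0 - d / len(reference)

    # Example: one substitution in a four-word utterance gives WA = 0.75.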

In order to characterize the test sets somewhat further, Table 3 lists the word and sentence accuracy of the best path through the word-graph using acoustic scores only, of the best possible path through the word-graph, and of the path selected by a combination of the acoustic score and a bigram language model. The first two of these can be seen as natural lower and upper bounds, respectively.


Table 3: Word accuracy and sentence accuracy based on acoustic score only (Acoustic); using the best possible path through the word-graph, based on acoustic scores only (Possible); and using a combination of acoustic score and bigram score (Acoustic + Bigram), as reported by the current version of the system.

                 method               WA     SA
  3800 graphs:   Acoustic             78.9   60.6
                 Possible             92.6   82.7
                 Acoustic + Bigram    86.3   74.3
  5000 graphs:   Acoustic             72.7   57.6
                 Possible             89.8   81.7
                 Acoustic + Bigram    82.3   74.0

Concept Accuracy

Word accuracy provides a measure for the extent to which linguistic processing contributes to speech recognition. However, since the main task of the linguistic component is to analyze utterances semantically, an equally important measure is concept accuracy, i.e. the extent to which semantic analysis corresponds with the meaning of the utterance that was actually produced by the user.

For determining concept accuracy, we have used a semantically annotated corpus of 3800 user responses. Each user response was annotated with an update representing the meaning of the utterance that was actually spoken. The annotations were made by our project partners in Amsterdam, in accordance with the guidelines given in [11].

Updates take the form described in Section 3. An update is a logical formula which can be evaluated against an information state and which gives rise to a new, updated information state. The most straightforward method for evaluating concept accuracy in this setting is to compare (the normal form of) the update produced by the grammar with (the normal form of) the annotated update. A major obstacle for this approach, however, is that very fine-grained semantic distinctions can be made in the update language. While these distinctions are semantically relevant (i.e. in certain cases they may lead to slightly different updates of an information state), they can often be ignored by a dialogue manager. For instance, the update below is not semantically equivalent to the one given in Section 3, as the ground-focus distinction is slightly different.

userwants.travel.destination.place
                  ([# town.leiden];
                   [! town.abcoude])

However, the dialogue manager will decide in both cases that this is a correction of the destination town.

Since semantic analysis is the input for the dialogue manager, we have measured concept accuracy in terms of a simplified version of the update language. Following the proposal in [4], we translate each update into a set of semantic units, where a unit in our case is a triple ⟨CommunicativeFunction, Slot, Value⟩. For instance, the example above, as well as the example in Section 3, translates as

  ⟨denial, destination_town, leiden⟩
  ⟨correction, destination_town, abcoude⟩

Both the updates in the annotated corpus and the updates produced by the system were translated into semantic units of the form given above.
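
As an illustration of the resulting representation (the Python names below are hypothetical, not taken from the system), a semantic unit can be modelled as a simple triple; both the update shown above and the one in Section 3 then yield the same set of two units:

    # Hypothetical illustration of the semantic-unit representation.
    from typing import NamedTuple, Set

    class SemanticUnit(NamedTuple):
        function: str   # communicative function, e.g. 'denial', 'correction'
        slot: str       # e.g. 'destination_town'
        value: str      # e.g. 'leiden'

    units: Set[SemanticUnit] = {
        SemanticUnit('denial', 'destination_town', 'leiden'),
        SemanticUnit('correction', 'destination_town', 'abcoude'),
    }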

Semantic accuracy is given in Table 4 according to four different definitions. Firstly, we list the proportion of utterances for which the corresponding semantic units exactly match the semantic units of the annotation (match). Furthermore, we calculate precision (the number of correct semantic units divided by the number of semantic units that were produced) and recall (the number of correct semantic units divided by the number of semantic units in the annotation). Finally, following [4], we also present concept accuracy as

$$ CA = 100 \left( 1 - \frac{SU_S + SU_I + SU_D}{SU} \right) \% $$

where $SU$ is the total number of semantic units in the translated corpus annotation, and $SU_S$, $SU_I$, and $SU_D$ are the number of substitutions, insertions, and deletions that are necessary to make the translated grammar update equivalent to the translation of the corpus update.
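
The following sketch (again hypothetical Python, not taken from the paper) shows one way to compute the four measures over the translated updates; for simplicity it counts every mismatched unit as an insertion or a deletion instead of aligning pairs of them as substitutions:

    # Hypothetical sketch of the four semantic accuracy measures.
    # pairs: list of (annotated_units, produced_units), one pair per
    # utterance, each element a set of semantic-unit triples.
    def semantic_accuracy(pairs):
        matches = correct = produced_total = annotated_total = errors = 0
        for annotated, produced in pairs:
            matches += (annotated == produced)     # exact match
            correct += len(annotated & produced)   # correct units
            produced_total += len(produced)
            annotated_total += len(annotated)
            # Simplification: each mismatched unit counted as an insertion
            # or a deletion; a real alignment would pair some of them up
            # as single substitutions (SU_S).
            errors += len(produced - annotated) + len(annotated - produced)
        match = matches / len(pairs)
        precision = correct / produced_total
        recall = correct / annotated_total
        ca = 100.0 * (1.0 - errors / annotated_total)   # CA formula above
        return match, precision, recall, ca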

We obtained the results given in Table 4.


Table 4: Evaluation of the NLP component with respect to word accuracy, sentence accuracy and concept accuracy. Semantic accuracy consists of the percentage of graphs which receive a fully correct analysis (match), percentages for precision and recall of semantic slots, and concept accuracy. The first row presents the results if the parser is given the actual user utterance (obviously WA and SA are meaningless in this case). The second and third rows present the results for word-graphs. In the third row bigram information is incorporated in the robustness component.
                 Method                   WA     SA    match   precision   recall     CA
  3800 graphs:   user utterance            -      -     97.9        99.2     98.5   98.5
                 word-graphs             85.3   72.9    81.0        84.7     86.6   84.4
                 word-graphs (+bigram)   86.5   75.1    81.8        85.5     87.4   85.2
  5000 graphs:   word-graphs             79.5   70.0       -           -        -      -
                 word-graphs (+bigram)   82.4   74.2       -           -        -      -

The following reservations should be made with respect to the numbers given above.

Even if we take these reservations into account, it seems we can conclude that the robustness component adequately extracts useful information even in cases where no full parse is possible: concept accuracy is (luckily) much higher than sentence accuracy.

