

Complications for Ngrams

In this section we extend the nlp_speech method to take Ngram probabilities into account. Obviously, we can extend the weight tuple with a further component which expresses the accumulated weight according to the Ngram probabilities. However, a potential problem arises. If we extend a given path using a transition labeled w, then we want to take into account the probability of reading this word w given the previous N-1 words. Note, however, that in the definition of the annotated word-graph given above these words are not readily available. Even if we make sure that each path is associated with the last N-1 words seen so far, we must be careful that weights remain uniform.

The solution to this problem is to alter the definition of the graph in such a way that the relevant N-1 words are part of the vertex. If we want to take into account Ngram probabilities ($N=2,3,\dots$), then the vertices of the graph are tuples $(v,w_1\dots w_{N-1})$, where v is a state of the word-graph as before, and $w_1\dots w_{N-1}$ are the previous N-1 words. For example, in the case of bigrams (N=2), vertices are pairs consisting of a word-graph state and the previous word. Special symbols $y_{N-1}\dots y_1$ are introduced as beginning-of-sentence markers. The start vertex is now $(v_0,y_{N-1}\dots y_1)$. The notation x:k is used to refer to the last k elements of x.
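
To make this concrete, here is a minimal sketch in Python (the function names and the "<s>" marker are illustrative assumptions, not part of the paper) of the x:k notation and of the start vertex construction:

  def last_k(xs, k):
      """The x:k notation: the last k elements of the sequence xs."""
      return tuple(xs[-k:]) if k > 0 else ()

  def start_vertex(v0, n):
      """Pair the start state v0 with N-1 beginning-of-sentence markers;
      the string "<s>" stands in for the y symbols of the text."""
      return (v0, ("<s>",) * (n - 1))

  # For trigrams (N=3) the start vertex pairs v0 with two markers:
  assert start_vertex("q0", 3) == ("q0", ("<s>", "<s>"))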

The annotated word-graph for Ngrams is a weighted graph (V,E), for some fixed N, such that:

- the vertices V are the pairs (v,x), where v is a state of the word-graph and x is a sequence of N-1 symbols (words or beginning-of-sentence markers);
- if the word-graph has a transition from v_i to v_j labeled w, then for each such sequence x, E contains a transition from (v_i,x) to (v_j,(xw):N-1) labeled w, whose weight tuple is the weight tuple defined above, extended with the Ngram weight $-\log P(w|x)$.

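As an illustration of how this definition can be applied on the fly during the search, the following Python sketch enumerates the outgoing transitions of a vertex (v,x). The interfaces word_graph_edges (yielding the transitions of the underlying word-graph with their acoustic scores) and ngram_prob (returning P(w|x)) are assumptions made for the sake of the example, not part of the method as published:

  import math

  def successors(vertex, word_graph_edges, ngram_prob, n):
      """Outgoing transitions of an annotated word-graph vertex (v, x).

      word_graph_edges(v) is assumed to yield (w, v_next, acoustic) triples
      of the underlying word-graph; ngram_prob(w, x) is assumed to return
      the Ngram probability P(w | x)."""
      v, x = vertex
      for w, v_next, acoustic in word_graph_edges(v):
          # Target vertex: the word-graph state paired with (x w):N-1.
          next_x = (x + (w,))[-(n - 1):]
          # Weights are negative log probabilities, so the Ngram component
          # accumulates by addition along a path, like the other components.
          yield (v_next, next_x), (acoustic, -math.log(ngram_prob(w, x))), w
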