Using lexical analyzers and parsers in recognition engines for natural languages

Dmitry Babin; Alexander Kholodenko
Faculty of Mechanics and Mathematics,
Department of Mathematical Theory of Intelligent Systems (MaTIS).

The problem

     The main problem is to increase the recognition rate of recognition engines working in restricted domains, such as telephone timetable question-answer systems.
     Our algorithms can also be used in OCR systems to increase the recognition rate when recognizing strongly structured texts, such as computer programs.
     Our algorithms allow the user to obtain only those recognition variants that are well-formed sentences of some formal language.
     This approach is important because it allows us to construct fast, specialized recognition engines efficiently.

The original contribution of our work in comparison to our previous work and to the work of others

     The main difference between our work and the mainstream of work on text and speech recognition engines is the use of a deterministic approach instead of a probabilistic one.
     This method allows us to build accurate multi-component systems that can compute ALL recognition variants in a special form. As a result, a sentence can be recognized even if every component of the sentence (words or other elements) produced by the previous recognition step is wrong.
     The main difference between this paper and our previous papers is the following: we have now added context-free grammars to our approach, in order to use syntactic information in the recognition process.

The evaluation of the results presented in the paper

     We used our approach to build an experimental OCR system that can recognize sentences about the weather under very noisy conditions.
     The dictionary was created automatically from a set of sentences about the weather.
     The system works more accurately than a human operator does.


To increase the recognition rate of an existing recognition engine, we can use additional information consisting of dictionaries and grammar rules.

Let us suppose that we have a simple speech or text recognition engine (the base recognition engine, BRE). Additionally, we have complete information about the user's dictionary and grammar rules (for example, when we want to build a speech recognition engine for a restricted domain, such as a timetable question-answer voice system).

We suppose that the BRE can produce a set of recognition variants in the form of a "complex chain".

Definition. A complex chain over an alphabet A is an oriented graph without oriented cycles. Each edge of the complex chain carries a symbol from the alphabet A and, optionally, a non-negative real number (a penalty).
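
For concreteness, a complex chain can be stored as a plain list of labeled edges. The following Python sketch is purely illustrative (the Edge class, the vertex numbering, and the penalty values are ours, not taken from the paper):

    from dataclasses import dataclass

    # A complex chain: a DAG whose edges carry a symbol from the
    # alphabet A and an optional non-negative penalty.
    @dataclass(frozen=True)
    class Edge:
        src: int              # source vertex
        dst: int              # destination vertex
        symbol: str           # symbol from the alphabet A
        penalty: float = 0.0  # non-negative penalty (0 if absent)

    # Two recognition variants of a three-letter word: "cat" vs "cot".
    # The BRE is unsure about the middle letter, so vertices 1 and 2
    # are connected by two parallel edges.
    chain = [
        Edge(0, 1, "c", 0.1),
        Edge(1, 2, "a", 0.7),
        Edge(1, 2, "o", 0.4),
        Edge(2, 3, "t", 0.2),
    ]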

Definition. The length of a complex chain is the number of vertices in its graph.

An oriented graph without oriented cycles always has a vertex without incoming edges (a start vertex) and a vertex without outgoing edges (a finish vertex).

Definition. A sequence of vertices is called a path if the first vertex of the sequence is the start vertex, the last vertex of the sequence is the finish vertex, and for any two consecutive vertices there is an edge from the first vertex of the pair to the second one.
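
With this representation, the definitions can be made concrete: the sketch below enumerates every path from the start vertex to the finish vertex together with the "pure chain" it spells and its total penalty. It reuses the Edge list from the previous sketch; it is exponential in the worst case and only illustrates the definitions, it is not the paper's algorithm.

    from collections import defaultdict

    def enumerate_pure_chains(chain, start, finish):
        """Yield (pure_chain, total_penalty) for every path."""
        out = defaultdict(list)
        for e in chain:
            out[e.src].append(e)

        def walk(v, word, penalty):
            if v == finish:
                yield word, penalty
                return
            for e in out[v]:
                yield from walk(e.dst, word + e.symbol, penalty + e.penalty)

        yield from walk(start, "", 0.0)

    for word, pen in enumerate_pure_chains(chain, 0, 3):
        print(word, pen)   # -> "cat" (penalty ~1.0) and "cot" (penalty ~0.7)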

Now we can formulate our main task.

Let us suppose that we have a complex chain and some grammar. We must decide whether there is a path whose pure chain (the word spelled by the symbols along its edges) is derivable in the given grammar. Alternatively, we must decide whether there is a path whose pure chain is a sequence of subchains, each of which is derivable in the given grammar.

If such a path exists, we should find one such path (the optimal one, with the minimal sum of penalties) or all such paths (together with their decompositions).

In other words, we want to find recognition variants that are grammatically correct sentences, or variants that are sequences of correct words from a fixed (finite or infinite) dictionary.

This problem has been solved, and the corresponding algorithms have been developed, for both regular grammars and context-free grammars.
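
For the regular case, one standard way to realize path derivability (a sketch under our assumptions; the paper does not spell out its algorithm) is a shortest-path search in the product of the complex chain with a deterministic finite automaton recognizing the grammar's language. Since penalties are non-negative, Dijkstra's algorithm settles each (vertex, state) pair at its minimal penalty:

    import heapq
    import itertools
    from collections import defaultdict

    def min_penalty_accepted(chain, start, finish, q0, accept, delta):
        """Minimal total penalty over paths whose pure chain the DFA
        accepts, or None if no accepted path exists.
        delta: dict (state, symbol) -> state."""
        out = defaultdict(list)
        for e in chain:
            out[e.src].append(e)

        tie = itertools.count()         # tie-breaker: never compare states
        heap = [(0.0, next(tie), start, q0)]
        seen = set()
        while heap:
            pen, _, v, q = heapq.heappop(heap)
            if (v, q) in seen:
                continue
            seen.add((v, q))
            if v == finish and q in accept:
                return pen              # first settled accepted pair is optimal
            for e in out[v]:
                nq = delta.get((q, e.symbol))
                if nq is not None and (e.dst, nq) not in seen:
                    heapq.heappush(heap, (pen + e.penalty, next(tie), e.dst, nq))
        return None

For the example chain above and a DFA accepting {"cat", "cot"}, this returns 0.7, the penalty of the cheaper variant "cot".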

The following table shows the time complexity of these algorithms.

Table 1. Time complexity of the algorithms.

                                     Regular grammars   Context-free grammars
    Path derivability, 1 variant     const*n^2          const*n^3
    Path derivability, all variants  const*n^2          const*n^4
    Decomposition, 1 variant         const*n^3          const*n^4
    Decomposition, all variants      const*n^4          const*n^5

Here n represents the length of the complex chain.
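
For the context-free case, one way to see where the const*n^3 bound for path derivability comes from (a sketch under our assumptions, with the grammar in Chomsky normal form; the paper does not present its algorithm in detail) is a CYK-style dynamic program over pairs of vertices taken in topological order:

    from collections import defaultdict

    def cyk_on_dag(edges, order, unary, binary, start_symbol, start, finish):
        """edges:  triples (src, dst, symbol); penalties are ignored here
        order:  all vertices in topological order
        unary:  dict symbol -> {A : A -> symbol is a rule}
        binary: dict (B, C) -> {A : A -> B C is a rule}
        Returns True iff some path from start to finish spells a
        sentence derivable from start_symbol."""
        table = defaultdict(set)              # (u, v) -> derivable nonterminals

        for u, v, a in edges:                 # base case: single edges
            table[(u, v)] |= unary.get(a, set())

        n = len(order)
        for j in range(n):                    # end vertex, forward
            for i in range(j - 1, -1, -1):    # start vertex, backward
                u, v = order[i], order[j]
                for k in range(i + 1, j):     # split vertex strictly between
                    w = order[k]
                    for B in table[(u, w)]:
                        for C in table[(w, v)]:
                            table[(u, v)] |= binary.get((B, C), set())
        return start_symbol in table[(start, finish)]

With n vertices there are O(n^2) pairs and O(n) split points per pair, giving const*n^3 for a fixed grammar, in agreement with the first context-free row of Table 1.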

When all recognition variants are requested, the algorithm builds a new complex chain over the alphabet of all correct words, so corrector blocks of this kind can be connected in a sequence.
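
This pipelining can be expressed as simple function composition: each corrector block consumes a complex chain and emits a complex chain over a coarser alphabet (letters, then words, then phrases). The block names below are hypothetical:

    def compose(*correctors):
        """Connect corrector blocks in sequence: the output complex
        chain of each block is the input of the next one."""
        def pipeline(chain):
            for block in correctors:
                chain = block(chain)
            return chain
        return pipeline

    # recognize = compose(word_corrector, grammar_corrector)
    # result = recognize(bre_output_chain)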

If the dictionary is finite, the computation speed of the algorithm can be increased further.
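
The paper does not detail this speed-up; one natural realization (our assumption) stores the finite dictionary in a trie and prunes every search branch whose pure chain is not a prefix of a dictionary word. The sketch reuses the Edge representation from above:

    from collections import defaultdict

    def build_trie(words):
        """Trie as nested dicts; the key '$' marks the end of a word."""
        root = {}
        for w in words:
            node = root
            for ch in w:
                node = node.setdefault(ch, {})
            node["$"] = True
        return root

    def decomposable(chain, start, finish, trie):
        """Is there a path whose pure chain splits into dictionary
        words?  Depth-first search with trie pruning; memoization over
        (vertex, trie node) would make it polynomial."""
        out = defaultdict(list)
        for e in chain:
            out[e.src].append(e)

        def walk(v, node):
            if v == finish and "$" in node:
                return True              # current word complete at the end
            # continue the current word, or start a new one at a boundary
            starts = [node] + ([trie] if "$" in node else [])
            for nd in starts:
                for e in out[v]:
                    nxt = nd.get(e.symbol)
                    if nxt is not None and walk(e.dst, nxt):
                        return True
            return False

        return walk(start, trie)

    # decomposable(chain, 0, 3, build_trie(["cat", "cot"]))  -> True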

To show the power of these ideas, we wrote a demo program that simulates scanned-text recognition under heavy noise. The program consists of two main parts: an auxiliary program, which manages dictionaries and grammars and simulates the scanning process, and the main recognition engine. The dictionary manager can create dictionaries from a set of texts automatically (or semi-automatically, for better results) and extract some simple grammatical information. This information is used later in the recognition process to achieve better results.

This approach was tested on a set of short sentences about the weather. The operator entered a new sentence into the machine and added some noise to its graphic representation, and the machine successfully recognized the sentence. The program usually shows better results than another human operator does.