Title :
Evaluating speech recognizers
Author_Institution :
University College London, England
Date :
4/1/1977 12:00:00 AM
Abstract :
Although automatic word recognition systems have existed for some twenty-five years there is still no suitable standard for evaluating their relative performances. Currently, the merits of two systems cannot be meaningfully compared unless they have been tested with at least the same vocabulary or, preferably, with the same acoustic samples. This paper develops a standard for comparing the performance of different recognizers on arbitrary vocabularies based on a human word recognition model. This standard allows recognition results to be normalized for comparison according to two intuitively meaningful figures of merit: 1) the noise level necessary to achieve comparable human performance and 2) the deviation of the pattern of confusions from human performance. Examples are given of recognizers evaluated in this way, and the role of these performance measures in automatic speech recognition and other related areas is discussed.
Keywords :
Acoustic testing; Automatic speech recognition; Humans; Pattern recognition; Performance evaluation; Speech analysis; Speech recognition; Standards development; System testing; Vocabulary;
Journal_Title :
IEEE Transactions on Acoustics, Speech, and Signal Processing
DOI :
10.1109/TASSP.1977.1162916