DocumentCode :
1089031
Title :
Evaluating speech recognizers
Author :
Moore, Roger K.
Author_Institution :
University College London, London, England
Volume :
25
Issue :
2
fYear :
1977
fDate :
4/1/1977
Firstpage :
178
Lastpage :
183
Abstract :
Although automatic word recognition systems have existed for some twenty-five years, there is still no suitable standard for evaluating their relative performance. Currently, the merits of two systems cannot be meaningfully compared unless they have been tested with at least the same vocabulary or, preferably, with the same acoustic samples. This paper develops a standard for comparing the performance of different recognizers on arbitrary vocabularies based on a human word recognition model. This standard allows recognition results to be normalized for comparison according to two intuitively meaningful figures of merit: 1) the noise level necessary to achieve comparable human performance and 2) the deviation of the pattern of confusions from human performance. Examples are given of recognizers evaluated in this way, and the role of these performance measures in automatic speech recognition and other related areas is discussed.
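The two figures of merit described above lend themselves to a simple computational illustration. The following Python sketch is not the paper's formulation; it assumes placeholder human confusion data indexed by SNR and uses an RMS difference as the deviation measure purely for illustration. It shows how one might estimate (1) the noise level at which human listeners would score comparably to a recognizer and (2) how far the recognizer's confusion pattern departs from the human pattern at that level.

import numpy as np

def accuracy(confusion: np.ndarray) -> float:
    """Overall recognition accuracy from a confusion matrix (rows = spoken word)."""
    return np.trace(confusion) / confusion.sum()

def equivalent_noise_level(machine_conf: np.ndarray,
                           human_conf_by_snr: dict[float, np.ndarray]) -> float:
    """Return the SNR (dB) whose human accuracy is closest to the machine's accuracy."""
    target = accuracy(machine_conf)
    return min(human_conf_by_snr,
               key=lambda snr: abs(accuracy(human_conf_by_snr[snr]) - target))

def confusion_deviation(machine_conf: np.ndarray, human_conf: np.ndarray) -> float:
    """Illustrative deviation score: RMS difference of row-normalised confusion matrices."""
    m = machine_conf / machine_conf.sum(axis=1, keepdims=True)
    h = human_conf / human_conf.sum(axis=1, keepdims=True)
    return float(np.sqrt(np.mean((m - h) ** 2)))

# Example with made-up counts for a 3-word vocabulary (hypothetical data).
machine = np.array([[90, 7, 3], [10, 85, 5], [4, 6, 90]])
humans = {0.0:  np.array([[99, 1, 0], [1, 98, 1], [0, 1, 99]]),
          -6.0: np.array([[90, 6, 4], [8, 87, 5], [5, 5, 90]]),
          -12.0: np.array([[70, 20, 10], [18, 70, 12], [12, 15, 73]])}

snr = equivalent_noise_level(machine, humans)
print(f"Human-equivalent noise level: {snr} dB SNR")
print(f"Confusion-pattern deviation:  {confusion_deviation(machine, humans[snr]):.3f}")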
Keywords :
Acoustic testing; Automatic speech recognition; Humans; Pattern recognition; Performance evaluation; Speech analysis; Speech recognition; Standards development; System testing; Vocabulary;
fLanguage :
English
Journal_Title :
Acoustics, Speech and Signal Processing, IEEE Transactions on
Publisher :
IEEE
ISSN :
0096-3518
Type :
jour
DOI :
10.1109/TASSP.1977.1162916
Filename :
1162916