DocumentCode :
1259654
Title :
Links between Markov models and multilayer perceptrons
Author :
Bourlard, Hervé ; Wellekens, Christian J.
Author_Institution :
Philips Res. Lab., Louvain-la-Neuve, Belgium
Volume :
12
Issue :
12
fYear :
1990
fDate :
12/1/1990 12:00:00 AM
Firstpage :
1167
Lastpage :
1178
Abstract :
The statistical use of a particular classic form of a connectionist system, the multilayer perceptron (MLP), is described in the context of the recognition of continuous speech. A discriminant hidden Markov model (HMM) is defined, and it is shown how a particular MLP with contextual and extra feedback input units can be considered as a general form of such a Markov model. A link is established between these discriminant HMMs, trained with the Viterbi algorithm, and any other approach based on least mean square error (LMSE) minimization of an error function. It is shown theoretically and experimentally that the outputs of the MLP (when trained with the LMSE or the entropy criterion) approximate the probability distribution over output classes conditioned on the input, i.e., the maximum a posteriori probabilities. Results of a series of speech recognition experiments are reported. The possibility of embedding the MLP into an HMM is described. Relations with other recurrent networks are also explained.
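The abstract's central claim, that outputs trained by least-mean-square-error minimization against 0/1 class targets converge to the class posteriors P(class | input), can be illustrated with a minimal synthetic sketch. The setup below is hypothetical (the discrete input, the posterior values 0.2/0.8, and the closed-form least-squares fit are all illustrative stand-ins for an MLP trained by gradient descent); it shows only the statistical principle, not the paper's speech-recognition experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: a discrete input x in {0, 1} with known
# true posteriors P(class=1 | x=0) = 0.2 and P(class=1 | x=1) = 0.8.
posteriors = np.array([0.2, 0.8])
n = 50_000
x = rng.integers(0, 2, size=n)
t = (rng.random(n) < posteriors[x]).astype(float)  # 0/1 class targets

# One-hot input encoding; fit the output weights by least squares,
# i.e. minimize the mean squared error between outputs and 0/1 targets.
phi = np.eye(2)[x]                                 # shape (n, 2)
w, *_ = np.linalg.lstsq(phi, t, rcond=None)

# The LMSE-optimal outputs approximate the true posteriors:
print(w)  # close to [0.2, 0.8]
```

The same argument extends to MLPs: at the minimum of the LMSE (or cross-entropy) criterion with one-of-N targets, each output unit estimates the conditional probability of its class given the input, which is what licenses using MLP outputs as emission-probability estimates inside an HMM.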
Keywords :
Markov processes; minimisation; neural nets; probability; speech recognition; Markov models; Viterbi algorithm; connectionist system; discriminant hidden Markov model; error function; least mean square minimization; multilayer perceptrons; context modeling; entropy; feedback; hidden Markov models; least squares approximation; minimization methods; probability distribution;
fLanguage :
English
Journal_Title :
IEEE Transactions on Pattern Analysis and Machine Intelligence
Publisher :
IEEE
ISSN :
0162-8828
Type :
jour
DOI :
10.1109/34.62605
Filename :
62605