DocumentCode :
2694402
Title :
Recurrent neural networks, hidden Markov models and stochastic grammars
Author :
Sun, G.Z. ; Chen, H.H. ; Lee, Y.C. ; Giles, C.L.
Year :
1990
Date :
17-21 June 1990
Firstpage :
729
Abstract :
A discussion is presented of the advantages of using a linear recurrent network to encode and recognize sequential data. The hidden Markov model (HMM) is shown to be a special case of such linear recurrent second-order neural networks. The Baum-Welch reestimation formula, which has proved very useful in training HMMs, can also be used to learn a linear recurrent network. As an example, a network successfully learned the stochastic Reber grammar with only a few hundred sample strings in about 14 iterations. The relative merits and limitations of the Baum-Welch optimal ascent algorithm are discussed in comparison with the error-correction gradient-descent learning algorithm.
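To make the abstract's central claim concrete, the following is a minimal sketch (not taken from the paper) of how the HMM forward recursion can be read as a linear second-order recurrent update: the transition matrix A, emission matrix B, and initial distribution pi are combined into a second-order weight tensor indexed by the current input symbol, and the state vector is updated by a purely linear contraction. All parameter names and values are illustrative assumptions.

import numpy as np

def forward_as_second_order_rnn(A, B, pi, observations):
    """HMM forward algorithm phrased as a linear second-order recurrent network.

    A: (N, N) state-transition probabilities
    B: (N, M) emission probabilities
    pi: (N,) initial state distribution
    observations: sequence of integer symbols in range(M)
    Returns the likelihood of the observation sequence.
    """
    # Second-order weight tensor: W[i, j, k] couples previous state i,
    # next state j, and current input symbol k.
    W = A[:, :, None] * B[None, :, :]          # shape (N, N, M)
    state = pi * B[:, observations[0]]         # initial activation alpha_1
    for o in observations[1:]:
        # Linear second-order update: alpha_t[j] = sum_i alpha_{t-1}[i] * A[i, j] * B[j, o]
        state = state @ W[:, :, o]
    return state.sum()                         # total sequence likelihood

# Example usage with arbitrary (hypothetical) parameters:
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
pi = np.array([0.5, 0.5])
print(forward_as_second_order_rnn(A, B, pi, [0, 1, 1, 0]))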
Keywords :
Markov processes; grammars; learning systems; neural nets; Baum-Welch optimal ascent algorithm; Baum-Welch reestimation formula; hidden Markov model; linear recurrent network; linear recurrent second-order neural networks; stochastic Reber grammar; supervised learning;
Language :
English
Publisher :
IEEE
Conference_Titel :
1990 IJCNN International Joint Conference on Neural Networks
Conference_Location :
San Diego, CA, USA
Type :
conf
DOI :
10.1109/IJCNN.1990.137655
Filename :
5726615