DocumentCode :
1819214
Title :
A learning method for recurrent networks based on minimization of finite automata
Author :
Noda, Itsuki ; Nagao, Makoto
Author_Institution :
Fac. of Eng., Kyoto Univ., Japan
Volume :
1
fYear :
1992
fDate :
7-11 Jun 1992
Firstpage :
27
Abstract :
A novel network model and a learning algorithm based on symbol processing theory are described. The algorithm is derived from the minimization method for finite automata, using the correspondence between Elman networks and finite automata. An attempt was made to learn context-free grammars with the new network model. Even though the learning method was derived from the correspondence with finite automata, the network can learn subgrammars, the key feature distinguishing context-free grammars from finite-state automata.
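Note: the abstract refers to the classical minimization method for finite automata. The sketch below illustrates standard partition-refinement minimization (Moore's algorithm) as background; it is an assumption about the minimization procedure meant, and it does not reproduce the paper's Elman-network analogue of merging equivalent hidden states.

# Minimal sketch of partition-refinement DFA minimization (Moore's algorithm).
# Assumed input: a complete DFA given by its states, alphabet, a total
# transition dict delta[(state, symbol)], and a set of accepting states.
def minimize_dfa(states, alphabet, delta, accepting):
    # Start from the coarsest partition: accepting vs. non-accepting states.
    partition = [set(accepting), set(states) - set(accepting)]
    partition = [block for block in partition if block]

    def block_index(state):
        # Index of the block that currently contains `state`.
        for i, block in enumerate(partition):
            if state in block:
                return i
        raise ValueError(state)

    changed = True
    while changed:
        changed = False
        new_partition = []
        for block in partition:
            # Split the block: group states by the blocks their transitions reach.
            signature = {}
            for s in block:
                key = tuple(block_index(delta[(s, a)]) for a in alphabet)
                signature.setdefault(key, set()).add(s)
            new_partition.extend(signature.values())
            if len(signature) > 1:
                changed = True
        partition = new_partition
    return partition  # Each block is one state of the minimal DFA.

# Hypothetical example over {'a', 'b'}: states 1 and 2 behave identically.
states = [0, 1, 2, 3]
alphabet = ['a', 'b']
delta = {(0, 'a'): 1, (0, 'b'): 2,
         (1, 'a'): 3, (1, 'b'): 3,
         (2, 'a'): 3, (2, 'b'): 3,
         (3, 'a'): 3, (3, 'b'): 3}
accepting = {3}
print(minimize_dfa(states, alphabet, delta, accepting))
# -> blocks such as [{3}, {0}, {1, 2}], merging the equivalent states 1 and 2.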
Keywords :
context-free grammars; finite automata; learning (artificial intelligence); recurrent neural nets; Elman networks; finite automata minimization; finite state automata; learning algorithm; recurrent networks; subgrammar; symbol processing theory; Artificial intelligence; Control theory; Learning automata; Learning systems; Minimization methods; Neural networks; Optimal control; Prediction algorithms; Recurrent neural networks;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
International Joint Conference on Neural Networks (IJCNN), 1992
Conference_Location :
Baltimore, MD
Print_ISBN :
0-7803-0559-0
Type :
conf
DOI :
10.1109/IJCNN.1992.287211
Filename :
287211