DocumentCode :
3066221
Title :
Learning capabilities of recurrent neural networks
Author :
DasGupta, Bhaskar
Author_Institution :
Dept. of Comput. Sci., Pennsylvania State Univ., University Park, PA, USA
fYear :
1992
fDate :
12-15 Apr 1992
Firstpage :
822
Abstract :
The author relates the power of recurrent neural networks to that of other conventional models of computation, such as Turing machines and finite automata, and proves results about their learning capabilities. Specifically, it is shown that: (a) probabilistic recurrent networks and probabilistic Turing machine models are equivalent; (b) probabilistic recurrent networks with bounded error probabilities are no more powerful than deterministic finite automata; (c) deterministic recurrent networks are capable of learning P-complete language problems; and (d) restricting the weight-threshold relationship in deterministic recurrent networks may allow the network to learn only weaker classes of languages.
Keywords :
deterministic automata; finite automata; learning (artificial intelligence); recurrent neural nets; P-complete language problems; Turing machines; bounded error probabilities; deterministic finite automata; deterministic recurrent networks; learning capabilities; probabilistic Turing machine models; probabilistic recurrent networks; recurrent neural networks; weight-threshold relationship; Computational modeling; Computer networks; Computer science; Error probability; LAN interconnection; Learning automata; Machine learning; Polynomials; Recurrent neural networks; Turing machines;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Southeastcon '92, Proceedings, IEEE
Conference_Location :
Birmingham, AL
Print_ISBN :
0-7803-0494-2
Type :
conf
DOI :
10.1109/SECON.1992.202248
Filename :
202248