DocumentCode :
1197035
Title :
Global Reinforcement Learning in Neural Networks
Author :
Ma, Xiaolong ; Likharev, Konstantin K.
Author_Institution :
Stony Brook Univ., NY
Volume :
18
Issue :
2
fYear :
2007
fDate :
3/1/2007
Firstpage :
573
Lastpage :
577
Abstract :
In this letter, we have found a more general formulation of the REward Increment = Nonnegative Factor times Offset Reinforcement times Characteristic Eligibility (REINFORCE) learning principle first suggested by Williams. The new formulation has enabled us to apply the principle to global reinforcement learning in networks with various sources of randomness, and to suggest several simple local rules for such networks. Numerical simulations have shown that for simple classification and reinforcement learning tasks, at least one family of the new learning rules gives results comparable to those provided by the famous rules A_{r-i} and A_{r-p} for the Boltzmann machines.
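The REINFORCE principle named in the abstract prescribes a weight increment of the form Δw = α(r − b)e, where α is a nonnegative learning-rate factor, (r − b) is the reward offset by a baseline, and e = ∂ln g(y|x, w)/∂w is the characteristic eligibility of the unit's stochastic output. The following is a minimal sketch of Williams' original rule for a single Bernoulli-logistic unit, not the generalized rules proposed in this letter; the reward scheme and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_unit(w, x):
    """Stochastic Bernoulli-logistic unit: returns output y and its
    characteristic eligibility e = d ln g(y|x,w) / dw = (y - p) * x."""
    p = 1.0 / (1.0 + np.exp(-np.dot(w, x)))  # firing probability
    y = float(rng.random() < p)              # random binary output
    return y, (y - p) * x

# Toy task (assumed for illustration): reward the unit for firing.
w = np.zeros(2)
x = np.array([1.0, 1.0])
lr, baseline = 0.5, 0.5
for _ in range(200):
    y, e = sample_unit(w, x)
    r = 1.0 if y == 1.0 else 0.0
    # REINFORCE update: nonnegative factor * offset reinforcement * eligibility
    w += lr * (r - baseline) * e

p_final = 1.0 / (1.0 + np.exp(-np.dot(w, x)))  # should approach 1
```

Both branches of the update push the weights toward firing here (reward exceeds the baseline only when y = 1), so the firing probability climbs toward 1 over training.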
Keywords :
Boltzmann machines; learning (artificial intelligence); Boltzmann machine; characteristic eligibility learning; global reinforcement learning; neural networks; nonnegative factor; offset reinforcement; reward increment; Control systems; Equations; Hardware; Machine learning; Multidimensional systems; Neural networks; Numerical simulation; Signal processing; Stochastic processes; Stochastic systems; Neural networks (NNs); reinforcement learning; stochastic weights; Algorithms; Artificial Intelligence; Computer Simulation; Feedback; Information Storage and Retrieval; Models, Theoretical; Neural Networks (Computer); Pattern Recognition, Automated
fLanguage :
English
Journal_Title :
Neural Networks, IEEE Transactions on
Publisher :
ieee
ISSN :
1045-9227
Type :
jour
DOI :
10.1109/TNN.2006.888376
Filename :
4118269