DocumentCode :
2028975
Title :
Evolutionary computation versus reinforcement learning
Author :
Schmidhuber, Jürgen
Author_Institution :
IDSIA, Manno, Switzerland
Volume :
4
fYear :
2000
fDate :
2000
Firstpage :
2992
Abstract :
Many applications of reinforcement learning (RL) and evolutionary computation (EC) address the same problem, namely, to maximize some agent's fitness function in a potentially unknown environment. The most challenging open issues in such applications include partial observability of the agent's environment, hierarchical and other types of abstract credit assignment, and the learning of credit assignment algorithms. I summarize why EC provides a more natural framework for addressing these issues than RL based on value functions and dynamic programming. Then I point out fundamental drawbacks of traditional EC methods in the case of stochastic environments, stochastic policies, and unknown temporal delays between actions and observable effects. I discuss a remedy called the success-story algorithm, which combines aspects of RL and EC.
Keywords :
dynamic programming; evolutionary computation; learning (artificial intelligence); abstract credit assignment; agent fitness function; partial observability; reinforcement learning; stochastic environments; stochastic policies; success-story algorithm; unknown environment; unknown temporal delays; value functions
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Industrial Electronics Society, 2000. IECON 2000. 26th Annual Conference of the IEEE
Conference_Location :
Nagoya, Japan
Print_ISBN :
0-7803-6456-2
Type :
conf
DOI :
10.1109/IECON.2000.972474
Filename :
972474