DocumentCode :
2376866
Title :
Optimality principle broken by considering structured plant variation and relevant robust reinforcement learning
Author :
Senda, Kei ; Tani, Yurika
Author_Institution :
Dept. of Aeronaut. & Astronaut., Kyoto Univ., Kyoto, Japan
fYear :
2011
fDate :
9-12 Oct. 2011
Firstpage :
477
Lastpage :
483
Abstract :
In a general reinforcement learning problem, a plant (its state transition probabilities) is estimated, and a policy learned for the estimated plant is applied to the real plant. If the estimated plant differs from the real plant, the obtained policy may not work on the real plant. Therefore, a set of plants with variations is used for learning in order to obtain a policy that is robust against those variations. Bellman's principle of optimality does not hold when such a set of plants is used, so a typical dynamic programming algorithm cannot solve the problem. This study shows why the principle of optimality does not hold, and it formulates relaxed problems whose solutions can be obtained. Moreover, it proposes methods to learn feasible policies efficiently. The effectiveness of the proposed approach is demonstrated by applying it to simple examples.
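As an illustration of why a stage-by-stage dynamic program does not apply here, the minimal Python sketch below (not the authors' algorithm; the toy two-state problem, plant set, and all names are hypothetical) evaluates each deterministic stationary policy against every candidate plant under the assumption that the same plant acts at every time step, and selects the policy with the best worst-case value by brute-force enumeration.

```python
import itertools
import numpy as np

# Illustrative sketch only: the robust objective is
#   max over stationary policies pi of  min over plants P in the set of  V_P^pi(s0),
# where one shared plant P governs every time step.  Because the inner
# minimisation couples all stages through that shared P, the problem does not
# decompose stage by stage, so the robust-optimal policy is found here by
# enumeration rather than by a Bellman recursion.

n_states, n_actions, gamma, horizon = 2, 2, 0.9, 50
rewards = np.array([[0.0, 1.0],
                    [1.0, 0.0]])                      # r(s, a), hypothetical values

# Two candidate plants: plant[a][s, s'] = Pr(s' | s, a)
plant_a = [np.array([[0.9, 0.1], [0.2, 0.8]]),
           np.array([[0.5, 0.5], [0.6, 0.4]])]
plant_b = [np.array([[0.3, 0.7], [0.7, 0.3]]),
           np.array([[0.8, 0.2], [0.1, 0.9]])]
plants = [plant_a, plant_b]

def value(policy, plant, s0=0):
    """Finite-horizon value of a deterministic stationary policy on one plant."""
    v = np.zeros(n_states)
    for _ in range(horizon):
        v_next = np.zeros(n_states)
        for s in range(n_states):
            a = policy[s]
            v_next[s] = rewards[s, a] + gamma * plant[a][s] @ v
        v = v_next
    return v[s0]

best_policy, best_worst_case = None, -np.inf
for policy in itertools.product(range(n_actions), repeat=n_states):
    # Worst case over the plant set, with the same plant at every step.
    worst_case = min(value(policy, p) for p in plants)
    if worst_case > best_worst_case:
        best_policy, best_worst_case = policy, worst_case

print("robust policy:", best_policy, "worst-case value:", best_worst_case)
```

If the plant were instead allowed to vary independently at each step (a rectangular uncertainty set), the minimisation could be pushed inside the recursion and a robust dynamic program would work; it is the structured, horizon-wide coupling to one plant, as in the abstract, that breaks the decomposition.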
Keywords :
dynamic programming; learning (artificial intelligence); Bellman principle; dynamic programming; general reinforcement learning problem; learning policy; optimality principle; robust reinforcement learning; state transition probabilities; structured plant variation; Correlation; Equations; Estimation; Game theory; Games; Learning; Robustness; optimality principle breaking; reinforcement learning; robust optimal policy; structured plant variation;
fLanguage :
English
Publisher :
IEEE
Conference_Title :
2011 IEEE International Conference on Systems, Man, and Cybernetics (SMC)
Conference_Location :
Anchorage, AK
ISSN :
1062-922X
Print_ISBN :
978-1-4577-0652-3
Type :
conf
DOI :
10.1109/ICSMC.2011.6083711
Filename :
6083711