Title :
Optimality principle broken by considering structured plant variation and relevant robust reinforcement learning
Author :
Senda, Kei ; Tani, Yurika
Author_Institution :
Dept. of Aeronaut. & Astronaut., Kyoto Univ., Kyoto, Japan
Abstract :
In a general reinforcement learning problem, a plant (the state transition probabilities) is estimated, and a policy learned for the estimated plant is applied to the real plant. If the estimated plant differs from the real plant, the obtained policy may not work on the real plant. Therefore, a set of plants with variations is used for learning in order to obtain a policy that is robust against those variations. Bellman's principle of optimality does not hold when such a set of plants is used, so a typical dynamic programming algorithm cannot solve the problem. This study shows why the principle of optimality does not hold. It then formulates relaxed problems whose solutions can be obtained. Moreover, this study proposes methods to learn feasible policies efficiently. The effectiveness of the proposed approach is demonstrated by applying it to simple examples.
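To make the setting concrete, the following is a minimal illustrative sketch (not the authors' algorithm; all function and variable names are assumptions) of the robust objective described in the abstract: a fixed policy is evaluated on each plant in a finite set of variations, and its guaranteed performance is the worst case over the set. Because the same plant from the set governs every state and stage, the worst-case plant cannot be chosen independently at each state, which is the coupling that prevents a standard per-state dynamic programming recursion from directly producing the robust optimal policy.

```python
import numpy as np

def policy_value(P, R, policy, gamma=0.95):
    """Exact value of a fixed deterministic policy on one plant.

    P      : (S, A, S) array of transition probabilities
    R      : (S, A) array of expected rewards
    policy : length-S integer array, policy[s] = action taken in state s
    """
    S = P.shape[0]
    # Transition matrix and reward vector induced by the policy.
    P_pi = np.array([P[s, policy[s], :] for s in range(S)])
    r_pi = np.array([R[s, policy[s]] for s in range(S)])
    # Solve (I - gamma * P_pi) v = r_pi for the state values.
    return np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)

def worst_case_value(plants, R, policy, gamma=0.95):
    """Robust (worst-case) value of a policy over a finite set of plants.

    plants : list of (S, A, S) transition arrays, one per plant variation
    Returns the element-wise minimum value across plants, i.e. the value
    guaranteed regardless of which plant in the set is the real one.
    """
    values = np.stack([policy_value(P, R, policy, gamma) for P in plants])
    return values.min(axis=0)

if __name__ == "__main__":
    # Small random example with two structured variations of a nominal plant.
    rng = np.random.default_rng(0)
    S, A = 3, 2
    R = rng.uniform(size=(S, A))

    def random_plant():
        P = rng.uniform(size=(S, A, S))
        return P / P.sum(axis=2, keepdims=True)

    plants = [random_plant(), random_plant()]
    policy = np.zeros(S, dtype=int)  # a fixed candidate policy
    print(worst_case_value(plants, R, policy))
```

The sketch only scores a given policy; searching for the policy that maximizes this worst-case value is the harder problem the paper addresses, since the minimizing plant is shared across all states and the usual Bellman decomposition no longer applies.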
Keywords :
dynamic programming; learning (artificial intelligence); Bellman principle; general reinforcement learning problem; learning policy; optimality principle; robust reinforcement learning; state transition probabilities; structured plant variation; Correlation; Equations; Estimation; Game theory; Games; Learning; Robustness; optimality principle breaking; reinforcement learning; robust optimal policy
Conference_Title :
2011 IEEE International Conference on Systems, Man, and Cybernetics (SMC)
Conference_Location :
Anchorage, AK
Print_ISBN :
978-1-4577-0652-3
DOI :
10.1109/ICSMC.2011.6083711