Title :
Challenges for the policy representation when applying reinforcement learning in robotics
Author :
Kormushev, Petar ; Calinon, Sylvain ; Caldwell, Darwin G. ; Ugurlu, Barkan
Author_Institution :
Dept. of Adv. Robot., Ist. Italiano di Tecnol., Genova, Italy
Abstract :
A summary of the state of the art in reinforcement learning for robotics is given, covering both algorithms and policy representations. Numerous challenges faced by the policy representation in robotics are identified. Two recent examples of applying reinforcement learning to real robots are described: a pancake-flipping task and a bipedal-walking energy-minimization task. In both examples, a state-of-the-art Expectation-Maximization-based reinforcement learning algorithm is used, but a different policy representation is proposed and evaluated for each task. The two proposed policy representations offer viable solutions to four rarely addressed challenges in policy representations: correlations, adaptability, multi-resolution, and globality. Both the successes and the practical difficulties encountered in these examples are discussed.
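The abstract does not spell out the update rule of the Expectation-Maximization-based algorithm it mentions. The sketch below only illustrates a generic reward-weighted, EM-style policy-parameter update of the kind commonly used in this line of work; every identifier, the toy return function, and the exponential reward weighting are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

# Minimal sketch of a generic EM-style (reward-weighted) policy-parameter update.
# All names and the toy objective are illustrative assumptions, not the paper's method.

rng = np.random.default_rng(0)

def rollout_return(theta):
    """Placeholder return function; a real setup would execute the policy on a robot or simulator."""
    return -np.sum((theta - 1.0) ** 2)  # toy objective peaked at theta = 1

n_params, n_rollouts, sigma = 5, 20, 0.1
theta = np.zeros(n_params)  # policy parameters (e.g., weights of a motion representation)

for iteration in range(100):
    # Exploration: perturb the current parameters and collect the resulting returns.
    eps = rng.normal(0.0, sigma, size=(n_rollouts, n_params))
    returns = np.array([rollout_return(theta + e) for e in eps])

    # EM-style update: exponentiate and normalize the returns into importance weights,
    # then take a reward-weighted average of the exploration offsets.
    shifted = returns - returns.max()
    weights = np.exp(shifted / (returns.std() + 1e-9))
    theta = theta + weights @ eps / weights.sum()

print("learned parameters:", np.round(theta, 3))
```

The reward-weighted averaging step is what makes such updates derivative-free, which is one reason EM-based policy search is attractive for physical robot experiments with few rollouts.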
Keywords :
control engineering computing; expectation-maximisation algorithm; learning (artificial intelligence); legged locomotion; bipedal walking energy minimization task; expectation-maximization-based reinforcement learning algorithm; pancake flipping task; policy representation; robotics; Correlation; Couplings; Learning; Legged locomotion; Minimization; Trajectory
Conference_Title :
The 2012 International Joint Conference on Neural Networks (IJCNN)
Conference_Location :
Brisbane, QLD, Australia
Print_ISBN :
978-1-4673-1488-6
ISSN :
2161-4393
DOI :
10.1109/IJCNN.2012.6252758