DocumentCode
2954479
Title
An incremental approach to learning generalizable robot tasks from human demonstration
Author
Ghalamzan E., Amir M.; Paxton, Chris; Hager, Gregory D.; Bascetta, Luca
Author_Institution
Department of Electronics, Politecnico di Milano, Milan, Italy
fYear
2015
fDate
26-30 May 2015
Firstpage
5616
Lastpage
5621
Abstract
Dynamic Movement Primitives (DMPs) are a common method for learning a control policy for a task from demonstration. This control policy consists of differential equations that can generate a smooth trajectory to a new goal point. However, DMPs have only a limited ability to generalize a demonstration to new environments and to solve problems such as obstacle avoidance. Moreover, standard DMP learning does not cope with the noise inherent in human demonstrations. Here, we propose an approach to robot learning from demonstration that can generalize noisy task demonstrations to a new goal point and to an environment with obstacles. The resulting control policy incorporates different types of learning from demonstration, corresponding to the different types of observational learning described in developmental psychology.
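The abstract refers to a DMP as a set of differential equations that reproduce a demonstrated motion as a smooth trajectory toward a (possibly new) goal point. The following is a minimal sketch of a one-dimensional Ijspeert-style DMP transformation system, with the learned forcing term omitted and with illustrative gain, time-constant, and step-size values; it is not code from the paper, only an indication of how goal re-targeting works in a DMP.

    # Minimal 1-D DMP transformation system (Ijspeert-style), zero forcing
    # term for brevity; all numeric parameters are illustrative assumptions.
    import numpy as np

    def dmp_rollout(y0, goal, tau=1.0, alpha=25.0, beta=25.0 / 4.0,
                    dt=0.001, steps=2000):
        """Integrate tau*dz = alpha*(beta*(g - y) - z), tau*dy = z."""
        y, z = y0, 0.0
        traj = []
        for _ in range(steps):
            dz = (alpha * (beta * (goal - y) - z)) / tau
            dy = z / tau
            z += dz * dt
            y += dy * dt
            traj.append(y)
        return np.array(traj)

    # Re-targeting the primitive to a new goal only requires changing `goal`;
    # the critically damped dynamics (beta = alpha/4) keep the path smooth.
    print(dmp_rollout(y0=0.0, goal=1.0)[-1])   # converges near 1.0
    print(dmp_rollout(y0=0.0, goal=2.5)[-1])   # converges near 2.5

In a full DMP, a forcing term learned from the demonstration shapes the trajectory; the paper's contribution addresses the cases this basic formulation handles poorly, namely noisy demonstrations and environments with obstacles.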
Keywords
collision avoidance; dexterous manipulators; differential equations; intelligent robots; optimal control; trajectory control; DMP; control policy learning; developmental psychology; dynamic movement primitives; generalizable robot task learning; goal point; human demonstration; incremental approach; noisy task demonstrations; observational learning; obstacle avoidance; smooth trajectory; Computational modeling; Emulation; Noise; Optimal control; Robots; Training; Trajectory
fLanguage
English
Publisher
IEEE
Conference_Title
2015 IEEE International Conference on Robotics and Automation (ICRA)
Conference_Location
Seattle, WA
Type
conf
DOI
10.1109/ICRA.2015.7139985
Filename
7139985