Title :
Using reinforcement learning to improve exploration trajectories for error minimization
Author :
Kollar, Thomas ; Roy, Nicholas
Author_Institution :
Comput. Sci. & AI Lab., MIT, Cambridge, MA
Abstract :
The mapping and localization problems have received considerable attention in robotics recently. The exploration problem that drives mapping has started to attract similar attention, as the ease of construction and the quality of a map depend strongly on the strategy used to acquire sensor data for it. Most exploration strategies concentrate on selecting the next best measurement to take, trading off information gathering against regular relocalization. What has not been studied so far is the effect the robot controller has on map quality while exploration plans are executed. Certain kinds of robot motion (e.g., sharp turns) are hard to estimate correctly and increase the likelihood of errors in the mapping process. We show how reinforcement learning can be used to generate good motion control while executing a simple information-gathering exploration strategy. We show that the learned policy reduces overall map uncertainty by reducing the amount of uncertainty generated by robot motion.
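The abstract's core idea — learning motion commands that trade progress against the estimation error they induce — can be sketched as a toy tabular Q-learning example. Everything here (the corridor world, the per-action noise costs, the state layout) is a hypothetical illustration, not the authors' method or code; it only shows how an RL agent can learn to prefer low-noise motions such as gentle turns over sharp ones:

```python
import random

# Toy illustration (hypothetical, not the paper's implementation):
# a 5-cell corridor with bends at cells 1 and 3. Each motion primitive
# carries a "process noise" cost modeling how hard it is to estimate;
# sharp turns are the noisiest, as the abstract notes.
ACTIONS = ["forward", "gentle_turn", "sharp_turn"]
NOISE = {"forward": 0.1, "gentle_turn": 0.3, "sharp_turn": 1.0}
CORNERS = {1, 3}   # cells where the corridor bends and the robot must turn
GOAL = 5

def step(state, action):
    """Advance one cell if the action suits the cell; pay the noise cost."""
    at_corner = state in CORNERS
    turning = action in ("gentle_turn", "sharp_turn")
    nxt = state + 1 if at_corner == turning else state  # wrong action stalls
    reward = -NOISE[action] + (10.0 if nxt == GOAL else 0.0)
    return nxt, reward, nxt == GOAL

def train(episodes=3000, alpha=0.5, gamma=0.95, eps=0.2, seed=0):
    """Standard tabular Q-learning with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(GOAL) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            nxt, r, done = step(s, a)
            best_next = 0.0 if done else max(q[(nxt, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = nxt
    return q

def policy(q):
    """Greedy policy: the learned motion primitive for each cell."""
    return {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(GOAL)}
```

Because both turn primitives make it around a bend but the gentle turn incurs less noise, the learned policy drives forward on straight cells and takes gentle turns at the bends — the qualitative behavior the paper reports, where the controller avoids motions that inflate map uncertainty.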
Keywords :
learning (artificial intelligence); mobile robots; motion control; path planning; position control; error minimization; exploration trajectories; mapping quality; motion control; regular relocalization; reinforcement learning; robot motion; Artificial intelligence; Computer errors; Computer science; Current measurement; Gain measurement; Learning; Robot control; Robot sensing systems; Simultaneous localization and mapping; Trajectory;
Conference_Titel :
Proceedings of the 2006 IEEE International Conference on Robotics and Automation (ICRA 2006)
Conference_Location :
Orlando, FL
Print_ISBN :
0-7803-9505-0
DOI :
10.1109/ROBOT.2006.1642211