DocumentCode :
617731
Title :
An improved Q-learning algorithm for an autonomous mobile robot navigation problem
Author :
Muhammad, Jawad ; Bucak, Omur
Author_Institution :
Comput. Eng. Dept., Mevlana (Rumi) Univ., Konya, Turkey
fYear :
2013
fDate :
9-11 May 2013
Firstpage :
239
Lastpage :
243
Abstract :
This work applies the popular reinforcement learning method of Q-learning to a typical robot navigation control problem. The set-up is two-dimensional (2D): a robot learns a path through its environment from its home position to a final destination (the goal state) while avoiding any obstacles encountered along the way. During navigation, the trajectory of all visited state-action pairs is stored and then replayed in the backward direction to propagate the refined Q-values along the path from any visited state to the goal state. This backward replay greatly improves the convergence rate of the Q-table, as the simulation results indicate an excellent level of performance when compared with traditional Q-learning.
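Algorithm_Sketch :
A minimal Python sketch of Q-learning with backward trajectory replay, as outlined in the abstract. The toy grid environment, reward values, and hyperparameters (ALPHA, GAMMA, EPSILON) are illustrative assumptions; the paper's actual set-up and update parameters are not specified in this record.

import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # assumed hyperparameters
ACTIONS = ["up", "down", "left", "right"]

Q = defaultdict(float)  # Q[(state, action)] -> value, defaults to 0.0

class GridEnv:
    # Hypothetical 5x5 grid: start (0, 0), goal (4, 4), one obstacle.
    GOAL, OBSTACLES = (4, 4), {(2, 2)}

    def reset(self):
        return (0, 0)

    def step(self, state, action):
        dx, dy = {"up": (0, -1), "down": (0, 1),
                  "left": (-1, 0), "right": (1, 0)}[action]
        nx, ny = state[0] + dx, state[1] + dy
        if not (0 <= nx < 5 and 0 <= ny < 5) or (nx, ny) in self.OBSTACLES:
            return state, -1.0, False        # blocked move: penalty, stay put
        if (nx, ny) == self.GOAL:
            return (nx, ny), 10.0, True      # goal reached
        return (nx, ny), -0.1, False         # ordinary step cost

def choose_action(state):
    # Epsilon-greedy action selection.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(s, a, r, s_next):
    # Standard one-step Q-learning update.
    best_next = max(Q[(s_next, a2)] for a2 in ACTIONS)
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

def run_episode(env):
    trajectory = []
    s, done = env.reset(), False
    while not done and len(trajectory) < 1000:
        a = choose_action(s)
        s_next, r, done = env.step(s, a)
        q_update(s, a, r, s_next)            # normal forward update
        trajectory.append((s, a, r, s_next))
        s = s_next
    # Backward replay: sweep the stored trajectory from the goal back
    # toward the start so refined Q-values propagate to earlier states.
    for s, a, r, s_next in reversed(trajectory):
        q_update(s, a, r, s_next)

env = GridEnv()
for _ in range(200):
    run_episode(env)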
Keywords :
collision avoidance; learning (artificial intelligence); mobile robots; 2D set-up; Q-learning algorithm; Q-table; autonomous mobile robot navigation problem; convergence rate reduction; obstacle avoidance; reinforcement learning methodology; robot control navigation problem; state-action pair trajectory; two dimensional set-up; Adaptation models; Indexes; Navigation; Robots; Trajectory; Mobile Robot Navigation; Q-learning; Reinforcement learning; Robot Control;
fLanguage :
English
Publisher :
IEEE
Conference_Title :
Technological Advances in Electrical, Electronics and Computer Engineering (TAEECE), 2013 International Conference on
Conference_Location :
Konya
Print_ISBN :
978-1-4673-5612-1
Type :
conf
DOI :
10.1109/TAEECE.2013.6557278
Filename :
6557278