DocumentCode :
3478340
Title :
Neural Q-learning in motion planning for mobile robot
Author :
Qin, Zheng ; Gu, Jason
Author_Institution :
Dept. of Electr. & Comput. Eng., Dalhousie Univ., Halifax, NS, Canada
fYear :
2009
fDate :
5-7 Aug. 2009
Firstpage :
1024
Lastpage :
1028
Abstract :
To address the poor convergence of neural networks used to generalize reinforcement learning, the neural and case-based Q-learning (NCQL) algorithm is proposed. The basic principle of NCQL is that reinforcement learning is generalized by a neural network, while stored cases improve the convergence and learning efficiency. The elements of the learning algorithm are worked out for the application of motion planning for a mobile robot. Simulation results show the validity and practicality of the NCQL algorithm.
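The abstract describes Q-learning generalized by a neural network and stabilized with stored cases. The following is a minimal sketch of that idea, assuming a discrete action set, a feature-vector state, a small two-layer network, and a replay-style use of stored "cases"; the architecture, hyperparameters, and the exact role of cases are illustrative assumptions, not the paper's NCQL formulation.

```python
# Sketch of neural Q-learning with a small case memory (illustrative only).
import numpy as np

class NeuralQLearner:
    def __init__(self, state_dim, n_actions, hidden=32, lr=0.01,
                 gamma=0.95, epsilon=0.1, case_capacity=500):
        rng = np.random.default_rng(0)
        # Two-layer network: state features -> tanh hidden -> Q-value per action.
        self.W1 = rng.normal(0, 0.1, (hidden, state_dim))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0, 0.1, (n_actions, hidden))
        self.b2 = np.zeros(n_actions)
        self.lr, self.gamma, self.epsilon = lr, gamma, epsilon
        self.cases = []                  # stored (s, a, r, s_next, done) cases
        self.case_capacity = case_capacity
        self.rng = rng

    def q_values(self, s):
        h = np.tanh(self.W1 @ s + self.b1)
        return self.W2 @ h + self.b2, h

    def act(self, s):
        # Epsilon-greedy action selection over the network's Q estimates.
        if self.rng.random() < self.epsilon:
            return int(self.rng.integers(len(self.b2)))
        q, _ = self.q_values(s)
        return int(np.argmax(q))

    def store_case(self, s, a, r, s_next, done):
        if len(self.cases) >= self.case_capacity:
            self.cases.pop(0)
        self.cases.append((s, a, r, s_next, done))

    def update(self, s, a, r, s_next, done):
        # One-step Q-learning target: r + gamma * max_a' Q(s', a').
        q, h = self.q_values(s)
        q_next, _ = self.q_values(s_next)
        target = r + (0.0 if done else self.gamma * np.max(q_next))
        td_error = target - q[a]
        # Gradient descent on 0.5 * td_error^2, backpropagated through tanh.
        grad_out = np.zeros_like(q)
        grad_out[a] = -td_error
        self.W2 -= self.lr * np.outer(grad_out, h)
        self.b2 -= self.lr * grad_out
        dh = (self.W2.T @ grad_out) * (1.0 - h ** 2)
        self.W1 -= self.lr * np.outer(dh, s)
        self.b1 -= self.lr * dh

    def replay_cases(self, batch=8):
        # Re-train on stored cases; this replay role for the "cases" in the
        # abstract is an assumption made for this sketch.
        if not self.cases:
            return
        idxs = self.rng.choice(len(self.cases),
                               size=min(batch, len(self.cases)),
                               replace=False)
        for i in idxs:
            self.update(*self.cases[i])
```

In use, the agent would call act(), apply the action in the motion-planning environment (e.g., grid or sensor features around the robot), then call update() and store_case(), with replay_cases() run periodically to reinforce past experience.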
Keywords :
convergence; learning (artificial intelligence); mobile robots; neurocontrollers; path planning; case based Q-learning algorithm; convergence property; learning efficiency; mobile robot; motion planning; neural network; reinforcement learning; Convergence; Interference; Learning; Logistics; Mobile robots; Motion planning; Multi-layer neural network; Neural networks; Robotics and automation; Sampling methods; Reinforcement learning; mobile robot; motion planning; neural network;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Automation and Logistics, 2009. ICAL '09. IEEE International Conference on
Conference_Location :
Shenyang
Print_ISBN :
978-1-4244-4794-7
Electronic_ISBN :
978-1-4244-4795-4
Type :
conf
DOI :
10.1109/ICAL.2009.5262570
Filename :
5262570