DocumentCode :
387073
Title :
Residual-gradient-based neural reinforcement learning for the optimal control of an acrobot
Author :
Xu, Xin ; He, Han-gen
Author_Institution :
Dept. of Autom. Control, Nat. Univ. of Defense Technol., Changsha, China
fYear :
2002
fDate :
2002
Firstpage :
758
Lastpage :
763
Abstract :
Based on the idea of dynamic programming, reinforcement learning (RL) has become an important model-free method for solving difficult optimal control problems. In this paper, a novel neural RL method is proposed to solve the time-optimal control problem of a class of under-actuated robots, the acrobot. The method uses a modified residual gradient reinforcement learning algorithm called RGNP (residual gradient with nonstationary policy). The RGNP algorithm not only has guaranteed convergence under certain conditions but also ensures the performance of the approximated optimal policy, which makes it superior to previous residual gradient algorithms. Simulation results for the learning control of the acrobot illustrate the effectiveness of the proposed method.
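The record does not reproduce the RGNP update rule itself. As a hedged illustration of the residual-gradient family the abstract refers to, the Python sketch below shows the classic residual-gradient value update (Baird, 1995) on which such methods are built, for a linear value function; the function name, linear-feature assumption, and parameters are illustrative choices, not details from the paper, and RGNP's nonstationary-policy modification is not reproduced here.

import numpy as np

def residual_gradient_update(w, phi_s, phi_s_next, reward, gamma, alpha):
    # Bellman residual: r + gamma * V(s') - V(s), with V(s) = w . phi(s)
    delta = reward + gamma * np.dot(w, phi_s_next) - np.dot(w, phi_s)
    # The gradient of the squared residual is taken through BOTH V(s) and
    # V(s'); this is what distinguishes residual-gradient methods from
    # plain TD(0) and underlies their convergence guarantee with
    # function approximation.
    grad = gamma * phi_s_next - phi_s
    # Descend the squared-Bellman-residual loss 0.5 * delta**2
    return w - alpha * delta * grad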
Keywords :
convergence; dynamic programming; gradient methods; learning (artificial intelligence); manipulators; neurocontrollers; time optimal control; RGNP; acrobot; dynamic programming; guaranteed convergence; modified residual gradient reinforcement learning algorithm; neural RL method; nonstationary policy; optimal control; residual gradient algorithms; residual-gradient-based neural reinforcement learning; time-optimal control problem; two-link manipulator; under-actuated robots; Algorithm design and analysis; Approximation algorithms; Convergence; Dynamic programming; Function approximation; Learning; Optimal control; Robots;
fLanguage :
English
Publisher :
IEEE
Conference_Title :
Proceedings of the 2002 IEEE International Symposium on Intelligent Control
ISSN :
2158-9860
Print_ISBN :
0-7803-7620-X
Type :
conf
DOI :
10.1109/ISIC.2002.1157857
Filename :
1157857