DocumentCode
458910
Title
The Optimality Analysis of Hybrid Reinforcement Learning Combined with SVMs
Author
Wang, Xue-ning ; Chen, Wei ; Liu, Da-Xue ; Wu, Tao ; He, Han-gen
Author_Institution
Coll. of Mechatronics Eng. & Autom., Nat. Univ. of Defense Technol., Changsha
Volume
1
fYear
2006
fDate
16-18 Oct. 2006
Firstpage
936
Lastpage
941
Abstract
To reduce the learning time of reinforcement learning (RL), hybrid algorithms that combine reinforcement learning with various supervised learning methods have attracted much research interest. However, global convergence and optimality remain among the main open problems for hybrid reinforcement learning algorithms. In this paper, the convergence of a hybrid RL algorithm combined with support vector machines (SVMs) is analyzed theoretically. It is shown that, by making use of policy gradient learning and SVM regression, the hybrid algorithm can easily escape from local optima.
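The abstract pairs policy gradient learning with SVM regression. The following is a minimal sketch of that general idea only, not the authors' algorithm: it assumes a toy corridor environment, a softmax policy with linear features, Monte-Carlo returns, and scikit-learn's SVR used as a learned value baseline for the policy-gradient update. All environment details, hyperparameters, and the choice of SVR are illustrative assumptions.

import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Toy 1-D corridor: states 0..N-1, actions {0: left, 1: right},
# reward 1 only on reaching the goal state N-1. (Assumed environment.)
N = 10

def step(s, a):
    s2 = min(max(s + (1 if a == 1 else -1), 0), N - 1)
    done = (s2 == N - 1)
    return s2, (1.0 if done else 0.0), done

def features(s):
    return np.array([s / (N - 1)])          # simple scalar state feature

def policy_probs(theta, s):
    logits = theta @ features(s)             # one logit per action
    z = np.exp(logits - logits.max())
    return z / z.sum()

theta = np.zeros((2, 1))                     # softmax-policy parameters
alpha = 0.5                                  # policy-gradient step size (assumed)
svr = SVR(kernel='rbf', C=10.0)              # SVM regressor for the value baseline
value_fit = False

for episode in range(200):
    s, traj, done, t = 0, [], False, 0
    while not done and t < 50:
        p = policy_probs(theta, s)
        a = rng.choice(2, p=p)
        s2, r, done = step(s, a)
        traj.append((s, a, r))
        s, t = s2, t + 1

    # Monte-Carlo return for each visited state
    G, returns = 0.0, []
    for (_, _, r) in reversed(traj):
        G = r + 0.95 * G
        returns.append(G)
    returns.reverse()

    # REINFORCE-style update with the SVR value estimate as baseline
    for (s_t, a_t, _), G_t in zip(traj, returns):
        baseline = svr.predict(features(s_t).reshape(1, -1))[0] if value_fit else 0.0
        p = policy_probs(theta, s_t)
        grad_log = -p.reshape(-1, 1) @ features(s_t).reshape(1, -1)
        grad_log[a_t] += features(s_t)       # grad of log softmax for chosen action
        theta += alpha * (G_t - baseline) * grad_log

    # Refit the SVM regressor on the freshly observed (state, return) pairs
    X = np.array([features(s_t) for (s_t, _, _) in traj])
    y = np.array(returns)
    if len(X) >= 2:
        svr.fit(X, y)
        value_fit = True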
Keywords
convergence; gradient methods; learning (artificial intelligence); regression analysis; support vector machines; SVM regression; global convergence; hybrid reinforcement learning; optimality analysis; policy gradient learning; support vector machine; Algorithm design and analysis; Convergence; Educational institutions; Gradient methods; Helium; Learning; State estimation; State-space methods; Stochastic processes; Support vector machines;
fLanguage
English
Publisher
ieee
Conference_Titel
Intelligent Systems Design and Applications, 2006. ISDA '06. Sixth International Conference on
Conference_Location
Jinan
Print_ISBN
0-7695-2528-8
Type
conf
DOI
10.1109/ISDA.2006.268
Filename
4021565
Link To Document