DocumentCode :
441594
Title :
Multi-Step Truncated Q Learning Algorithm
Author :
Chen, Sheng-Lei ; Wu, Hui-Zhong ; Han, Xiang-Lan ; Xiao, Liang
Volume :
1
fYear :
2005
fDate :
18-21 Aug. 2005
Firstpage :
194
Lastpage :
198
Abstract :
Q learning is of great importance in reinforcement learning. To compensate for the drawbacks of Q learning and the Q(λ) algorithm, the multi-step truncated Q learning (MTQ) algorithm is proposed in this paper. It uses information from the next k steps to update the current Q value, so it takes more long-term benefit into account while also reducing computational complexity, achieving a good balance between update speed and computational cost. Experiments demonstrate the effectiveness of the algorithm.
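As a rough illustration of the k-step truncated update described in the abstract, the sketch below shows a tabular k-step Q-value update in Python. The names (Q, trajectory, alpha, gamma, k) and the exact form of the target are assumptions for illustration, not details taken from the paper.

import numpy as np

def truncated_q_update(Q, trajectory, t, k, alpha=0.1, gamma=0.9):
    """Sketch: update Q[s_t, a_t] using rewards from the next k steps plus a
    bootstrapped estimate at step t+k (an n-step-style truncated target)."""
    s_t, a_t, _ = trajectory[t]          # trajectory holds (state, action, reward) tuples
    # Accumulate discounted rewards over the truncated k-step horizon.
    G = 0.0
    for i in range(k):
        if t + i >= len(trajectory):
            break
        _, _, r = trajectory[t + i]
        G += (gamma ** i) * r
    # Bootstrap with the greedy Q-value at the state reached after k steps, if available.
    if t + k < len(trajectory):
        s_tk, _, _ = trajectory[t + k]
        G += (gamma ** k) * np.max(Q[s_tk])
    # Move the current estimate toward the truncated k-step target.
    Q[s_t, a_t] += alpha * (G - Q[s_t, a_t])
    return Q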
Keywords :
MTQ algorithm; Q learning; Q(λ) algorithm; Reinforcement learning; Artificial intelligence; Artificial neural networks; Computer science; Humans; Intelligent networks; Learning systems; Machine learning; Machine learning algorithms; Supervised learning; Unsupervised learning
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Proceedings of the 2005 International Conference on Machine Learning and Cybernetics
Conference_Location :
Guangzhou, China
Print_ISBN :
0-7803-9091-1
Type :
conf
DOI :
10.1109/ICMLC.2005.1526943
Filename :
1526943