Title :
Receding Horizon Cache and Extreme Learning Machine based Reinforcement Learning
Author :
Zhifei Shao ; Meng Joo Er ; Guang-Bin Huang
Author_Institution :
Sch. of Electr. & Electron. Eng., Nanyang Technol. Univ., Singapore, Singapore
Abstract :
Function approximators have been extensively used in Reinforcement Learning (RL) to deal with large or continuous state-space problems. However, batch learning Neural Networks (NNs), one of the most common approximators, have rarely been applied to RL. In this paper, possible reasons for this are laid out and a solution is proposed. Specifically, a Receding Horizon Cache (RHC) structure is designed to collect training data for the NN by dynamically archiving state-action pairs and actively updating their Q-values, which makes batch learning of NNs much easier to implement. Together with the Extreme Learning Machine (ELM), a new RL with function approximation algorithm, termed RHC and ELM based RL (RHC-ELM-RL), is proposed. A mountain car task is carried out to test RHC-ELM-RL and compare its performance with that of other algorithms.
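The abstract describes the RHC mechanism only at a high level: a cache that archives recent state-action pairs, keeps their Q-value targets up to date, and feeds them as a batch to an ELM approximator. The sketch below is one plausible, minimal reading of that idea, not the authors' implementation; the class and parameter names (RecedingHorizonCache, ELMRegressor, horizon, n_hidden) and the way Q-targets are stored are illustrative assumptions.

```python
import numpy as np

class ELMRegressor:
    """Single-hidden-layer ELM: random input weights, least-squares output weights."""
    def __init__(self, n_inputs, n_hidden=50, rng=None):
        rng = np.random.default_rng(0) if rng is None else rng
        self.W = rng.uniform(-1.0, 1.0, size=(n_inputs, n_hidden))  # fixed random weights
        self.b = rng.uniform(-1.0, 1.0, size=n_hidden)              # fixed random biases
        self.beta = np.zeros(n_hidden)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)          # hidden-layer activations

    def fit(self, X, y):
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ y            # batch least-squares output weights

    def predict(self, X):
        return self._hidden(X) @ self.beta


class RecedingHorizonCache:
    """Keeps the most recent state-action pairs and their current Q-value targets."""
    def __init__(self, horizon=200):
        self.horizon = horizon
        self.features, self.targets = [], []

    def add(self, state, action, q_value):
        self.features.append(np.append(state, action))
        self.targets.append(q_value)
        if len(self.features) > self.horizon:         # drop the oldest entry
            self.features.pop(0)
            self.targets.pop(0)

    def update_target(self, index, q_value):
        self.targets[index] = q_value                 # actively revise a stored Q-value

    def batch(self):
        return np.array(self.features), np.array(self.targets)


# Usage sketch: archive transitions with placeholder Q-targets, then batch-train the ELM.
cache = RecedingHorizonCache(horizon=100)
rng = np.random.default_rng(1)
for _ in range(100):
    state, action = rng.normal(size=2), rng.integers(0, 3)
    cache.add(state, action, q_value=rng.normal())    # real targets would come from RL updates
X, y = cache.batch()
elm = ELMRegressor(n_inputs=X.shape[1], n_hidden=40)
elm.fit(X, y)
print(elm.predict(X[:5]))
```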
Keywords :
approximation theory; learning (artificial intelligence); neural nets; ELM; NN; RHC structure; RL; batch learning Neural Networks; continuous space problems; extreme learning machine based reinforcement learning; function approximation algorithm; mountain car task; receding horizon cache structure; Approximation algorithms; Artificial neural networks; Educational institutions; Function approximation; Heuristic algorithms; Training; Training data;
Conference_Title :
2012 12th International Conference on Control Automation Robotics & Vision (ICARCV)
Conference_Location :
Guangzhou
Print_ISBN :
978-1-4673-1871-6
Electronic_ISBN :
978-1-4673-1870-9
DOI :
10.1109/ICARCV.2012.6485384