Title :
MEC—A Near-Optimal Online Reinforcement Learning Algorithm for Continuous Deterministic Systems
Author :
Dongbin Zhao ; Yuanheng Zhu
Author_Institution :
State Key Lab. of Manage. & Control for Complex Syst., Inst. of Autom., Beijing, China
Abstract :
In this paper, the first probably approximately correct (PAC) algorithm for continuous deterministic systems that does not rely on knowledge of the system dynamics is proposed. It combines the state aggregation technique with the efficient exploration principle and makes full use of the samples observed online. A grid partitions the continuous state space into cells, in which observed samples are stored. A near-upper Q operator is defined to produce a near-upper Q function from the samples in each cell. The corresponding greedy policy effectively balances exploration and exploitation. Through rigorous analysis, we prove a polynomial bound on the number of time steps in which the algorithm executes nonoptimal actions. After finitely many steps, the final policy is near optimal in the PAC framework. The implementation requires no knowledge of the system and has low computational complexity. Simulation studies confirm that it performs better than other similar PAC algorithms.
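The following minimal Python sketch is not from the paper; it only illustrates, under simplifying assumptions, the two ideas the abstract highlights: grid-based state aggregation and an optimistically initialized ("near-upper") Q table whose greedy policy drives exploration in a deterministic system. All names (GridQAgent, cell_of, r_max, and so on) are hypothetical, and the monotone min-update stands in for the paper's near-upper Q operator.

    # Hypothetical sketch of grid aggregation + near-upper Q values.
    # Not the paper's MEC algorithm; a simplified illustration only.
    import numpy as np

    class GridQAgent:
        def __init__(self, lows, highs, cells_per_dim, n_actions,
                     gamma=0.95, r_max=1.0):
            self.lows = np.asarray(lows, dtype=float)
            self.highs = np.asarray(highs, dtype=float)
            self.cells = cells_per_dim
            self.gamma = gamma
            # Optimistic ("near-upper") initialization: every (cell, action)
            # starts at the value upper bound r_max / (1 - gamma), so
            # under-sampled cells look attractive to the greedy policy.
            q_upper = r_max / (1.0 - gamma)
            n_cells = cells_per_dim ** len(self.lows)
            self.Q = np.full((n_cells, n_actions), q_upper)

        def cell_of(self, state):
            # Map a continuous state to the index of its grid cell.
            ratios = (np.asarray(state) - self.lows) / (self.highs - self.lows)
            idx = np.clip((ratios * self.cells).astype(int), 0, self.cells - 1)
            return int(np.ravel_multi_index(idx, (self.cells,) * len(idx)))

        def act(self, state):
            # Greedy w.r.t. the near-upper Q values: exploits actions known
            # to be good and is drawn to optimistic (under-sampled) cells,
            # balancing exploration and exploitation without randomness.
            return int(np.argmax(self.Q[self.cell_of(state)]))

        def update(self, state, action, reward, next_state):
            # Deterministic-transition backup using the sample in the cell;
            # taking the min keeps each estimate an upper bound, only
            # lowering it toward the backed-up target.
            c, c_next = self.cell_of(state), self.cell_of(next_state)
            target = reward + self.gamma * self.Q[c_next].max()
            self.Q[c, action] = min(self.Q[c, action], target)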
Keywords :
computational complexity; continuous systems; learning (artificial intelligence); MEC algorithm; PAC algorithm; continuous deterministic system; exploration principle; greedy policy; near-optimal online reinforcement learning algorithm; near-upper Q function; near-upper Q operator; polynomial time bound; probably approximately correct algorithm; state aggregation technique; system dynamics; Algorithm design and analysis; Approximation algorithms; Heuristic algorithms; Learning systems; Partitioning algorithms; Polynomials; Upper bound; Efficient exploration; probably approximately correct (PAC); reinforcement learning (RL); state aggregation
Journal_Title :
IEEE Transactions on Neural Networks and Learning Systems
DOI :
10.1109/TNNLS.2014.2371046