Title :
Integral reinforcement learning with explorations for continuous-time nonlinear systems
Author :
Lee, Jae Young ; Park, Jin Bae ; Choi, Yoon Ho
Author_Institution :
Sch. of Electr. & Electron. Eng., Yonsei Univ., Seoul, South Korea
Abstract :
This paper focuses on integral reinforcement learning (I-RL) for input-affine continuous-time (CT) nonlinear systems in which a known time-varying signal, called an exploration signal, is injected through the control input. First, we propose a modified I-RL method that effectively eliminates the effects of the exploration signal on the algorithm. Next, based on this result, an actor-critic I-RL technique is presented for the same class of nonlinear systems with completely unknown dynamics. Finally, a least-squares implementation with exact parameterizations is presented for each proposed method, solvable under the given persistently exciting (PE) conditions. A simulation example is given to verify the effectiveness of the proposed methods.
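To illustrate the kind of update the abstract describes, the following is a minimal sketch (not the authors' exact algorithm) of a plain I-RL critic step: for a scalar input-affine system dx/dt = a*x + b*u with a fixed feedback policy plus an injected exploration signal, the critic V(x) = w*x^2 is fit by least squares from the integral Bellman relation V(x(t)) - V(x(t+T)) = ∫ (x² + u²) dτ over several trajectory segments (so the regressor satisfies a PE-like condition). All system parameters, gains, and the exploration signal here are hypothetical; note this naive version leaves the exploration-induced bias in the estimate, which is precisely what the paper's modified I-RL method removes.

```python
import numpy as np

# Hypothetical scalar system dx/dt = a*x + b*u; current policy u = -K*x + e(t),
# where e(t) is a known exploration signal injected through the control input.
a, b = -1.0, 1.0
K = 0.5
dt, T = 1e-3, 0.05  # Euler integration step and reinforcement interval length


def segment(x0, t0):
    """Simulate one interval of length T from state x0.

    Returns (phi, c) where phi = x(t0)^2 - x(t0+T)^2 is the critic regressor
    for V(x) = w*x^2, and c is the accumulated running cost over the interval.
    """
    x, cost = x0, 0.0
    for i in range(int(T / dt)):
        t = t0 + i * dt
        e = 0.1 * np.sin(7.0 * t)      # known time-varying exploration signal
        u = -K * x + e                 # control with exploration injected
        cost += (x**2 + u**2) * dt     # running cost r(x, u) = x^2 + u^2
        x += (a * x + b * u) * dt      # Euler step of dx/dt = a*x + b*u
    return x0**2 - x**2, cost


# Collect segments from varied initial states so the least-squares problem is
# well posed (the regressor must be sufficiently "exciting").
rows, targets = [], []
t = 0.0
for x0 in np.linspace(0.5, 2.0, 8):
    phi, c = segment(x0, t)
    rows.append([phi])
    targets.append(c)
    t += T

# Least-squares solution of w * phi_k = c_k over all collected segments.
w = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)[0][0]
print(f"critic weight w = {w:.4f}  (V(x) = w*x^2 for the current policy)")
```

For this closed loop without exploration, the Lyapunov equation 2(a - bK)w + (1 + K²) = 0 gives w ≈ 0.417; the small exploration amplitude perturbs the estimate slightly, motivating the paper's exploration-compensated formulation.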
Keywords :
continuous time systems; learning (artificial intelligence); nonlinear control systems; time-varying systems; actor-critic I-RL technique; exploration signal; input-affine continuous-time nonlinear systems; integral reinforcement learning; least-squares implementation method; modified I-RL method; persistently exciting conditions; time-varying signal; Convergence; Educational institutions; Equations; Heuristic algorithms; Mathematical model; Nonlinear systems; Optimal control;
Conference_Title :
The 2012 International Joint Conference on Neural Networks (IJCNN)
Conference_Location :
Brisbane, QLD
Print_ISBN :
978-1-4673-1488-6
Electronic_ISSN :
2161-4393
DOI :
10.1109/IJCNN.2012.6252508