Title :
Event-triggered reinforcement learning approach for unknown nonlinear continuous-time system
Author :
Xiangnan Zhong ; Zhen Ni ; Haibo He ; Xin Xu ; Dongbin Zhao
Author_Institution :
Dept. of Electr., Comput. & Biomed. Eng., Univ. of Rhode Island, Kingston, RI, USA
Abstract :
This paper presents an adaptive event-triggered control method based on adaptive dynamic programming (ADP) for nonlinear continuous-time systems with unknown dynamics. Compared with traditional methods that use a fixed sampling period, the event-triggered method samples the state only when an event is triggered, which reduces the computational cost. We provide a theoretical analysis of the stability of the event-triggered method and integrate it with the ADP approach. The corresponding ADP algorithm is presented and implemented with neural network techniques. Simulation results verify the theoretical analysis and demonstrate the efficiency of the proposed event-triggered ADP technique.
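Illustration (not part of the original record): the sketch below shows the general idea of event-triggered sampling with a zero-order-hold control, which the abstract contrasts with fixed-period sampling. The plant f(), the policy pi(), and the trigger threshold alpha are illustrative placeholders; in the paper the controller is learned with an ADP actor-critic using neural networks and the trigger threshold is derived from the stability analysis.

```python
# Minimal sketch, assuming a simple gap-based event-trigger condition.
import numpy as np

def f(x, u):
    # Hypothetical nonlinear plant x_dot = f(x, u); stands in for the unknown dynamics.
    return np.array([-x[0] + x[1],
                     -0.5 * (x[0] + x[1]) + 0.5 * x[1] * np.sin(x[0]) ** 2 + u])

def pi(x_s):
    # Placeholder state-feedback policy evaluated only at sampled states x_s.
    # In the paper this role is played by the actor network trained via ADP.
    return -np.array([1.0, 2.0]) @ x_s

def trigger(x, x_s, alpha=0.3):
    # Event condition: fire when the gap between the current state and the last
    # sampled state exceeds a state-dependent threshold (a common choice; the
    # paper derives its own stability-preserving bound).
    return np.linalg.norm(x - x_s) > alpha * np.linalg.norm(x)

dt, T = 1e-3, 10.0
x = np.array([1.0, -1.0])
x_s = x.copy()            # last sampled state held by the controller
u = pi(x_s)
events = 0

for k in range(int(T / dt)):
    if trigger(x, x_s):   # sample the state and update the control only on events
        x_s = x.copy()
        u = pi(x_s)
        events += 1
    x = x + dt * f(x, u)  # Euler step between events with zero-order-hold input

print(f"events: {events} of {int(T / dt)} steps, final state: {x}")
```

Because the control is recomputed only at event instants rather than every sampling period, the number of controller updates (events) is typically far smaller than the number of integration steps, which is the computational saving the abstract refers to.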
Keywords :
adaptive systems; continuous time systems; dynamic programming; learning (artificial intelligence); neurocontrollers; nonlinear dynamical systems; stability; ADP approach; adaptive dynamic programming; adaptive event triggered method; computational cost reduction; event triggered reinforcement learning approach; neural network technique; stability; system dynamics; unknown nonlinear continuous time system; Approximation algorithms; Approximation methods; Equations; Heuristic algorithms; Neural networks; Performance analysis; Stability analysis;
Conference_Title :
2014 International Joint Conference on Neural Networks (IJCNN)
Conference_Location :
Beijing, China
Print_ISBN :
978-1-4799-6627-1
DOI :
10.1109/IJCNN.2014.6889787