DocumentCode :
3726540
Title :
Distributed Adaptive Optimal Regulation of Uncertain Large-Scale Linear Networked Control Systems Using Q-Learning
Author :
Vignesh Narayanan;S. Jagannathan
Author_Institution :
Dept. of Electr. &
fYear :
2015
Firstpage :
587
Lastpage :
592
Abstract :
A novel Q-learning approach is presented for designing a linear adaptive regulator for a large-scale interconnected system. The subsystems communicate with one another through a communication network, while another communication network is inserted within the feedback loop of each subsystem. The network-induced random delays and data dropouts are modeled along with the system dynamics. Stochastic Q-learning is used to adaptively learn the Q-function parameters with both periodic and intermittent feedback. For efficient parameter learning with event-sampled feedback, a novel hybrid learning algorithm is proposed. Boundedness of the estimated parameters and asymptotic convergence of the state vector in the mean square are established using Lyapunov stability analysis. Moreover, if the regression function of the Q-function estimator (QFE) is persistently exciting (PE), the estimated parameters converge to their expected target values. The proposed analytical design is validated via simulation of a numerical example.
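For context, the sketch below illustrates the basic idea behind Q-learning for linear quadratic regulation on a single, idealized subsystem: the Q-function is parameterized as a quadratic form in the state-input pair, its parameters are estimated by least squares from measured data (with exploration noise supplying the persistence of excitation mentioned in the abstract), and the feedback gain is improved greedily. This is a hedged illustration only; the system matrices, cost weights, and the plain policy-iteration loop are assumptions for the example, not the paper's distributed, event-sampled hybrid algorithm with network delays and dropouts.

```python
# Minimal sketch: model-free Q-learning for one discrete-time LQR subsystem
# with periodic feedback, no network delays or dropouts, and illustrative
# system matrices A, B (assumptions, not taken from the paper).
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[0.9, 0.2], [0.0, 0.8]])   # unknown to the learner; used only to generate data
B = np.array([[0.0], [0.1]])
Qc, Rc = np.eye(2), np.eye(1)            # quadratic cost weights
n, m = 2, 1
p = n + m                                # dimension of the augmented vector z = [x; u]

def phi(z):
    """Quadratic regression vector: upper-triangular entries of z z^T
    (off-diagonal terms doubled so that phi(z) @ theta == z @ H @ z)."""
    zz = np.outer(z, z)
    i, j = np.triu_indices(p)
    return np.where(i == j, 1.0, 2.0) * zz[i, j]

def unpack(theta):
    """Rebuild the symmetric Q-function kernel H from the parameter vector."""
    H = np.zeros((p, p))
    H[np.triu_indices(p)] = theta
    return H + H.T - np.diag(np.diag(H))

K = np.zeros((m, n))                     # initial (stabilizing) feedback gain
for _ in range(10):                      # policy iteration
    rows, rhs = [], []
    x = np.array([1.0, -1.0])
    for k in range(300):
        u = K @ x + 0.1 * rng.standard_normal(m)   # exploration noise for PE
        cost = x @ Qc @ x + u @ Rc @ u
        x_next = A @ x + B @ u
        z = np.concatenate([x, u])
        z_next = np.concatenate([x_next, K @ x_next])
        # Bellman relation: Q(z_k) - Q(z_{k+1}) equals the one-step cost
        rows.append(phi(z) - phi(z_next))
        rhs.append(cost)
        x = x_next
        if (k + 1) % 50 == 0:
            x = rng.standard_normal(n)   # re-excite the state for identifiability
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    H = unpack(theta)
    Hux, Huu = H[n:, :n], H[n:, n:]
    K = -np.linalg.solve(Huu, Hux)       # greedy policy improvement
print("learned feedback gain K =", K)
```

The paper's contribution extends this basic scheme to interconnected subsystems with networked feedback, event-sampled measurements, random delays, and dropouts; the sketch only shows the Q-function parameterization and least-squares update at the core of such methods.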
Keywords :
"System dynamics","Interconnected systems","Optimal control","Stochastic processes","Cost function","Delays","Large-scale systems"
Publisher :
IEEE
Conference_Title :
2015 IEEE Symposium Series on Computational Intelligence (SSCI)
Print_ISBN :
978-1-4799-7560-0
Type :
conf
DOI :
10.1109/SSCI.2015.92
Filename :
7376665