DocumentCode
1328489
Title
Approximate Dynamic Programming for Optimal Stationary Control With Control-Dependent Noise
Author
Yu Jiang; Zhong-Ping Jiang
Author_Institution
Dept. of Electr. & Comput. Eng., Polytech. Inst. of New York Univ., Brooklyn, NY, USA
Volume
22
Issue
12
Year
2011
Firstpage
2392
Lastpage
2398
Abstract
This brief studies the stochastic optimal control problem via reinforcement learning and approximate/adaptive dynamic programming (ADP). A policy iteration algorithm is derived in the presence of both additive and multiplicative noise using Itô calculus. The expectation of the approximated cost matrix is guaranteed to converge to the solution of an algebraic Riccati equation that gives rise to the optimal cost value. Moreover, the covariance of the approximated cost matrix can be reduced by increasing the length of the time interval between two consecutive iterations. Finally, a numerical example is given to illustrate the efficiency of the proposed ADP methodology.
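The brief's ADP scheme estimates the cost matrix from online measurements; as a point of reference, the sketch below shows the model-based policy iteration that such a scheme approximates, for a linear system with control-dependent noise dx = (Ax + Bu)dt + (Cx + Du)dw. This is a minimal illustration under assumed dynamics: the function name, the matrices in the usage example, and the specific noise structure are hypothetical and not taken from the paper.

```python
import numpy as np

def policy_iteration_stochastic_lqr(A, B, C, D, Q, R, K0, iters=20):
    """Model-based policy iteration for the stochastic LQR
    dx = (A x + B u) dt + (C x + D u) dw, assuming the initial
    gain K0 is mean-square stabilizing (illustrative sketch)."""
    n = A.shape[0]
    I = np.eye(n)
    K = K0
    for _ in range(iters):
        Ak = A - B @ K              # closed-loop drift
        Mk = C - D @ K              # closed-loop diffusion
        W = Q + K.T @ R @ K         # stage cost under the current gain
        # Policy evaluation: solve the generalized Lyapunov equation
        #   Ak' P + P Ak + Mk' P Mk + W = 0
        # by column-major vectorization, using vec(X' P Y) = kron(Y', X') vec(P).
        L = np.kron(I, Ak.T) + np.kron(Ak.T, I) + np.kron(Mk.T, Mk.T)
        P = np.linalg.solve(L, -W.reshape(-1, order="F")).reshape(n, n, order="F")
        P = (P + P.T) / 2           # symmetrize against round-off
        # Policy improvement: the D' P D term reflects the control-dependent
        # noise; with C = D = 0 this reduces to Kleinman's deterministic
        # iteration K = R^{-1} B' P.
        K = np.linalg.solve(R + D.T @ P @ D, B.T @ P + D.T @ P @ C)
    return P, K

# Hypothetical usage on a small mean-square-stable example.
A = np.array([[0.0, 1.0], [-1.0, -2.0]])
B = np.array([[0.0], [1.0]])
C = 0.1 * np.eye(2)                 # state-dependent noise gain
D = np.array([[0.0], [0.1]])        # control-dependent noise gain
Q, R = np.eye(2), np.array([[1.0]])
P, K = policy_iteration_stochastic_lqr(A, B, C, D, Q, R, K0=np.zeros((1, 2)))
print(P, K, sep="\n")
```

At the fixed point, P satisfies the stochastic algebraic Riccati equation A'P + PA + C'PC + Q - (PB + C'PD)(R + D'PD)^{-1}(B'P + D'PC) = 0, which plays the role the abstract assigns to the algebraic Riccati equation; the ADP algorithm in the brief reaches the same P in expectation without solving these equations from a known model.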
Keywords
Riccati equations; approximation theory; covariance matrices; dynamic programming; iterative methods; learning (artificial intelligence); optimal control; stochastic systems; Itô calculus; additive noise; algebraic Riccati equation; approximate dynamic programming; approximated cost matrix; control-dependent noise; covariance matrix; multiplicative noise; optimal cost value; optimal stationary control; policy iteration algorithm; reinforcement learning; stochastic optimal control problem; Approximation algorithms; Covariance matrix; Dynamic programming; Learning; Optimal control; Steady-state; Symmetric matrices; Approximate dynamic programming; control-dependent noise; optimal stationary control; stochastic systems; Artificial Intelligence; Data Mining; Databases, Factual; Feedback; Models, Theoretical; Programming, Linear
Language
English
Journal_Title
IEEE Transactions on Neural Networks
Publisher
IEEE
ISSN
1045-9227
Type
jour
DOI
10.1109/TNN.2011.2165729
Filename
6026952