DocumentCode :
3658854
Title :
H∞ optimal control of unknown linear discrete-time systems: An off-policy reinforcement learning approach
Author :
Bahare Kiumarsi;Hamidreza Modares;Frank L. Lewis;Zhong-Ping Jiang
Author_Institution :
UTA Research Institute (UTARI), The University of Texas at Arlington, Fort Worth, TX 76118, USA
fYear :
2015
fDate :
7/1/2015 12:00:00 AM
Firstpage :
41
Lastpage :
46
Abstract :
This paper proposes a model-free H∞ control design for linear discrete-time systems using reinforcement learning (RL). A novel off-policy RL algorithm is used to solve the game algebraic Riccati equation (GARE) online, using data measured along the system trajectories. The proposed RL algorithm has the following advantages over existing model-free RL methods for solving the H∞ control problem: 1) it is data efficient and fast, since a stream of experiences obtained from executing a fixed behavioral policy is reused to sequentially update many value functions corresponding to different learning policies; 2) the disturbance input does not need to be adjusted in a specific manner; 3) no bias is introduced by adding a probing noise to the control input to maintain the persistence of excitation condition. A simulation example is used to verify the effectiveness of the proposed control scheme.
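The abstract does not give the update equations of the off-policy algorithm, so the following is only a minimal, model-based sketch of the GARE fixed-point iteration that the data-driven method is designed to solve without knowledge of the system matrices. All matrices (A, B, E, Q, R) and the attenuation level gamma are illustrative assumptions, not values from the paper.

import numpy as np

# Minimal model-based sketch (not the paper's off-policy algorithm): a
# fixed-point iteration on the discrete-time game algebraic Riccati equation
#   P = Q + A'PA - A'P[B E] * inv([[R + B'PB, B'PE], [E'PB, E'PE - gamma^2 I]]) * [B E]'PA
# The paper solves this equation from measured trajectory data without A, B, E;
# here they are assumed known purely for illustration.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])      # state matrix (illustrative)
B = np.array([[0.0],
              [1.0]])           # control input matrix (illustrative)
E = np.array([[1.0],
              [0.0]])           # disturbance input matrix (illustrative)
Q = np.eye(2)                   # state weighting
R = np.eye(1)                   # control weighting
gamma = 5.0                     # attenuation level; must exceed the optimal H-infinity level

BE = np.hstack([B, E])
P = np.zeros((2, 2))
for _ in range(1000):
    # Quadratic form in (u, w); its lower-right block must remain negative definite
    M = np.block([[R + B.T @ P @ B, B.T @ P @ E],
                  [E.T @ P @ B, E.T @ P @ E - gamma**2 * np.eye(1)]])
    P_next = Q + A.T @ P @ A - A.T @ P @ BE @ np.linalg.inv(M) @ BE.T @ P @ A
    if np.linalg.norm(P_next - P) < 1e-10:
        P = P_next
        break
    P = P_next

# Saddle-point gains: control u = -K x, worst-case disturbance w = L x
M = np.block([[R + B.T @ P @ B, B.T @ P @ E],
              [E.T @ P @ B, E.T @ P @ E - gamma**2 * np.eye(1)]])
KL = np.linalg.inv(M) @ BE.T @ P @ A
K, L = KL[:1, :], -KL[1:, :]
print("GARE solution P:\n", P)
print("Control gain K:", K, "\nDisturbance gain L:", L)

The off-policy scheme described in the abstract estimates the same value function and gains from trajectory data generated by a fixed behavioral policy, so A, B, and E above would not be needed in practice.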
Keywords :
"Decision support systems","Conferences","Random access memory"
Publisher :
ieee
Conference_Titel :
2015 IEEE 7th International Conference on Cybernetics and Intelligent Systems (CIS) and IEEE Conference on Robotics, Automation and Mechatronics (RAM)
Print_ISBN :
978-1-4673-7337-1
Electronic_ISSN :
2326-8239
Type :
conf
DOI :
10.1109/ICCIS.2015.7274545
Filename :
7274545