Title :
Near optimal control policy for controlling power system stabilizers using reinforcement learning
Author :
Hadidi, Ramtin ; Jeyasurya, Benjamin
Author_Institution :
Fac. of Eng. & Appl. Sci., Memorial Univ. of Newfoundland, St. John's, NL, Canada
Abstract :
In this paper, a reinforcement learning method called Q-learning is applied to find a near-optimal control policy for controlling power system stabilizers (PSS). A single-agent approach is used, but the design procedure can be extended to a multi-agent system. The objective of the control policy is to enhance the stability of a multi-machine power system by increasing the damping ratio of the least damped modes. By learning a near-optimal policy, not only are the design parameters of the PSSs optimized simultaneously, but the agents can also track system changes and update the PSS parameters. After the near-optimal policy is achieved, the off-line mode of operation is used in this paper. The validity of the proposed method has been tested on a two-area, four-machine power system.
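The Q-learning method named in the abstract is a standard tabular reinforcement learning algorithm. As a hedged illustration only (a generic sketch on a toy problem, not the authors' PSS formulation, whose states, actions, and rewards are not given here), the core epsilon-greedy Q-learning loop can be written as:

```python
import random

def q_learning(n_states, n_actions, step, episodes=500,
               alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Generic tabular Q-learning; `step(s, a) -> (next_state, reward, done)`."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            if rng.random() < epsilon:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s2, r, done = step(s, a)
            # Q-learning temporal-difference update
            target = r if done else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])
            s = s2
    return Q

# Toy deterministic chain (hypothetical environment, for illustration):
# states 0..4, action 1 moves right, action 0 moves left; reward 10 at state 4.
def step(s, a):
    s2 = min(s + 1, 4) if a == 1 else max(s - 1, 0)
    return s2, (10.0 if s2 == 4 else 0.0), s2 == 4

Q = q_learning(5, 2, step)
# greedy policy after learning: move right from every non-terminal state
policy = [max(range(2), key=lambda a: Q[s][a]) for s in range(4)]
```

In the paper's setting, the learned policy would instead map power-system operating conditions to PSS parameter choices, with a reward reflecting the damping of the least damped modes.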
Keywords :
learning (artificial intelligence); multi-agent systems; optimal control; power engineering computing; power system control; power system stability; Q-learning; multiagent system; multimachine power system; off-line operation mode; optimal control policy; power system stabilizer control; reinforcement learning method; single agent approach; Control systems; Damping; Design optimization; Learning; Multiagent systems; Optimal control; Power system control; Power system stability; Power systems; System testing; Agent; Q-Learning; optimal control; power system stability; reinforcement learning;
Conference_Titel :
Power & Energy Society General Meeting, 2009. PES '09. IEEE
Conference_Location :
Calgary, AB
Print_ISBN :
978-1-4244-4241-6
DOI :
10.1109/PES.2009.5275575