Title :
Value function approximation and model predictive control
Author :
Zhong, Mingyuan ; Johnson, Mark ; Tassa, Yuval ; Erez, Tom ; Todorov, Emo
Author_Institution :
Dept. of Appl. Math., Univ. of Washington, Seattle, WA, USA
Abstract :
Global methods and on-line trajectory optimization are both powerful techniques for solving optimal control problems, but each has limitations. To mitigate the drawbacks of each, we explore combining the two. Specifically, we investigate two ways of deriving a descriptive final cost function that helps model predictive control (MPC) select a good policy without planning as far into the future or fine-tuning delicate cost functions. First, we exploit the large amount of data generated in MPC simulations (based on the receding-horizon iterative LQG method) to learn, off-line, the global optimal value function for use as a final cost. We show that, although this global function approximation matches the value function well on some problems, it yields relatively little improvement over the original MPC. Alternatively, we solve the Bellman equation directly using aggregation methods for linearly-solvable Markov decision processes, obtaining approximations to both the value function and the optimal policy. Using both pieces of information within the MPC framework, we achieve controller performance comparable to that of MPC alone with a long horizon, while drastically shortening the horizon. These results show that Bellman equation-based methods and on-line trajectory optimization can be combined in real applications to the benefit of both.
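As a rough illustration of the idea in the abstract (not taken from the paper), the Python sketch below plugs an off-line learned value function approximation into a short-horizon MPC loop as the terminal cost. The dynamics, running_cost, value_approx feature basis, and the finite-difference optimizer are all hypothetical stand-ins: the paper itself uses receding-horizon iterative LQG for trajectory optimization and aggregation over linearly-solvable MDPs to approximate the value function.

```python
import numpy as np

# Hypothetical placeholders -- illustrative only, not the authors' code.
def dynamics(x, u):
    """One-step discrete dynamics x' = f(x, u) (toy linear model)."""
    return x + 0.1 * u

def running_cost(x, u):
    """Per-step cost l(x, u) (toy quadratic cost)."""
    return 0.5 * (x @ x) + 0.05 * (u @ u)

def value_approx(x, theta):
    """Off-line learned approximation of the optimal value function V(x),
    used as the final (terminal) cost of the short MPC horizon."""
    features = np.concatenate([x, x ** 2])   # assumed quadratic feature basis
    return features @ theta

def trajectory_cost(x0, U, theta):
    """Cost of rolling controls U out from x0, with the learned V as final cost."""
    x, total = x0, 0.0
    for u in U:
        total += running_cost(x, u)
        x = dynamics(x, u)
    return total + value_approx(x, theta)

def mpc_action(x0, theta, horizon=5, iters=100, lr=1e-2, eps=1e-4):
    """Short-horizon MPC step: minimize running cost plus learned terminal value
    over the control sequence (naive finite-difference gradient descent here;
    the paper uses receding-horizon iterative LQG instead)."""
    U = np.zeros((horizon, x0.shape[0]))
    for _ in range(iters):
        base = trajectory_cost(x0, U, theta)
        grad = np.zeros_like(U)
        for t in range(horizon):
            for j in range(U.shape[1]):
                U_pert = U.copy()
                U_pert[t, j] += eps
                grad[t, j] = (trajectory_cost(x0, U_pert, theta) - base) / eps
        U -= lr * grad
    return U[0]   # receding horizon: apply only the first control

# Example: 2-D state; theta would come from off-line learning (random stand-in here).
x = np.array([1.0, -0.5])
theta = np.abs(np.random.randn(4))
u0 = mpc_action(x, theta)
```

The point of the construction, as the abstract states, is that a good terminal value estimate lets the planning horizon be shortened drastically without degrading controller quality.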
Keywords :
Markov processes; control system synthesis; controllers; cost optimal control; decision making; function approximation; iterative methods; learning (artificial intelligence); linear quadratic Gaussian control; predictive control; Bellman equation-based methods; MPC framework; MPC simulations; aggregation methods; controller performance; delicate cost functions; final cost function; global function approximation methods; global optimal value function methods; linearly-solvable Markov decision processes; model predictive control; on-line trajectory optimization methods; optimal control problems; receding horizon iterative LQG method; value function approximation; Function approximation; Mathematical model; Optimization; Polynomials; Trajectory;
Conference_Title :
Adaptive Dynamic Programming And Reinforcement Learning (ADPRL), 2013 IEEE Symposium on
Conference_Location :
Singapore
DOI :
10.1109/ADPRL.2013.6614995