Title :
A uniform-grid discretization algorithm for stochastic optimal control with risk constraints
Author :
Chow, Yin-Lam; Pavone, Marco
Author_Institution :
Dept. of Aeronaut. & Astronaut., Stanford Univ., Stanford, CA, USA
Abstract :
In this paper, we present a discretization algorithm for the solution of stochastic optimal control problems with dynamic, time-consistent risk constraints. Previous works have shown that such problems can be cast as Markov decision problems (MDPs) on an augmented state space, where a “constrained” form of Bellman's recursion can be applied. However, even when both the state and action spaces of the original optimization problem are finite, the augmented state of the induced MDP contains continuous state variables. Our approach is to apply a uniform-grid discretization scheme to the augmented state. To prove the correctness of this approach, we develop novel Lipschitz bounds for “constrained” dynamic programming operators. We show that convergence to the optimal value functions is linear in the step size, matching the convergence rate of discretization algorithms for unconstrained dynamic programming operators. Simulation experiments are presented and discussed.
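As a rough illustration of the idea summarized in the abstract (not the authors' implementation), the Python sketch below discretizes the continuous risk-threshold component of the augmented state on a uniform grid and performs a “constrained” Bellman backup at the grid points, interpolating successor values back onto the grid. The problem data, the feasibility test, and the threshold-update rule are placeholder assumptions chosen only to make the sketch self-contained.

import numpy as np

n_states, n_actions = 4, 3
n_grid = 101                              # uniform-grid resolution for the risk threshold
grid = np.linspace(0.0, 1.0, n_grid)     # risk budgets in [0, 1]

rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[x, a, :] sums to 1
cost = rng.uniform(0.0, 1.0, size=(n_states, n_actions))          # illustrative stage costs
risk = rng.uniform(0.0, 0.5, size=(n_states, n_actions))          # illustrative stage risks
BIG = 1e6                                 # finite stand-in for an infeasible (state, budget) pair

def constrained_backup(V_next):
    # V_next[x, j] approximates the cost-to-go at state x with residual risk budget grid[j].
    V = np.full((n_states, n_grid), BIG)
    for x in range(n_states):
        for j, r in enumerate(grid):
            for a in range(n_actions):
                if risk[x, a] > r:                      # action exceeds the current risk budget
                    continue
                r_next = max(0.0, r - risk[x, a])       # placeholder budget-update rule
                # Interpolate successor values onto the uniform grid.
                cont = sum(P[x, a, y] * np.interp(r_next, grid, V_next[y])
                           for y in range(n_states))
                V[x, j] = min(V[x, j], cost[x, a] + cont)
    return V

V = np.zeros((n_states, n_grid))
for _ in range(20):                        # finite-horizon sweeps over the augmented state
    V = constrained_backup(V)

Per the paper's convergence result, refining the grid (increasing n_grid) should reduce the approximation error roughly linearly in the grid step size.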
Keywords :
Markov processes; dynamic programming; optimal control; state-space methods; stochastic systems; Bellman recursion; Lipschitz bounds; MDP; Markov decision problem; action space; augmented state space; convergence rate; dynamic constraint; optimization problem; stochastic optimal control; time-consistent risk constraint; unconstrained dynamic programming; uniform-grid discretization algorithm; Approximation methods; Dynamic programming; Equations; Heuristic algorithms; Markov processes; Measurement; Optimal control;
Conference_Titel :
2013 IEEE 52nd Annual Conference on Decision and Control (CDC)
Conference_Location :
Firenze, Italy
Print_ISBN :
978-1-4673-5714-2
DOI :
10.1109/CDC.2013.6760250