DocumentCode :
2817531
Title :
Study of convergence rates of numerical methods for stochastic control problems
Author :
Song, Q.S. ; Yin, G.
Author_Institution :
Univ. of Southern California, Los Angeles
fYear :
2007
fDate :
12-14 Dec. 2007
Firstpage :
3108
Lastpage :
3113
Abstract :
This work is concerned with convergence rates of numerical methods for stochastic control problems with a stopping time, using Markov chain approximation techniques. Although the convergence rates may be studied via the convergence rates of finite difference schemes for Hamilton-Jacobi-Bellman (HJB) equations, the present problem poses an additional difficulty due to the boundary condition, which requires continuity of the first exit time with respect to the discretization parameter. In proving convergence of the Markov chain approximation algorithm, a difficulty known as the tangency problem may arise. A convergence rate is obtained by boundary perturbation under certain assumptions, and convergence rates of the Markov chain approximation for certain controlled diffusion problems are verified.
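Illustration (not part of the original record): the abstract refers to the Markov chain approximation method for controlled diffusions stopped at a first exit time. As a point of reference only, and not the authors' algorithm (which additionally involves boundary perturbation and the tangency issue), below is a minimal sketch of the standard Kushner-Dupuis scheme for a one-dimensional controlled diffusion on (0, 1). The grid step h, discount rate rho, control set U, and the model functions b, sigma, k, g are illustrative assumptions.

# A minimal sketch, assuming a 1-D controlled diffusion
#     dX = b(X, u) dt + sigma(X) dW,   X in (0, 1),
# with running cost k(x, u), exit cost g(x) charged at the first exit time,
# and discount rate rho.  All model functions and parameters are assumptions.

import numpy as np

h = 0.02                          # spatial step of the approximating chain (assumption)
rho = 0.1                         # discount rate (assumption)
U = np.linspace(-1.0, 1.0, 21)    # discretized control set (assumption)

xs = np.arange(0.0, 1.0 + h / 2, h)   # grid, including the two boundary points
n = len(xs)

def b(x, u):        # drift (illustrative)
    return u

def sigma(x):       # diffusion coefficient (illustrative)
    return 0.5

def k(x, u):        # running cost (illustrative)
    return x ** 2 + 0.5 * u ** 2

def g(x):           # exit cost on the boundary (illustrative)
    return 1.0

V = np.array([g(x) for x in xs])  # boundary points keep the exit cost throughout

# Value iteration on the dynamic programming equation of the approximating chain.
# Transition probabilities and the interpolation interval dt are the usual
# locally consistent choices: the chain's conditional mean and variance match
# b*dt and sigma^2*dt up to o(dt).
for _ in range(2000):
    V_new = V.copy()
    for i in range(1, n - 1):
        x = xs[i]
        best = np.inf
        for u in U:
            s2 = sigma(x) ** 2
            Q = s2 + h * abs(b(x, u))                  # normalizing factor
            dt = h ** 2 / Q                            # interpolation interval
            p_up = (s2 / 2 + h * max(b(x, u), 0.0)) / Q
            p_dn = (s2 / 2 + h * max(-b(x, u), 0.0)) / Q
            val = k(x, u) * dt + np.exp(-rho * dt) * (p_up * V[i + 1] + p_dn * V[i - 1])
            best = min(best, val)
        V_new[i] = best
    if np.max(np.abs(V_new - V)) < 1e-7:
        break
    V = V_new

print("approximate value at x = 0.5:", V[n // 2])

The choice dt = h^2 / Q keeps the transition probabilities nonnegative and locally consistent with the diffusion; refining h gives the convergence whose rate, in the presence of the exit-time boundary condition, is what the paper quantifies.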
Keywords :
Markov processes; approximation theory; convergence of numerical methods; finite difference methods; stochastic systems; Hamilton-Jacobi-Bellman equations; Markov chain approximation; finite difference schemes; numerical methods convergence rates; stochastic control problems; Approximation algorithms; Approximation methods; Boundary conditions; Convergence of numerical methods; Costs; Difference equations; Finite difference methods; Mathematics; Stochastic processes; USA Councils;
fLanguage :
English
Publisher :
ieee
Conference_Title :
2007 46th IEEE Conference on Decision and Control
Conference_Location :
New Orleans, LA
ISSN :
0191-2216
Print_ISBN :
978-1-4244-1497-0
Electronic_ISBN :
0191-2216
Type :
conf
DOI :
10.1109/CDC.2007.4434198
Filename :
4434198