  • DocumentCode
    337653
  • Title
    Stability and convergence of stochastic approximation using the ODE method

  • Author
    Borkar, V.S.; Meyn, S.P.

  • Author_Institution
    Dept. of Comput. Sci. & Autom., Indian Inst. of Sci., Bangalore, India
  • Volume
    1
  • fYear
    1998
  • fDate
    1998
  • Firstpage
    277
  • Abstract
    It is shown that the stability of the stochastic approximation algorithm is implied by the asymptotic stability of the origin for an associated ODE. This in turn implies convergence of the algorithm. Several specific classes of algorithms are considered as applications. It is found that the results provide: 1) a simpler derivation of known results for reinforcement learning algorithms; 2) a proof, for the first time, that a class of asynchronous stochastic approximation algorithms is convergent without any a priori assumption of stability; and 3) a proof, for the first time, that asynchronous adaptive critic and Q-learning algorithms are convergent for the average cost optimal control problem.
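    The object of study in the abstract, a stochastic approximation recursion x_{n+1} = x_n + a_n (h(x_n) + noise) whose mean dynamics track the ODE dx/dt = h(x), can be sketched as follows. This is an illustrative toy example, not code from the paper; the function name and parameters are assumptions. Here h(x) = mu - x, so the ODE has a unique globally asymptotically stable equilibrium at mu, and the iterates converge to it.

    ```python
    # Toy stochastic approximation sketch (illustrative only, not the paper's
    # algorithm): estimate the mean mu of noisy observations via the recursion
    #   x_{n+1} = x_n + a_n * (Y_n - x_n),  Y_n = mu + noise,
    # whose mean ODE is dx/dt = mu - x with stable equilibrium x* = mu.
    import random

    def stochastic_approximation(mu=2.0, x0=0.0, n_steps=100_000, seed=0):
        rng = random.Random(seed)
        x = x0
        for n in range(1, n_steps + 1):
            a_n = 1.0 / n                 # step sizes: sum a_n = inf, sum a_n^2 < inf
            y = mu + rng.gauss(0.0, 1.0)  # noisy observation Y_n
            x = x + a_n * (y - x)         # h(x) + noise, with h(x) = mu - x
        return x

    print(stochastic_approximation())  # close to mu = 2.0
    ```

    With a_n = 1/n this recursion reduces to the running sample mean, the simplest case of the theory; the paper's contribution is establishing stability and convergence for far more general (including asynchronous) recursions.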
  • Keywords
    Markov processes; approximation theory; asymptotic stability; convergence; decision theory; differential equations; learning (artificial intelligence); optimal control; stochastic processes; Markov decision process; adaptive control; asymptotic stability; convergence; optimal control; reinforcement learning; stochastic approximation; Application software; Approximation algorithms; Asymptotic stability; Automation; Computer science; Convergence; Cost function; Learning; Optimal control; Stochastic processes
  • fLanguage
    English
  • Publisher
    ieee
  • Conference_Titel
    Proceedings of the 37th IEEE Conference on Decision and Control, 1998
  • Conference_Location
    Tampa, FL
  • ISSN
    0191-2216
  • Print_ISBN
    0-7803-4394-8
  • Type
    conf

  • DOI
    10.1109/CDC.1998.760684
  • Filename
    760684