• DocumentCode
    2708288
  • Title
    A retrospective on Adaptive Dynamic Programming for control
  • Author
    Lendaris, George G.
  • Author_Institution
    Syst. Sci. Grad. Program, Portland State Univ., Portland, OR, USA
  • fYear
    2009
  • fDate
    14-19 June 2009
  • Firstpage
    1750
  • Lastpage
    1757
  • Abstract
    Some three decades ago, certain computational intelligence methods of reinforcement learning were recognized as implementing an approximation of Bellman's Dynamic Programming method, which is known in the controls community as an important tool for designing optimal control policies for nonlinear plants and for sequential decision making. Significant theoretical and practical developments have occurred in this arena, mostly within the past decade, and the methodology is now usually referred to as Adaptive Dynamic Programming (ADP). The objective of this paper is to provide a retrospective of selected threads of these developments. In addition, commentary is offered on the present status of ADP, and threads for future research and development within the controls field are suggested.
  • Keywords
    control system synthesis; decision making; dynamic programming; learning (artificial intelligence); optimal control; Bellman dynamic programming method; adaptive dynamic programming; computational intelligence methods; nonlinear plants; optimal control policy; reinforcement learning; sequential decision making; Adaptive control; Dynamic programming; Learning; Modems; Nonlinear equations; Optimal control; Programmable control; Riccati equations; Stochastic processes; Yarn
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    2009 International Joint Conference on Neural Networks (IJCNN 2009)
  • Conference_Location
    Atlanta, GA
  • ISSN
    1098-7576
  • Print_ISBN
    978-1-4244-3548-7
  • Electronic_ISBN
    1098-7576
  • Type
    conf
  • DOI
    10.1109/IJCNN.2009.5178716
  • Filename
    5178716