  • DocumentCode
    3693096
  • Title
    Primal-dual algorithms for discounted Markov decision processes
  • Author
    Randy Cogill
  • Author_Institution
    IBM Research Ireland, Ireland
  • fYear
    2015
  • fDate
    7/1/2015 12:00:00 AM
  • Firstpage
    260
  • Lastpage
    265
  • Abstract
    Several well-known algorithms in the field of combinatorial optimization can be interpreted in terms of the primal-dual method for solving linear programs. For example, Dijkstra's algorithm, the Ford-Fulkerson algorithm, and the Hungarian algorithm can all be viewed as the primal-dual method applied to the linear programming formulations of their respective optimization problems. Roughly speaking, successfully applying the primal-dual method to an optimization problem that can be posed as a linear program relies on the ability to find a simple characterization of the optimal solutions to a related linear program, called the 'dual of the restricted primal' (DRP). This paper is motivated by the following question: What is the algorithm we obtain if we apply the primal-dual method to a linear programming formulation of a discounted-cost Markov decision process? We will first show that a widely used variant of the value iteration algorithm for Markov decision processes can be interpreted in terms of the primal-dual method, where the value function is updated with suboptimal solutions to the DRP in each iteration. We then provide the optimal solution to the DRP in closed form, and present the algorithm that results when using this solution to update the value function in each iteration. Unlike the algorithms obtained from suboptimal DRP updates, this algorithm is guaranteed to yield the optimal value function in a finite number of iterations. Finally, we show that the iterations of the primal-dual algorithm can be interpreted as repeated application of the policy iteration algorithm to a special class of Markov decision processes. When considered alongside recent results characterizing the computational complexity of the policy iteration algorithm, this observation could provide new insights into the computational complexity of solving discounted-cost Markov decision processes.
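    For background, the linear-programming formulation of a discounted-cost MDP referred to in the abstract is, in standard textbook notation (which may differ from the paper's own), the following primal-dual pair, where α is any strictly positive state-weighting vector, c(s,a) is the immediate cost, γ ∈ (0,1) is the discount factor, and the dual variables x(s,a) can be read as discounted state-action occupation measures:

```latex
% Primal LP in the value function v
\begin{align*}
\text{(P)}\quad \max_{v}\ & \sum_{s} \alpha(s)\, v(s) \\
\text{s.t.}\ & v(s) \le c(s,a) + \gamma \sum_{s'} P(s' \mid s,a)\, v(s') \quad \forall s,\ \forall a.
\end{align*}

% Dual LP in the occupation measures x
\begin{align*}
\text{(D)}\quad \min_{x \ge 0}\ & \sum_{s,a} c(s,a)\, x(s,a) \\
\text{s.t.}\ & \sum_{a} x(s,a) - \gamma \sum_{s',a} P(s \mid s',a)\, x(s',a) = \alpha(s) \quad \forall s.
\end{align*}
```

    The value iteration algorithm that the abstract reinterprets through the primal-dual lens is sketched below for the same model. This is a minimal illustration of standard value iteration for cost minimization; the array layout (P, c) and the stopping rule are illustrative choices, not details taken from the paper.

```python
import numpy as np

def value_iteration(P, c, gamma, tol=1e-8, max_iter=10_000):
    """Standard value iteration for a discounted-cost MDP.

    P : (A, S, S) array, P[a, s, t] = Pr(next state t | state s, action a)
    c : (S, A) array of immediate costs
    gamma : discount factor in (0, 1)
    Returns an (approximate) optimal value function and a greedy policy.
    """
    num_actions, num_states, _ = P.shape
    v = np.zeros(num_states)
    for _ in range(max_iter):
        # Bellman backup: q[s, a] = c[s, a] + gamma * sum_t P[a, s, t] * v[t]
        q = c + gamma * np.einsum("ast,t->sa", P, v)
        v_new = q.min(axis=1)  # greedy (minimum-cost) improvement
        if np.max(np.abs(v_new - v)) < tol:
            v = v_new
            break
        v = v_new
    return v, q.argmin(axis=1)

# Toy 2-state, 2-action example with arbitrary numbers.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.6, 0.4]]])
c = np.array([[1.0, 2.0], [0.5, 3.0]])
v_star, pi_star = value_iteration(P, c, gamma=0.9)
```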
  • Keywords
    "Markov processes","Convergence","Optimization","Polynomials","Computational complexity","Linear programming","Computational modeling"
  • Publisher
    ieee
  • Conference_Title
    2015 European Control Conference (ECC)
  • Type
    conf
  • DOI
    10.1109/ECC.2015.7330554
  • Filename
    7330554