Title :
Random Sampling of States in Dynamic Programming
Author :
Atkeson, Christopher G. ; Stephens, Benjamin J.
Author_Institution :
Robotics Institute, Carnegie Mellon University, Pittsburgh, PA
Abstract :
We combine three threads of research on approximate dynamic programming: sparse random sampling of states, value function and policy approximation using local models, and the use of local trajectory optimizers to globally optimize a policy and its associated value function. Our focus is on finding steady-state policies for deterministic, time-invariant, discrete-time control problems with continuous states and actions, as are often found in robotics. In this paper, we describe our approach and provide initial results on several simulated robotics problems.
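To make the general idea concrete, the following is a minimal sketch of fitted value iteration over a sparse random sample of continuous states, using a simple nearest-neighbor local approximation of the value function. The problem (a 1-D double integrator), the cost weights, and all function names are illustrative assumptions; the paper's actual method additionally uses local quadratic models and local trajectory optimizers, which are not reproduced here.

```python
import numpy as np

# Hypothetical problem: 1-D double integrator with state x = (position, velocity),
# quadratic one-step cost, deterministic discrete-time dynamics.
# All constants below are illustrative, not taken from the paper.
DT, GAMMA = 0.1, 0.99
ACTIONS = np.linspace(-1.0, 1.0, 11)            # discretized action set

def step(x, u):
    """Deterministic, time-invariant dynamics."""
    pos, vel = x
    return np.array([pos + DT * vel, vel + DT * u])

def cost(x, u):
    """One-step quadratic cost on state and action."""
    return DT * (x @ x + 0.1 * u * u)

# Sparse random sampling of states from the region of interest.
rng = np.random.default_rng(0)
N = 500
states = rng.uniform(low=[-2.0, -2.0], high=[2.0, 2.0], size=(N, 2))
values = np.zeros(N)

def value(x):
    """Local value-function approximation: inverse-distance weighting over
    the k nearest sampled states (a stand-in for the local models used in
    the paper)."""
    k = 5
    d = np.linalg.norm(states - x, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-6)
    return np.dot(w, values[idx]) / w.sum()

# Fitted value iteration over the sampled states.
for sweep in range(200):
    new_values = np.array([
        min(cost(x, u) + GAMMA * value(step(x, u)) for u in ACTIONS)
        for x in states
    ])
    if np.max(np.abs(new_values - values)) < 1e-4:
        break
    values = new_values

def policy(x):
    """Greedy policy read off the approximate value function."""
    return min(ACTIONS, key=lambda u: cost(x, u) + GAMMA * value(step(x, u)))

print("V(origin) ~", value(np.zeros(2)), "  u(1, 0) ~", policy(np.array([1.0, 0.0])))
```

The sketch illustrates only the first thread (random state sampling with a local value-function approximation); in the paper, the sampled states also seed local trajectory optimizations whose results refine the policy and value function globally.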
Keywords :
continuous systems; discrete time systems; dynamic programming; optimal control; random processes; robots; sampling methods; associated value function; continuous states; deterministic time-invariant discrete time control; local trajectory optimizers; policy approximation; random state sampling; robotics; sparse random sampling; steady-state policy; Cost function; Robot programming; Robustness; State estimation; Steady-state; Torque; random sampling; Computer Simulation; Feedback; Models, Statistical; Nonlinear Dynamics; Programming, Linear; Systems Theory;
Journal_Title :
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
DOI :
10.1109/TSMCB.2008.926610