DocumentCode :
2498469
Title :
Model-building semi-Markov adaptive critics
Author :
Gosavi, Abhijit ; Murray, Susan L. ; Hu, Jiaqiao
Author_Institution :
Dept. of Eng. Manage. & Syst. Eng., Missouri S&T, Rolla, MO, USA
fYear :
2011
fDate :
11-15 April 2011
Firstpage :
170
Lastpage :
175
Abstract :
Adaptive or actor critics are a class of reinforcement learning (RL) or approximate dynamic programming (ADP) algorithms in which one searches over stochastic policies in order to determine the optimal deterministic policy. Classically, these algorithms have been studied for Markov decision processes (MDPs) in the context of model-free updates, in which transition probabilities are avoided altogether. A model-free version for the semi-MDP (SMDP) under discounted reward, in which the time taken by each transition can be a random variable, was proposed by Gosavi. In this paper, we propose a variant in which the transition probability model is built simultaneously with the value function and the action-probability functions. While our new algorithm does not require the transition probabilities a priori, it generates them along with the estimates of the value function and the action-probability functions required in adaptive critics. Model-building and model-based versions of algorithms have numerous advantages over their model-free counterparts. In particular, they are more stable and may require less training. However, the additional steps of building the model may require increased storage in the computer's memory. In addition to enumerating potential application areas for our algorithm, we analyze the advantages and disadvantages of model building.
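The abstract's scheme can be illustrated with a minimal tabular sketch. Everything below is an illustrative assumption, not the paper's algorithm: a two-state SMDP simulator with exponentially distributed transition times, a softmax actor over action preferences, a TD-style critic that discounts over the random sojourn time, and empirical transition counts that build the model alongside the value and action-probability functions.

```python
import math
import random
from collections import defaultdict

# Hypothetical two-state, two-action SMDP (illustrative; not from the paper).
STATES = [0, 1]
ACTIONS = [0, 1]
RATE = 0.1  # continuous-time discount rate for the discounted-reward SMDP

def simulate(s, a):
    """Illustrative simulator: returns (next_state, reward, sojourn_time)."""
    weights = [0.7, 0.3] if a == 0 else [0.4, 0.6]
    s2 = random.choices(STATES, weights=weights)[0]
    r = 1.0 if s2 == 1 else 0.0
    t = random.expovariate(1.0)  # random transition time
    return s2, r, t

V = defaultdict(float)   # critic: value function estimates
H = defaultdict(float)   # actor: action preferences -> probabilities via softmax
counts = defaultdict(lambda: defaultdict(int))  # model building: counts[(s, a)][s2]

def policy(s):
    """Softmax action probabilities from the actor's preferences."""
    prefs = [H[(s, a)] for a in ACTIONS]
    m = max(prefs)
    exps = [math.exp(p - m) for p in prefs]
    z = sum(exps)
    return [e / z for e in exps]

random.seed(0)
alpha, beta = 0.1, 0.05  # critic and actor step sizes (assumed values)
s = 0
for _ in range(5000):
    a = random.choices(ACTIONS, weights=policy(s))[0]
    s2, r, t = simulate(s, a)
    counts[(s, a)][s2] += 1              # build the transition model as we go
    disc = math.exp(-RATE * t)           # discount over the random sojourn time
    delta = r + disc * V[s2] - V[s]      # TD error for the SMDP
    V[s] += alpha * delta                # critic update
    H[(s, a)] += beta * delta            # actor update
    s = s2

# Estimated transition probabilities recovered from the built model.
total = sum(counts[(0, 0)].values())
p_hat = {s2: c / total for s2, c in counts[(0, 0)].items()}
```

The extra storage the abstract mentions is visible here: `counts` holds one counter per observed (state, action, next-state) triple, on top of the value and preference tables a model-free critic would keep.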
Keywords :
Markov processes; decision theory; dynamic programming; learning (artificial intelligence); probability; random processes; ADP algorithms; Gosavi; Markov decision processes; RL; SMDP; action-probability functions; actor critics; approximate dynamic programming algorithms; computer memory; discounted reward; model-building semi-Markov adaptive critics; model-free counterparts; model-free updates; model-free version; optimal deterministic policy; random variable; reinforcement learning; semi-MDP; stochastic policy; transition probabilities a priori; transition probability model; value function estimation; Adaptation models; Approximation algorithms; Buildings; Computational modeling; Convergence; Heuristic algorithms; Mathematical model
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Adaptive Dynamic Programming And Reinforcement Learning (ADPRL), 2011 IEEE Symposium on
Conference_Location :
Paris
Print_ISBN :
978-1-4244-9887-1
Type :
conf
DOI :
10.1109/ADPRL.2011.5967374
Filename :
5967374