DocumentCode :
586551
Title :
What good are actions? Accelerating learning using learned action priors
Author :
Rosman, Benjamin ; Ramamoorthy, Subramanian
Author_Institution :
School of Informatics, University of Edinburgh, Edinburgh, UK
fYear :
2012
fDate :
7-9 Nov. 2012
Firstpage :
1
Lastpage :
6
Abstract :
The computational complexity of learning in sequential decision problems grows exponentially with the number of actions available to the agent at each state. We present a method for accelerating this process by learning action priors that express the usefulness of each action in each state. These priors are learned from the optimal policies of many tasks in the same state space and are used to bias exploration away from less useful actions. This is shown to improve performance for tasks in the same domain but with different goals. We extend our method to base action priors on perceptual cues rather than absolute states, allowing these priors to be transferred between tasks with differing state spaces and transition functions, and demonstrate experimentally the advantages of learning with action priors in a reinforcement learning context.
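Illustration (not part of the original record): the sketch below shows one plausible reading of the abstract in Python, where a prior over actions is counted from the optimal policies of earlier tasks and then used to weight the random step of epsilon-greedy exploration. The function names, the smoothing constant alpha, and the epsilon-greedy wrapper are assumptions for illustration, not the paper's actual algorithm.

```python
import random
from collections import defaultdict


def learn_action_priors(optimal_policies, actions, alpha=1.0):
    """Count how often each action is selected by the optimal policies of
    previous tasks, then normalise the counts (with additive smoothing
    alpha) into a per-state distribution over actions."""
    counts = defaultdict(lambda: {a: alpha for a in actions})
    for policy in optimal_policies:          # each policy: dict state -> action
        for state, action in policy.items():
            counts[state][action] += 1.0
    priors = {}
    for state, c in counts.items():
        total = sum(c.values())
        priors[state] = {a: c[a] / total for a in actions}
    return priors


def prior_biased_action(q_values, priors, state, actions, epsilon=0.1):
    """Epsilon-greedy action selection in which the exploration step
    samples from the learned prior rather than uniformly, steering
    exploration away from actions that were rarely useful in past tasks."""
    if random.random() < epsilon:
        prior = priors.get(state, {a: 1.0 / len(actions) for a in actions})
        return random.choices(actions, weights=[prior[a] for a in actions])[0]
    return max(actions, key=lambda a: q_values[(state, a)])
```

A learner for a new task in the same domain would call `learn_action_priors` once on the stored policies, then use `prior_biased_action` in place of uniform epsilon-greedy selection during Q-learning.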
Keywords :
computational complexity; decision theory; learning (artificial intelligence); learned action priors; learning acceleration; learning computational complexity; optimal policies; reinforcement learning context; sequential decision problems; state spaces; Acceleration; Context; Educational institutions; Learning; Robots; Standards; Training;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
2012 IEEE International Conference on Development and Learning and Epigenetic Robotics (ICDL)
Conference_Location :
San Diego, CA
Print_ISBN :
978-1-4673-4964-2
Electronic_ISBN :
978-1-4673-4963-5
Type :
conf
DOI :
10.1109/DevLrn.2012.6400810
Filename :
6400810