DocumentCode :
2465748
Title :
Symbol generation and feature selection for reinforcement learning agents using affordances and U-Trees
Author :
Oladell, Marcus ; Huber, Manfred
Author_Institution :
Dept. of Comput. Sci. & Eng., Univ. of Texas at Arlington, Arlington, TX, USA
fYear :
2012
fDate :
14-17 Oct. 2012
Firstpage :
657
Lastpage :
662
Abstract :
One of the challenges for artificial agents is managing the complexity of their environment and task domain as they learn increasingly difficult tasks. This is especially true of agents grounded in the physical world, which contains a vast number of features and potentially very complex dynamics. A scalable approach to forming, managing, and re-using compact, grounded representations that address the state explosion problem is thus a prerequisite for physically grounded, agent-based systems that can apply past experience to new tasks and communicate that experience to other agents. To achieve this, agents must be able to form conceptual features that are relevant for and re-usable in their task domain without outside intervention, and to focus their attention on only those features and concepts relevant to the task at hand. This paper presents a framework for managing state complexity by automatically constructing abstract, symbolic features that encode important task- and domain-relevant properties and partition the raw feature space, so that the agent need only consider a compressed view of the environment when learning new tasks. To exploit this, the framework uses U-Trees during the learning of new tasks to construct minimal feature sets, and thus compact state representations, allowing for potentially significant improvements in learning times.
Keywords :
computational complexity; learning (artificial intelligence); multi-agent systems; compact state representations; domain-relevant properties; environment complexity; feature selection; grounded representations; physically grounded agent-based systems; reinforcement learning agents; state complexity; state explosion problem; symbol generation; U-Trees; Abstracts; Complexity theory; Context; Grippers; Learning; Robots; Training data; Affordance-Based Learning; Reinforcement Learning; State Abstraction; Symbol Grounding;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Systems, Man, and Cybernetics (SMC), 2012 IEEE International Conference on
Conference_Location :
Seoul
Print_ISBN :
978-1-4673-1713-9
Electronic_ISBN :
978-1-4673-1712-2
Type :
conf
DOI :
10.1109/ICSMC.2012.6377801
Filename :
6377801