DocumentCode :
2522988
Title :
Grounding abstraction in sensory experience
Author :
Rastegar, Farzad ; Ahmadabadi, Majid Nili
Author_Institution :
Univ. of Tehran, Tehran
fYear :
2007
fDate :
4-7 Sept. 2007
Firstpage :
1
Lastpage :
8
Abstract :
In order to make appropriate decisions, intelligent creatures narrow their sensory information down to high-level, abstract knowledge. Inspired by recent findings in neuroscience on the role of mirror neurons in action-based abstraction, in this study we propose a two-phase framework whereby a reinforcement learning (RL) agent attempts to understand its environment via meaningful, temporally extended concepts in an unsupervised way. Throughout the process of concept extraction, the RL agent makes use of a model of short-term and long-term memory to retrieve meaningful concepts from its environment. Empirical results obtained with e-puck robots demonstrate the capability of the proposed approach to retrieve meaningful concepts from environments. Moreover, simulation results show that the proposed approach can help an agent learn once and apply its knowledge in other environments with a similar structure without any further learning.
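The abstract describes a two-phase scheme: an RL agent first gathers experience, then consolidates recurring, temporally extended trajectories from a short-term buffer into long-term "concepts". The following is a minimal, hypothetical sketch of that kind of pipeline, not the authors' actual algorithm; the environment (GridWorld), the memory model (ConceptMemory), the promotion rule, and all parameter names are assumptions made only for illustration.

```python
# Illustrative sketch only: GridWorld, ConceptMemory, and the promotion rule
# are assumptions, not the method from the paper.
import random
from collections import defaultdict, deque

class GridWorld:
    """Tiny 1-D corridor: start at cell 0, reward at the right end."""
    def __init__(self, length=6):
        self.length, self.state = length, 0
    def reset(self):
        self.state = 0
        return self.state
    def step(self, action):  # action: 0 = left, 1 = right
        self.state = max(0, min(self.length - 1, self.state + (1 if action else -1)))
        done = self.state == self.length - 1
        return self.state, (1.0 if done else 0.0), done

class ConceptMemory:
    """Short-term buffer of recent transitions; trajectories that recur
    often enough are promoted to long-term 'concepts'."""
    def __init__(self, stm_size=20, promote_after=3):
        self.stm = deque(maxlen=stm_size)   # short-term memory
        self.ltm = defaultdict(int)         # candidate concept -> occurrence count
        self.promote_after = promote_after
    def observe(self, state, action):
        self.stm.append((state, action))
    def consolidate(self):
        """After a rewarded episode, store the recent trajectory as a
        candidate concept and return those seen often enough."""
        self.ltm[tuple(self.stm)] += 1
        self.stm.clear()
        return [c for c, n in self.ltm.items() if n >= self.promote_after]

def q_learning(episodes=200, alpha=0.5, gamma=0.95, eps=0.1):
    env, memory = GridWorld(), ConceptMemory()
    Q = defaultdict(float)                  # (state, action) -> value
    concepts = []
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:                     # phase 1: learn from raw experience
            a = random.randint(0, 1) if random.random() < eps else \
                max((0, 1), key=lambda act: Q[(s, act)])
            s2, r, done = env.step(a)
            memory.observe(s, a)
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, 0)], Q[(s2, 1)]) - Q[(s, a)])
            s = s2
        concepts = memory.consolidate()     # phase 2: extract recurring concepts
    return Q, concepts

if __name__ == "__main__":
    Q, concepts = q_learning()
    print(f"extracted {len(concepts)} recurring trajectory concepts")
```

Concepts extracted this way are just recurring (state, action) trajectories; in a transfer setting they could be reused in a structurally similar environment without relearning, which is the effect the abstract reports for its own concept representation.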
Keywords :
multi-agent systems; unsupervised learning; abstract knowledge; concept extraction; e-puck robot; intelligent creature; long-term memory; reinforcement learning agent; sensory information; short-term memory; Animals; Data mining; Decision making; Grounding; Intelligent sensors; Learning; Mirrors; Neurons; Neuroscience; Robots;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
2007 IEEE/ASME International Conference on Advanced Intelligent Mechatronics
Conference_Location :
Zurich
Print_ISBN :
978-1-4244-1263-1
Electronic_ISBN :
978-1-4244-1264-8
Type :
conf
DOI :
10.1109/AIM.2007.4412565
Filename :
4412565