DocumentCode :
2522341
Title :
A Bayesian approach to conceptualization using reinforcement learning
Author :
Amizadeh, Saeed ; Ahmadabadi, Majid Nili ; Araabi, Babak N. ; Siegwart, Roland
Author_Institution :
Univ. of Tehran, Tehran
fYear :
2007
fDate :
4-7 Sept. 2007
Firstpage :
1
Lastpage :
7
Abstract :
Abstraction provides cognition economy and generalization skill, in addition to facilitating knowledge communication, for learning agents situated in the real world. Concept learning introduces a way of abstraction that maps the continuous state and action spaces into entities called concepts. Among computational concept learning approaches, action-based conceptualization is favored because of its simplicity and its mirror-neuron foundations in neuroscience. In this paper, a new biologically inspired concept learning approach based on the Bayesian framework is proposed. This approach exploits and extends the mirror neuron's role in conceptualization for a reinforcement learning agent in nondeterministic environments. In the proposed method, an agent sequentially learns concepts from both its successes and its failures through interaction with the environment. These characteristics as a whole distinguish the proposed learning algorithm from positive-sample learning. Simulation results show the correct formation of the concepts' distributions in perceptual space, in addition to the benefits of utilizing both successes and failures in terms of convergence speed and asymptotic behavior. Experimental results, in turn, show the applicability and effectiveness of our method on a real robotic task, wall-following.
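The abstract's central point, that Bayesian concept learning benefits from failures as well as successes, can be illustrated with a minimal sketch. The class and variable names below are hypothetical, not from the paper; it uses a conjugate Beta-Bernoulli posterior over action success per discretized perceptual state, whereas the paper itself works with distributions over a continuous perceptual space.

```python
import random

# Hypothetical sketch: Bayesian concept formation from successes AND failures.
# Each "concept" holds a Beta posterior over the probability that the
# associated action succeeds in a discretized perceptual state.

class Concept:
    def __init__(self):
        self.alpha = 1.0  # pseudo-count of successes (uniform Beta(1,1) prior)
        self.beta = 1.0   # pseudo-count of failures

    def update(self, success):
        # Conjugate Beta-Bernoulli update: a failure sharpens the posterior
        # just as a success does, unlike positive-sample-only learning.
        if success:
            self.alpha += 1
        else:
            self.beta += 1

    def mean(self):
        # Posterior mean probability of success for this concept.
        return self.alpha / (self.alpha + self.beta)

concepts = {}  # perceptual state -> Concept

def observe(state, success):
    concepts.setdefault(state, Concept()).update(success)

# Toy interaction loop: state 0 mostly succeeds, state 1 mostly fails.
random.seed(0)
for _ in range(200):
    s = random.randint(0, 1)
    observe(s, random.random() < (0.9 if s == 0 else 0.2))

print({s: round(c.mean(), 2) for s, c in sorted(concepts.items())})
```

Because every failure adds evidence, the posterior for each state converges with fewer interactions than it would if only successful episodes were counted, mirroring the convergence-speed benefit the abstract reports.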
Keywords :
Bayes methods; generalisation (artificial intelligence); learning (artificial intelligence); multi-agent systems; neural nets; probability; Bayesian approach; action-based conceptualization; biologically inspired concept learning approach; cognition economy; generalization skill; knowledge communication; mirror neuron; nondeterministic environment; probability method; reinforcement learning agent; robotic wall-following task; Bayesian methods; Biological information theory; Decision making; Encoding; Learning; Mirrors; Neurons; Orbital robotics; Signal processing; Uncertainty;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Advanced Intelligent Mechatronics, 2007 IEEE/ASME International Conference on
Conference_Location :
Zurich
Print_ISBN :
978-1-4244-1263-1
Electronic_ISBN :
978-1-4244-1264-8
Type :
conf
DOI :
10.1109/AIM.2007.4412531
Filename :
4412531