DocumentCode :
2692923
Title :
Learning to deal with objects
Author :
Malfaz, María ; Salichs, Miguel A.
Author_Institution :
RoboticsLab, Carlos III Univ. of Madrid, Leganés, Spain
fYear :
2009
fDate :
5-7 June 2009
Firstpage :
1
Lastpage :
6
Abstract :
This paper presents a modification of the standard Q-learning algorithm: Object Q-learning (OQ-learning). An autonomous agent should be able to decide its own goals and the behaviours needed to fulfil them. When the agent has no previous knowledge, it must learn what to do in every state (its behaviour policy). With Q-learning, this means learning the utility value Q of each state-action pair. Typically, an autonomous agent living in a complex environment has to interact with the different objects present in that world, and the number of states describing the agent's relation to those objects grows as the number of objects increases, making the learning process hard to handle. The proposed modification is designed to cope with this problem. The experimental results demonstrate the usefulness of OQ-learning in this situation, in comparison with the standard Q-learning algorithm.
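A minimal sketch of the idea described in the abstract, assuming a tabular Q-learning update and a hypothetical per-object decomposition of the state; the exact OQ-learning formulation is given in the paper, not here, so the names and the way the per-object tables are combined are illustrative assumptions only:

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

def q_update(Q, state, action, reward, next_state, actions):
    """Standard Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(next_state, a)] for a in actions) if actions else 0.0
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

def choose_action(Q, state, actions):
    """Epsilon-greedy action selection over a single Q-table."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

# Monolithic Q-learning: one table over the joint state of the agent and all objects,
# so the number of entries grows combinatorially with the number of objects.
Q_joint = defaultdict(float)

# Object-based decomposition (assumption for illustration, not the authors' exact rule):
# keep one Q-table per object, indexed only by the agent's state relative to that object,
# so adding objects adds tables instead of multiplying the joint state space.
Q_per_object = defaultdict(lambda: defaultdict(float))
```

The design point illustrated is only the state-space argument from the abstract: factoring the state by object keeps each table small, at the cost of needing some rule (defined in the paper) to combine the per-object values when selecting an action.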
Keywords :
learning (artificial intelligence); action-state pair; autonomous agent; complex environment; learning process; object Q-learning; standard learning algorithm; utility value; Artificial intelligence; Autonomous agents; Decision making; Learning; Standards development; Q-Learning; autonomous agents; decision making; objects;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Development and Learning, 2009. ICDL 2009. IEEE 8th International Conference on
Conference_Location :
Shanghai
Print_ISBN :
978-1-4244-4117-4
Electronic_ISBN :
978-1-4244-4118-1
Type :
conf
DOI :
10.1109/DEVLRN.2009.5175508
Filename :
5175508