DocumentCode :
1227179
Title :
A Study on Expertise of Agents and Its Effects on Cooperative Q-Learning
Author :
Araabi, Babak Nadjar ; Mastoureshgh, Sahar ; Ahmadabadi, Majid Nili
Author_Institution :
Electr. & Comput. Eng. Dept., Tehran Univ.
Volume :
37
Issue :
2
fYear :
2007
fDate :
4/1/2007
Firstpage :
398
Lastpage :
409
Abstract :
Cooperation in learning (CL) can be realized in a multiagent system if agents are capable of learning from both their own experience and other agents' knowledge and expertise. In CL, these extra resources are exploited to achieve higher efficiency and faster learning than in individual learning (IL). In the real world, however, implementing CL is not a straightforward task, in part because of possible differences in agents' areas of expertise (AOEs). In this paper, homogeneous reinforcement-learning agents are considered in an environment with multiple goals or tasks. As a result, they become experts in different domains with different degrees of expertness. Each agent uses a one-step Q-learning algorithm and is capable of exchanging its Q-table with those of its teammates. Two crucial questions are addressed in this paper: "How can the AOE of an agent be extracted?" and "How can agents improve their performance in CL by knowing their AOEs?" An algorithm is developed to extract the AOE based on state transitions, serving as a gold standard from a behavioral point of view. Moreover, it is discussed how the AOE can be implicitly obtained through agents' expertness at the state level. Three new methods for CL through the combination of Q-tables are developed, and their overall performance after CL is examined. The performance of the developed methods is compared with that of IL, strategy sharing (SS), and weighted SS (WSS). The obtained results show the superior performance of the AOE-based methods compared with existing CL methods that do not use the notion of AOE. These results strongly support the idea that cooperation based on the AOE performs better than general CL methods.
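The following is a minimal illustrative sketch, not the authors' code, of the two ingredients the abstract names: a one-step Q-learning update and an expertness-weighted combination of teammates' Q-tables in the spirit of weighted strategy sharing (WSS). The table sizes, learning parameters, and the expertness proxy used here are assumptions made only for the example.

import numpy as np

N_STATES, N_ACTIONS = 25, 4
ALPHA, GAMMA = 0.1, 0.9   # assumed learning rate and discount factor


def q_update(Q, s, a, r, s_next):
    """One-step Q-learning update for a single (s, a, r, s') transition."""
    td_target = r + GAMMA * np.max(Q[s_next])
    Q[s, a] += ALPHA * (td_target - Q[s, a])
    return Q


def combine_q_tables(q_tables, expertness):
    """Combine teammates' Q-tables with weights proportional to expertness.

    q_tables   : list of (N_STATES, N_ACTIONS) arrays, one per agent
    expertness : per-agent non-negative scalar expertness values
    """
    w = np.asarray(expertness, dtype=float)
    w = w / w.sum()                      # normalize weights to sum to 1
    return sum(wi * Q for wi, Q in zip(w, q_tables))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    agents = [np.zeros((N_STATES, N_ACTIONS)) for _ in range(3)]
    # each agent learns individually from its own (randomly generated) transitions
    for Q in agents:
        for _ in range(100):
            s, a = rng.integers(N_STATES), rng.integers(N_ACTIONS)
            q_update(Q, s=s, a=a, r=rng.random(), s_next=rng.integers(N_STATES))
    # a crude expertness proxy for illustration; the paper defines its own measures
    expertness = [np.abs(Q).sum() for Q in agents]
    Q_shared = combine_q_tables(agents, expertness)
    print(Q_shared.shape)

In the AOE-based methods the paper proposes, such weights would further depend on where each agent is expert (e.g., per state or per region), rather than on a single global expertness value as in this sketch.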
Keywords :
learning (artificial intelligence); multi-agent systems; Q-table; cooperative Q-learning; multiagent system; reinforcement-learning homogeneous agent; Data mining; Gold; Humans; Intelligent agent; Intelligent control; Learning systems; Multiagent systems; Process control; Routing; Standards development; Area of expertise (AOE); cooperative Q-learning agents; cooperative Q-learning using AOE; extraction of AOE; multiagent systems (MASs); Algorithms; Artificial Intelligence; Computer Simulation; Cooperative Behavior; Decision Support Techniques; Expert Systems; Models, Theoretical; Pattern Recognition, Automated;
fLanguage :
English
Journal_Title :
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
Publisher :
IEEE
ISSN :
1083-4419
Type :
jour
DOI :
10.1109/TSMCB.2006.883264
Filename :
4126273