Title of article :
Perspectives on multiagent learning
Author/Authors :
Tuomas Sandholm
Issue Information :
Journal issue, serial year 2007
Pages :
10
From page :
382
To page :
391
Abstract :
I lay out a slight refinement of Shoham et al.'s taxonomy of agendas that I consider sensible for multiagent learning (MAL) research. It is not intended to be rigid: senseless work can be done within these agendas and additional sensible agendas may arise. Within each agenda, I identify issues and suggest directions. In the computational agenda, direct algorithms are often more efficient, but MAL plays a role especially when the rules of the game are unknown or direct algorithms are not known for the class of games. In the descriptive agenda, more emphasis should be placed on establishing what classes of learning rules actually model learning by multiple humans or animals. Also, the agenda is, in a way, circular. This has a positive side too: it can be used to verify the learning models. In the prescriptive agendas, the desiderata need to be made clear and should guide the design of MAL algorithms. The algorithms need not mimic humans' or animals' learning. I discuss some worthy desiderata; some from the literature do not seem well motivated. The learning problem is interesting both in cooperative and noncooperative settings, but the concerns are quite different. For many, if not most, noncooperative settings, future work should increasingly consider the learning itself strategically.
Keywords :
Multiagent learning , Learning in games , Reinforcement learning , Game theory
Journal title :
Artificial Intelligence
Serial Year :
2007
Record number :
1207533