Title :
Socially augmented hierarchical reinforcement learning for reducing complexity in cooperative multi-agent systems
Author :
Sun, Xueqing; Ray, Laura E.; Kralik, Jerald D.; Shi, Dongqing
Author_Institution :
Thayer School of Engineering, Dartmouth College, Hanover, NH, USA
Abstract :
This paper addresses the inherent complexity of coordinating the learned behavioral strategies of multiple agents working toward a common goal. Because of interactions among the agents, a primary challenge of policy learning is computational complexity that escalates with the number of agents and with the size of the task space (including action choices and world states). We employ an approach that incorporates social constructs, based on analogs from the biological systems of high-functioning mammals, to constrain state-action choices in reinforcement learning. Additionally, we use state-space abstraction and a hierarchical learning structure to improve learning efficiency. Theoretical results bound the reduction in computational complexity due to state abstraction, hierarchical learning, and socially constrained action selection in learning problems that can be described as decentralized Markov decision processes. Simulation results show that these theoretical bounds hold and that satisficing multi-agent coordination policies emerge, reducing task completion time, computational cost, and memory resources compared to learning with no social knowledge.
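Illustrative_Sketch :
The approach described in the abstract constrains reinforcement-learning action selection with social constructs drawn from mammalian analogs such as dominance hierarchies. The following minimal Python sketch is a hypothetical illustration, not the authors' implementation: the action names, dictionary state encoding, and rank-based social_filter are assumptions made here to show how pruning the action set before epsilon-greedy selection shrinks the state-action space a tabular Q-learner must explore.

    import random
    from collections import defaultdict

    ACTIONS = ["forage", "yield", "approach", "wait"]

    def social_filter(state, actions, my_rank, peer_rank):
        """Dominance-hierarchy analog (assumed here): a subordinate agent
        does not contest a resource a more dominant peer is engaged with,
        so 'approach' is pruned from its action set."""
        if state.get("peer_at_resource") and peer_rank < my_rank:  # lower value = more dominant
            return [a for a in actions if a != "approach"]
        return actions

    class SocialQLearner:
        def __init__(self, alpha=0.1, gamma=0.95, epsilon=0.1, rank=1):
            self.q = defaultdict(float)  # (state_key, action) -> estimated value
            self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
            self.rank = rank

        def act(self, state, peer_rank):
            key = tuple(sorted(state.items()))  # hashable abstract state key
            allowed = social_filter(state, ACTIONS, self.rank, peer_rank)
            if random.random() < self.epsilon:  # explore within the constrained set
                return random.choice(allowed)
            return max(allowed, key=lambda a: self.q[(key, a)])

        def update(self, state, action, reward, next_state, peer_rank):
            key = tuple(sorted(state.items()))
            nkey = tuple(sorted(next_state.items()))
            allowed = social_filter(next_state, ACTIONS, self.rank, peer_rank)
            best_next = max(self.q[(nkey, a)] for a in allowed)
            target = reward + self.gamma * best_next
            self.q[(key, action)] += self.alpha * (target - self.q[(key, action)])

    # Usage: a subordinate agent (rank 2) never selects "approach" while a
    # dominant peer (rank 1) occupies the resource, so those state-action
    # pairs are never visited or stored.
    learner = SocialQLearner(rank=2)
    state = {"peer_at_resource": True}
    print(learner.act(state, peer_rank=1))

Because the filter removes actions both at selection time and in the max over next-state values, the learner never allocates Q-table entries for socially excluded state-action pairs; this per-agent pruning is the kind of complexity-reduction mechanism that the abstract's theoretical bounds formalize.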
Keywords :
Markov processes; computational complexity; decision making; learning (artificial intelligence); multi-agent systems; Markov decision processes; biological systems; cooperative multi-agent systems; socially augmented hierarchical reinforcement learning; socially constrained action selection; state-space abstraction
Conference_Title :
2010 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Conference_Location :
Taipei, Taiwan
Print_ISBN :
978-1-4244-6674-0
DOI :
10.1109/IROS.2010.5652923