Title of article :
Collaborative multi-agent reinforcement learning based on a novel coordination tree frame with dynamic partition
Author/Authors :
Fang, Min; Groen, Frans C.A.; Li, Hao; Zhang, Jujie
Abstract :
In the research of team Markov games, computing the coordinating team dynamically and determining the joint action policy are the main problems. To deal with the first problem, a dynamic team partitioning method is proposed based on a novel coordination tree frame. We build a coordination tree whose nodes are agent subsets and define two breaching weights to represent the cost for an agent to cooperate with an agent subset. Based on the coordination tree, each agent chooses the agent subset with the minimum cost as its coordinating team. A Q-learning algorithm based on belief allocation learns the multi-agent joint action policy, which helps the cooperative multi-agents' joint action policy converge to the optimum solution. We perform experiments in multiple simulation environments and compare the proposed algorithm with similar ones. Experimental results show that the proposed algorithms are able to dynamically compute the cooperative teams and design the optimum joint action policy for them.
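The abstract does not give the algorithmic details, but a minimal sketch may help illustrate the two ideas it names: selecting a coordinating team as the minimum-cost agent subset in a coordination tree, and a Q-learning update whose team reward is split by a belief allocation. All names and values below (TreeNode, choose_coordination_team, q_update, the cost and belief numbers) are illustrative assumptions, not the paper's definitions.

```python
from collections import defaultdict

# Hypothetical coordination tree: each node holds an agent subset and a
# cooperation cost for the deciding agent (the weight definition is an
# assumption, not the paper's).
class TreeNode:
    def __init__(self, agent_subset, coop_cost, children=None):
        self.agent_subset = frozenset(agent_subset)
        self.coop_cost = coop_cost
        self.children = children or []

def choose_coordination_team(root):
    """Traverse the coordination tree and return the agent subset with the
    minimum cooperation cost (one plausible reading of the abstract)."""
    best_subset, best_cost = root.agent_subset, root.coop_cost
    stack = list(root.children)
    while stack:
        node = stack.pop()
        if node.coop_cost < best_cost:
            best_subset, best_cost = node.agent_subset, node.coop_cost
        stack.extend(node.children)
    return best_subset

# Tabular Q-learning with a belief-based credit allocation: the team reward
# is shared among agents in proportion to assumed belief weights.
def q_update(Q, agent, state, joint_action, reward, next_state, next_actions,
             belief, alpha=0.1, gamma=0.95):
    target = belief[agent] * reward + gamma * max(
        Q[(agent, next_state, a)] for a in next_actions)
    Q[(agent, state, joint_action)] += alpha * (
        target - Q[(agent, state, joint_action)])

if __name__ == "__main__":
    # Toy tree: agent 0 weighs cooperating with {1}, {1, 2}, or {2}.
    root = TreeNode({0}, 1.0, [
        TreeNode({0, 1}, 0.6),
        TreeNode({0, 1, 2}, 0.9, [TreeNode({0, 2}, 0.4)]),
    ])
    print("chosen team:", sorted(choose_coordination_team(root)))

    Q = defaultdict(float)
    belief = {0: 0.5, 1: 0.3, 2: 0.2}   # assumed belief allocation
    q_update(Q, agent=0, state="s0", joint_action=("a0", "a1"),
             reward=1.0, next_state="s1",
             next_actions=[("a0", "a1"), ("a1", "a1")], belief=belief)
    print("Q after one update:", dict(Q))
```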
Keywords :
multi-agent, Coordination tree, Markov games, belief propagation, Q-learning
Journal title :
Astroparticle Physics