DocumentCode
174239
Title
Multi-agent path planning in unknown environment with reinforcement learning and neural network
Author
Luviano Cruz, David; Wen Yu
Author_Institution
Dept. de Control Automatico, CINVESTAV-IPN, Mexico City, Mexico
fYear
2014
fDate
5-8 Oct. 2014
Firstpage
3458
Lastpage
3463
Abstract
Path planning for multiple agents is much harder than for a single agent. Reinforcement learning (RL) is a popular method for this problem; however, it cannot solve the path planning problem directly in an unknown environment. In this paper, a neural network (NN) is applied to estimate the unvisited space. The traditional multi-agent reinforcement learning is modified by this neural approximation. The path planning in this paper consists of two stages: we first use RL to generate training samples for the NN; the trained NN then gives an approximate action to the agents. The advantage of this method is that we do not need to repeat RL for unvisited states. Experimental results show that the proposed algorithm can generate suboptimal paths for multiple agents in an unknown environment.
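A minimal single-agent sketch of the two-stage idea described in the abstract, not the paper's actual algorithm: the grid size, reward values, learning rates, and network shape below are assumptions chosen for illustration. Stage 1 runs tabular Q-learning to produce (state, action) training samples; stage 2 fits a small neural network to those samples so it can suggest actions for states the RL stage never visited.

import numpy as np

# --- Stage 1: tabular Q-learning on an assumed 5x5 grid to generate samples ---
SIZE, GOAL = 5, (4, 4)
MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1)]        # up, down, left, right
Q = np.zeros((SIZE, SIZE, len(MOVES)))

def step(pos, a):
    # Move within the grid; reward 1 at the goal, small cost elsewhere.
    r = int(np.clip(pos[0] + MOVES[a][0], 0, SIZE - 1))
    c = int(np.clip(pos[1] + MOVES[a][1], 0, SIZE - 1))
    return (r, c), (1.0 if (r, c) == GOAL else -0.01)

rng = np.random.default_rng(0)
for _ in range(2000):                              # epsilon-greedy Q-learning episodes
    pos = (0, 0)
    while pos != GOAL:
        a = int(rng.integers(4)) if rng.random() < 0.2 else int(Q[pos].argmax())
        nxt, rew = step(pos, a)
        Q[pos][a] += 0.5 * (rew + 0.9 * Q[nxt].max() - Q[pos][a])
        pos = nxt

# Training samples: normalized state coordinates and their greedy actions.
X = np.array([[r / SIZE, c / SIZE] for r in range(SIZE) for c in range(SIZE)])
y = np.array([Q[r, c].argmax() for r in range(SIZE) for c in range(SIZE)])

# --- Stage 2: train a small one-hidden-layer network on the RL samples ---
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 4)); b2 = np.zeros(4)
T = np.eye(4)[y]                                   # one-hot action targets
for _ in range(3000):                              # plain gradient descent
    H = np.tanh(X @ W1 + b1)
    P = np.exp(H @ W2 + b2); P /= P.sum(1, keepdims=True)     # softmax
    dZ = (P - T) / len(X)                          # cross-entropy gradient
    dH = (dZ @ W2.T) * (1 - H ** 2)
    W2 -= H.T @ dZ; b2 -= dZ.sum(0)
    W1 -= X.T @ dH; b1 -= dH.sum(0)

def nn_action(state):
    # Approximate action for any (possibly unvisited) state.
    h = np.tanh(np.array(state) / SIZE @ W1 + b1)
    return int((h @ W2 + b2).argmax())

print(nn_action((2, 3)))                           # suggested move for an arbitrary cell

In the multi-agent setting described by the paper, each agent would query the trained network in this way instead of rerunning RL for states it has not visited.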
Keywords
approximation theory; control engineering computing; learning (artificial intelligence); mobile robots; multi-agent systems; multi-robot systems; neural nets; path planning; NN; RL; multiagent path planning; neural approximation; neural network; reinforcement learning; unknown environment; Artificial neural networks; Learning (artificial intelligence); Multi-agent systems; Neurons; Path planning; Training;
fLanguage
English
Publisher
IEEE
Conference_Titel
Systems, Man and Cybernetics (SMC), 2014 IEEE International Conference on
Conference_Location
San Diego, CA
Type
conf
DOI
10.1109/SMC.2014.6974464
Filename
6974464
Link To Document