DocumentCode :
2731692
Title :
Evolving autonomous agent control in the Xpilot environment
Author :
Parker, Gary B. ; Parker, Matt ; Johnson, Steven D.
Author_Institution :
Comput. Sci., Connecticut Coll., New London, CT, USA
Volume :
3
fYear :
2005
fDate :
2-5 Sept. 2005
Firstpage :
2416
Abstract :
Interactive combat games are useful as test-beds for learning systems employing evolutionary computation. Of particular value are games that can be modified to accommodate differing levels of complexity. In this paper, the authors present Xpilot as a learning environment that can be used to evolve primitive reactive behaviors, yet is complex enough to require combat strategies and team cooperation. In addition, the environment is used with a genetic algorithm to learn the weights of an artificial neural network controller that provides both offensive and defensive reactive control for an autonomous agent.
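Illustration (not from the paper): the abstract describes a genetic algorithm evolving the weights of a neural network controller for an Xpilot agent. The record does not give the network topology, fitness function, or GA operators, so the Python sketch below is only a generic outline of that approach; the input/output counts, the stand-in fitness function, and all parameter values are assumptions made for illustration.

import random
import math

# Hypothetical controller dimensions; the paper's actual topology is not given here.
N_INPUTS = 6    # e.g. enemy bearing/distance, wall distance, own velocity
N_HIDDEN = 8
N_OUTPUTS = 3   # e.g. thrust, turn, fire
N_WEIGHTS = (N_INPUTS + 1) * N_HIDDEN + (N_HIDDEN + 1) * N_OUTPUTS

def neural_controller(weights, inputs):
    """Feedforward pass of a single-hidden-layer network with tanh activations."""
    idx = 0
    hidden = []
    for _ in range(N_HIDDEN):
        s = weights[idx]; idx += 1          # bias
        for i in range(N_INPUTS):
            s += weights[idx] * inputs[i]; idx += 1
        hidden.append(math.tanh(s))
    outputs = []
    for _ in range(N_OUTPUTS):
        s = weights[idx]; idx += 1          # bias
        for h in range(N_HIDDEN):
            s += weights[idx] * hidden[h]; idx += 1
        outputs.append(math.tanh(s))
    return outputs

def evaluate(weights):
    """Placeholder fitness. In the paper, fitness would come from flying the
    agent in Xpilot (survival, kills, etc.); here we merely reward a fixed
    target response to a fixed stimulus so the sketch runs on its own."""
    target = [1.0, -1.0, 1.0]
    out = neural_controller(weights, [0.5] * N_INPUTS)
    return -sum((o - t) ** 2 for o, t in zip(out, target))

def genetic_algorithm(pop_size=50, generations=100,
                      mutation_rate=0.05, mutation_scale=0.3):
    # Random initial population of weight vectors.
    pop = [[random.uniform(-1, 1) for _ in range(N_WEIGHTS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=evaluate, reverse=True)
        parents = scored[:pop_size // 2]          # truncation selection
        pop = parents[:]                          # keep parents (elitism)
        while len(pop) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_WEIGHTS)  # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(N_WEIGHTS):            # Gaussian mutation
                if random.random() < mutation_rate:
                    child[i] += random.gauss(0, mutation_scale)
            pop.append(child)
    return max(pop, key=evaluate)

if __name__ == "__main__":
    best = genetic_algorithm()
    print("best fitness:", evaluate(best))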
Keywords :
computer games; evolutionary computation; learning (artificial intelligence); Xpilot; artificial neural network controller; evolutionary computation; evolving autonomous agent control; genetic algorithm; interactive combat games; learning systems; Artificial neural networks; Autonomous agents; Computer science; Educational institutions; Evolutionary computation; Genetic algorithms; Military computing; Neural networks; Robots; System testing;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
2005 IEEE Congress on Evolutionary Computation
Print_ISBN :
0-7803-9363-5
Type :
conf
DOI :
10.1109/CEC.2005.1554996
Filename :
1554996