DocumentCode :
2461462
Title :
Learning Control for Xpilot Agents in the Core
Author :
Parker, Matt ; Parker, Gary B.
Author_Institution :
Indiana Univ., Bloomington
fYear :
2006
fDate :
0-0 0
Firstpage :
800
Lastpage :
807
Abstract :
Xpilot, a network game in which agents engage in space combat, has been shown to be a good test bed for controller learning systems. In this paper, we introduce the Core, an Xpilot learning environment in which a population of learning agents interacts locally, using tournament selection, crossover, and mutation to produce offspring and thereby evolve controllers. The system does not require the researcher to develop a fitness function or to supply suitable agents for the evolving agent to engage with. Instead, it employs a form of co-evolution in which the environment, made up of the population of agents itself, evolves to continually challenge the individual agents evolving within it. Tests show its successful use in evolving controllers for combat agents in Xpilot.
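The sketch below is a minimal, illustrative rendering of the style of population-based co-evolution the abstract describes, not the authors' implementation: agents carry controller genomes, pairs meet in combat, and a loser is replaced by a mutated crossover of two tournament winners, so no explicit fitness function is needed. The bit-string encoding, the combat() stand-in, and the parameters GENOME_LEN and MUTATION_RATE are all assumptions made for illustration; in the actual system the outcomes would come from Xpilot matches.

import random

GENOME_LEN = 64          # assumed length of the controller encoding
MUTATION_RATE = 0.02     # assumed per-bit mutation probability

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def combat(a, b):
    # Stand-in for an Xpilot dogfight: returns the winning genome.
    # Here "skill" is just the number of 1-bits, purely for illustration.
    return a if sum(a) >= sum(b) else b

def crossover(p1, p2):
    # Single-point crossover of two parent genomes.
    point = random.randrange(1, GENOME_LEN)
    return p1[:point] + p2[point:]

def mutate(g):
    # Flip each bit independently with probability MUTATION_RATE.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in g]

def evolve(pop_size=32, steps=200):
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(steps):
        # Local tournament: two pairs fight, the winners breed, and the
        # loser of the first pairing is replaced by their offspring.
        a, b, c, d = random.sample(range(pop_size), 4)
        w1 = combat(population[a], population[b])
        w2 = combat(population[c], population[d])
        loser = b if population[a] is w1 else a
        population[loser] = mutate(crossover(w1, w2))
    return population

if __name__ == "__main__":
    final = evolve()
    print("best 1-bit count:", max(sum(g) for g in final))

Because selection pressure comes only from agent-versus-agent encounters, the population itself acts as the ever-changing opponent set, which is the co-evolutionary property the paper emphasizes.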
Keywords :
computer games; controllers; learning (artificial intelligence); Core; Xpilot agents; controller learning systems; crossover; learning control; mutation; network game; space combat; tournament selection; Adaptive control; Autonomous agents; Computational modeling; Control systems; Distributed computing; Genetic mutations; Intelligent networks; Learning systems; Programmable control; System testing;
fLanguage :
English
Publisher :
ieee
Conference_Title :
2006 IEEE Congress on Evolutionary Computation (CEC 2006)
Conference_Location :
Vancouver, BC
Print_ISBN :
0-7803-9487-9
Type :
conf
DOI :
10.1109/CEC.2006.1688393
Filename :
1688393