Title :
Imitative learning for real-time strategy games
Author :
Gemine, Quentin ; Safadi, Firas ; Fonteneau, Raphaël ; Ernst, Damien
Abstract :
Over the past decades, video games have become increasingly popular and complex. Virtual worlds have come a long way since the first arcades, and so have the artificial intelligence (AI) techniques used to control agents in these growing environments. Tasks such as world exploration, constrained pathfinding, or team tactics and coordination, to name a few, are now default requirements for contemporary video games. However, despite its recent advances, video game AI still lacks the ability to learn. In this paper, we attempt to break the barrier between video game AI and machine learning, and we propose a generic method that allows real-time strategy (RTS) agents to learn production strategies from a set of recorded games using supervised learning. We test this imitative learning approach on the popular RTS title StarCraft II® and successfully teach new production strategies to a Terran agent facing a Protoss opponent.
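Illustrative sketch :
The following Python sketch illustrates the supervised imitative-learning idea summarized in the abstract: production decisions are extracted from recorded games as (game state, next production action) pairs, and a classifier is trained to reproduce the demonstrated strategy. The feature encoding, the replay data structure, and the use of scikit-learn's ExtraTreesClassifier are illustrative assumptions only, not the exact representation or learner used in the paper.

# Minimal sketch of imitative learning of production strategies from replays.
# All names and the feature encoding below are hypothetical; they only
# illustrate the supervised-learning setup described in the abstract.
from sklearn.ensemble import ExtraTreesClassifier

def encode_state(game_state):
    """Map an observed game state to a fixed-length feature vector,
    e.g. available resources and unit/building counts (assumed encoding)."""
    return [
        game_state["minerals"],
        game_state["vespene"],
        *game_state["unit_counts"],      # one entry per Terran unit/building type
    ]

def build_dataset(replays):
    """Turn recorded games into (state, next production action) training pairs."""
    X, y = [], []
    for replay in replays:
        for state, action in replay:     # each production decision in the replay
            X.append(encode_state(state))
            y.append(action)             # e.g. "train_marine", "build_barracks"
    return X, y

def train_production_policy(replays):
    """Fit a classifier that imitates the production decisions seen in the replays."""
    X, y = build_dataset(replays)
    model = ExtraTreesClassifier(n_estimators=100)
    return model.fit(X, y)

# At play time, the agent imitates the learned strategy by querying the model:
#   next_action = model.predict([encode_state(current_state)])[0]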
Keywords :
computer games; learning (artificial intelligence); multi-agent systems; AI techniques; Protoss opponent; RTS agents; RTS title StarCraft; Terran agent; artificial intelligence techniques; constrained pathfinding; coordination; imitative learning; machine learning; production strategies; real-time strategy games; supervised learning; team tactics; video games; virtual worlds; world exploration; Games; Humans; Learning systems; Production; Vectors;
Conference_Title :
2012 IEEE Conference on Computational Intelligence and Games (CIG)
Conference_Location :
Granada
Print_ISBN :
978-1-4673-1193-9
Electronic_ISBN :
978-1-4673-1192-2
DOI :
10.1109/CIG.2012.6374186