Abstract
The real-time strategy game StarCraft has proven to be a challenging environment for artificial intelligence techniques, and as a result, current state-of-the-art solutions consist of numerous hand-crafted modules. In this paper, we show how macromanagement decisions in StarCraft can be learned directly from game replays using deep learning. Neural networks are trained on 789,571 state-action pairs extracted from 2,005 replays of highly skilled players, achieving top-1 and top-3 error rates of 54.6% and 22.9%, respectively, in predicting the next build action. By integrating the trained network into UAlbertaBot, an open source StarCraft bot, the system is able to significantly outperform the game's built-in Terran bot and play competitively against UAlbertaBot with a fixed rush strategy. To our knowledge, this is the first time macromanagement tasks have been learned directly from replays in StarCraft. While the best hand-crafted strategies remain the state of the art, the deep network approach can express a wide range of different strategies, making further improvement of the network with deep reinforcement learning a promising avenue for future research. Ultimately, this approach could lead to strong StarCraft bots that are less reliant on hard-coded strategies.
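The abstract describes a supervised setup: a network is trained on (state, next-build-action) pairs and evaluated by top-1 and top-3 error. The following is a minimal sketch of that setup, not the authors' code; the state dimension, number of build actions, network architecture, and the synthetic stand-in data are all assumptions, since the paper's own feature encoding and model differ in detail.

```python
# Hedged sketch of supervised next-build-action prediction from encoded
# game states, in the style described in the abstract. All sizes are
# illustrative assumptions, not the paper's actual values.
import torch
import torch.nn as nn

STATE_DIM = 512     # assumed size of the encoded game-state vector
NUM_ACTIONS = 64    # assumed number of distinct build actions

class BuildOrderNet(nn.Module):
    """Feed-forward classifier over the next build action (assumed architecture)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, NUM_ACTIONS),  # logits over build actions
        )

    def forward(self, x):
        return self.net(x)

def top_k_error(logits, targets, k):
    """Fraction of samples whose true action is NOT among the top-k predictions."""
    topk = logits.topk(k, dim=1).indices
    hit = (topk == targets.unsqueeze(1)).any(dim=1)
    return 1.0 - hit.float().mean().item()

# Synthetic stand-in for replay-derived state-action pairs.
states = torch.randn(1024, STATE_DIM)
actions = torch.randint(0, NUM_ACTIONS, (1024,))

model = BuildOrderNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(states), actions)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    logits = model(states)
    print("top-1 error:", top_k_error(logits, actions, 1))
    print("top-3 error:", top_k_error(logits, actions, 3))
```

Framing the task as classification over build actions is what allows the top-1/top-3 error rates reported in the abstract to be computed directly from the network's ranked predictions.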
| Original language | English |
| --- | --- |
| Title | Computational Intelligence and Games (CIG), 2017 IEEE Conference on |
| Number of pages | 8 |
| Publisher | IEEE |
| Publication date | 2017 |
| Pages | 162-169 |
| ISBN (Print) | 978-1-5386-3233-8 |
| DOI | |
| Status | Published - 2017 |
Keywords
- Real-time strategy games
- Artificial intelligence
- Deep learning
- Neural networks
- StarCraft macromanagement