Learning macromanagement in StarCraft from replays using deep learning

Niels Justesen, Sebastian Risi

    Research output: Article in proceedings › Research › peer-review

    Abstract

    The real-time strategy game StarCraft has proven to be a challenging environment for artificial intelligence techniques, and as a result, current state-of-the-art solutions consist of numerous hand-crafted modules. In this paper, we show how macromanagement decisions in StarCraft can be learned directly from game replays using deep learning. Neural networks are trained on 789,571 state-action pairs extracted from 2,005 replays of highly skilled players, achieving top-1 and top-3 error rates of 54.6% and 22.9% in predicting the next build action. By integrating the trained network into UAlbertaBot, an open-source StarCraft bot, the system significantly outperforms the game’s built-in Terran bot and plays competitively against UAlbertaBot with a fixed rush strategy. To our knowledge, this is the first time macromanagement tasks have been learned directly from replays in StarCraft. While the best hand-crafted strategies remain the state of the art, the deep network approach can express a wide range of different strategies; improving the network’s performance further with deep reinforcement learning is therefore an immediately promising avenue for future research. Ultimately, this approach could lead to strong StarCraft bots that are less reliant on hard-coded strategies.
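
    The abstract describes a supervised learning setup: a network maps an encoded game state to a prediction of the next build action and is evaluated with top-1 and top-3 error. The sketch below illustrates that kind of setup with a small feed-forward classifier and a top-k error metric. The state dimensionality, number of build actions, layer sizes, training data (random placeholders standing in for replay-derived state-action pairs), and the use of PyTorch are illustrative assumptions, not details taken from the paper.

    # Minimal sketch of a build-action classifier with top-1/top-3 error.
    # All sizes and data below are assumed for illustration only.
    import torch
    import torch.nn as nn

    STATE_DIM = 128      # assumed size of the encoded game state
    NUM_ACTIONS = 60     # assumed number of distinct build actions

    model = nn.Sequential(
        nn.Linear(STATE_DIM, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, NUM_ACTIONS),     # logits over build actions
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Placeholder data standing in for state-action pairs extracted from replays.
    states = torch.randn(1024, STATE_DIM)
    actions = torch.randint(0, NUM_ACTIONS, (1024,))

    for epoch in range(5):
        optimizer.zero_grad()
        logits = model(states)
        loss = loss_fn(logits, actions)
        loss.backward()
        optimizer.step()

    def top_k_error(logits: torch.Tensor, targets: torch.Tensor, k: int) -> float:
        """Fraction of examples whose true action is NOT among the top-k predictions."""
        topk = logits.topk(k, dim=1).indices            # (N, k) predicted actions
        hit = (topk == targets.unsqueeze(1)).any(dim=1)
        return 1.0 - hit.float().mean().item()

    with torch.no_grad():
        logits = model(states)
        print(f"top-1 error: {top_k_error(logits, actions, 1):.3f}")
        print(f"top-3 error: {top_k_error(logits, actions, 3):.3f}")

    Evaluated this way, top-1 error counts a prediction as correct only if the single most likely action matches the action the player actually took next, while top-3 error accepts any of the three most likely actions, which is why the top-3 figure in the abstract is lower.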
    Original language: English
    Title of host publication: Computational Intelligence and Games (CIG), 2017 IEEE Conference on
    Number of pages: 8
    Publisher: IEEE
    Publication date: 2017
    Pages: 162-169
    ISBN (Print): 978-1-5386-3233-8
    DOIs
    Publication status: Published - 2017

    Keywords

    • Real-time strategy games
    • Artificial intelligence
    • Deep learning
    • Neural networks
    • StarCraft macromanagement
