Playing Multi-Action Adversarial Games: Online Evolutionary Planning versus Tree Search

Niels Justesen, Tobias Mahlmann, Sebastian Risi, Julian Togelius

Research output: Journal article · Research · peer-review


We address the problem of playing turn-based multi-action adversarial games, which include many strategy games with extremely high branching factors because players take multiple actions each turn. These branching factors cause standard tree search methods, including Monte Carlo Tree Search (MCTS), to break down, as they become unable to reach a sufficient depth in the game tree. In this paper, we introduce Online Evolutionary Planning (OEP) to address this challenge: it searches for combinations of actions to perform during a single turn, guided by a fitness function that evaluates the quality of the resulting state. We compare OEP to several MCTS variations that constrain exploration to cope with the high branching factor in the turn-based multi-action game Hero Academy. While the constrained MCTS variations outperform the vanilla MCTS implementation by a large margin, OEP searches the space of plans more efficiently than any of the tested tree search methods, with a relative advantage that grows as the number of actions per turn increases.
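The core idea described above can be sketched in code: evolve a population of candidate action sequences for a single turn, scoring each sequence with a fitness function applied to the state it would produce. The toy game below (each action simply contributes a payoff, and fitness is the resulting state value) and all names are illustrative assumptions, not the paper's actual Hero Academy setup or implementation.

```python
import random

# Illustrative constants for a toy multi-action turn (not from the paper).
ACTIONS_PER_TURN = 5
LEGAL_ACTIONS = list(range(10))   # toy action space: each action has payoff 0..9
POP_SIZE = 20
GENERATIONS = 50
MUTATION_RATE = 0.2

def fitness(plan):
    """Evaluate the state reached after applying the whole turn (toy heuristic)."""
    return sum(plan)

def random_plan():
    """A random candidate: one action sequence for the full turn."""
    return [random.choice(LEGAL_ACTIONS) for _ in range(ACTIONS_PER_TURN)]

def crossover(a, b):
    """Uniform crossover over the two parents' action sequences."""
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(plan):
    """Randomly replace some actions in the sequence."""
    return [random.choice(LEGAL_ACTIONS) if random.random() < MUTATION_RATE else g
            for g in plan]

def evolve_turn(seed=0):
    """Evolve an action sequence for one turn; returns the best plan found."""
    random.seed(seed)
    pop = [random_plan() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:POP_SIZE // 2]   # truncation selection
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(POP_SIZE - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve_turn()
```

The key contrast with tree search is that the genome encodes an entire turn's worth of actions, so the search effort scales with sequence length rather than with the combinatorial tree of per-action branches.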
Original language: English
Journal: IEEE Transactions on Computational Intelligence and AI in Games
Pages (from-to): 281-291
Number of pages: 10
Publication status: Published - 2017


