Abstract
Imitation Learning (IL) is a machine learning approach for learning a policy from a set of demonstrations. IL can kick-start learning before reinforcement learning (RL) is applied, but it is also useful on its own, e.g. to learn to imitate human players in video games. Despite the success of systems that combine IL and RL, how such systems can adapt in-between game rounds is a neglected but important aspect of many strategy games. In this paper, we present a new approach called Behavioral Repertoire Imitation Learning (BRIL) that learns a repertoire of behaviors from a set of demonstrations by augmenting the state-action pairs with behavioral descriptions. The outcome is a single neural network policy, conditioned on a behavior description, that can be precisely modulated. We apply this approach to train a policy on 7,777 human demonstrations for the build-order planning task in StarCraft II. Dimensionality reduction is applied to construct a low-dimensional behavioral space from a high-dimensional description of the army unit composition in each human replay. The results demonstrate that the learned policy can be effectively manipulated to express distinct behaviors. Additionally, by applying the UCB1 algorithm, the policy can adapt its behavior in-between games to reach a performance beyond that of the traditional IL baseline approach.
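The core mechanic described above is simple to sketch: reduce each demonstration's unit-composition vector to a low-dimensional behavior descriptor, append that descriptor to every state-action pair from the demonstration, and train a single policy on the augmented data. The following is a minimal illustration of that idea, not the paper's implementation: PCA stands in for whatever dimensionality-reduction method is used, scikit-learn's `MLPClassifier` stands in for the neural network policy, and all data is randomly generated.

```python
# Hypothetical BRIL-style sketch: demonstrations as (state, action) pairs plus
# one unit-composition vector per demonstration. PCA is a stand-in for the
# paper's dimensionality-reduction step.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Toy stand-ins: 100 demonstrations, 20-dim states, 30-dim unit compositions,
# 10 discrete build-order actions. Real data would come from replay parsing.
n_demos, state_dim, comp_dim, n_actions = 100, 20, 30, 10
compositions = rng.random((n_demos, comp_dim))

# 1) Build a low-dimensional behavioral space from the unit compositions.
pca = PCA(n_components=2)
behaviors = pca.fit_transform(compositions)   # one 2-D descriptor per demo

# 2) Augment every state-action pair with its demonstration's descriptor
#    and train one policy conditioned on the behavior description.
states, labels, cond = [], [], []
for i in range(n_demos):
    for _ in range(50):                       # 50 state-action pairs per demo
        states.append(rng.random(state_dim))
        labels.append(int(rng.integers(n_actions)))
        cond.append(behaviors[i])
X = np.hstack([np.asarray(states), np.asarray(cond)])  # state ++ behavior
policy = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=200)
policy.fit(X, labels)

# At test time, fixing the descriptor modulates which behavior is expressed.
query_state = rng.random(state_dim)
for b in [(-1.0, -1.0), (0.0, 0.0), (1.0, 1.0)]:
    x = np.hstack([query_state, b]).reshape(1, -1)
    print(b, policy.predict(x))
```

Because the descriptor is just another part of the network input, a single trained model can be steered to any point in the behavioral space at inference time, which is what makes the between-game adaptation below possible.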
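For the between-game adaptation step, the abstract names UCB1, the classic multi-armed bandit rule. One plausible reading is to treat a small set of candidate behavior descriptors as arms and the win/loss outcome of each game as the reward; the sketch below implements plain UCB1 under that assumption, with `play_game` as a hypothetical stand-in for running one StarCraft II match with the conditioned policy.

```python
# Minimal UCB1 sketch for between-game adaptation over a discrete set of
# candidate behavior descriptors (the "arms"), with a binary win/loss reward.
import math
import random

arms = [(-1.0, -1.0), (-1.0, 1.0), (0.0, 0.0), (1.0, -1.0), (1.0, 1.0)]
counts = [0] * len(arms)
wins = [0.0] * len(arms)

def play_game(descriptor):
    # Hypothetical: condition the trained policy on `descriptor`, play one
    # game, return 1 for a win and 0 for a loss. Randomized here for demo.
    return 1 if random.random() < 0.4 + 0.1 * descriptor[0] else 0

def select_arm(t):
    for i, c in enumerate(counts):
        if c == 0:                 # play every arm once before applying UCB1
            return i
    return max(range(len(arms)),
               key=lambda i: wins[i] / counts[i]
                             + math.sqrt(2 * math.log(t) / counts[i]))

for t in range(1, 201):           # 200 games
    i = select_arm(t)
    wins[i] += play_game(arms[i])
    counts[i] += 1

best = max(range(len(arms)), key=lambda i: wins[i] / counts[i])
print("best descriptor:", arms[best], "win rate:", wins[best] / counts[best])
```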
Original language | English
---|---
Title of host publication | Proceedings of the 2020 IEEE Conference on Games
Number of pages | 8
Publisher | IEEE
Publication date | 2020
Pages | 383-390
DOIs |
Publication status | Published - 2020
Event | IEEE Conference on Games 2020, 24 Aug 2020 → 27 Nov 2020, https://ieee-cog.org/2020/
Conference
Conference | IEEE Conference on Games 2020
---|---
Period | 24/08/2020 → 27/11/2020
Internet address | https://ieee-cog.org/2020/
Keywords
- Imitation Learning (IL)
- Reinforcement Learning (RL)
- Behavioral Repertoire Imitation Learning (BRIL)
- Policy Modulation
- Dimensionality Reduction