Abstract
Deep reinforcement learning (RL) has shown impressive results in a variety of domains, learning directly from high-dimensional sensory streams. However, when neural networks are trained in a fixed environment, such as a single level in a video game, they usually overfit and fail to generalize to new levels. When RL models overfit, even slight modifications to the environment can result in poor agent performance. This paper explores how procedural generation of levels during training can increase generality. We show that, for some games, procedural level generation enables generalization to new levels within the same distribution. Additionally, better performance can be achieved with less data by adapting the difficulty of the generated levels to the performance of the agent. The generality of the learned behaviors is also evaluated on a set of human-designed levels. The results suggest that the ability to generalize to human-designed levels depends heavily on the design of the level generators. We apply dimensionality reduction and clustering techniques to visualize the generators’ distributions of levels and analyze to what degree they can produce levels similar to those designed by humans.
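The difficulty-adaptation idea can be illustrated in a few lines. The paper's actual training loop is not reproduced here, so the following is only a minimal sketch, assuming hypothetical `generate_level(difficulty)` and `run_episode(agent, level)` helpers that sample a fresh level at a requested difficulty and report whether the agent solved it.

```python
# Minimal sketch (not the paper's implementation): adapt the difficulty of
# procedurally generated training levels to the agent's recent success rate.
# `generate_level`, `run_episode`, and `agent` are hypothetical placeholders.
from collections import deque


def adaptive_difficulty_training(agent, generate_level, run_episode,
                                 episodes=10_000, window=100,
                                 step=0.01, target_success=0.5):
    """Nudge level difficulty up when the agent succeeds often and down when
    it fails often, keeping training near the agent's current ability."""
    difficulty = 0.0                  # in [0, 1]; 0 corresponds to the easiest levels
    recent = deque(maxlen=window)     # rolling record of episode outcomes

    for _ in range(episodes):
        level = generate_level(difficulty)   # sample a fresh level every episode
        solved = run_episode(agent, level)   # True if the agent completed the level
        recent.append(bool(solved))

        # Raise or lower difficulty depending on the recent success rate.
        success_rate = sum(recent) / len(recent)
        if success_rate > target_success:
            difficulty = min(1.0, difficulty + step)
        else:
            difficulty = max(0.0, difficulty - step)

    return difficulty
```

The visualization of generator distributions can likewise be sketched with off-the-shelf tools; the tile-count featurization below is an assumption standing in for whatever level encoding the paper uses.

```python
# Minimal sketch: embed level feature vectors in 2-D and cluster them, so that a
# generator's distribution of levels can be compared with human-designed levels.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA


def level_features(level_grid, tile_ids):
    """Describe a level (2-D array of tile ids) by its normalized tile counts."""
    grid = np.asarray(level_grid)
    counts = np.array([(grid == t).sum() for t in tile_ids], dtype=float)
    return counts / grid.size


def embed_and_cluster(feature_matrix, n_clusters=5, seed=0):
    """Project level features to 2-D with PCA and group them with k-means."""
    coords = PCA(n_components=2, random_state=seed).fit_transform(feature_matrix)
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(coords)
    return coords, labels    # e.g. scatter-plot coords, colored by cluster label
```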
| Original language | English |
|---|---|
| Publication date | 2018 |
| Publication status | Published - 2018 |
| Event | NeurIPS 2018 Deep Reinforcement Learning Workshop, Palais des Congrès de Montréal, Montréal, Canada, 7 Dec 2018. https://sites.google.com/view/deep-rl-workshop-nips-2018/home |
Conference
| Conference | NeurIPS 2018 Deep Reinforcement Learning Workshop |
|---|---|
| Location | Palais des Congrès de Montréal |
| Country/Territory | Canada |
| City | Montréal |
| Period | 07/12/2018 → 07/12/2018 |
| Internet address | https://sites.google.com/view/deep-rl-workshop-nips-2018/home |
Keywords
- Deep Reinforcement Learning
- Procedural Level Generation
- Generalization
- Neural Networks
- Dimensionality Reduction and Clustering Techniques