Abstract
Neural architectures inspired by the human cognitive system, such as the recently introduced world models, have been shown to outperform traditional deep reinforcement learning (RL) methods in a variety of domains. Instead of the relatively simple architectures employed in most RL experiments, world models rely on multiple neural components responsible for visual information processing, memory, and decision-making. So far, however, the components of these models have had to be trained separately, through a variety of specialized training methods. This paper demonstrates the surprising finding that models with precisely the same parts can instead be trained efficiently end-to-end through a genetic algorithm (GA), reaching performance comparable to the original world model on a challenging car racing task. An analysis of the evolved visual and memory systems indicates that they develop representations similarly effective to those of the system trained through gradient descent. Additionally, in contrast to gradient-descent methods, which struggle with discrete variables, GAs work directly with such representations, opening up opportunities for classical planning in latent space. This paper adds further evidence for the effectiveness of deep neuroevolution for tasks that require the intricate orchestration of multiple components in complex heterogeneous architectures.
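The end-to-end training described above can be illustrated with a minimal sketch of a simple genetic algorithm with elitism and mutation-only variation. This is a hedged, self-contained toy, not the paper's implementation: the `evaluate` function, population sizes, and mutation scale are placeholders, and in the actual work fitness would be the cumulative reward of rolling out the full agent (vision, memory, and controller encoded as one flat genome) in the car racing environment.

```python
import random

def evaluate(params):
    # Hypothetical stand-in fitness (maximized at the zero vector).
    # In the paper's setting this would instead be the episode reward
    # of the complete world-model agent in the environment.
    return -sum(x * x for x in params)

def mutate(params, sigma=0.1):
    # Add Gaussian noise to every parameter of the flattened genome;
    # all components are perturbed jointly, i.e. trained end-to-end.
    return [x + random.gauss(0.0, sigma) for x in params]

def simple_ga(num_params=10, pop_size=64, elite=8, generations=50, seed=0):
    random.seed(seed)
    # One genome encodes the parameters of all components as a single vector.
    population = [[random.gauss(0.0, 1.0) for _ in range(num_params)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=evaluate, reverse=True)
        elites = ranked[:elite]
        # Elitism: carry the best genomes over unchanged and refill the
        # population with mutated copies of randomly chosen elites.
        population = elites + [mutate(random.choice(elites))
                               for _ in range(pop_size - elite)]
    return max(population, key=evaluate)

best = simple_ga()
```

Because selection acts only on whole-agent fitness, no gradients flow between components, which is why the same scheme applies unchanged when some latent variables are discrete.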
Original language | English |
---|---|
Title of host publication | GECCO '19: Proceedings of the Genetic and Evolutionary Computation Conference |
Publisher | Association for Computing Machinery |
Publication date | 2019 |
Pages | 456-462 |
DOIs | |
Publication status | Published - 2019 |
Keywords
- Neural architectures
- Cognitive systems
- Deep reinforcement learning
- World models
- Genetic algorithms