DLNE: A hybridization of deep learning and neuroevolution for visual control

Andreas Precht Poulsen, Mark Thorhauge, Mikkel Hvilshøj Funch, Sebastian Risi

Research output: Conference article in proceedings · Research · Peer-reviewed

Abstract

This paper investigates the potential of combining deep learning and neuroevolution to create a bot for a simple first person shooter (FPS) game capable of aiming and shooting based on high-dimensional raw pixel input. The deep learning component is responsible for visual recognition and translating raw pixels to compact feature representations, while the evolving network takes those features as inputs to infer actions. Two types of feature representations are evaluated in terms of (1) how precise they allow the deep network to recognize the position of the enemy, (2) their effect on evolution, and (3) how well they allow the deep network and evolved network to interface with each other. Overall, the results suggest that combining deep learning and neuroevolution in a hybrid approach is a promising research direction that could make complex visual domains directly accessible to networks trained through evolution.
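The pipeline the abstract describes can be illustrated with a toy sketch: a fixed projection stands in for the trained deep network that compresses raw pixels into a compact feature vector, and a small controller network maps those features to actions and is improved by a simple (1+1) evolution strategy standing in for the paper's neuroevolution component. All names, sizes, and the toy aiming task below are illustrative assumptions, not taken from the paper.

```python
import random

random.seed(0)

PIXELS, FEATURES, ACTIONS = 64, 4, 2  # illustrative sizes; actions ~ aim-delta and shoot

# Stand-in for the trained deep network: a fixed random linear projection
# from raw pixels to a compact feature vector.
W_deep = [[random.uniform(-1, 1) for _ in range(PIXELS)] for _ in range(FEATURES)]

def extract_features(pixels):
    return [sum(w * p for w, p in zip(row, pixels)) for row in W_deep]

def act(genome, features):
    # The genome directly encodes a single-layer controller:
    # FEATURES * ACTIONS weights, read row by row.
    return [sum(genome[a * FEATURES + f] * features[f] for f in range(FEATURES))
            for a in range(ACTIONS)]

# Toy task (hypothetical): the enemy's horizontal position is encoded in
# pixel 0; fitness rewards aiming at it. Episodes are fixed up front so
# fitness is deterministic.
EPISODES = []
for _ in range(20):
    target = random.uniform(-1, 1)
    pixels = [target] + [random.uniform(-0.1, 0.1) for _ in range(PIXELS - 1)]
    EPISODES.append((target, pixels))

def fitness(genome):
    return -sum((act(genome, extract_features(px))[0] - t) ** 2
                for t, px in EPISODES)

# (1+1) evolution strategy: mutate, keep the child if it is no worse.
genome = [random.uniform(-0.5, 0.5) for _ in range(FEATURES * ACTIONS)]
best = fitness(genome)
initial_best = best
for _ in range(300):
    child = [g + random.gauss(0, 0.1) for g in genome]
    f = fitness(child)
    if f >= best:
        genome, best = child, f
```

The division of labour mirrors the abstract: gradient-based learning handles the pixel-to-feature mapping, while evolution only searches the much smaller feature-to-action weight space.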
Original language: English
Title of host publication: Computational Intelligence and Games (CIG), 2017 IEEE Conference on
Number of pages: 8
Publisher: IEEE Press
Publication date: 2017
Pages: 256-263
ISBN (Electronic): 978-1-5386-3233-8
DOIs
Publication status: Published - 2017

Keywords

  • Deep Learning
  • Neuroevolution
  • First Person Shooter (FPS)
  • Visual Recognition
  • Feature Representations

