ITU

A Neuroevolution Approach to General Atari Game Playing

Research output: Journal Article or Conference Article in Journal › Journal article › Research › peer-review

Standard

A Neuroevolution Approach to General Atari Game Playing. / Hausknecht, Matthew; Lehman, Joel; Miikkulainen, Risto; Stone, Peter.

In: IEEE Transactions on Computational Intelligence and AI in Games, Vol. PP, No. 99, 2014, p. 1-1.


Author

Hausknecht, Matthew ; Lehman, Joel ; Miikkulainen, Risto ; Stone, Peter. / A Neuroevolution Approach to General Atari Game Playing. In: IEEE Transactions on Computational Intelligence and AI in Games. 2014 ; Vol. PP, No. 99. pp. 1-1.

Bibtex

@article{8d1295b02430455091b866f894830a7d,
title = "A Neuroevolution Approach to General Atari Game Playing",
abstract = "This article addresses the challenge of learning to play many different video games with little domain-specific knowledge. Specifically, it introduces a neuro-evolution approach to general Atari 2600 game playing. Four neuro-evolution algorithms were paired with three different state representations and evaluated on a set of 61 Atari games. The neuro-evolution agents represent different points along the spectrum of algorithmic sophistication - including weight evolution on topologically fixed neural networks (Conventional Neuro-evolution), Covariance Matrix Adaptation Evolution Strategy (CMA-ES), evolution of network topology and weights (NEAT), and indirect network encoding (HyperNEAT). State representations include an object representation of the game screen, the raw pixels of the game screen, and seeded noise (a comparative baseline). Results indicate that direct-encoding methods work best on compact state representations while indirect-encoding methods (i.e. HyperNEAT) allow scaling to higher-dimensional representations (i.e. the raw game screen). Previous approaches based on temporal-difference learning had trouble dealing with the large state spaces and sparse reward gradients often found in Atari games. Neuro-evolution ameliorates these problems and evolved policies achieve state-of-the-art results, even surpassing human high scores on three games. These results suggest that neuro-evolution is a promising approach to general video game playing.",
author = "Matthew Hausknecht and Joel Lehman and Risto Miikkulainen and Peter Stone",
year = "2014",
doi = "10.1109/TCIAIG.2013.2294713",
language = "English",
volume = "PP",
pages = "1--1",
journal = "IEEE Transactions on Computational Intelligence and AI in Games",
issn = "1943-068X",
publisher = "Institute of Electrical and Electronics Engineers (IEEE)",
number = "99",

}
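The simplest agent family named in the abstract, conventional neuro-evolution (mutating the weights of a topologically fixed network), can be sketched as a small elitist hill climber. This is an illustrative toy on XOR only, not the paper's implementation; the topology, task, and all hyperparameters are stand-ins.

```python
import math
import random

# Illustrative sketch of conventional neuro-evolution: evolve the weights of a
# FIXED-topology network (2 inputs, 3 tanh hidden units, 1 linear output) with
# a simple elitist (1 + lambda) mutation strategy. Task and settings are toys.

INPUTS, HIDDEN = 2, 3
N_WEIGHTS = INPUTS * HIDDEN + HIDDEN  # input->hidden weights plus hidden->output

def forward(w, x):
    """Evaluate the fixed network on input x using flat weight vector w."""
    h = []
    for j in range(HIDDEN):
        s = sum(w[j * INPUTS + i] * x[i] for i in range(INPUTS))
        h.append(math.tanh(s))
    off = INPUTS * HIDDEN
    return sum(w[off + j] * h[j] for j in range(HIDDEN))

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def fitness(w):
    # Negative squared error over the XOR cases; higher is better, max is 0.
    return -sum((forward(w, x) - y) ** 2 for x, y in XOR)

def evolve(generations=300, pop_size=30, sigma=0.3, seed=0):
    rng = random.Random(seed)
    best = [rng.uniform(-1, 1) for _ in range(N_WEIGHTS)]
    for _ in range(generations):
        # Elitism: keep the current best alongside Gaussian-mutated offspring.
        pop = [best] + [
            [wi + rng.gauss(0, sigma) for wi in best] for _ in range(pop_size)
        ]
        best = max(pop, key=fitness)
    return best

w = evolve()
print(round(fitness(w), 3))  # approaches 0 as the network fits XOR
```

The paper's other methods differ mainly in what is evolved: CMA-ES adapts the mutation distribution itself, NEAT also evolves the topology, and HyperNEAT evolves an indirect encoding that generates the weights.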

RIS

TY - JOUR

T1 - A Neuroevolution Approach to General Atari Game Playing

AU - Hausknecht, Matthew

AU - Lehman, Joel

AU - Miikkulainen, Risto

AU - Stone, Peter

PY - 2014

Y1 - 2014

N2 - This article addresses the challenge of learning to play many different video games with little domain-specific knowledge. Specifically, it introduces a neuro-evolution approach to general Atari 2600 game playing. Four neuro-evolution algorithms were paired with three different state representations and evaluated on a set of 61 Atari games. The neuro-evolution agents represent different points along the spectrum of algorithmic sophistication - including weight evolution on topologically fixed neural networks (Conventional Neuro-evolution), Covariance Matrix Adaptation Evolution Strategy (CMA-ES), evolution of network topology and weights (NEAT), and indirect network encoding (HyperNEAT). State representations include an object representation of the game screen, the raw pixels of the game screen, and seeded noise (a comparative baseline). Results indicate that direct-encoding methods work best on compact state representations while indirect-encoding methods (i.e. HyperNEAT) allow scaling to higher-dimensional representations (i.e. the raw game screen). Previous approaches based on temporal-difference learning had trouble dealing with the large state spaces and sparse reward gradients often found in Atari games. Neuro-evolution ameliorates these problems and evolved policies achieve state-of-the-art results, even surpassing human high scores on three games. These results suggest that neuro-evolution is a promising approach to general video game playing.

AB - This article addresses the challenge of learning to play many different video games with little domain-specific knowledge. Specifically, it introduces a neuro-evolution approach to general Atari 2600 game playing. Four neuro-evolution algorithms were paired with three different state representations and evaluated on a set of 61 Atari games. The neuro-evolution agents represent different points along the spectrum of algorithmic sophistication - including weight evolution on topologically fixed neural networks (Conventional Neuro-evolution), Covariance Matrix Adaptation Evolution Strategy (CMA-ES), evolution of network topology and weights (NEAT), and indirect network encoding (HyperNEAT). State representations include an object representation of the game screen, the raw pixels of the game screen, and seeded noise (a comparative baseline). Results indicate that direct-encoding methods work best on compact state representations while indirect-encoding methods (i.e. HyperNEAT) allow scaling to higher-dimensional representations (i.e. the raw game screen). Previous approaches based on temporal-difference learning had trouble dealing with the large state spaces and sparse reward gradients often found in Atari games. Neuro-evolution ameliorates these problems and evolved policies achieve state-of-the-art results, even surpassing human high scores on three games. These results suggest that neuro-evolution is a promising approach to general video game playing.

U2 - 10.1109/TCIAIG.2013.2294713

DO - 10.1109/TCIAIG.2013.2294713

M3 - Journal article

VL - PP

SP - 1

EP - 1

JO - IEEE Transactions on Computational Intelligence and AI in Games

JF - IEEE Transactions on Computational Intelligence and AI in Games

SN - 1943-068X

IS - 99

ER -

ID: 80651943