Evolving plastic neural networks with novelty search

Sebastian Risi, Charles E Hughes, Kenneth O Stanley

Publication: Journal article · Research · peer review

Abstract

Biological brains can adapt and learn from past experience. Yet neuroevolution, that is, automatically creating artificial neural networks (ANNs) through evolutionary algorithms, has sometimes focused on static ANNs that cannot change their weights during their lifetime. A profound problem with evolving adaptive systems is that learning to learn is highly deceptive. Because it is easier at first to improve fitness without evolving the ability to learn, evolution is likely to exploit domain-dependent static (i.e., nonadaptive) heuristics. This article analyzes this inherent deceptiveness in a variety of different dynamic, reward-based learning tasks, and proposes a way to escape the deceptive trap of static policies based on the novelty search algorithm. The main idea in novelty search is to abandon objective-based fitness and instead simply search only for novel behavior, which avoids deception entirely. A series of experiments and an in-depth analysis show how behaviors that could potentially serve as a stepping stone to finding adaptive solutions are discovered by novelty search yet are missed by fitness-based search. The conclusion is that novelty search has the potential to foster the emergence of adaptive behavior in reward-based learning tasks, thereby opening a new direction for research in evolving plastic ANNs.
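The core idea summarized above, replacing objective-based fitness with a search for behavioral novelty, can be illustrated with a minimal toy sketch. This is not the paper's implementation: the one-dimensional behavior characterization, the function names (`novelty`, `evolve`), and all parameter values are illustrative assumptions. Novelty is scored here, as in the novelty search algorithm, as the mean distance to the k-nearest neighbors among the current population and an archive of previously novel individuals.

```python
import random


def novelty(behavior, others, k=3):
    """Novelty score: mean distance to the k nearest neighbors
    among the rest of the population and the archive.
    (Toy sketch: behaviors are single floats, so distance is abs().)"""
    dists = sorted(abs(behavior - b) for b in others)
    return sum(dists[:k]) / k


def evolve(generations=50, pop_size=20, archive_threshold=0.5, seed=0):
    """Minimal novelty search loop (illustrative parameters).
    Selection rewards novel behavior only; no objective fitness is used."""
    rng = random.Random(seed)
    # Toy genome: one float; its "behavior" is simply its value.
    population = [rng.uniform(-1.0, 1.0) for _ in range(pop_size)]
    archive = []
    for _ in range(generations):
        pool = population + archive
        # Rank individuals by novelty relative to everyone else.
        scored = sorted(
            population,
            key=lambda g: novelty(g, [b for b in pool if b is not g]),
            reverse=True,
        )
        # Archive individuals that are sufficiently far from all archived ones.
        for g in scored:
            if not archive or min(abs(g - a) for a in archive) > archive_threshold:
                archive.append(g)
        # Keep the most novel half as parents; mutate to refill the population.
        parents = scored[: pop_size // 2]
        population = [p + rng.gauss(0, 0.2) for p in parents for _ in range(2)]
    return archive


if __name__ == "__main__":
    archive = evolve()
    print(f"archived {len(archive)} novel behaviors")
```

Because selection pressure comes only from novelty, the search keeps accumulating distinct behaviors in the archive rather than converging on a deceptive local optimum, which is the mechanism the abstract credits for discovering stepping stones that fitness-based search misses.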
Original language: Undefined/Unknown
Journal: Adaptive Behavior
Volume: 18
Issue number: 6
Pages (from-to): 470-491
Number of pages: 22
ISSN: 1059-7123
Status: Published - 2010

Keywords

  • Novelty search
  • neural networks
  • adaptation
  • learning
  • neuromodulation
  • neuroevolution
