Evolving plastic neural networks with novelty search

Sebastian Risi, Charles E Hughes, Kenneth O Stanley

Research output: Journal article › Research › peer-review

Abstract

Biological brains can adapt and learn from past experience. Yet neuroevolution, that is, automatically creating artificial neural networks (ANNs) through evolutionary algorithms, has sometimes focused on static ANNs that cannot change their weights during their lifetime. A profound problem with evolving adaptive systems is that learning to learn is highly deceptive. Because it is easier at first to improve fitness without evolving the ability to learn, evolution is likely to exploit domain-dependent static (i.e., nonadaptive) heuristics. This article analyzes this inherent deceptiveness in a variety of different dynamic, reward-based learning tasks, and proposes a way to escape the deceptive trap of static policies based on the novelty search algorithm. The main idea in novelty search is to abandon objective-based fitness and instead simply search only for novel behavior, which avoids deception entirely. A series of experiments and an in-depth analysis show how behaviors that could potentially serve as a stepping stone to finding adaptive solutions are discovered by novelty search yet are missed by fitness-based search. The conclusion is that novelty search has the potential to foster the emergence of adaptive behavior in reward-based learning tasks, thereby opening a new direction for research in evolving plastic ANNs.
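The core mechanism described above, replacing objective fitness with a reward for behavioral novelty, can be sketched as follows. This is an illustrative assumption-based sketch, not the authors' implementation: it assumes behaviors are characterized as fixed-length numeric vectors, and scores each individual by its mean distance to its k nearest neighbors in the current population plus an archive of past behaviors.

```python
import math

def novelty(behavior, population, archive, k=3):
    """Mean distance to the k nearest behaviors in the population and archive.

    A high score means the behavior lies in a sparsely explored region of
    behavior space, so it is rewarded regardless of objective fitness.
    """
    others = [b for b in population if b is not behavior] + archive
    dists = sorted(math.dist(behavior, b) for b in others)
    if not dists:
        return float("inf")  # nothing to compare against: maximally novel
    return sum(dists[:k]) / min(k, len(dists))

# Hypothetical example: behaviors as 2-D points (e.g., an agent's final
# position in a maze). The outlier gets the highest novelty score and
# would typically be added to the archive.
population = [(0.0, 0.0), (1.0, 0.0), (5.0, 5.0)]
archive = [(0.0, 1.0)]
scores = [novelty(b, population, archive, k=2) for b in population]
```

In a full novelty-search loop, these scores would replace the fitness values used for selection, and sufficiently novel behaviors would be appended to the archive so the search keeps pushing into unvisited regions of behavior space.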
Original language: Undefined/Unknown
Journal: Adaptive Behavior
Volume: 18
Issue number: 6
Pages (from-to): 470-491
Number of pages: 22
ISSN: 1059-7123
Publication status: Published - 2010

Keywords

  • Novelty search
  • neural networks
  • adaptation
  • learning
  • neuromodulation
  • neuroevolution
