Abstract
Evolved representations in evolutionary computation are often fragile, which can impede representation-dependent mechanisms such as self-adaptation. In contrast, evolved representations in nature are robust, evolvable, and creatively exploit available representational features. This paper provides evidence that this disparity may partially result from a key difference between natural evolution and most evolutionary algorithms: natural evolution has no overarching objective. That is, nature tends to continually accumulate novel forms without any final goal, while most evolutionary algorithms eventually converge to a point in the search space that locally maximizes the fitness function. The problem is that individuals that maximize fitness do not need good representations, because a representation's future potential is not reflected by its current fitness. In contrast, search methods without explicit objectives, which are consequently divergent, may implicitly reward lineages that continually diverge, thereby indirectly selecting for evolvable representations that are better able to diverge further. This paper reviews a range of past results supporting this hypothesis from a method called novelty search, which explicitly rewards novelty, i.e., behaviors that diverge from previously encountered behaviors. In many experiments, novelty search demonstrates significant representational advantages over traditional fitness-based search, such as evolving more compact solutions, uncovering more evolvable representations, and more fully exploiting representational features. The conclusion is that divergent evolutionary algorithms like novelty search may exert selection pressure towards higher-quality representations than traditional convergent approaches to search do.
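
To make the mechanism concrete, below is a minimal illustrative sketch of a novelty metric of the kind the abstract describes: an individual's novelty is estimated as its average behavioral distance to the k nearest neighbors among the current population and an archive of previously encountered behaviors. This is not the authors' implementation; the parameter values (`K_NEAREST`, `ARCHIVE_THRESHOLD`) and the helper names are assumptions chosen for illustration.

```python
from statistics import mean

# Illustrative sketch only; parameter values are assumptions, not from the paper.
K_NEAREST = 15           # neighborhood size for the novelty estimate
ARCHIVE_THRESHOLD = 0.3  # behaviors more novel than this enter the archive

def behavior_distance(a, b):
    """Euclidean distance between two behavior characterizations."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def novelty(behavior, population_behaviors, archive):
    """Average distance to the k nearest previously encountered behaviors."""
    others = [b for b in population_behaviors if b is not behavior] + list(archive)
    distances = sorted(behavior_distance(behavior, b) for b in others)
    return mean(distances[:K_NEAREST]) if distances else 0.0

def score_population(population_behaviors, archive):
    """Score individuals by novelty instead of an objective fitness function."""
    scores = [novelty(b, population_behaviors, archive) for b in population_behaviors]
    for b, s in zip(population_behaviors, scores):
        if s > ARCHIVE_THRESHOLD:
            archive.append(b)  # remember sufficiently novel behaviors
    return scores  # used as the selection criterion in place of fitness
```

Selection then proceeds as in a standard evolutionary algorithm, but driven by these novelty scores rather than a fixed objective, so the search is continually pulled towards behaviors unlike any seen before.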
Original language | English
---|---
Title of host publication | Proceedings of the EvoNet2012 Workshop at the Thirteenth International Conference on Artificial Life (ALIFE XIII)
Number of pages | 4
Publication date | 2012
Publication status | Published - 2012
Series | Proceedings of the EvoNet 2012 Workshop at ALIFE XIII
Keywords
- evolutionary computation
- self-adaptation
- natural evolution
- fitness function
- novelty search