Abstract
Organisms in nature have evolved to exhibit flexibility in the face of
changes to the environment and/or to themselves. Artificial neural
networks (ANNs) have proven useful for controlling artificial
agents acting in environments. However, most ANN models used for
reinforcement learning-type tasks have a rigid structure that does
not allow for varying input sizes. Further, they fail catastrophically
if inputs are presented in an ordering unseen during optimization.
We find that these two ANN inflexibilities can be mitigated and their
solutions are simple and highly related. For permutation invariance,
no optimized parameters can be tied to a specific index of the
input elements. For size invariance, inputs must be projected onto a
common space that does not grow with the number of projections.
Based on these restrictions, we construct a conceptually simple
model that exhibits flexibility most ANNs lack. We demonstrate the
model’s properties on multiple control problems, and show that it
can cope with even very rapid permutations of input indices, as
well as changes in input size. Ablation studies show that it is possible
to achieve these properties with simple feedforward structures, but
that it is much easier to optimize recurrent structures.
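
To make the two stated restrictions concrete, the sketch below applies one shared projection to every input element and mean-pools the results into a fixed-size latent. This is a minimal illustration of the invariance principles from the abstract, not the authors' exact model: the mean-pooling choice, the dimensions, and the function names are illustrative assumptions.

```python
import numpy as np

# Sketch of a controller that is permutation- and size-invariant by construction:
#  - the same projection weights are shared across all input elements, so no
#    optimized parameter is tied to a specific index of the input;
#  - per-element projections are pooled (here: mean) into a fixed-size latent,
#    so the latent space does not grow with the number of projections.
# All names and dimensions are hypothetical, chosen for the example only.

rng = np.random.default_rng(0)
LATENT, ACTIONS = 16, 2

W_proj = rng.normal(size=(1, LATENT))       # shared per-element projection
W_out = rng.normal(size=(LATENT, ACTIONS))  # maps pooled latent to actions

def act(observation: np.ndarray) -> np.ndarray:
    """Map a variable-length observation vector to an action vector."""
    z = np.tanh(observation[:, None] @ W_proj)  # (n_inputs, LATENT)
    pooled = z.mean(axis=0)                     # fixed size, order-free
    return np.tanh(pooled @ W_out)

# Shuffling the inputs or changing their number leaves the model well-defined:
obs = rng.normal(size=5)
assert np.allclose(act(obs), act(obs[::-1].copy()))  # permutation invariant
_ = act(rng.normal(size=8))                          # different input size
```

Because the pooled latent discards input ordering, a purely feedforward version like this cannot tell identical sensor values apart; this is one reason, consistent with the ablation findings above, why recurrent per-element processing can be easier to optimize.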
| Original language | English |
|---|---|
| Publication date | 9 Jul 2022 |
| Publication status | Published - 9 Jul 2022 |
| Event | GECCO '22: Genetic and Evolutionary Computation Conference, Boston, United States. Duration: 9 Jul 2022 → 13 Jul 2022 |
Conference

| Conference | GECCO '22: Genetic and Evolutionary Computation Conference |
|---|---|
| Country/Territory | United States |
| City | Boston |
| Period | 09/07/2022 → 13/07/2022 |
Keywords
- Flexibility in Artificial Agents
- Artificial Neural Networks
- Reinforcement Learning
- Permutation Invariance
- Size Invariance
- Feedforward Structures
- Recurrent Neural Networks