Abstract
Organisms in nature have evolved to exhibit flexibility in the face of changes to the environment and/or to themselves. Artificial neural networks (ANNs) have proven useful for controlling artificial agents acting in environments. However, most ANN models used for reinforcement learning-type tasks have a rigid structure that does not allow for varying input sizes. Further, they fail catastrophically if inputs are presented in an ordering unseen during optimization. We find that these two ANN inflexibilities can be mitigated and that their solutions are simple and highly related. For permutation invariance, no optimized parameters can be tied to a specific index of the input elements. For size invariance, inputs must be projected onto a common space that does not grow with the number of projections. Based on these restrictions, we construct a conceptually simple model that exhibits flexibility most ANNs lack. We demonstrate the model's properties on multiple control problems, and show that it can cope with even very rapid permutations of input indices, as well as changes in input size. Ablation studies show that it is possible to achieve these properties with simple feedforward structures, but that it is much easier to optimize recurrent structures.
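As a concrete illustration of the two restrictions stated in the abstract, the sketch below shows a shared per-element encoder with mean pooling. This is a minimal assumed example, not the paper's actual architecture: the class name, dimensions, and tanh nonlinearity are hypothetical. It demonstrates the two properties directly: no parameter is tied to a specific input index (permutation invariance), and the pooled latent has a fixed size regardless of how many inputs are projected into it (size invariance).

```python
import numpy as np

class FlexibleEncoder:
    """Illustrative sketch (not the paper's model): a single weight
    matrix shared across ALL input elements, followed by mean pooling
    into a fixed-size latent vector."""

    def __init__(self, elem_dim: int, latent_dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        # One shared projection: no row or column belongs to a
        # particular input index, so reordering inputs cannot matter.
        self.W = rng.normal(0.0, 0.1, size=(elem_dim, latent_dim))

    def encode(self, observations: np.ndarray) -> np.ndarray:
        """observations: (n_elements, elem_dim); n_elements may vary."""
        projected = np.tanh(observations @ self.W)  # (n, latent_dim)
        # Mean pooling yields a (latent_dim,) vector for any n, and is
        # unchanged under any permutation of the rows.
        return projected.mean(axis=0)

enc = FlexibleEncoder(elem_dim=1, latent_dim=8)
obs = np.array([[0.3], [-1.2], [0.7]])
z1 = enc.encode(obs)             # original ordering
z2 = enc.encode(obs[[2, 0, 1]])  # permuted ordering
z3 = enc.encode(obs[:2])         # fewer input elements
assert np.allclose(z1, z2)       # permutation invariance
assert z3.shape == z1.shape      # latent size independent of input count
```

In this toy version the pooled latent could feed any downstream policy network; the abstract's ablation result suggests that making such a structure recurrent eases optimization, though the sketch above is purely feedforward.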
Original language | English |
---|---|
Publication date | 9 Jul 2022 |
Status | Published - 9 Jul 2022 |
Event | GECCO '22: Genetic and Evolutionary Computation Conference - Boston, United States. Duration: 9 Jul 2022 → 13 Jul 2022 |

Conference

Conference | GECCO '22: Genetic and Evolutionary Computation Conference |
---|---|
Country/Territory | United States |
City | Boston |
Period | 09/07/2022 → 13/07/2022 |
Keywords
- Flexibility in Artificial Agents
- Artificial Neural Networks
- Reinforcement Learning
- Permutation Invariance
- Size Invariance
- Feedforward Structures
- Recurrent Neural Networks