Minimal Neural Network Models for Permutation Invariant Agents

Publication: Conference contribution - not published in proceedings or journal · Paper · Research · peer-reviewed


Organisms in nature have evolved to exhibit flexibility in the face of
changes to the environment and/or to themselves. Artificial neural
networks (ANNs) have proven useful for controlling artificial
agents acting in environments. However, most ANN models used for
reinforcement learning-type tasks have a rigid structure that does
not allow for varying input sizes. Further, they fail catastrophically
if inputs are presented in an ordering unseen during optimization.
We find that these two ANN inflexibilities can be mitigated, and that
their solutions are simple and closely related. For permutation invariance,
no optimized parameters can be tied to a specific index of the
input elements. For size invariance, inputs must be projected onto a
common space that does not grow with the number of projections.
Based on these restrictions, we construct a conceptually simple
model that exhibits flexibility most ANNs lack. We demonstrate the
model's properties on multiple control problems, and show that it
can cope with even very rapid permutations of input indices, as
well as changes in input size. Ablation studies show that it is possible
to achieve these properties with simple feedforward structures, but
that it is much easier to optimize recurrent structures.
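The two restrictions above can be illustrated with a minimal sketch: a shared projection applied identically to every input element (so no parameter is tied to an element's index), followed by mean-pooling into a fixed-size latent (so the latent does not grow with the number of inputs). This is a hedged toy example in NumPy, not the paper's exact architecture; all names and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_LATENT, D_OUT = 3, 8, 2  # per-element features, latent size, action size (assumed)

# The SAME projection is shared across all input elements (permutation invariance).
W_proj = rng.standard_normal((D_IN, D_LATENT))
W_out = rng.standard_normal((D_LATENT, D_OUT))

def act(observations):
    """observations: (n_elements, D_IN) array; n_elements may vary per call."""
    # Mean-pooling is symmetric in its inputs and yields a fixed-size latent,
    # so the output is unchanged by reordering and tolerates size changes.
    latent = np.tanh(observations @ W_proj).mean(axis=0)
    return latent @ W_out

obs = rng.standard_normal((5, D_IN))
perm = rng.permutation(5)
assert np.allclose(act(obs), act(obs[perm]))  # same action under permuted inputs
assert act(obs[:3]).shape == (D_OUT,)         # fewer input elements still work
```

Replacing the mean with any other symmetric aggregation (sum, max) preserves both properties; what would break them is any weight indexed by element position.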
Publication date: 9 Jul 2022
Status: Published - 9 Jul 2022
Event: GECCO '22: Genetic and Evolutionary Computation Conference - Boston, USA
Duration: 9 Jul 2022 - 13 Jul 2022

