Abstract
Recent work has shown promising results using Hebbian meta-learning to solve hard reinforcement learning problems and to adapt, to a limited degree, to changes in the environment. In previous work, each synapse has its own learning rule. This allows each synapse to learn a highly specific rule, and we hypothesize that this limits the ability to discover generally useful Hebbian learning rules. We further hypothesize that limiting the number of Hebbian learning rules through a "genomic bottleneck" can act as a regularizer, leading to better generalization across changes to the environment. We test this hypothesis by decoupling the number of Hebbian learning rules from the number of synapses and systematically varying the number of learning rules. We thoroughly explore how well these Hebbian meta-learning networks adapt to changes in their environment.
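The decoupling described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the commonly used ABCD parameterization of Hebbian plasticity (Δw = η·(A·pre·post + B·pre + C·post + D)), and the random rule-to-synapse assignment is a placeholder for whatever mapping the genome encodes.

```python
import numpy as np

rng = np.random.default_rng(0)

n_pre, n_post = 4, 3  # layer sizes (illustrative)
n_rules = 2           # "genomic bottleneck": far fewer rules than synapses

# Each shared rule is an (A, B, C, D, eta) parameter vector.
rules = rng.normal(size=(n_rules, 5))

# Map every synapse to one of the shared rules.
# (A random assignment here; the actual mapping is an assumption.)
assignment = rng.integers(0, n_rules, size=(n_pre, n_post))

W = rng.normal(scale=0.1, size=(n_pre, n_post))

def hebbian_update(W, pre, post, rules, assignment):
    """Apply Delta w = eta * (A*pre*post + B*pre + C*post + D) per synapse,
    with each synapse looking up its parameters in the shared rule table."""
    A, B, C, D, eta = (rules[assignment, k] for k in range(5))
    pre = pre[:, None]    # shape (n_pre, 1), broadcasts over columns
    post = post[None, :]  # shape (1, n_post), broadcasts over rows
    return W + eta * (A * pre * post + B * pre + C * post + D)

pre_act = rng.normal(size=n_pre)
post_act = np.tanh(pre_act @ W)
W = hebbian_update(W, pre_act, post_act, rules, assignment)
print(W.shape)  # → (4, 3)
```

With per-synapse rules, the rule table would have `n_pre * n_post` rows and `assignment` would be the identity; shrinking `n_rules` is the bottleneck the abstract varies systematically.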
Original language | English
---|---
Journal | Proceedings of Machine Learning Research
Volume | 148
ISSN | 2640-3498
DOI |
Status | Published - 2021
Keywords
- Hebbian learning
- Meta-learning
- Genomic Bottleneck