Abstract
Continual learning, i.e., the ability to sequentially learn tasks without catastrophic
forgetting of previously learned ones, is an important open challenge in machine
learning. In this paper we take a step in this direction by showing that the recently
proposed Evolving Neural Turing Machine (ENTM) approach is able to perform
one-shot learning in a reinforcement learning task without catastrophic forgetting
of previously stored associations.
| Original language | English |
|---|---|
| Publication date | 2016 |
| Number of pages | 5 |
| Publication status | Published - 2016 |
Keywords
- Continual learning
- Catastrophic forgetting
- Evolving Neural Turing Machine
- One-shot learning
- Reinforcement learning