Towards Continual Reinforcement Learning through Evolutionary Meta-Learning

Publication: Conference article in proceedings or book/report chapter › Conference contribution in proceedings › Research › peer-reviewed

Abstract

In continual learning, an agent is exposed to a changing environment, requiring it to adapt during execution time. While traditional reinforcement learning (RL) methods have shown impressive results in various domains, there has been less progress in addressing the challenge of continual learning. Current RL approaches do not allow the agent to adapt during execution but only during a dedicated training phase. Here we study the problem of continual learning in a 2D bipedal walker domain, in which the legs of the walker grow over its lifetime, requiring the agent to adapt. The introduced approach combines neuroevolution, to determine the starting weights of a deep neural network, and a version of deep reinforcement learning that is continually running during execution time. The proof-of-concept results show that the combined approach gives a better generalization performance when compared to evolution or reinforcement learning alone. The hybridization of reinforcement learning and evolution opens up exciting new research directions for continually learning agents that can benefit from suitable priors determined by an evolutionary process.
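The sketch below is not the authors' implementation; it is a minimal illustration of the hybrid idea the abstract describes, under assumed details: an outer evolution-strategy loop searches for good initial policy weights, while an inner loop keeps nudging those weights with a stochastic gradient estimate during the agent's lifetime as the morphology (leg length) changes. The environment, network size, and update rules are all placeholder assumptions.

```python
# Sketch only: evolutionary search over initial weights + continual adaptation
# during evaluation. All constants and the toy "environment" are assumptions.
import numpy as np

rng = np.random.default_rng(0)
N_PARAMS = 64      # assumed size of the flattened policy parameters
POP_SIZE = 32      # evolution strategy population size
SIGMA = 0.1        # perturbation noise
OUTER_LR = 0.05    # evolutionary step size
INNER_LR = 0.01    # lifetime (continual) learning rate


def segment_return(theta, leg_length):
    """Toy stand-in for rolling out the policy on a walker with given leg length."""
    target = np.full_like(theta, leg_length)   # pretend the optimum shifts with morphology
    return -np.sum((theta - target) ** 2)


def lifetime_return(theta0, morphology_schedule):
    """Evaluate initial weights on an agent whose body changes over its lifetime.

    The inner update stands in for continual RL: after each lifetime segment the
    weights are adjusted with an antithetic finite-difference gradient estimate,
    so adaptation happens during evaluation rather than in a separate phase.
    """
    theta = theta0.copy()
    total = 0.0
    for leg_length in morphology_schedule:      # legs grow over the lifetime
        total += segment_return(theta, leg_length)
        eps = rng.normal(size=theta.shape)
        g = (segment_return(theta + SIGMA * eps, leg_length)
             - segment_return(theta - SIGMA * eps, leg_length)) / (2 * SIGMA)
        theta += INNER_LR * g * eps
    return total


# Outer loop: a simple evolution strategy over the *initial* weights only.
theta_init = rng.normal(size=N_PARAMS)
schedule = np.linspace(0.5, 1.5, 5)             # assumed leg-growth schedule
for gen in range(50):
    noise = rng.normal(size=(POP_SIZE, N_PARAMS))
    returns = np.array([lifetime_return(theta_init + SIGMA * n, schedule) for n in noise])
    advantages = (returns - returns.mean()) / (returns.std() + 1e-8)
    theta_init += OUTER_LR / (POP_SIZE * SIGMA) * noise.T @ advantages
```

In this framing, evolution supplies a prior (the starting weights) that makes the within-lifetime adaptation effective when the body changes, which is the division of labor the abstract argues for.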
Original language: English
Title: Towards Continual Reinforcement Learning through Evolutionary Meta-Learning
Number of pages: 2
Volume: Proceedings of the Genetic and Evolutionary Computation Conference Companion
Place of publication: New York, NY, USA
Publisher: Association for Computing Machinery
Publication date: 17 Jul 2019
Edition: 2019
Pages: 119-120
ISBN (electronic): 978-1-4503-6748-6
DOI
Status: Published - 17 Jul 2019

Keywords

  • Reinforcement learning
  • Continual learning
  • Meta-learning

