Automated Curriculum Learning by Rewarding Temporally Rare Events

Niels Justesen, Sebastian Risi

Research output: Article in proceedings › Research › peer-review

Abstract

Reward shaping allows reinforcement learning (RL) agents to accelerate learning by receiving additional reward signals. However, these signals can be difficult to design manually, especially for complex RL tasks. We propose a simple and general approach that determines the reward of pre-defined events by their rarity alone. Here events become less rewarding as they are experienced more often, which encourages the agent to continually explore new types of events as it learns. The adaptiveness of this reward function results in a form of automated curriculum learning that does not have to be specified by the experimenter. We demonstrate that this Rarity of Events (RoE) approach enables the agent to succeed in challenging VizDoom scenarios without access to the extrinsic reward from the environment. Furthermore, the results demonstrate that RoE learns a more versatile policy that adapts well to critical changes in the environment. Rewarding events based on their rarity could help in many unsolved RL environments that are characterized by sparse extrinsic rewards but a plethora of known event types.
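The core mechanism described in the abstract — events that yield less reward the more often they are experienced — can be sketched as follows. This is a minimal illustration of the idea, not the paper's exact formulation; the class name, the decay rate, and the event names are assumptions introduced here for illustration.

```python
class RarityOfEvents:
    """Rough sketch of a rarity-based reward: each predefined event type
    is rewarded inversely to how often it has occurred recently."""

    def __init__(self, event_types, decay=0.99):
        # Running count per event type, decayed between episodes so that
        # rarity is "temporal", i.e. measured over recent experience.
        self.counts = {e: 1.0 for e in event_types}
        self.decay = decay

    def reward(self, event):
        # Rare events have low counts and thus yield high rewards;
        # repeating the same event makes it progressively less rewarding.
        r = 1.0 / self.counts[event]
        self.counts[event] += 1.0
        return r

    def end_episode(self):
        # Decaying the counts lets an event become rewarding again
        # if the agent stops experiencing it for a while.
        for e in self.counts:
            self.counts[e] = max(1.0, self.counts[e] * self.decay)
```

For example, with hypothetical VizDoom-style events, `RarityOfEvents(["kill", "medkit"])` would reward the first `kill` fully and each subsequent one less, nudging the agent toward event types it has not yet mastered — the automated-curriculum effect the abstract describes.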
Original language: English
Title of host publication: 2018 IEEE Conference on Computational Intelligence and Games
Number of pages: 8
Publisher: IEEE
Publication date: 2018
Pages: 293-300
ISBN (Print): 978-1-5386-4359-4
ISBN (Electronic): 978-1-5386-4359-0
Publication status: Published - 2018

Keywords

  • Reinforcement Learning
  • Reward Shaping
  • Automated Curriculum Learning
  • Rarity of Events
  • Exploration in RL

