Learning Multiple Timescales in Recurrent Neural Networks

Tayfun Alpay, Stefan Heinrich, Stefan Wermter

Research output: Article in proceedings › Research › Peer-reviewed

Abstract

Recurrent Neural Networks (RNNs) are powerful architectures for sequence learning. Recent advances in addressing the vanishing gradient problem have led to improved results and an increased research interest. Among recent proposals are architectural innovations that allow the emergence of multiple timescales during training. This paper explores a number of architectures for sequence generation and prediction tasks with long-term relationships. We compare the Simple Recurrent Network (SRN) and Long Short-Term Memory (LSTM) with the recently proposed Clockwork RNN (CWRNN), Structurally Constrained Recurrent Network (SCRN), and Recurrent Plausibility Network (RPN) with regard to their capabilities of learning multiple timescales. Our results show that partitioning hidden layers under distinct temporal constraints enables the learning of multiple timescales, contributing to the understanding of the fundamental conditions that allow RNNs to self-organize towards accurate temporal abstractions.
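
To illustrate the core mechanism the abstract refers to, the sketch below shows one way a hidden layer can be partitioned into modules that update at different clock periods, in the spirit of the Clockwork RNN (CWRNN). This is a minimal, hypothetical NumPy example, not the authors' implementation; all names, sizes, and random weights are assumptions, and the connectivity restriction of the original CWRNN (only slower modules feed faster ones) is omitted for brevity.

```python
# Minimal, illustrative sketch (not the authors' code): a recurrent layer
# partitioned into modules with distinct clock periods, in the spirit of the
# Clockwork RNN. Sizes, names, and random weights are assumptions; the
# original CWRNN additionally restricts recurrent connections so that only
# slower modules feed faster ones, which is omitted here for brevity.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden = 4, 12
periods = [1, 2, 4, 8]                  # one clock period per module
module_size = n_hidden // len(periods)  # 3 hidden units per module here

W_in = 0.1 * rng.standard_normal((n_hidden, n_in))
W_rec = 0.1 * rng.standard_normal((n_hidden, n_hidden))

def clocked_step(h, x, t):
    """One forward step: a module is updated only when its period divides
    the current time step t; slower modules keep their previous state."""
    pre = np.tanh(W_in @ x + W_rec @ h)
    h_new = h.copy()
    for i, period in enumerate(periods):
        if t % period == 0:
            sl = slice(i * module_size, (i + 1) * module_size)
            h_new[sl] = pre[sl]
    return h_new

# Run the layer over a random input sequence.
h = np.zeros(n_hidden)
for t, x in enumerate(rng.standard_normal((16, n_in))):
    h = clocked_step(h, x, t)
```

A leaky-activation variant (cf. the "Leaky Activation" keyword) would instead blend old and new state per unit, e.g. h = (1 - 1/tau) * h + (1/tau) * pre, with a distinct time constant tau per partition, so that timescales emerge from continuous integration rather than discrete clocking.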
Original language: English
Title of host publication: Proceedings of the 25th International Conference on Artificial Neural Networks (ICANN 2016)
Editors: Alessandro E.P. Villa, Paolo Masulli, Javier Antonio Pons Rivero
Number of pages: 8
Volume: 9886
Place of publication: Barcelona, ES
Publisher: Springer International Publishing, Switzerland
Publication date: 1 Sept 2016
Pages: 132-139
DOIs
Publication status: Published - 1 Sept 2016
Externally published: Yes
Series: Lecture Notes in Computer Science

Keywords

  • Leaky Activation
  • Multiple Timescales
  • Sequence Learning
  • Clocked Activation
  • Recurrent Neural Networks
