Interactive Language Understanding with Multiple Timescale Recurrent Neural Networks

  • University of Hamburg

Research output: Article in proceedings (Conference Article in Proceeding or Book/Report chapter) · Research · peer-review

Abstract

Natural language processing in the human brain is complex and dynamic. Models of how the brain’s architecture acquires language need to take into account the temporal dynamics of verbal utterances as well as of action and embodied visual perception. We propose an architecture based on three Multiple Timescale Recurrent Neural Networks (MTRNNs) interlinked in a cell assembly that learns verbal utterances grounded in dynamic proprioceptive and visual information. Results show that the architecture can describe novel dynamic actions with correct novel utterances; they also indicate that multi-modal integration allows for the disambiguation of concepts.
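The core building block named in the abstract, the MTRNN, uses leaky-integrator neurons whose time constants differ between fast and slow context groups (in the style of Yamashita and Tani's continuous-time RNN formulation). The following is a minimal sketch of a single MTRNN update step; the network sizes, weights, and the two-timescale split are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Illustrative sketch of one Multiple Timescale RNN (MTRNN) update step
# with leaky-integrator neurons. Sizes, time constants, and weights are
# placeholder assumptions for demonstration only.
rng = np.random.default_rng(0)

n_fast, n_slow = 8, 4                         # hypothetical fast/slow context groups
n = n_fast + n_slow
tau = np.concatenate([np.full(n_fast, 2.0),   # small tau: fast-changing units
                      np.full(n_slow, 30.0)]) # large tau: slowly changing units
W = rng.normal(scale=0.3, size=(n, n))        # random recurrent weights (sketch only)
b = np.zeros(n)

def mtrnn_step(u, x_in):
    """Leaky-integrator update:
    u(t+1) = (1 - 1/tau) * u(t) + (1/tau) * (W @ tanh(u(t)) + x_in + b)."""
    y = np.tanh(u)
    return (1.0 - 1.0 / tau) * u + (1.0 / tau) * (W @ y + x_in + b)

# Drive the network with a brief input pulse to the fast units,
# then let the internal state evolve without input.
u = np.zeros(n)
pulse = np.zeros(n)
pulse[:n_fast] = 1.0
for t in range(50):
    u = mtrnn_step(u, pulse if t < 5 else np.zeros(n))
```

Because the fast units leak a large fraction (1/tau) of their state each step while the slow units leak little, the slow group integrates and retains context over longer stretches of the input sequence, which is what lets the interlinked MTRNNs bind slowly unfolding utterances to faster sensorimotor dynamics.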
Original language: English
Title of host publication: Proceedings of the 24th International Conference on Artificial Neural Networks (ICANN2014)
Editors: Stefan Wermter, Cornelius Weber, Włodzisław Duch, Timo Honkela, Petia Koprinkova-Hristova, Sven Magg, Günther Palm, Alessandro E. P. Villa
Number of pages: 8
Volume: 8681
Publisher: Springer
Publication date: 1 Sept 2014
Pages: 193-200
ISBN (Print): 978-3-319-11178-0
DOIs
Publication status: Published - 1 Sept 2014
Externally published: Yes
Series: Lecture Notes in Computer Science
ISSN: 0302-9743

Keywords

  • Grounded language
  • Multimodal integration
  • Temporal dynamics
  • Multiple Timescale Recurrent Neural Networks
  • Embodied cognition
