Measuring Catastrophic Forgetting in Visual Question Answering

Claudio Greco, Barbara Plank, Raquel Fernandez, Raffaella Bernardi

Research output: Book chapter in conference proceedings · Research · peer-reviewed

Abstract

Catastrophic forgetting is a ubiquitous problem for the current generation of Artificial Neural Networks: when a network is asked to learn multiple tasks in sequence, it fails dramatically because it tends to forget past knowledge. Little is known about the extent to which multimodal conversational agents suffer from this phenomenon. In this paper, we study the problem of catastrophic forgetting in Visual Question Answering (VQA) and propose experiments in which we analyze pairs of tasks based on CLEVR, a dataset requiring different skills that involve visual or linguistic knowledge. Our results show that dramatic forgetting takes place in VQA, calling for studies on how multimodal models can be enhanced with continual learning methods.
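To make the measurement concrete, the sketch below is a minimal, self-contained toy illustration (not the paper's setup, which uses CLEVR-based VQA task pairs and multimodal models) of how catastrophic forgetting is typically quantified: train a model on task A, record its accuracy on A, then train sequentially on task B without access to A, and re-evaluate on A. All names, the synthetic tasks, and the small MLP are hypothetical choices for illustration only.

```python
# Toy sketch of measuring catastrophic forgetting under sequential training.
# NOT the authors' experimental setup; synthetic tasks stand in for CLEVR-based VQA tasks.
import torch
import torch.nn as nn


def make_task(seed: int, n: int = 2000, dim: int = 20):
    """Create a toy binary classification task with its own random decision rule."""
    g = torch.Generator().manual_seed(seed)
    x = torch.randn(n, dim, generator=g)
    w = torch.randn(dim, generator=g)
    y = (x @ w > 0).long()
    return x, y


def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()


def train(model, x, y, epochs: int = 50, lr: float = 1e-2):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()


model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
xa, ya = make_task(seed=0)  # "task A"
xb, yb = make_task(seed=1)  # "task B"

train(model, xa, ya)
acc_a_before = accuracy(model, xa, ya)  # accuracy on A right after learning A

train(model, xb, yb)                    # sequential training on B, no access to A
acc_a_after = accuracy(model, xa, ya)   # accuracy on A after learning B

print(f"Task A accuracy before/after learning B: {acc_a_before:.2f} / {acc_a_after:.2f}")
print(f"Forgetting on task A: {acc_a_before - acc_a_after:.2f}")
```

The drop in task-A accuracy after training on task B is the forgetting measure; the paper applies this kind of before/after comparison to pairs of CLEVR-based VQA tasks.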
Original language: English
Title of host publication: Tenth International Workshop on Spoken Dialogue Systems Technology (IWSDS) 2019
Publisher: Springer
Publication date: 2019
Pages: 381-387
Publication status: Published - 2019
Series: Lecture Notes in Electrical Engineering (LNEE)
Volume: 714
ISSN: 1876-1100

Keywords

  • Catastrophic Forgetting
  • Artificial Neural Networks
  • Multimodal Conversational Agents
  • Visual Question Answering
  • Continual Learning Methods

