Bootstrapping Knowledge Graphs From Images and Text

Jiayuan Mao, Yuan Yao, Stefan Heinrich, Tobias Hinz, Cornelius Weber, Stefan Wermter, Zhiyuan Liu, Maosong Sun

Research output: Journal article · Research · peer-review

Abstract

Generating structured Knowledge Graphs (KGs) is a difficult, open problem, yet it is relevant to a range of tasks in decision making and information augmentation. A promising approach is to treat KG generation as building a relational representation of the input (e.g., textual paragraphs or natural images), where nodes represent entities and edges represent relations. This procedure naturally decomposes into two phases: extracting primary relations from the input, and completing the KG through reasoning. In this paper, we propose a hybrid KG builder that combines these two phases in a unified framework and generates KGs from scratch. Specifically, we employ a neural relation extractor that resolves primary relations from the input, and a differentiable inductive logic programming (ILP) model that iteratively completes the KG. We evaluate our framework in both textual and visual domains and achieve comparable performance on relation extraction datasets based on Wikidata and the Visual Genome. The framework surpasses neural baselines by a noticeable gap when reasoning out dense KGs, and it performs particularly well on rare relations.
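The two-phase pipeline described above can be sketched in miniature. In this illustration the neural extractor is replaced by a stub returning fixed triples, and the differentiable ILP completion step is approximated by classical forward chaining over hand-written rules until a fixpoint; all relation names and rules here are hypothetical, not from the paper.

```python
def extract_primary_relations(text):
    # Stand-in for the neural relation extractor: in the real framework
    # this would be a learned model mapping text/images to (head, relation,
    # tail) triples. Here we return a fixed, hypothetical set.
    return {("wheel", "part_of", "car"),
            ("car", "part_of", "traffic"),
            ("alice", "drives", "car")}

# Chaining rules of the form (body1, body2, head):
# if (a, body1, b) and (b, body2, c) hold, infer (a, head, c).
# A single transitivity rule for "part_of" serves as the example.
RULES = [("part_of", "part_of", "part_of")]

def complete_kg(triples, rules, max_iters=10):
    """Iteratively apply rules until no new facts appear (a fixpoint),
    mimicking the role of the iterative ILP completion phase."""
    kg = set(triples)
    for _ in range(max_iters):
        new = set()
        for r1, r2, rh in rules:
            for (a, ra, b) in kg:
                if ra != r1:
                    continue
                for (b2, rb, c) in kg:
                    if b2 == b and rb == r2 and (a, rh, c) not in kg:
                        new.add((a, rh, c))
        if not new:
            break
        kg |= new
    return kg

kg = complete_kg(extract_primary_relations(""), RULES)
# ("wheel", "part_of", "traffic") is inferred by transitivity.
```

In the paper's actual framework the rules themselves are learned end-to-end by the differentiable ILP model rather than hand-written, and inference is soft rather than the hard set operations used in this sketch.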
Original language: English
Journal: Frontiers in Neurorobotics
Volume: 13
Number of pages: 12
DOIs
Publication status: Published - 1 Nov 2019
Externally published: Yes

Keywords

  • relation learning
  • relation prediction
  • information extraction
  • knowledge graphs
  • inductive logic programming
