
Data-to-text Generation with Macro Planning

Research output: Journal article · Research · peer-review

Abstract

Recent approaches to data-to-text generation have adopted the very successful encoder-decoder architecture or variants thereof. These models generate text that is fluent (but often imprecise) and perform quite poorly at selecting appropriate content and ordering it coherently. To overcome some of these issues, we propose a neural model with a macro planning stage followed by a generation stage reminiscent of traditional methods which embrace separate modules for planning and surface realization. Macro plans represent high level organization of important content such as entities, events, and their interactions; they are learned from data and given as input to the generator. Extensive experiments on two data-to-text benchmarks (RotoWire and MLB) show that our approach outperforms competitive baselines in terms of automatic and human evaluation.
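The two-stage idea described above — learn a macro plan that selects and orders salient content, then hand it to a surface realizer — can be illustrated with a minimal sketch. All names and the scoring heuristic below are hypothetical stand-ins, not the paper's actual neural components:

```python
# Toy sketch of a plan-then-generate pipeline (illustrative only).
# Stage 1 (macro planning): select salient records and order them.
# Stage 2 (generation): verbalize each plan item as text.

def macro_plan(records):
    """Content selection + ordering: keep records scored above a
    threshold and order them by descending salience score."""
    salient = [r for r in records if r["score"] > 0.5]
    return sorted(salient, key=lambda r: r["score"], reverse=True)

def generate(plan):
    """Surface realization via a trivial template per plan item."""
    return " ".join(f"{r['entity']} {r['event']}." for r in plan)

records = [
    {"entity": "Lakers", "event": "won 102-99", "score": 0.9},
    {"entity": "Referee", "event": "called a timeout", "score": 0.1},
    {"entity": "LeBron James", "event": "scored 28 points", "score": 0.8},
]
plan = macro_plan(records)
print(generate(plan))  # Lakers won 102-99. LeBron James scored 28 points.
```

In the paper's setting both stages are learned from data rather than hand-scored as here; the sketch only shows how an explicit plan decouples what to say (and in what order) from how to say it.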
Original language: English
Journal: Transactions of the Association for Computational Linguistics
Volume: 9
Pages (from-to): 510-527
ISSN: 2307-387X
DOIs
Publication status: Published - 2021
Externally published: Yes

