Data-to-text Generation with Variational Sequential Planning.

Ratish Puduppully, Yao Fu, Mirella Lapata

Publication: Journal article · Research · Peer-reviewed

Abstract

We consider the task of data-to-text generation, which aims to create textual output from non-linguistic input. We focus on generating long-form text, that is, documents with multiple paragraphs, and propose a neural model enhanced with a planning component responsible for organizing high-level information in a coherent and meaningful way. We infer latent plans sequentially with a structured variational model, while interleaving the steps of planning and generation. Text is generated by conditioning on previous variational decisions and previously generated text. Experiments on two data-to-text benchmarks (RotoWire and MLB) show that our model outperforms strong baselines and is sample-efficient in the face of limited training data (e.g., a few hundred instances).
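To make the interleaved planning-and-generation scheme concrete, here is a minimal sketch of the kind of factorization the abstract implies, assuming the input data is x, the text is split into segments y_1, ..., y_T (e.g., paragraphs), and each segment is preceded by a latent plan z_t; the notation is illustrative and the exact parameterization in the paper may differ.

\[
p(y, z \mid x) \;=\; \prod_{t=1}^{T} p(z_t \mid z_{<t},\, y_{<t},\, x)\; p(y_t \mid z_{\le t},\, y_{<t},\, x),
\]
\[
\log p(y \mid x) \;\ge\; \mathbb{E}_{q(z \mid x,\, y)}\big[\log p(y, z \mid x) \,-\, \log q(z \mid x,\, y)\big].
\]

Under this factorization each plan z_t is inferred before its segment y_t is generated, and both condition on earlier plans and earlier text, which is what the abstract means by interleaving the steps of planning and generation; training maximizes the variational lower bound on the right-hand side.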
Original language: English
Journal: Transactions of the Association for Computational Linguistics
Volume: 10
Pages (from-to): 697-715
ISSN: 2307-387X
DOI:
Status: Published - 2022
Published externally: Yes

