
Multi-Document Summarization with Centroid-Based Pretraining.

Research output: Article in proceedings · Research · peer-reviewed

Abstract

In Multi-Document Summarization (MDS), the input can be modeled as a set of documents, and the output is its summary. In this paper, we focus on pretraining objectives for MDS. Specifically, we introduce a novel pretraining objective, which involves selecting the ROUGE-based centroid of each document cluster as a proxy for its summary. Our objective thus does not require human-written summaries and can be utilized for pretraining on a dataset consisting solely of document sets. Through zero-shot, few-shot, and fully supervised experiments on multiple MDS datasets, we show that our model Centrum performs better than or comparably to a state-of-the-art model. We make the pretrained and fine-tuned models freely available to the research community.
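The abstract does not specify the exact ROUGE variant or tokenization used for centroid selection. As a minimal sketch, assuming ROUGE-1 F1 and lowercase whitespace tokenization (both hypothetical choices for illustration), the centroid of a cluster can be taken as the document with the highest average ROUGE score against the other documents:

```python
from collections import Counter

def rouge1_f1(candidate, reference):
    """Unigram-overlap ROUGE-1 F1 between two token lists."""
    cand, ref = Counter(candidate), Counter(reference)
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

def select_centroid(cluster):
    """Return the document with the highest mean ROUGE-1 F1
    against all other documents in the cluster (the 'centroid')."""
    docs = [doc.lower().split() for doc in cluster]
    best_idx, best_score = 0, float("-inf")
    for i, cand in enumerate(docs):
        others = [d for j, d in enumerate(docs) if j != i]
        score = sum(rouge1_f1(cand, ref) for ref in others) / len(others)
        if score > best_score:
            best_idx, best_score = i, score
    return cluster[best_idx]

# The selected centroid then serves as the proxy summary target
# for pretraining, so no human-written summaries are required.
cluster = [
    "the cat sat on the mat",
    "a cat was on the mat",
    "dogs bark loudly at night",
]
print(select_centroid(cluster))
```

The document most lexically similar to the rest of the cluster is selected, which is the intuition behind using the centroid as a summary proxy.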
Original language: English
Title of host publication: Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Publisher: Association for Computational Linguistics
Publication date: 2023
Pages: 128-138
DOIs
Publication status: Published - 2023
Externally published: Yes
Event: Annual Meeting of the Association for Computational Linguistics - Toronto, Canada
Duration: 9 Jul 2023 - 14 Jul 2023
Conference number: 61
https://aclanthology.org/volumes/2023.acl-short/
https://2023.aclweb.org/

Conference

Conference: Annual Meeting of the Association for Computational Linguistics
Number: 61
Country/Territory: Canada
City: Toronto
Period: 09/07/2023 - 14/07/2023
Internet address

Keywords

  • Multi-Document Summarization
  • Pretraining Objective
  • ROUGE-based Centroid
  • Zero-shot Learning
  • Centrum

