Abstract
In Multi-Document Summarization (MDS), the input can be modeled as a set of documents, and the output is its summary. In this paper, we focus on pretraining objectives for MDS. Specifically, we introduce a novel pretraining objective, which involves selecting the ROUGE-based centroid of each document cluster as a proxy for its summary. Our objective thus does not require human-written summaries and can be utilized for pretraining on a dataset consisting solely of document sets. Through zero-shot, few-shot, and fully supervised experiments on multiple MDS datasets, we show that our model, Centrum, is better than or comparable to a state-of-the-art model. We make the pretrained and fine-tuned models freely available to the research community.
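The centroid-based objective described above can be sketched as follows: within each document cluster, the document that scores highest average ROUGE against its peers is treated as the cluster's pseudo-summary, and the remaining documents serve as the model input. The snippet below is a minimal illustration assuming the `rouge_score` package; the specific ROUGE variants and the averaging scheme are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' released code) of ROUGE-based centroid
# selection: pick the document whose mean ROUGE F1 against the other cluster
# members is highest and use it as the pseudo-summary pretraining target.
# The rouge_score package and the ROUGE-1/2/L averaging are assumptions.
from rouge_score import rouge_scorer

def select_pseudo_summary(cluster):
    """Return the ROUGE-based centroid document of a cluster (list of strings)."""
    scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
    best_doc, best_score = None, float("-inf")
    for i, candidate in enumerate(cluster):
        others = [doc for j, doc in enumerate(cluster) if j != i]
        if not others:
            continue
        # Mean ROUGE F1 of the candidate against every other document.
        total = sum(
            scorer.score(other, candidate)[metric].fmeasure
            for other in others
            for metric in ("rouge1", "rouge2", "rougeL")
        )
        avg = total / (3 * len(others))
        if avg > best_score:
            best_doc, best_score = candidate, avg
    return best_doc
```

Under this sketch, pretraining pairs are formed without human-written references: the selected centroid acts as the target summary, and the rest of the cluster is concatenated as the source.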
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) |
| Publisher | Association for Computational Linguistics |
| Publication date | 2023 |
| Pages | 128-138 |
| DOIs | |
| Publication status | Published - 2023 |
| Externally published | Yes |
| Event | Annual Meeting of the Association for Computational Linguistics (Conference number: 61) - Toronto, Canada, 9 Jul 2023 → 14 Jul 2023; https://aclanthology.org/volumes/2023.acl-short/ ; https://2023.aclweb.org/ |
Conference
| Conference | Annual Meeting of the Association for Computational Linguistics |
|---|---|
| Number | 61 |
| Country/Territory | Canada |
| City | Toronto |
| Period | 09/07/2023 → 14/07/2023 |
| Internet address | https://aclanthology.org/volumes/2023.acl-short/ ; https://2023.aclweb.org/ |
Keywords
- Multi-Document Summarization
- Pretraining Objective
- ROUGE-based Centroid
- Zero-shot Learning
- Centrum