Abstract
This paper discusses some central caveats of summarisation, incurred in the use of the ROUGE metric for evaluation, with respect to optimal solutions. The task is NP-hard, for which we give the first proof. Still, as we show empirically for three central benchmark datasets for the task, greedy algorithms seem to perform optimally according to the metric. Additionally, overall quality assurance is problematic: there is no natural upper bound on the quality of summarisation systems, and even humans are excluded from performing optimal summarisation.
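The greedy algorithms referred to in the abstract select sentences one at a time, each time adding the sentence with the largest marginal gain in ROUGE score against the reference summaries. Below is a minimal, illustrative sketch of such a greedy extractive summariser using a simplified ROUGE-1 recall; the function names, the stopping criterion, and the simplified scoring are assumptions for illustration, not the paper's implementation.

```python
from collections import Counter
from typing import List


def rouge1_recall(candidate: str, reference: str) -> float:
    """Unigram recall of the reference covered by the candidate (simplified ROUGE-1)."""
    cand_counts = Counter(candidate.lower().split())
    ref_counts = Counter(reference.lower().split())
    overlap = sum(min(cand_counts[w], ref_counts[w]) for w in ref_counts)
    return overlap / max(sum(ref_counts.values()), 1)


def greedy_summary(sentences: List[str], reference: str, budget: int = 3) -> List[str]:
    """Greedily add the sentence that most improves ROUGE-1 recall of the summary."""
    selected: List[str] = []
    remaining = list(sentences)
    while remaining and len(selected) < budget:
        best_sentence = None
        best_score = rouge1_recall(" ".join(selected), reference)
        for sentence in remaining:
            score = rouge1_recall(" ".join(selected + [sentence]), reference)
            if score > best_score:
                best_sentence, best_score = sentence, score
        if best_sentence is None:  # no remaining sentence improves the score
            break
        selected.append(best_sentence)
        remaining.remove(best_sentence)
    return selected


if __name__ == "__main__":
    document = [
        "Extractive summarisation selects sentences from a document.",
        "Greedy selection adds the sentence with the largest marginal ROUGE gain.",
        "Finding the summary with maximal ROUGE score is NP-hard in general.",
    ]
    reference = "Extractive summarisation with maximal ROUGE score is NP-hard."
    print(greedy_summary(document, reference, budget=2))
```

Such a greedy procedure carries no optimality guarantee in general, which is precisely why the empirical finding that it matches the optimum on the benchmark datasets is notable.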
Original language | English |
---|---|
Title | Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics |
Number of pages | 5 |
Volume | 2 |
Publisher | Association for Computational Linguistics |
Publication date | 2017 |
Pages | 41–45 |
ISBN (Print) | 978-1-945626-34-0 |
Status | Published - 2017 |
Event | The 15th Conference of the European Chapter of the Association for Computational Linguistics - Valencia, Spain. Duration: 3 Apr 2017 → 7 Apr 2017. http://eacl2017.org/ |
Conference
Conference | The 15th Conference of the European Chapter of the Association for Computational Linguistics |
---|---|
Country/Territory | Spain |
City | Valencia |
Period | 03/04/2017 → 07/04/2017 |
Internet address | http://eacl2017.org/ |