Abstract
Recent work in cross-topic argument mining attempts to learn models that generalise across topics rather than merely relying on within-topic spurious correlations. We examine the effectiveness of this approach by analysing the output of single-task and multi-task models for cross-topic argument mining, through a combination of linear approximations of their decision boundaries, manual feature grouping, challenge examples, and ablations across the input vocabulary. Surprisingly, we show that cross-topic models still rely mostly on spurious correlations and only generalise within closely related topics, e.g., a model trained only on closed-class words and a few common open-class words outperforms a state-of-the-art cross-topic model on distant target topics.
| Original language | English |
|---|---|
| Title of host publication | Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics |
| Number of pages | 9 |
| Publisher | Association for Computational Linguistics |
| Publication date | 2021 |
| Pages | 263-277 |
| DOIs | |
| Publication status | Published - 2021 |
| Event | Conference on Lexical and Computational Semantics (No. 10), VIRTUAL, Thailand, 5 Aug 2021 → 6 Aug 2021 |
Conference
| Conference | Conference on Lexical and Computational Semantics |
|---|---|
| Number | 10 |
| Country/Territory | Thailand |
| City | VIRTUAL |
| Period | 5 Aug 2021 → 6 Aug 2021 |
Keywords
- cross-topic argument mining
- spurious correlations
- multi-task models
- linear approximations
- input vocabulary ablations
Title: Spurious Correlations in Cross-Topic Argument Mining