Spurious Correlations in Cross-Topic Argument Mining

Terne Sasha Thorn Jakobsen, Maria Jung Barrett, Anders Søgaard

Research output: Article in proceedings · Research · peer-review


Recent work in cross-topic argument mining attempts to learn models that generalise across topics rather than merely relying on within-topic spurious correlations. We examine the effectiveness of this approach by analysing the output of single-task and multi-task models for cross-topic argument mining, through a combination of linear approximations of their decision boundaries, manual feature grouping, challenge examples, and ablations across the input vocabulary. Surprisingly, we show that cross-topic models still rely mostly on spurious correlations and only generalise within closely related topics, e.g., a model trained only on closed-class words and a few common open-class words outperforms a state-of-the-art cross-topic model on distant target topics.
Original language: English
Title of host publication: Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics
Number of pages: 9
Publisher: Association for Computational Linguistics
Publication date: 2021
Publication status: Published - 2021


  • cross-topic argument mining
  • spurious correlations
  • multi-task models
  • linear approximations
  • input vocabulary ablations

