Spurious Correlations in Cross-Topic Argument Mining

Terne Sasha Thorn Jakobsen, Maria Jung Barrett, Anders Søgaard

Publication: Conference contribution in proceedings - Research - peer-reviewed

Abstract

Recent work in cross-topic argument mining attempts to learn models that generalise across topics rather than merely relying on within-topic spurious correlations. We examine the effectiveness of this approach by analysing the output of single-task and multi-task models for cross-topic argument mining through a combination of linear approximations of their decision boundaries, manual feature grouping, challenge examples, and ablations across the input vocabulary. Surprisingly, we show that cross-topic models still rely mostly on spurious correlations and only generalise within closely related topics, e.g., a model trained only on closed-class words and a few common open-class words outperforms a state-of-the-art cross-topic model on distant target topics.
Original language: English
Title: Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics
Number of pages: 9
Publisher: Association for Computational Linguistics
Publication date: 2021
Pages: 263-277
DOI
Status: Published - 2021

Keywords

  • cross-topic argument mining
  • spurious correlations
  • multi-task models
  • linear approximations
  • input vocabulary ablations

