Joint Rumour Stance and Veracity Prediction

Anders Edelbo Lillie, Emil Refsgaard Middelboe, Leon Derczynski

Research output: Conference Article in Proceeding or Book/Report chapter › Article in proceedings › Research › peer-review

Abstract

The net is rife with rumours that spread through microblogs and social media. Not all the claims in these can be verified. However, recent work has shown that the stances commenters take toward claims can alone be sufficiently good indicators of claim veracity, using e.g. an HMM that takes conversational stance sequences as its only input. Existing results are monolingual (English) and mono-platform (Twitter). This paper introduces a stance-annotated Reddit dataset for the Danish language and describes various implementations of stance classification models. Of these, a Linear SVM predicts stance best, with 0.76 accuracy / 0.42 macro F1. Stance labels are then used to predict veracity across platforms and also across languages, training on conversations held in one language and applying the model to conversations held in another. In our experiments, monolingual models reach a stance-based veracity accuracy of 0.83 (F1 0.68); applying the model across languages predicts the veracity of claims with an accuracy of 0.82 (F1 0.67). This demonstrates the surprising and powerful viability of transferring stance-based veracity prediction across languages.
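The sketch below illustrates the two-stage pipeline the abstract describes: a Linear SVM classifies the stance of each comment, and the resulting stance sequence for a thread is scored to predict the rumour's veracity. It is not the authors' implementation; the TF-IDF features, the per-class Markov transition model standing in for the paper's HMM, and all names and toy data are illustrative assumptions.

```python
# Hedged sketch of stance-then-veracity prediction. Stage 1 uses a Linear SVM
# over TF-IDF features (an assumed feature set); Stage 2 substitutes a simple
# per-class first-order Markov model for the HMM mentioned in the abstract.
from collections import Counter
import math

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

STANCES = ["support", "deny", "query", "comment"]

# --- Stage 1: stance classification (Linear SVM) ---
stance_clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())

# Toy training data; the paper uses annotated Reddit/Twitter conversations.
train_texts = ["that is true", "this is false", "is there a source?", "interesting"]
train_stances = ["support", "deny", "query", "comment"]
stance_clf.fit(train_texts, train_stances)

# --- Stage 2: veracity from stance sequences ---
class StanceSequenceVeracityModel:
    """Scores a thread's stance sequence under per-veracity-class transition
    models (add-one smoothing) and predicts the highest-scoring class."""

    def __init__(self, states=STANCES):
        self.states = states
        self.models = {}

    def fit(self, sequences, labels):
        for label in set(labels):
            counts = Counter()
            for seq, lab in zip(sequences, labels):
                if lab == label:
                    for prev, curr in zip(seq, seq[1:]):
                        counts[(prev, curr)] += 1
            self.models[label] = counts
        return self

    def _log_likelihood(self, seq, counts):
        score = 0.0
        for prev, curr in zip(seq, seq[1:]):
            num = counts[(prev, curr)] + 1
            denom = sum(counts[(prev, s)] for s in self.states) + len(self.states)
            score += math.log(num / denom)
        return score

    def predict(self, seq):
        return max(self.models, key=lambda lab: self._log_likelihood(seq, self.models[lab]))

# Toy veracity training data: one stance sequence per rumour thread.
train_seqs = [["support", "support", "comment"], ["deny", "query", "deny"]]
train_labels = ["true", "false"]
veracity_clf = StanceSequenceVeracityModel().fit(train_seqs, train_labels)

# End-to-end: classify stances for a new thread, then predict veracity.
thread = ["looks legit to me", "is there a source?", "this is false"]
stance_seq = list(stance_clf.predict(thread))
print(stance_seq, "->", veracity_clf.predict(stance_seq))
```

Because the veracity model sees only stance labels, not text, it can be trained on threads in one language and applied to threads in another, which is the cross-lingual transfer setting the abstract reports.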
Original language: English
Title of host publication: Nordic Conference of Computational Linguistics (2019)
Publisher: Linköping University Electronic Press
Publication date: 2019
Pages: 208–221
ISBN (Electronic): 978-91-7929-995-8
Publication status: Published - 2019
Series: NEALT (Northern European Association of Language Technology) Proceedings Series
ISSN: 1736-6305

Keywords

  • Rumour detection
  • Stance classification
  • Cross-lingual transfer
  • Linear SVM
  • Social media analysis
