NLP North at WNUT-2020 Task 2: Pre-training versus Ensembling for Detection of Informative COVID-19 English Tweets

Anders Giovanni Møller, Rob van der Goot, Barbara Plank

Publication: Conference article in proceedings or book/report chapter › Conference contribution in proceedings › Research › peer review

Abstract

With the COVID-19 pandemic raging worldwide since the beginning of 2020, the need for monitoring systems to track relevant information on social media is vitally important. This paper describes our submission to WNUT-2020 Task 2: Identification of Informative COVID-19 English Tweets. We investigate the effectiveness of a variety of classification models and find that domain-specific pre-trained BERT models lead to the best performance. On top of this, we attempt a variety of ensembling strategies, but these attempts do not lead to further improvements. Our final best model, the standalone CT-BERT model, proved to be highly competitive, leading to a shared first place in the shared task. Our results emphasize the importance of domain- and task-related pre-training.
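The abstract's approach (a domain-specific pre-trained model, CT-BERT, used for binary tweet classification) can be illustrated with a minimal sketch. This is not the authors' code: the Hugging Face checkpoint name and the example tweet below are assumptions, and the paper's actual fine-tuning setup is not reproduced here.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed checkpoint: the publicly released CT-BERT weights on Hugging Face.
model_name = "digitalepidemiologylab/covid-twitter-bert-v2"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Binary classification head for WNUT-2020 Task 2: INFORMATIVE vs. UNINFORMATIVE.
# (A freshly initialized head like this still needs fine-tuning on the task data.)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Hypothetical input tweet, for illustration only.
tweets = ["Health authorities report 1,200 new confirmed COVID-19 cases today."]
batch = tokenizer(tweets, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**batch).logits
prediction = logits.argmax(dim=-1)  # label index; the index-to-label mapping is task-specific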
Original language: English
Title: Proceedings of the Sixth Workshop on Noisy User-generated Text (W-NUT 2020)
Publisher: Association for Computational Linguistics
Publication date: Nov. 2020
Pages: 331-336
Status: Published - Nov. 2020

Keywords

  • COVID-19 Pandemic
  • Social Media Monitoring
  • Informative Tweets
  • Pre-trained BERT Models
  • Domain-Specific Classification
