Strong Baselines for Neural Semi-Supervised Learning under Domain Shift

Sebastian Ruder, Barbara Plank

Research output: Conference Article in Proceeding or Book/Report chapter › Article in proceedings › Research › peer-review

Abstract

Novel neural models have been proposed in recent years for learning under domain shift. Most models, however, only evaluate on a single task, on proprietary datasets, or compare to weak baselines, which makes comparison of models difficult. In this paper, we re-evaluate classic general-purpose bootstrapping approaches in the context of neural networks under domain shifts vs. recent neural approaches and propose a novel multi-task tri-training method that reduces the time and space complexity of classic tri-training. Extensive experiments on two benchmarks are negative: while our novel method establishes a new state-of-the-art for sentiment analysis, it does not fare consistently the best. More importantly, we arrive at the somewhat surprising conclusion that classic tri-training, with some additions, outperforms the state of the art. We conclude that classic approaches constitute an important and strong baseline.
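Since the paper's central baseline is classic tri-training, a minimal sketch of that algorithm (Zhou & Li, 2005) may help situate the result. This is an illustrative reconstruction under assumptions, not the authors' implementation: the scikit-learn API, the function name tri_train, and the fixed round budget are hypothetical, and the paper's additions (candidate filtering, the multi-task variant) are omitted.

```python
# Illustrative sketch of classic tri-training (Zhou & Li, 2005);
# hypothetical names, not the authors' code.
import numpy as np
from sklearn.base import clone
from sklearn.utils import resample

def tri_train(base_model, X_lab, y_lab, X_unlab, rounds=10):
    # Train three models on independent bootstrap samples of the
    # labeled data so that they start out diverse.
    models = []
    for _ in range(3):
        Xb, yb = resample(X_lab, y_lab)
        models.append(clone(base_model).fit(Xb, yb))

    for _ in range(rounds):
        changed = False
        preds = [m.predict(X_unlab) for m in models]
        for i in range(3):
            j, k = [x for x in range(3) if x != i]
            # The other two models act as teachers: an unlabeled point
            # is pseudo-labeled for model i only when they agree on it.
            agree = preds[j] == preds[k]
            if agree.any():
                X_i = np.concatenate([X_lab, X_unlab[agree]])
                y_i = np.concatenate([y_lab, preds[j][agree]])
                models[i] = clone(base_model).fit(X_i, y_i)
                changed = True
        if not changed:  # stop once no model was retrained this round
            break
    return models
```

One would call this as, e.g., tri_train(LogisticRegression(max_iter=1000), X_lab, y_lab, X_unlab). The key design point is that each model is retrained on pseudo-labels only where the other two agree; the paper's multi-task variant reduces the time and space cost of this scheme by sharing most parameters across the three models.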
Original language: English
Title of host publication: Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics
Publisher: Association for Computational Linguistics
Publication date: 2018
Publication status: Published - 2018
Event: The 56th Annual Meeting of the Association for Computational Linguistics, Melbourne, Australia
Duration: 15 Jul 2018 - 20 Jul 2018
Internet address: http://acl2018.org/

Conference

Conference: The 56th Annual Meeting of the Association for Computational Linguistics
Location: Melbourne
Country/Territory: Australia
City: Melbourne
Period: 15/07/2018 - 20/07/2018
Internet address: http://acl2018.org/

Keywords

  • Neural models
  • Domain shift
  • Bootstrapping approaches
  • Tri-training
  • Sentiment analysis
