A taxonomy and review of generalization research in NLP

Dieuwke Hupkes, Mario Giulianelli, Verna Dankers, Mikel Artetxe, Yanai Elazar, Tiago Pimentel, Christos Christodoulopoulos, Karim Lasri, Naomi Saphra, Arabella Sinclair, Dennis Thomas Ulmer, Florian Schottmann, Khuyagbaatar Batsuren, Kaiser Sun, Koustuv Sinha, Leila Khalatbari, Maria Ryskina, Ryan Cotterell, Zhijing Jin

Research output: Contribution to conference - Not published in proceedings or journal › Paper › Research › peer-review

Abstract

The ability to generalize well is one of the primary desiderata for models of natural language processing (NLP), but what ‘good generalization’ entails and how it should be evaluated is not well understood. In this Analysis we present a taxonomy for characterizing and understanding generalization research in NLP. The proposed taxonomy is based on an extensive literature review and contains five axes along which generalization studies can differ: their main motivation, the type of generalization they aim to solve, the type of data shift they consider, the source by which this data shift originated, and the locus of the shift within the NLP modelling pipeline. We use our taxonomy to classify over 700 experiments, and we use the results to present an in-depth analysis that maps out the current state of generalization research in NLP and make recommendations for which areas deserve attention in the future.
Original language: English
Publication date: 1 Oct 2023
Publication status: Published - 1 Oct 2023

Keywords

  • Generalization in NLP
  • Taxonomy of Generalization
  • Data Shift Origins
  • Natural Language Processing Evaluation
  • Generalization Research Analysis

