Exploring Predictive Uncertainty and Calibration in NLP: A Study on the Impact of Method & Data Scarcity

Research output: Article in proceedings · Research · Peer-reviewed


We investigate the problem of determining the predictive confidence (or, conversely, uncertainty) of a neural classifier through the lens of low-resource languages. By training models on sub-sampled datasets in three different languages, we assess the quality of estimates from a wide array of approaches and their dependence on the amount of available data. We find that while approaches based on pre-trained models and ensembles achieve the best results overall, the quality of uncertainty estimates can surprisingly suffer with more data. We also perform a qualitative analysis of uncertainties on sequences, discovering that a model's total uncertainty seems to be influenced to a large degree by its data uncertainty, not model uncertainty. All model implementations are open-sourced in a software package.
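The abstract's distinction between total, data (aleatoric), and model (epistemic) uncertainty can be illustrated with the standard entropy-based decomposition used for deep ensembles: total uncertainty is the entropy of the ensemble-averaged prediction, data uncertainty is the average entropy of the individual members, and model uncertainty is their difference (the mutual information). The sketch below is a generic illustration of that decomposition, not the paper's released implementation; the function name and shapes are our own.

```python
import numpy as np

def uncertainty_decomposition(probs: np.ndarray):
    """Decompose an ensemble's predictive uncertainty for one input.

    probs: array of shape (n_members, n_classes), each row a member's
    predicted class distribution.
    Returns (total, data, model) uncertainty in nats.
    """
    eps = 1e-12  # guard against log(0)
    mean_probs = probs.mean(axis=0)
    # Total uncertainty: entropy of the ensemble-averaged prediction.
    total = -np.sum(mean_probs * np.log(mean_probs + eps))
    # Data (aleatoric) uncertainty: mean entropy of the individual members.
    data = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))
    # Model (epistemic) uncertainty: the gap between the two
    # (the mutual information between prediction and model).
    model = total - data
    return total, data, model
```

Under this decomposition, members that all predict a uniform distribution yield high data uncertainty but near-zero model uncertainty, while confident-but-disagreeing members yield the reverse; the paper's qualitative finding is that total uncertainty tracks the data term far more than the model term.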
Original language: English
Title of host publication: Findings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Publication date: 7 Dec 2022
Publication status: Published - 7 Dec 2022
Event: Empirical Methods in Natural Language Processing - Abu Dhabi National Exhibition Center (ADNEC), Abu Dhabi, United Arab Emirates
Duration: 7 Dec 2022 - 11 Dec 2022


Conference: Empirical Methods in Natural Language Processing
Location: Abu Dhabi National Exhibition Center (ADNEC)
Country/Territory: United Arab Emirates
City: Abu Dhabi


Keywords:
  • Predictive confidence
  • Neural classifier
  • Low-resource languages
  • Uncertainty estimation
  • Pre-trained models
  • Ensembles
  • Data uncertainty
  • Model uncertainty
  • Sequence analysis
  • Open-source software


