Evidence > Intuition: Transferability Estimation for Encoder Selection

Research output: Article in proceedings · Research · Peer-reviewed

Abstract

With the increasing availability of large pre-trained language models (LMs) in Natural Language Processing (NLP), it becomes critical to assess their fit for a specific target task a priori, as fine-tuning the entire space of available LMs is computationally prohibitive and unsustainable. However, encoder transferability estimation has received little to no attention in NLP. In this paper, we propose to generate quantitative evidence to predict which LM, out of a pool of models, will perform best on a target task without having to fine-tune all candidates. We provide a comprehensive study on LM ranking for 10 NLP tasks spanning the two fundamental problem types of classification and structured prediction. We adopt the state-of-the-art Logarithm of Maximum Evidence (LogME) measure from Computer Vision (CV) and find that it positively correlates with final LM performance in 94% of the setups. In the first study of its kind, we further compare transferability measures with the de facto standard of human practitioner ranking, finding that evidence from quantitative metrics is more robust than pure intuition and can help identify unexpected LM candidates.
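For readers unfamiliar with the measure named in the abstract, the following is a minimal sketch of the LogME computation from You et al. (2021), which this paper adopts. It is not the authors' released code: it assumes a feature matrix with n ≥ d (more examples than feature dimensions) and integer class labels, and uses the standard fixed-point updates for the prior precision alpha and noise precision beta.

```python
import numpy as np

def logme(features, labels, max_iter=100, tol=1e-5):
    """Per-sample log maximum evidence of labels given frozen encoder features.

    features: (n, d) array of encoder outputs, assumed n >= d.
    labels:   (n,) array of integer class ids (one binary target per class,
              scores averaged across classes).
    """
    n, d = features.shape
    u, s, _ = np.linalg.svd(features, full_matrices=False)
    s2 = s ** 2
    evidences = []
    for k in np.unique(labels):
        y = (labels == k).astype(float)
        z2 = (u.T @ y) ** 2                 # squared projections on left singular vectors
        res_perp = (y ** 2).sum() - z2.sum()  # residual outside the column space
        alpha, beta = 1.0, 1.0
        for _ in range(max_iter):
            t = alpha / beta
            gamma = (s2 / (s2 + t)).sum()                    # effective dimensionality
            m2 = (s2 * z2 / (t + s2) ** 2).sum()             # ||posterior mean||^2
            res = (z2 / (1.0 + s2 / t) ** 2).sum() + res_perp  # ||F m - y||^2
            alpha_new = gamma / (m2 + 1e-12)
            beta_new = (n - gamma) / (res + 1e-12)
            converged = (abs(alpha_new - alpha) / alpha < tol
                         and abs(beta_new - beta) / beta < tol)
            alpha, beta = alpha_new, beta_new
            if converged:
                break
        # recompute quantities at the converged (alpha, beta)
        t = alpha / beta
        m2 = (s2 * z2 / (t + s2) ** 2).sum()
        res = (z2 / (1.0 + s2 / t) ** 2).sum() + res_perp
        evidence = (n / 2.0 * np.log(beta)
                    + d / 2.0 * np.log(alpha)
                    - n / 2.0 * np.log(2 * np.pi)
                    - beta / 2.0 * res
                    - alpha / 2.0 * m2
                    - 0.5 * np.log(alpha + beta * s2).sum())
        evidences.append(evidence / n)
    return float(np.mean(evidences))
```

Encoder selection then amounts to extracting features from each candidate LM once, scoring them with `logme`, and ranking the candidates by score, with no fine-tuning involved.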
Original language: English
Title of host publication: The 2022 Conference on Empirical Methods in Natural Language Processing: EMNLP 2022
Place of Publication: Abu Dhabi, United Arab Emirates
Publisher: Association for Computational Linguistics
Publication date: 7 Dec 2022
Pages: 4218–4227
Publication status: Published - 7 Dec 2022

Keywords

  • Pre-trained Language Models
  • Natural Language Processing
  • Transferability Estimation
  • Logarithm of Maximum Evidence
  • Model Fine-tuning
