A Primer in BERTology: What We Know About How BERT Works

Anna Rogers, Olga Kovaleva, Anna Rumshisky

    Research output: Journal article › Research › peer-review

    Abstract

    Transformer-based models have pushed the state of the art in many areas of NLP, but our understanding of what is behind their success is still limited. This paper is the first survey of over 150 studies of the popular BERT model. We review the current state of knowledge about how BERT works, what kind of information it learns and how it is represented, common modifications to its training objectives and architecture, the overparameterization issue, and approaches to compression. We then outline directions for future research.
    Original language: English
    Journal: Transactions of the Association for Computational Linguistics
    Volume: 8
    Pages (from-to): 842-866
    Number of pages: 25
    ISSN: 2307-387X
    Publication status: Published - 1 Dec 2020

    Keywords

    • Transformer-based models
    • BERT model analysis
    • Neural network overparameterization
    • Training objective modifications
    • Model compression techniques
