Revealing the Dark Secrets of BERT

Olga Kovaleva, Alexey Romanov, Anna Rogers, Anna Rumshisky

    Publication: Conference article in proceedings or book/report chapter › Conference contribution in proceedings › Research › Peer-reviewed

    Abstract

    BERT-based architectures currently give state-of-the-art performance on many NLP tasks, but little is known about the exact mechanisms that contribute to their success. In the current work, we focus on the interpretation of self-attention, one of the fundamental underlying components of BERT. Using a subset of GLUE tasks and a set of handcrafted features-of-interest, we propose a methodology and carry out a qualitative and quantitative analysis of the information encoded by individual BERT heads. Our findings suggest that there is a limited set of attention patterns repeated across different heads, indicating overall model overparametrization. While different heads consistently use the same attention patterns, they have varying impact on performance across different tasks. We show that manually disabling attention in certain heads leads to a performance improvement over the regular fine-tuned BERT models.
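
    The head-level analysis summarized above can be approximated with standard tooling. The sketch below assumes the HuggingFace transformers library and the bert-base-uncased checkpoint (neither is specified in this entry): it shows how to extract the per-head self-attention maps that such an analysis inspects, and how to disable selected heads via the head_mask argument, which is one way to emulate the head-disabling experiment. The layer and head indices chosen here are hypothetical, and the authors' own implementation may differ.

        # Sketch: inspecting and disabling BERT attention heads with HuggingFace
        # transformers (an assumption; the paper's own code may differ).
        import torch
        from transformers import BertModel, BertTokenizer

        tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
        model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True)
        model.eval()

        inputs = tokenizer("The cat sat on the mat.", return_tensors="pt")

        with torch.no_grad():
            outputs = model(**inputs)

        # One tensor per layer, each of shape (batch_size, num_heads, seq_len, seq_len):
        # the raw self-attention maps a per-head qualitative analysis would visualize.
        attentions = outputs.attentions

        # Disable individual heads: head_mask entries of 0 zero out a head's output.
        num_layers = model.config.num_hidden_layers     # 12 for bert-base
        num_heads = model.config.num_attention_heads    # 12 for bert-base
        head_mask = torch.ones(num_layers, num_heads)
        head_mask[10, 3] = 0.0   # hypothetical choice of layer/head to disable

        with torch.no_grad():
            masked_outputs = model(**inputs, head_mask=head_mask)

    In a fine-tuned GLUE setting, the same head_mask could be applied at evaluation time to measure how much each head contributes to task performance.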
    Original language: English
    Title: Proceedings of EMNLP-IJCNLP
    Number of pages: 10
    Place of publication: Hong Kong, China
    Publisher: Association for Computational Linguistics
    Publication date: 2019
    Pages: 4356-4365
    DOI:
    Status: Published - 2019

    Keywords

    • BERT-based architectures
    • self-attention mechanisms
    • NLP tasks performance
    • GLUE tasks analysis
    • attention pattern repetition
