What Can We Do to Improve Peer Review in NLP?

Anna Rogers, Isabelle Augenstein

    Publication: Conference article in proceedings › Research › peer-reviewed

    Abstract

    Peer review is our best tool for judging the quality of conference submissions, but it is becoming increasingly spurious. We argue that part of the problem is that reviewers and area chairs face a poorly defined task that forces apples-to-oranges comparisons. There are several potential ways forward, but the key difficulty is creating the incentives and mechanisms for their consistent implementation in the NLP community.
    Original language: English
    Title: Findings of EMNLP
    Number of pages: 7
    Place of publication: Online
    Publisher: Association for Computational Linguistics
    Publication date: 1 Nov 2020
    Pages: 1256-1262
    Status: Published - 1 Nov 2020

    Keywords

    • Peer review
    • Conference submissions
    • Evaluation criteria
    • Incentive mechanisms
    • NLP community

