What Can We Do to Improve Peer Review in NLP?

Anna Rogers, Isabelle Augenstein

    Research output: Conference article in proceedings › Research › Peer-reviewed

    Abstract

    Peer review is our best tool for judging the quality of conference submissions, but it is becoming increasingly spurious. We argue that part of the problem is that reviewers and area chairs face a poorly defined task, forcing apples-to-oranges comparisons. There are several potential ways forward, but the key difficulty is creating the incentives and mechanisms for their consistent implementation in the NLP community.
    Original language: English
    Title of host publication: Findings of EMNLP
    Number of pages: 7
    Place of publication: Online
    Publisher: Association for Computational Linguistics
    Publication date: 1 Nov 2020
    Pages: 1256-1262
    Publication status: Published - 1 Nov 2020

    Keywords

    • Peer review
    • Conference submissions
    • Evaluation criteria
    • Incentive mechanisms
    • NLP community
