What Factors Should Paper-Reviewer Assignments Rely On? Community Perspectives on Issues and Ideals in Conference Peer-Review

Terne Thorn Jakobsen, Anna Rogers

    Research output: Conference Article in Proceeding or Book/Report chapter › Article in proceedings › Research › peer-review

    Abstract

    Both scientific progress and individual researcher careers depend on the quality of peer review, which in turn depends on paper-reviewer matching. Surprisingly, this problem has been mostly approached as an automated recommendation problem rather than as a matter where different stakeholders (area chairs, reviewers, authors) have accumulated experience worth taking into account. We present the results of the first survey of the NLP community, identifying common issues and perspectives on what factors should be considered by paper-reviewer matching systems. This study contributes actionable recommendations for improving future NLP conferences, and desiderata for interpretable peer review assignments.
    Original language: English
    Title of host publication: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
    Number of pages: 14
    Place of publication: Seattle, United States
    Publisher: Association for Computational Linguistics
    Publication date: 1 Jul 2022
    Pages: 4810-4823
    Publication status: Published - 1 Jul 2022

    Keywords

    • Peer Review
    • Paper-reviewer Matching
    • NLP Community Survey
    • Automated Recommendation Systems
    • Interpretable Peer Review Assignments
