Peer review is our best tool for judging the quality of conference submissions, but it is becoming increasingly unreliable. We argue that part of the problem is that reviewers and area chairs face a poorly defined task that forces apples-to-oranges comparisons. There are several potential ways forward, but the key difficulty is creating the incentives and mechanisms for their consistent implementation in the NLP community.
|Title||Findings of EMNLP|
|Publisher||Association for Computational Linguistics|
|Publication date||1 Nov. 2020|
|Status||Published - 1 Nov. 2020|