Abstract
Peer review is our best tool for judging the quality of conference submissions, but it is becoming increasingly spurious. We argue that part of the problem is that reviewers and area chairs face a poorly defined task that forces apples-to-oranges comparisons. There are several potential ways forward, but the key difficulty is creating the incentives and mechanisms for their consistent implementation in the NLP community.
| Original language | English |
|---|---|
| Title | Findings of EMNLP |
| Number of pages | 7 |
| Place of publication | Online |
| Publisher | Association for Computational Linguistics |
| Publication date | 1 Nov 2020 |
| Pages | 1256-1262 |
| Status | Published - 1 Nov 2020 |
Keywords
- Peer review
- Conference submissions
- Evaluation criteria
- Incentive mechanisms
- NLP community