What Can We Do to Improve Peer Review in NLP?

Abstract
Peer review is our best tool for judging the quality of conference submissions, but it is becoming increasingly spurious. We argue that part of the problem is that reviewers and area chairs face a poorly defined task, forcing apples-to-oranges comparisons. There are several potential ways forward, but the key difficulty is creating the incentives and mechanisms for their consistent implementation in the NLP community.
| Field | Value |
|---|---|
| Original language | English |
| Title of host publication | Findings of EMNLP |
| Number of pages | 7 |
| Place of publication | Online |
| Publisher | Association for Computational Linguistics |
| Publication date | 1 Nov 2020 |
| Pages | 1256-1262 |
| Publication status | Published - 1 Nov 2020 |
Keywords
- Peer review
- Conference submissions
- Evaluation criteria
- Incentive mechanisms
- NLP community