Maintaining quality in FEVER annotation

Henri Schulte, Julie Binau, Leon Derczynski

Publication: Conference article in proceedings or book/report chapter · Conference contribution in proceedings · Research · Peer-reviewed

Abstract

We propose two measures for assessing the quality of constructed claims in the FEVER task. Annotating data for this task involves the creation of supporting and refuting claims over a set of evidence. Automatic annotation processes often leave superficial patterns in the data, which learning systems can detect instead of performing the underlying task. Humans can also leave these superficial patterns, either voluntarily or involuntarily (due to, e.g., fatigue). The two measures introduced attempt to detect the impact of these superficial patterns. One is a new information-theoretic, distribution-based measure, DCI; the other, utility, is an extension of neural probing work over the ARCT task. We demonstrate these measures over a recent major dataset, that of the English FEVER task in 2019.
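The abstract does not spell out how such superficial patterns are detected, so the following is a minimal illustrative sketch rather than the authors' DCI or utility implementation. It scores how predictive individual claim tokens are of the SUPPORTS/REFUTES label via pointwise mutual information, in the spirit of the cue analysis Niven and Kao applied to ARCT; the function name cue_pmi and the toy data are assumptions for illustration.

    # A minimal sketch (not the paper's method) of a superficial-pattern
    # probe: how predictive is each claim token of the label on its own?
    import math
    from collections import Counter

    def cue_pmi(claims, labels):
        """Pointwise mutual information between each token and each label.

        High-PMI tokens are candidate superficial cues: a model could
        exploit them to predict the label without reading the evidence.
        """
        token_counts = Counter()
        label_counts = Counter(labels)
        joint_counts = Counter()
        n = len(claims)
        for claim, label in zip(claims, labels):
            # Count each token once per claim (document frequency).
            for tok in set(claim.lower().split()):
                token_counts[tok] += 1
                joint_counts[(tok, label)] += 1
        pmi = {}
        for (tok, label), joint in joint_counts.items():
            p_joint = joint / n
            p_tok = token_counts[tok] / n
            p_label = label_counts[label] / n
            pmi[(tok, label)] = math.log2(p_joint / (p_tok * p_label))
        return pmi

    # Toy example: negation introduced while constructing refuting claims.
    claims = ["Paris is the capital of France",
              "Paris is not the capital of Germany",
              "Oslo is not in Sweden",
              "Oslo is the capital of Norway"]
    labels = ["SUPPORTS", "REFUTES", "REFUTES", "SUPPORTS"]
    scores = cue_pmi(claims, labels)
    print(scores[("not", "REFUTES")])  # 1.0: "not" leans strongly REFUTES

Tokens with high PMI toward one label, such as the negation "not" that annotators may introduce when turning a supported claim into a refuted one, are exactly the cues a classifier can exploit without consulting the evidence.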
Original language: English
Title: Proceedings of the Third Workshop on Fact Extraction and VERification (FEVER)
Publisher: Association for Computational Linguistics
Publication date: 9 Jul 2020
Pages: 42-46
ISBN (electronic): 978-1-952148-10-1
Status: Published - 9 Jul 2020
