SemEval-2019 Task 7: RumourEval 2019: Determining Rumour Veracity and Support for Rumours
Research output: Conference Article in Proceeding or Book/Report chapter › Article in proceedings › Research › peer-review
Standard
SemEval-2019 Task 7: RumourEval 2019: Determining Rumour Veracity and Support for Rumours. / Gorrell, Genevieve; Kochkina, Elena; Liakata, Maria; Aker, Ahmet; Zubiaga, Arkaitz; Bontcheva, Kalina; Derczynski, Leon.
Proceedings of the 13th International Workshop on Semantic Evaluation: NAACL HLT 2019. Association for Computational Linguistics, 2019. p. 845-854.
RIS
TY - GEN
T1 - SemEval-2019 Task 7: RumourEval 2019: Determining Rumour Veracity and Support for Rumours
AU - Gorrell, Genevieve
AU - Kochkina, Elena
AU - Liakata, Maria
AU - Aker, Ahmet
AU - Zubiaga, Arkaitz
AU - Bontcheva, Kalina
AU - Derczynski, Leon
PY - 2019/6/7
Y1 - 2019/6/7
AB - Since the first RumourEval shared task in 2017, interest in automated claim validation has greatly increased, as the danger of "fake news" has become a mainstream concern. However, automated support for rumour verification remains in its infancy. It is therefore important that a shared task in this area continues to provide a focus for effort, which is likely to increase. Rumour verification is characterised by the need to consider evolving conversations and news updates in order to reach a verdict on a rumour's veracity. As in RumourEval 2017, we provided a dataset of dubious posts and the ensuing conversations in social media, annotated both for stance and veracity. The social media rumours stem from a variety of breaking news stories, and the dataset is expanded to include Reddit as well as new Twitter posts. There were two concrete tasks: rumour stance prediction and rumour verification, which we present in detail along with the results achieved by participants. We received 22 system submissions (a 70% increase from RumourEval 2017), many of which used state-of-the-art methodology to tackle the challenges involved.
M3 - Article in proceedings
SN - 978-1-950737-06-2
SP - 845
EP - 854
BT - Proceedings of the 13th International Workshop on Semantic Evaluation
PB - Association for Computational Linguistics
ER -
ID: 84235438