
SemEval-2019 Task 7: RumourEval 2019: Determining Rumour Veracity and Support for Rumours

Research output: Conference Article in Proceedings or Book/Report chapter › Article in proceedings › Research › peer-review

Standard

SemEval-2019 Task 7: RumourEval 2019: Determining Rumour Veracity and Support for Rumours. / Gorrell, Genevieve; Kochkina, Elena; Liakata, Maria; Aker, Ahmet; Zubiaga, Arkaitz; Bontcheva, Kalina; Derczynski, Leon.

Proceedings of the 13th International Workshop on Semantic Evaluation: NAACL HLT 2019. Association for Computational Linguistics, 2019. p. 845-854.

Harvard

Gorrell, G, Kochkina, E, Liakata, M, Aker, A, Zubiaga, A, Bontcheva, K & Derczynski, L 2019, SemEval-2019 Task 7: RumourEval 2019: Determining Rumour Veracity and Support for Rumours. in Proceedings of the 13th International Workshop on Semantic Evaluation: NAACL HLT 2019. Association for Computational Linguistics, pp. 845-854. <https://www.aclweb.org/anthology/S19-2147>

APA

Gorrell, G., Kochkina, E., Liakata, M., Aker, A., Zubiaga, A., Bontcheva, K., & Derczynski, L. (2019). SemEval-2019 Task 7: RumourEval 2019: Determining Rumour Veracity and Support for Rumours. In Proceedings of the 13th International Workshop on Semantic Evaluation: NAACL HLT 2019 (pp. 845-854). Association for Computational Linguistics. https://www.aclweb.org/anthology/S19-2147

Vancouver

Gorrell G, Kochkina E, Liakata M, Aker A, Zubiaga A, Bontcheva K et al. SemEval-2019 Task 7: RumourEval 2019: Determining Rumour Veracity and Support for Rumours. In Proceedings of the 13th International Workshop on Semantic Evaluation: NAACL HLT 2019. Association for Computational Linguistics. 2019. p. 845-854

Author

Gorrell, Genevieve ; Kochkina, Elena ; Liakata, Maria ; Aker, Ahmet ; Zubiaga, Arkaitz ; Bontcheva, Kalina ; Derczynski, Leon. / SemEval-2019 Task 7: RumourEval 2019: Determining Rumour Veracity and Support for Rumours. Proceedings of the 13th International Workshop on Semantic Evaluation: NAACL HLT 2019. Association for Computational Linguistics, 2019. pp. 845-854

Bibtex

@inproceedings{8af2d48bf36b49b99f1718ee5599a026,
title = "SemEval-2019 Task 7: RumourEval 2019: Determining Rumour Veracity and Support for Rumours",
abstract = "Since the first RumourEval shared task in 2017, interest in automated claim validation has greatly increased, as the danger of ``fake news'' has become a mainstream concern. However, automated support for rumour verification remains in its infancy. It is therefore important that a shared task in this area continues to provide a focus for effort, which is likely to increase. Rumour verification is characterised by the need to consider evolving conversations and news updates to reach a verdict on a rumour's veracity. As in RumourEval 2017, we provided a dataset of dubious posts and ensuing conversations in social media, annotated both for stance and veracity. The social media rumours stem from a variety of breaking news stories, and the dataset is expanded to include Reddit as well as new Twitter posts. There were two concrete tasks: rumour stance prediction and rumour verification, which we present in detail along with results achieved by participants. We received 22 system submissions (a 70\% increase from RumourEval 2017), many of which used state-of-the-art methodology to tackle the challenges involved.",
author = "Genevieve Gorrell and Elena Kochkina and Maria Liakata and Ahmet Aker and Arkaitz Zubiaga and Kalina Bontcheva and Leon Derczynski",
year = "2019",
month = jun,
day = "7",
language = "English",
isbn = "978-1-950737-06-2",
pages = "845--854",
booktitle = "Proceedings of the 13th International Workshop on Semantic Evaluation",
publisher = "Association for Computational Linguistics",
address = "United States",
url = "https://www.aclweb.org/anthology/S19-2147",

}
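Saved to a `.bib` file, the entry above can be referenced from a LaTeX document via its citation key. The sketch below assumes the entry is stored in a file named `references.bib` (the filename is illustrative; the key is the one given in the entry):

```latex
% references.bib contains the @inproceedings entry shown above.
\documentclass{article}
\begin{document}
RumourEval 2019 is described by Gorrell et
al.~\cite{8af2d48bf36b49b99f1718ee5599a026}.

\bibliographystyle{plain}
\bibliography{references}  % loads references.bib
\end{document}
```

Compiling with `pdflatex` + `bibtex` + `pdflatex` × 2 resolves the citation; the opaque hash key works like any other BibTeX key, though many users rename it to something mnemonic such as `gorrell2019semeval`.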

RIS

TY - GEN

T1 - SemEval-2019 Task 7: RumourEval 2019: Determining Rumour Veracity and Support for Rumours

AU - Gorrell, Genevieve

AU - Kochkina, Elena

AU - Liakata, Maria

AU - Aker, Ahmet

AU - Zubiaga, Arkaitz

AU - Bontcheva, Kalina

AU - Derczynski, Leon

PY - 2019/6/7

Y1 - 2019/6/7

N2 - Since the first RumourEval shared task in 2017, interest in automated claim validation has greatly increased, as the danger of "fake news" has become a mainstream concern. However, automated support for rumour verification remains in its infancy. It is therefore important that a shared task in this area continues to provide a focus for effort, which is likely to increase. Rumour verification is characterised by the need to consider evolving conversations and news updates to reach a verdict on a rumour's veracity. As in RumourEval 2017, we provided a dataset of dubious posts and ensuing conversations in social media, annotated both for stance and veracity. The social media rumours stem from a variety of breaking news stories, and the dataset is expanded to include Reddit as well as new Twitter posts. There were two concrete tasks: rumour stance prediction and rumour verification, which we present in detail along with results achieved by participants. We received 22 system submissions (a 70% increase from RumourEval 2017), many of which used state-of-the-art methodology to tackle the challenges involved.

AB - Since the first RumourEval shared task in 2017, interest in automated claim validation has greatly increased, as the danger of "fake news" has become a mainstream concern. However, automated support for rumour verification remains in its infancy. It is therefore important that a shared task in this area continues to provide a focus for effort, which is likely to increase. Rumour verification is characterised by the need to consider evolving conversations and news updates to reach a verdict on a rumour's veracity. As in RumourEval 2017, we provided a dataset of dubious posts and ensuing conversations in social media, annotated both for stance and veracity. The social media rumours stem from a variety of breaking news stories, and the dataset is expanded to include Reddit as well as new Twitter posts. There were two concrete tasks: rumour stance prediction and rumour verification, which we present in detail along with results achieved by participants. We received 22 system submissions (a 70% increase from RumourEval 2017), many of which used state-of-the-art methodology to tackle the challenges involved.

M3 - Article in proceedings

SN - 978-1-950737-06-2

SP - 845

EP - 854

BT - Proceedings of the 13th International Workshop on Semantic Evaluation

PB - Association for Computational Linguistics

UR - https://www.aclweb.org/anthology/S19-2147

ER -
