At a Glance: The Impact of Gaze Aggregation Views on Syntactic Tagging

Research output: Conference Article in Proceeding or Book/Report chapter › Article in proceedings › Research › peer-review

Standard

At a Glance: The Impact of Gaze Aggregation Views on Syntactic Tagging. / Klerke, Sigrid; Plank, Barbara.

The First Workshop Beyond Vision and LANguage: inTEgrating Real-World kNowledge : EMNLP-IJCNLP Workshop. Hong Kong : Association for Computational Linguistics, 2019. p. 51–61.

Harvard

Klerke, S & Plank, B 2019, At a Glance: The Impact of Gaze Aggregation Views on Syntactic Tagging. in The First Workshop Beyond Vision and LANguage: inTEgrating Real-World kNowledge : EMNLP-IJCNLP Workshop. Association for Computational Linguistics, Hong Kong, pp. 51–61. https://doi.org/10.18653/v1/D19-6408

APA

Klerke, S., & Plank, B. (2019). At a Glance: The Impact of Gaze Aggregation Views on Syntactic Tagging. In The First Workshop Beyond Vision and LANguage: inTEgrating Real-World kNowledge : EMNLP-IJCNLP Workshop (pp. 51–61). Association for Computational Linguistics. https://doi.org/10.18653/v1/D19-6408

Vancouver

Klerke S, Plank B. At a Glance: The Impact of Gaze Aggregation Views on Syntactic Tagging. In The First Workshop Beyond Vision and LANguage: inTEgrating Real-World kNowledge : EMNLP-IJCNLP Workshop. Hong Kong: Association for Computational Linguistics. 2019. p. 51–61 https://doi.org/10.18653/v1/D19-6408

Author

Klerke, Sigrid ; Plank, Barbara. / At a Glance: The Impact of Gaze Aggregation Views on Syntactic Tagging. The First Workshop Beyond Vision and LANguage: inTEgrating Real-World kNowledge : EMNLP-IJCNLP Workshop. Hong Kong : Association for Computational Linguistics, 2019. pp. 51–61

Bibtex

@inproceedings{a42e228a17df444b802db8266bd911c9,
title = "At a Glance: The Impact of Gaze Aggregation Views on Syntactic Tagging",
abstract = "Readers{\textquoteright} eye movements used as part of the training signal have been shown to improve performance in a wide range of Natural Language Processing (NLP) tasks. Previous work uses gaze data either at the type level or at the token level and mostly from a single eye-tracking corpus. In this paper, we analyze type vs token-level integration options with eye tracking data from two corpora to inform two syntactic sequence labeling problems: binary phrase chunking and part-of-speech tagging. We show that using globally-aggregated measures that capture the central tendency or variability of gaze data is more beneficial than proposed local views which retain individual participant information. While gaze data is informative for supervised POS tagging, which complements previous findings on unsupervised POS induction, almost no improvement is obtained for binary phrase chunking, except for a single specific setup. Hence, caution is warranted when using gaze data as signal for NLP, as no single view is robust over tasks, modeling choice and gaze corpus.",
author = "Sigrid Klerke and Barbara Plank",
year = "2019",
doi = "10.18653/v1/D19-6408",
language = "English",
pages = "51–61",
booktitle = "The First Workshop Beyond Vision and LANguage: inTEgrating Real-World kNowledge",
publisher = "Association for Computational Linguistics",
address = "United States",

}

RIS

TY - GEN

T1 - At a Glance: The Impact of Gaze Aggregation Views on Syntactic Tagging

AU - Klerke, Sigrid

AU - Plank, Barbara

PY - 2019

Y1 - 2019

N2 - Readers’ eye movements used as part of the training signal have been shown to improve performance in a wide range of Natural Language Processing (NLP) tasks. Previous work uses gaze data either at the type level or at the token level and mostly from a single eye-tracking corpus. In this paper, we analyze type vs token-level integration options with eye tracking data from two corpora to inform two syntactic sequence labeling problems: binary phrase chunking and part-of-speech tagging. We show that using globally-aggregated measures that capture the central tendency or variability of gaze data is more beneficial than proposed local views which retain individual participant information. While gaze data is informative for supervised POS tagging, which complements previous findings on unsupervised POS induction, almost no improvement is obtained for binary phrase chunking, except for a single specific setup. Hence, caution is warranted when using gaze data as signal for NLP, as no single view is robust over tasks, modeling choice and gaze corpus.

AB - Readers’ eye movements used as part of the training signal have been shown to improve performance in a wide range of Natural Language Processing (NLP) tasks. Previous work uses gaze data either at the type level or at the token level and mostly from a single eye-tracking corpus. In this paper, we analyze type vs token-level integration options with eye tracking data from two corpora to inform two syntactic sequence labeling problems: binary phrase chunking and part-of-speech tagging. We show that using globally-aggregated measures that capture the central tendency or variability of gaze data is more beneficial than proposed local views which retain individual participant information. While gaze data is informative for supervised POS tagging, which complements previous findings on unsupervised POS induction, almost no improvement is obtained for binary phrase chunking, except for a single specific setup. Hence, caution is warranted when using gaze data as signal for NLP, as no single view is robust over tasks, modeling choice and gaze corpus.

U2 - 10.18653/v1/D19-6408

DO - 10.18653/v1/D19-6408

M3 - Article in proceedings

SP - 51

EP - 61

BT - The First Workshop Beyond Vision and LANguage: inTEgrating Real-World kNowledge

PB - Association for Computational Linguistics

CY - Hong Kong

ER -