TY - GEN
T1 - MultiLexNorm: A Shared Task on Multilingual Lexical Normalization
AU - van der Goot, Rob
AU - Ramponi, Alan
AU - Zubiaga, Arkaitz
AU - Plank, Barbara
AU - Muller, Benjamin
AU - San Vicente Roncal, Iñaki
AU - Ljubešić, Nikola
AU - Çetinoğlu, Özlem
AU - Mahendra, Rahmad
AU - Çolakoğlu, Talha
AU - Baldwin, Timothy
AU - Caselli, Tommaso
AU - Sidarenka, Wladimir
PY - 2021/11
Y1 - 2021/11
N2 - Lexical normalization is the task of transforming an utterance into its standardized form. This task is beneficial for downstream analysis, as it provides a way to harmonize (often spontaneous) linguistic variation. Such variation is typical of social media, on which information is shared in a multitude of ways, including diverse languages and code-switching. Since the seminal work of Han and Baldwin (2011) a decade ago, lexical normalization has attracted attention in English and multiple other languages. However, a common benchmark for comparing systems across languages with a homogeneous data and evaluation setup has been lacking. The MultiLexNorm shared task sets out to fill this gap. We provide the largest publicly available multilingual lexical normalization benchmark, covering 13 language variants. We propose a homogenized evaluation setup with both intrinsic and extrinsic evaluation. For extrinsic evaluation, we use dependency parsing and part-of-speech tagging with adapted evaluation metrics (a-LAS, a-UAS, and a-POS) to account for alignment discrepancies. The shared task, hosted at W-NUT 2021, attracted 9 participants and 18 submissions. The results show that neural normalization systems outperform the previous state-of-the-art system by a large margin. Downstream parsing and part-of-speech tagging performance is positively affected, though to varying degrees, with improvements of up to 1.72 a-LAS, 0.85 a-UAS, and 1.54 a-POS for the winning system.
KW - Lexical normalization
KW - Social media
KW - Multilingual benchmark
KW - Neural normalization systems
KW - Extrinsic evaluation
KW - Dependency parsing
KW - Part-of-speech tagging
KW - Evaluation metrics
KW - Cross-linguistic analysis
KW - Code-switching
UR - https://robvanderg.github.io/doc/wnut2021.1.slides.pdf
UR - https://bitbucket.org/robvanderg/multilexnorm/src
M3 - Article in proceedings
SP - 493
EP - 509
BT - Proceedings of the Seventh Workshop on Noisy User-generated Text (W-NUT 2021)
PB - Association for Computational Linguistics
T2 - Seventh Workshop on Noisy User-generated Text (W-NUT 2021)
Y2 - 2021/11/11
ER -