An In-depth Analysis of the Effect of Lexical Normalization on the Dependency Parsing of Social Media

Research output: Article in proceedings · Research · Peer-reviewed


Existing natural language processing systems have often been designed with standard texts in mind. However, when these tools are used on the substantially different texts from social media, their performance drops dramatically. One solution is to translate social media data to standard language before processing; this is also called normalization. It is well known that this improves performance for many natural language processing tasks on social media data. However, little is known about which types of normalization replacements have the most effect. Furthermore, it is unknown what the weaknesses of existing lexical normalization systems are in an extrinsic setting. In this paper, we analyze the effect of both manual and automatic lexical normalization on dependency parsing. We conclude that, for most categories, automatic normalization scores close to manually annotated normalization, and that small annotation differences are important to take into consideration when exploiting normalization in a pipeline setup.
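The pipeline setup the abstract describes, normalizing non-standard tokens before feeding them to a parser, can be illustrated with a minimal sketch. The replacement dictionary and example tokens below are invented for illustration; they are not the paper's system or data, which in practice uses learned normalization models rather than a fixed lookup table.

```python
# Hypothetical sketch of dictionary-based lexical normalization as a
# preprocessing step; entries are invented examples, not the paper's data.
NORM_DICT = {
    "u": "you",
    "r": "are",
    "gr8": "great",
    "2morrow": "tomorrow",
}

def normalize(tokens):
    """Replace known non-standard tokens with their standard forms,
    leaving unknown tokens unchanged."""
    return [NORM_DICT.get(tok.lower(), tok) for tok in tokens]

# The normalized token sequence would then be passed to a dependency parser.
tweet = ["u", "r", "gr8"]
print(normalize(tweet))  # ['you', 'are', 'great']
```

Even this toy version shows why annotation differences matter downstream: a single unreplaced token changes the input the parser sees, and hence potentially the whole tree.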
Original language: English
Title of host publication: Proceedings of the 5th Workshop on Noisy User-generated Text (W-NUT 2019)
Number of pages: 5
Place of publication: Hong Kong, China
Publisher: Association for Computational Linguistics
Publication date: Oct 2019
Publication status: Published - Oct 2019
Event: The 5th Workshop on Noisy User-generated Text - Asia World Expo, Hong Kong, Hong Kong
Duration: 4 Nov 2019
Conference number: 5


Conference: The 5th Workshop on Noisy User-generated Text
Location: Asia World Expo
Country: Hong Kong
City: Hong Kong

Bibliographical note

Code & data: no data available
