Abstract
Social media is a valuable data resource for various natural language processing (NLP) tasks.
However, standard NLP tools are typically designed with canonical, edited text in mind, and their performance degrades substantially when applied to social media data.
One solution to this problem is to adapt the input text to a more standard form, a task also referred to as normalization. Automatic normalization has been shown to improve performance on a variety of NLP tasks. However, all existing normalization systems are supervised, and therefore heavily dependent on the availability of training data for the correct language and domain. In this work, we attempt to overcome this dependence by automatically generating training data for lexical normalization. Starting from raw tweets, we explore two directions: inserting non-standardness (noise) into standard text, and normalizing automatically in an unsupervised setting. Our best results are achieved by automatically inserting noise. We evaluate our approaches with an existing lexical normalization system; our best scores are obtained by a custom error generation system, which makes use of some manually created datasets. With this system, we reach an accuracy of 94.29 on the test data, compared to 95.22 when the normalization system is trained on human-annotated data. Our best system that does not depend on any type of annotation is based on word embeddings and reaches an accuracy of 92.04. Finally, we performed an experiment in which we asked humans to judge whether a sentence was written by a human or generated by our best model; in most cases, it proved hard for humans to detect the automatically generated sentences.
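The abstract's best-performing direction, inserting noise into clean text to create synthetic training pairs, can be illustrated with a minimal sketch. The substitution list and noise probability below are assumptions for illustration only, not the paper's actual error generation system, which is learned from manually created datasets.

```python
import random

# A few common social-media substitutions (assumed examples, not the paper's list).
LEXICAL_NOISE = {
    "you": "u",
    "are": "r",
    "to": "2",
    "for": "4",
    "people": "ppl",
}

def drop_random_char(word: str) -> str:
    """Simulate a typo by deleting one random character (words longer than 2)."""
    if len(word) <= 2:
        return word
    i = random.randrange(len(word))
    return word[:i] + word[i + 1:]

def add_noise(tokens: list[str], p: float = 0.3) -> list[str]:
    """Return a noisy copy of a clean token sequence."""
    noisy = []
    for tok in tokens:
        if random.random() < p:
            # Prefer a known slang substitution; otherwise make a character edit.
            noisy.append(LEXICAL_NOISE.get(tok.lower(), drop_random_char(tok)))
        else:
            noisy.append(tok)
    return noisy

clean = "are you coming to the party".split()
noisy = add_noise(clean)
# Each (noisy, clean) pair can serve as training data for a supervised
# lexical normalization system.
print(list(zip(noisy, clean)))
```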
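The annotation-free direction based on word embeddings can be sketched in a similar spirit: words that are spelled non-standardly tend to occur in the same contexts as their standard forms, so embedding neighbors filtered by a standard-language lexicon yield normalization candidates. The file names and the use of gensim here are assumptions, not the paper's setup.

```python
from gensim.models import KeyedVectors

# Assumed inputs: "tweet_vectors.kv" is an embedding model trained on raw
# tweets, and "vocab.txt" lists standard-language words (paths hypothetical).
vectors = KeyedVectors.load("tweet_vectors.kv")
with open("vocab.txt") as f:
    standard_vocab = {line.strip() for line in f}

def normalize_candidates(word: str, topn: int = 25) -> list[str]:
    """Propose normalizations: embedding neighbors that are standard words."""
    if word in standard_vocab:
        return [word]  # already standard; keep as-is
    if word not in vectors.key_to_index:
        return []  # unseen in the tweet corpus; no candidates
    neighbors = vectors.most_similar(word, topn=topn)
    return [w for w, _score in neighbors if w in standard_vocab]

# e.g. normalize_candidates("u") might return ["you", ...] if "u" and "you"
# appear in similar contexts in the tweet corpus.
```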
Original language | English
---|---
Title of host publication | Proceedings of the Twelfth International Conference on Language Resources and Evaluation (LREC 2020)
Publisher | European Language Resources Association
Publication date | May 2020
Pages | 6300-6309
Publication status | Published - May 2020
Event | LREC 2020, Marseille, France. Duration: 17 May 2020 → 22 May 2020. https://lrec2020.lrec-conf.org/en/
Conference
Conference | LREC 2020
---|---
Country/Territory | France
City | Marseille
Period | 17/05/2020 → 22/05/2020
Internet address | https://lrec2020.lrec-conf.org/en/
Keywords
- Social media data
- Natural language processing
- Text normalization
- Supervised learning
- Unsupervised learning