Abstract
Automatically detecting the intent of an utterance is important for various downstream natural language processing tasks. This task, also called Dialogue Act Classification (DAC), has primarily been researched on spoken one-to-one conversations. The rise of social media has made it an interesting data source to explore for DAC, although it comes with some difficulties: non-standard form, a variety of language types (across and within platforms), and quickly evolving norms. In this paper we therefore investigate the robustness of DAC on social media data. More concretely, we provide a benchmark that includes cross-domain data splits, as well as a variety of improvements on our transformer-based baseline. Our experiments show that lexical normalization is not beneficial in this setup, that balancing the labels through resampling is beneficial in some cases, and that incorporating context is crucial for this task, leading to the largest performance improvements (7 F1 percentage points in-domain and 20 cross-domain).
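The sketch below illustrates the context-incorporation idea mentioned in the abstract: preceding utterances are concatenated with the target utterance and passed to a transformer-based classifier. This is not the authors' implementation; the model name, label count, and context window size are illustrative assumptions.

```python
# Minimal sketch (not the paper's code): classify a dialogue act while
# conditioning on preceding turns by joining them with the target utterance.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

MODEL_NAME = "roberta-base"   # assumption: any encoder-style model works here
NUM_LABELS = 10               # assumption: size of the dialogue act tag set
CONTEXT_TURNS = 3             # assumption: number of previous turns to keep

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=NUM_LABELS
)

def classify_with_context(history, utterance):
    """Predict a dialogue act label for `utterance`, given prior turns."""
    # Join the most recent turns into a single context string.
    context = tokenizer.sep_token.join(history[-CONTEXT_TURNS:])
    # Encode context and target as a sentence pair so the model can tell
    # them apart; truncate to the encoder's maximum input length.
    inputs = tokenizer(context, utterance, truncation=True,
                       max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return int(logits.argmax(dim=-1))

# Example usage (the model is untrained here, so the label is arbitrary):
print(classify_with_context(["hey, are you around?", "yeah what's up"],
                            "can you send me that link?"))
```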
| Original language | English |
| --- | --- |
| Title | Proceedings of the Eighth Workshop on Noisy User-generated Text (W-NUT 2022) |
| Publisher | Association for Computational Linguistics |
| Publication date | Oct. 2022 |
| Pages | 180-193 |
| Status | Published - Oct. 2022 |
| Event | 29th International Conference on Computational Linguistics - Duration: 12 Oct. 2022 → 17 Nov. 2022 |
Conference
| Conference | 29th International Conference on Computational Linguistics |
| --- | --- |
| Period | 12/10/2022 → 17/11/2022 |
Keywords
- Dialogue Act Classification
- Social media analysis
- Transformer models
- Cross-domain data
- Contextual information
Awards
- Best paper award W-NUT 2022
  van der Goot, R. (Recipient), Vielsted, M. (Recipient) & Wallenius, N. (Recipient), 22 Oct. 2022
  Prize: Prizes, scholarships, distinctions