Abstract
Automatically detecting the intent of an utterance is important for various downstream natural language processing tasks. This task, also called Dialogue Act Classification (DAC), has primarily been researched on spoken one-to-one conversations. The rise of social media has made it an interesting data source to explore for DAC, although it comes with some difficulties: non-standard form, a variety of language types (across and within platforms), and quickly evolving norms. In this paper, we therefore investigate the robustness of DAC on social media data. More concretely, we provide a benchmark that includes cross-domain data splits, as well as a variety of improvements over our transformer-based baseline. Our experiments show that lexical normalization is not beneficial in this setup, that balancing the labels through resampling is beneficial in some cases, and that incorporating context is crucial for this task and leads to the largest performance improvements (7 F1 percentage points in-domain and 20 cross-domain).
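As a purely illustrative sketch (not the implementation used in the paper), one common way to incorporate dialogue context into a transformer-based classifier is to concatenate the preceding utterances with the target utterance, separated by the tokenizer's separator token. The model name, the toy label set, and the `predict_act` helper below are assumptions for demonstration only.

```python
# Illustrative sketch only: context-aware dialogue act classification by
# concatenating preceding turns to the target utterance. Model name, label
# set, and helper are hypothetical, not the paper's actual setup.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

MODEL_NAME = "bert-base-uncased"                          # placeholder encoder
LABELS = ["statement", "question", "answer", "reaction"]  # toy label set

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=len(LABELS)
)

def predict_act(context: list[str], utterance: str) -> str:
    """Classify `utterance`, conditioning on the preceding `context` turns."""
    # Join previous turns with the separator token so the encoder can attend
    # to the conversational history as well as the target utterance.
    history = f" {tokenizer.sep_token} ".join(context)
    inputs = tokenizer(history, utterance, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

print(predict_act(["Anyone tried the new update?"], "Yes, it fixed the crash for me."))
```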
Original language | English |
---|---|
Title of host publication | Proceedings of the Eighth Workshop on Noisy User-generated Text (W-NUT 2022) |
Publisher | Association for Computational Linguistics |
Publication date | Oct 2022 |
Pages | 180-193 |
Publication status | Published - Oct 2022 |
Event | 29th International Conference on Computational Linguistics - Duration: 12 Oct 2022 → 17 Oct 2022 |
Conference
Conference | 29th International Conference on Computational Linguistics |
---|---|
Period | 12/10/2022 → 17/10/2022 |
Keywords
- Dialogue Act Classification
- Social media analysis
- Transformer models
- Cross-domain data
- Contextual information
Prizes
- Best paper award W-NUT 2022
van der Goot, R. (Recipient), Vielsted, M. (Recipient) & Wallenius, N. (Recipient), 22 Oct 2022
Prize: Prizes, scholarships, distinctions