Abstract
Recent complementary strands of research have shown that leveraging information on the data source, by encoding its properties into embeddings, can lead to performance increases when training a single model on heterogeneous data sources. However, it remains unclear in which situations these dataset embeddings are most effective, because they are used in a wide variety of settings, languages and tasks. Furthermore, it is usually assumed that gold information on the data source is available, and that the test data is drawn from a distribution seen during training. In this work, we compare the effect of dataset embeddings in mono-lingual settings, multi-lingual settings, and with predicted data source labels in a zero-shot setting. We evaluate on three morphosyntactic tasks: morphological tagging, lemmatization, and dependency parsing, using 104 datasets, 66 languages, and two different dataset grouping strategies. Performance increases are highest when the datasets are of the same language and we know from which distribution the test instance is drawn. In contrast, for setups where the data is from an unseen distribution, the performance increase vanishes.
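To make the setup concrete, here is a minimal sketch of how dataset embeddings are typically used: each training instance carries an ID for its data source (e.g. a treebank), and a learned embedding for that source is concatenated to every token representation before the sentence encoder. This is an illustrative PyTorch example, not the authors' implementation; all names and dimensions are assumptions.

```python
# Hypothetical sketch of a tagger with dataset embeddings (not the paper's code).
import torch
import torch.nn as nn


class TaggerWithDatasetEmbeddings(nn.Module):
    def __init__(self, vocab_size, n_datasets, n_tags,
                 word_dim=100, dataset_dim=12, hidden_dim=128):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        # One learned vector per data source (e.g. per treebank or per language group).
        self.dataset_emb = nn.Embedding(n_datasets, dataset_dim)
        self.encoder = nn.LSTM(word_dim + dataset_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, n_tags)

    def forward(self, token_ids, dataset_ids):
        # token_ids: (batch, seq_len); dataset_ids: (batch,)
        words = self.word_emb(token_ids)                    # (batch, seq, word_dim)
        ds = self.dataset_emb(dataset_ids)                  # (batch, dataset_dim)
        ds = ds.unsqueeze(1).expand(-1, words.size(1), -1)  # repeat for every token
        encoded, _ = self.encoder(torch.cat([words, ds], dim=-1))
        return self.out(encoded)                            # (batch, seq, n_tags)


# Usage: at test time the dataset ID must be known (gold) or predicted,
# which is exactly the zero-shot complication studied in the paper.
model = TaggerWithDatasetEmbeddings(vocab_size=5000, n_datasets=104, n_tags=17)
tokens = torch.randint(0, 5000, (2, 6))
ds_ids = torch.tensor([3, 57])
logits = model(tokens, ds_ids)  # (2, 6, 17)
```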
Original language | English |
---|---|
Title | Proceedings of the Second Workshop on Domain Adaptation for NLP: EACL 2021 Workshop |
Publisher | Association for Computational Linguistics |
Publication date | Apr. 2021 |
Pages | 183–194 |
Status | Published - Apr. 2021 |
Keywords
- Dataset Embeddings
- Heterogeneous Data
- Morphosyntactic Tasks
- Zero-Shot Setting
- Multi-Lingual Models
Projects
- 1 Completed
- Multi-Task Sequence Labeling Under Adverse Conditions
  Plank, B. (PI) & van der Goot, R. (CoI)
  01/04/2019 → 31/08/2020
  Projects: Project › Other