While high performance has been achieved for high-resource languages, performance on low-resource languages lags behind. In this paper we focus on parsing the low-resource language Frisian. We use a sample of code-switched, spontaneously spoken data, which proves to be a challenging setup. We propose to train a parser tailored to the target domain by selecting instances from multiple treebanks. Specifically, we use Latent Dirichlet Allocation (LDA) with word and character n-grams. We use a deep biaffine parser initialized with mBERT. The best single source treebank (nl_alpino) resulted in an LAS of 54.7, whereas our data selection outperformed the single best transfer treebank and reached 55.6 LAS on the test data. Additional experiments consisted of removing diacritics from our Frisian data, creating more similar training data by cropping sentences, and running our best model with XLM-R. These experiments did not improve performance.
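The abstract describes selecting training instances from multiple treebanks by topic similarity, using LDA over word and character n-grams. A minimal sketch of that idea is shown below, assuming scikit-learn; the function name, n-gram ranges, topic count, and the centroid-plus-cosine ranking are illustrative assumptions, not the authors' actual implementation.

```python
# Hedged sketch of LDA-based instance selection for a target domain.
# All names and hyperparameters here are illustrative assumptions.
import numpy as np
from scipy.sparse import hstack
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def rank_by_topic_similarity(target_sents, candidate_sents,
                             n_topics=20, seed=0):
    """Rank candidate treebank sentences by LDA-topic similarity
    to a sample of target-domain sentences."""
    # Word and character n-gram counts, combined into one feature matrix
    # (the abstract mentions both feature types).
    word_vec = CountVectorizer(analyzer="word", ngram_range=(1, 2))
    char_vec = CountVectorizer(analyzer="char_wb", ngram_range=(2, 4))
    all_sents = list(target_sents) + list(candidate_sents)
    X = hstack([word_vec.fit_transform(all_sents),
                char_vec.fit_transform(all_sents)])

    # Fit LDA on all sentences; theta holds per-sentence topic mixtures.
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=seed)
    theta = lda.fit_transform(X)

    # Compare each candidate's topic mixture to the target-domain centroid
    # via cosine similarity, and return candidates best-first.
    target_centroid = theta[:len(target_sents)].mean(axis=0)
    cand = theta[len(target_sents):]
    sims = (cand @ target_centroid) / (
        np.linalg.norm(cand, axis=1) * np.linalg.norm(target_centroid) + 1e-12)
    order = np.argsort(-sims)
    return [candidate_sents[i] for i in order]
```

In practice one would keep the top-ranked instances (or all instances above a similarity threshold) as the tailored training set for the parser.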
|Proceedings of the Second Workshop on Domain Adaptation for NLP
|Association for Computational Linguistics
|Published - Apr 2021
|Second Workshop on Domain Adaptation for NLP (EACL 2021) -
Duration: 20 Apr 2021 → …