Character-level Supervision for Low-resource POS Tagging

Katharina Kann, Johannes Bjerva, Isabelle Augenstein, Barbara Plank, Anders Søgaard

Publication: Conference article in proceedings › Research › peer-reviewed


Neural part-of-speech (POS) taggers are known not to perform well with little training data. As a step towards overcoming this problem, we present an architecture for learning more robust neural POS taggers by jointly training a hierarchical, recurrent model and a recurrent character-based sequence-to-sequence network supervised using an auxiliary objective. This way, we introduce stronger character-level supervision into the model, which enables better generalization to unseen words and provides regularization, making our encoding less prone to overfitting. We experiment with three auxiliary tasks: lemmatization, character-based word autoencoding, and character-based random string autoencoding. Experiments with minimal amounts of labeled data on 34 languages show that our new architecture outperforms a single-task baseline and, surprisingly, that, on average, raw text autoencoding can be as beneficial for low-resource POS tagging as using lemma information. Our neural POS tagger closes the gap to a state-of-the-art POS tagger (MarMoT) for low-resource scenarios by 43%, even outperforming it on languages with templatic morphology, e.g., Arabic, Hebrew, and Turkish, by some margin.
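The joint training scheme described in the abstract, a main POS tagging loss combined with a weighted auxiliary character-level loss (lemmatization or autoencoding) over a shared encoder, can be sketched as below. This is a minimal illustrative sketch, not the paper's implementation: the loss weighting `aux_weight`, the function names, and all tensor shapes are assumptions for illustration only.

```python
import numpy as np

def cross_entropy(logits, targets):
    # Softmax cross-entropy, averaged over the positions of one sequence.
    # logits: (seq_len, num_classes); targets: (seq_len,) integer labels.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()

def joint_loss(pos_logits, pos_tags, char_logits, char_targets, aux_weight=0.5):
    # Multi-task objective: the main word-level POS tagging loss plus a
    # weighted auxiliary character-level loss (e.g. predicting the lemma's
    # characters, or reconstructing the word's characters for autoencoding).
    # Both heads would share the same encoder, so the auxiliary gradient
    # regularizes the shared representation.
    main = cross_entropy(pos_logits, pos_tags)
    aux = cross_entropy(char_logits, char_targets)
    return main + aux_weight * aux
```

Setting `aux_weight=0` recovers the single-task baseline, which is how the contribution of the auxiliary character-level supervision can be isolated.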
Title: Proceedings of the Workshop on Deep Learning Approaches for Low-Resource NLP
Publisher: Association for Computational Linguistics
Publication date: Aug 2018
ISBN (Print): 978-1-948087-47-6
Status: Published - Aug 2018

