Abstract
Neural part-of-speech (POS) taggers are known to perform poorly with little training data. As a step towards overcoming this problem, we present an architecture for learning more robust neural POS taggers by jointly training a hierarchical, recurrent model and a recurrent character-based sequence-to-sequence network supervised using an auxiliary objective. This way, we introduce stronger character-level supervision into the model, which enables better generalization to unseen words and provides regularization, making our encoding less prone to overfitting. We experiment with three auxiliary tasks: lemmatization, character-based word autoencoding, and character-based random string autoencoding. Experiments with minimal amounts of labeled data on 34 languages show that our new architecture outperforms a single-task baseline and, surprisingly, that, on average, raw text autoencoding can be as beneficial for low-resource POS tagging as using lemma information. Our neural POS tagger closes the gap to a state-of-the-art POS tagger (MarMoT) for low-resource scenarios by 43%, even outperforming it on languages with templatic morphology, e.g., Arabic, Hebrew, and Turkish, by some margin.
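To make the joint training setup concrete, the sketch below shows one way such an architecture could be wired up in PyTorch: a character-level BiLSTM produces word encodings that feed both a word-level BiLSTM tagger and a character-level decoder for the auxiliary task (lemmatization or autoencoding), with the two losses summed. This is a minimal illustration under our own assumptions, not the authors' released implementation; all module names, dimensions, and the auxiliary loss weight `aux_weight` are hypothetical.

```python
# Minimal sketch of joint POS tagging with a character-level auxiliary
# seq2seq objective. Names and hyperparameters are illustrative, not the
# paper's actual implementation.
import torch
import torch.nn as nn

class JointTagger(nn.Module):
    """Hierarchical tagger: a character-level BiLSTM builds word vectors,
    a word-level BiLSTM predicts POS tags, and a character-level decoder
    reuses the same character encoder for the auxiliary task."""

    def __init__(self, n_chars, n_tags, char_dim=64, hidden=128):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.char_enc = nn.LSTM(char_dim, hidden,
                                bidirectional=True, batch_first=True)
        self.word_enc = nn.LSTM(2 * hidden, hidden,
                                bidirectional=True, batch_first=True)
        self.tag_out = nn.Linear(2 * hidden, n_tags)
        # Auxiliary decoder: generates the target character sequence
        # (lemma, the word itself, or a random string) from the word encoding.
        self.dec = nn.LSTM(char_dim, 2 * hidden, batch_first=True)
        self.char_out = nn.Linear(2 * hidden, n_chars)

    def encode_words(self, char_ids):
        # char_ids: (n_words, max_word_len) character indices, one sentence.
        _, (h, _) = self.char_enc(self.char_emb(char_ids))
        # Concatenate final forward/backward states -> one vector per word.
        return torch.cat([h[0], h[1]], dim=-1)           # (n_words, 2*hidden)

    def forward(self, char_ids, aux_in):
        words = self.encode_words(char_ids)
        tag_logits = self.tag_out(
            self.word_enc(words.unsqueeze(0))[0]).squeeze(0)
        # Auxiliary decoding, conditioned on the word encoding as the
        # decoder's initial hidden state.
        h0 = words.unsqueeze(0)                          # (1, n_words, 2*hidden)
        c0 = torch.zeros_like(h0)
        dec_states, _ = self.dec(self.char_emb(aux_in), (h0, c0))
        return tag_logits, self.char_out(dec_states)

# Joint objective: POS cross-entropy plus a weighted auxiliary character loss.
model = JointTagger(n_chars=60, n_tags=17)
loss_fn, aux_weight = nn.CrossEntropyLoss(), 0.5         # weight is a guess
char_ids = torch.randint(0, 60, (5, 8))                  # 5 words, 8 chars each
gold_tags = torch.randint(0, 17, (5,))
aux_in = torch.randint(0, 60, (5, 8))                    # decoder input chars
aux_gold = torch.randint(0, 60, (5, 8))                  # e.g. lemma characters
tag_logits, char_logits = model(char_ids, aux_in)
loss = loss_fn(tag_logits, gold_tags) \
     + aux_weight * loss_fn(char_logits.reshape(-1, 60), aux_gold.reshape(-1))
loss.backward()
```

Because the character encoder is shared between the tagger and the auxiliary decoder, gradients from the character-level objective regularize the word encodings, which is the mechanism the abstract credits for better generalization to unseen words.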
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of the Workshop on Deep Learning Approaches for Low-Resource NLP |
| Place of Publication | Melbourne |
| Publisher | Association for Computational Linguistics |
| Publication date | Aug 2018 |
| Pages | 1-11 |
| ISBN (Print) | 978-1-948087-47-6 |
| Publication status | Published - Aug 2018 |
Keywords
- Neural POS taggers
- Minimal training data
- Hierarchical recurrent model
- Recurrent character-based network
- Auxiliary tasks