Abstract
The word order in a Sanskrit verse is often not aligned with its corresponding prose order. Converting a verse into its corresponding prose aids comprehension of the construction. Owing to resource constraints, we formulate this task as a word ordering (linearisation) task, completely ignoring the word arrangement on the verse side. kāvya guru, the approach we propose, consists of a pipeline of two pretraining steps followed by a seq2seq model. The first pretraining step learns task-specific token embeddings from pretrained embeddings. In the next step, we generate multiple hypotheses for possible word arrangements of the input. We then use these hypotheses as inputs to a neural seq2seq model for the final prediction. We empirically show that the hypotheses generated by our pretraining step result in predictions that consistently outperform predictions based on the original order in the verse. Overall, kāvya guru outperforms current state-of-the-art models in linearisation on the poetry-to-prose conversion task in Sanskrit.
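To make the linearisation framing concrete, the sketch below generates and ranks word-order hypotheses for a bag of words. It is a minimal illustration, not the paper's pipeline: the toy bigram table, `score`, and `top_k_hypotheses` are hypothetical stand-ins for the learned embeddings and neural seq2seq model described in the abstract, and exhaustive permutation is only feasible for toy inputs.

```python
# Minimal, illustrative sketch of linearisation as word ordering.
# All identifiers here are hypothetical; the actual system in the paper
# learns task-specific embeddings and uses a neural seq2seq model.

from itertools import permutations

# Toy bigram scores standing in for a learned language model.
BIGRAM = {
    ("ramah", "vanam"): 2.0,
    ("vanam", "gacchati"): 2.5,
    ("ramah", "gacchati"): 1.0,
}

def score(order):
    """Sum of bigram scores over adjacent word pairs."""
    return sum(BIGRAM.get(pair, 0.0) for pair in zip(order, order[1:]))

def top_k_hypotheses(words, k=3):
    """Enumerate orderings of the input bag of words and keep the k
    highest-scoring ones -- these stand in for the multiple hypotheses
    that are fed to the seq2seq model for the final prediction.
    Exhaustive enumeration is factorial in the input length, so this
    only works for toy inputs."""
    ranked = sorted(permutations(words), key=score, reverse=True)
    return [list(order) for order in ranked[:k]]

if __name__ == "__main__":
    # Verse-order words; per the task formulation, their arrangement is ignored.
    bag = ["vanam", "ramah", "gacchati"]
    for hyp in top_k_hypotheses(bag):
        print(hyp, score(hyp))
```

Under this framing, the verse supplies only a multiset of words; the ranked hypotheses then serve as candidate inputs from which the final prose order is predicted.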
Field | Value
---|---
Original language | English
Title of host publication | Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Number of pages | 6
Publisher | Association for Computational Linguistics
Publication date | Jul 2019
Pages | 1160–1166
DOIs |
Publication status | Published - Jul 2019
Externally published | Yes
Keywords
- Sanskrit verse conversion
- Word ordering
- Linearisation task
- seq2seq model
- Pretraining steps