Abstract
This study addresses the challenge of extending Large Language Models (LLMs) to non-English languages, specifically those using non-Roman scripts. We propose an approach that uses the romanized form of text as an interface for LLMs, hypothesizing that its frequent informal use and its shared tokens with English enhance cross-lingual alignment. Our approach involves continual pretraining of an English LLM such as Llama 2 on romanized text of non-English, non-Roman-script languages, followed by instruction tuning on romanized data. The results indicate that romanized text not only reduces token fertility by 2x-4x but also matches, and in some cases outperforms, native script representation across various NLU, NLG, and MT tasks. Moreover, embeddings computed on romanized text exhibit closer alignment with their English translations than those computed on the native script. Our approach presents a promising direction for leveraging the power of English LLMs in languages traditionally underrepresented in NLP research.
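The token fertility claim can be illustrated with a minimal sketch: romanize a native-script sentence and compare how many subword tokens an English-centric tokenizer produces per word in each form. The specific libraries, model checkpoint, transliteration scheme, and example sentence below are illustrative assumptions, not the paper's exact pipeline.

```python
# Sketch: native-script vs. romanized token fertility under an English-centric tokenizer.
# Assumes the Hugging Face `transformers` and `indic_transliteration` packages and
# access to a Llama 2 tokenizer checkpoint; swap in any tokenizer you have available.
from transformers import AutoTokenizer
from indic_transliteration import sanscript
from indic_transliteration.sanscript import transliterate

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

native = "यह एक उदाहरण वाक्य है"  # Hindi example sentence in Devanagari (illustrative)
romanized = transliterate(native, sanscript.DEVANAGARI, sanscript.ITRANS)

def fertility(text: str) -> float:
    """Average number of subword tokens per whitespace-separated word."""
    tokens = tokenizer.tokenize(text)
    return len(tokens) / len(text.split())

print(f"native script fertility:  {fertility(native):.2f}")
print(f"romanized text fertility: {fertility(romanized):.2f}")
```

Lower fertility for the romanized form means fewer subword tokens per word, i.e. shorter sequences and more tokens shared with the English-dominant vocabulary.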
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) |
| Number of pages | 23 |
| Publisher | Association for Computational Linguistics |
| Publication date | 2024 |
| Pages | 15593-15615 |
| Publication status | Published - 2024 |
| Externally published | Yes |
Keywords
- Romanization
- Non-Roman-script languages
- Cross-lingual alignment
- Continual pretraining
- Instruction tuning