IndicTrans2: Towards High-Quality and Accessible Machine Translation Models for all 22 Scheduled Indian Languages.

  • Jay P. Gala
  • Pranjal A. Chitale
  • Raghavan AK
  • Varun Gumma
  • Sumanth Doddapaneni
  • Kumar M. Aswanth
  • Janki Atul Nawale
  • Anupama Sujatha
  • Ratish Puduppully
  • Vivek Raghavan
  • Pratyush Kumar
  • Mitesh M. Khapra
  • Raj Dabre
  • Anoop Kunchukuttan

Research output: Journal article › Research › peer-review

Abstract

India has a rich linguistic landscape, with languages from four major language families spoken by over a billion people. Twenty-two of these languages, listed in the Constitution of India and referred to as scheduled languages, are the focus of this work. Given this linguistic diversity, high-quality and accessible Machine Translation (MT) systems are essential in a country like India. Prior to this work, there was (i) no parallel training data spanning all 22 languages, (ii) no robust benchmarks covering all these languages and containing content relevant to India, and (iii) no existing translation models that support all 22 scheduled languages of India. In this work, we aim to address this gap by focusing on the missing pieces required for enabling wide, easy, and open access to good machine translation systems for all 22 scheduled Indian languages. We identify four key areas of improvement: curating and creating larger training datasets, creating diverse and high-quality benchmarks, training multilingual models, and releasing models with open access. Our first contribution is the release of the Bharat Parallel Corpus Collection (BPCC), the largest publicly available parallel corpora for Indic languages. BPCC contains a total of 230M bitext pairs, of which 126M were newly added, including 644K manually translated sentence pairs created as part of this work. Our second contribution is the release of the first n-way parallel benchmark covering all 22 Indian languages, featuring diverse domains, Indian-origin content, and source-original test sets. Next, we present IndicTrans2, the first model to support all 22 languages, surpassing existing models on multiple existing and new benchmarks created as part of this work. Lastly, to promote accessibility and collaboration, we release our models and associated data with permissive licenses.
Original language: English
Journal: Transactions on Machine Learning Research
Volume: 2023
Number of pages: 90
ISSN: 2835-8856
Publication status: Published - 2023
Externally published: Yes
