The BigScience ROOTS Corpus: A 1.6TB Composite Multilingual Dataset


    Publication: Conference article in proceedings · Research · peer-reviewed


    As language models grow ever larger, the need for large-scale high-quality text datasets has never been more pressing, especially in multilingual settings. The BigScience workshop, a 1-year international and multidisciplinary initiative, was formed with the goal of researching and training large language models as a values-driven undertaking, putting issues of ethics, harm, and governance in the foreground. This paper documents the data creation and curation efforts undertaken by BigScience to assemble the Responsible Open-science Open-collaboration Text Sources (ROOTS) corpus, a 1.6TB dataset spanning 59 languages that was used to train the 176-billion-parameter BigScience Large Open-science Open-access Multilingual (BLOOM) language model. We further release a large initial subset of the corpus and analyses thereof, and hope to empower large-scale monolingual and multilingual modeling projects with both the data and the processing tools, as well as stimulate research around this large multilingual corpus.
    Title: Thirty-Sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track
    Place of publication: New Orleans, United States
    Publication date: 1 Nov 2022
    Status: Published - 1 Nov 2022
