TY - GEN
T1 - BLOOM: A 176B-Parameter Open-Access Multilingual Language Model
AU - BigScience Workshop
AU - Le Scao, Teven
AU - Fan, Angela
AU - Akiki, Christopher
AU - Pavlick, Ellie
AU - Ilić, Suzana
AU - Hesslow, Daniel
AU - Castagné, Roman
AU - Luccioni, Alexandra Sasha
AU - Yvon, François
AU - Gallé, Matthias
AU - Tow, Jonathan
AU - Rush, Alexander M.
AU - Biderman, Stella
AU - Webson, Albert
AU - Ammanamanchi, Pawan Sasanka
AU - Wang, Thomas
AU - Sagot, Benoît
AU - Muennighoff, Niklas
AU - Villanova del Moral, Albert
AU - Ruwase, Olatunji
AU - Bawden, Rachel
AU - Bekman, Stas
AU - McMillan-Major, Angelina
AU - Beltagy, Iz
AU - Nguyen, Huu
AU - Saulnier, Lucile
AU - Tan, Samson
AU - Ortiz Suárez, Pedro
AU - Sanh, Victor
AU - Laurençon, Hugo
AU - Jernite, Yacine
AU - Launay, Julien
AU - Mitchell, Margaret
AU - Raffel, Colin
AU - Gokaslan, Aaron
AU - Simhi, Adi
AU - Soroa, Aitor
AU - Aji, Alham Fikri
AU - Alfassy, Amit
AU - Rogers, Anna
AU - Nitzav, Ariel Kreisberg
AU - Xu, Canwen
AU - Mou, Chenghao
AU - Emezue, Chris
AU - Klamm, Christopher
AU - Leong, Colin
AU - van Strien, Daniel
AU - Adelani, David Ifeoluwa
AU - Radev, Dragomir
AU - González Ponferrada, Eduardo
PY - 2022/12/1
AB - Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to their widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
KW - Large language models
KW - Open-access
KW - BLOOM
KW - Multitask prompted finetuning
KW - ROOTS corpus
DO - 10.48550/arXiv.2211.05100
M3 - Other contribution
PB - arXiv
ER -