
SnakModel: Lessons Learned from Training an Open Danish Large Language Model

  • Aalborg University

Research output: Conference Article in Proceeding or Book/Report chapter › Article in proceedings › Research › peer-review

Abstract

We present SnakModel, a Danish large language model (LLM) based on Llama2-7B, which we continuously pre-train on 13.6B Danish words and further tune on 3.7M Danish instructions. As best practices for creating LLMs for smaller language communities have yet to be established, we examine the effects of early modeling and training decisions on downstream performance throughout the entire training pipeline, including (1) the creation of a strictly curated corpus of Danish text from diverse sources; (2) the language modeling and instruction-tuning process itself, including an analysis of intermediate training dynamics and ablations across different hyperparameters; and (3) an evaluation on eight language- and culture-specific tasks. Across these experiments, SnakModel achieves the highest overall performance, outperforming multiple contemporary Llama2-7B-based models. By making SnakModel, the majority of our pre-training corpus, and the associated code available under open licenses, we hope to foster further research and development in Danish Natural Language Processing and establish training guidelines for languages with similar resource constraints.
Original language: English
Title of host publication: Proceedings of the Joint 25th Nordic Conference on Computational Linguistics and 11th Baltic Conference on Human Language Technologies (NoDaLiDa/Baltic-HLT 2025)
Number of pages: 14
Place of publication: Tallinn, Estonia
Publisher: University of Tartu Library
Publication date: Mar 2025
Pages: 812-825
ISBN (Electronic): 978-9908-53-109-0
Publication status: Published - Mar 2025
Event: Nordic Conference on Computational Linguistics - Tallinn, Estonia
Duration: 2 Mar 2025 - 5 Mar 2025
Conference number: 25
http://www.wikicfp.com/cfp/servlet/event.showcfp?eventid=180454
https://sites.google.com/view/nodalida-bhlt2025/proceedings

Conference

Conference: Nordic Conference on Computational Linguistics
Number: 25
Country/Territory: Estonia
City: Tallinn
Period: 02/03/2025 - 05/03/2025

Keywords

  • Danish language model
  • large language model
  • pre-training
  • instruction-tuning
  • open-source NLP
  • low-resource language NLP
