Small, Safe LLMs for In-Game Generation

Tony Veale, Paolo Burelli, Amy K Hoover, Antonios Liapis, Gwaredd Mountain, Hendrik Skubch

Research output: Conference Article in Proceeding or Book/Report chapter › Research

Abstract

Scaling laws for large language models (LLMs) have allowed LLMs to achieve dramatic improvements in prediction accuracy and generative quality as the depth of their architectures (the number of layers, and of learnable parameters) and the breadth of their training datasets (ever larger subsets of the World Wide Web) grow in size and ambition (Kaplan et al., 2020). This impressive scaling allows LLMs to be applied to tasks that seem to demand more than mere gap-filling or next-word prediction in text. However, the old truism, popularized by Stan Lee of Marvel Comics, that "with great power comes great responsibility" seems increasingly apt as LLMs grow in generative power. Their ability to produce novel and imaginative responses to arbitrary prompts – one might even say "creative" responses – gives them the ability to amuse and inform, but also an ability to misinform and offend. With the subtle and contextualized generalizations derived from their large training sets, and encoded in their large parameter sets, these models learn the best and the worst of the human condition. This duality, the capacity to be used for good or (perhaps unintentionally) for ill, gives us pause when considering the role that LLMs might play in a new generation of computer games. This working group, convened under the topic of "smaller, safer language models for games", explored whether smaller language models (so-called SLMs), with smaller and more selective training sets, can mitigate some of the concerns that are foreseen in a games context. The findings of our group are briefly summarized in the following sub-sections.
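The scaling laws referenced above (Kaplan et al., 2020) can be illustrated with a small sketch. The power-law form below, and the approximate fitted constants `N_C` and `ALPHA_N`, are indicative values reported in that paper for loss as a function of non-embedding parameter count; treat the exact numbers as assumptions for illustration rather than definitive figures.

```python
# Illustrative sketch of a Kaplan-style parameter-count scaling law:
# predicted test loss falls as a power law in the number of
# non-embedding parameters N:  L(N) = (N_C / N) ** ALPHA_N.
# Constants are approximate fitted values from Kaplan et al. (2020).

N_C = 8.8e13      # approximate fitted constant (non-embedding parameters)
ALPHA_N = 0.076   # approximate fitted power-law exponent

def predicted_loss(n_params: float) -> float:
    """Predicted test loss (nats/token) for a model with n_params parameters."""
    return (N_C / n_params) ** ALPHA_N

# The shallow exponent means a 10x increase in parameters lowers the
# predicted loss by only about 16% (a factor of 10 ** -ALPHA_N) -- one
# reason smaller models can remain surprisingly competitive.
for n in (1e8, 1e9, 1e10):
    print(f"N = {n:.0e}: predicted loss ~ {predicted_loss(n):.3f}")
```

The small exponent is what makes the "smaller, safer" trade-off discussed by the working group plausible: each order of magnitude of extra parameters buys only a modest reduction in predicted loss.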
Original language: English
Title of host publication: Report from Dagstuhl Seminar 24261
Publication date: 2024
Pages: 206-208
DOIs
Publication status: Published - 2024
Event: Dagstuhl Seminar on Computational Creativity for Game Development - Schloss Dagstuhl, Germany
Duration: 23 Jun 2024 – 28 Jun 2024
http://www.dagstuhl.de/24261

Conference

Conference: Dagstuhl Seminar on Computational Creativity for Game Development
Location: Schloss Dagstuhl
Country/Territory: Germany
Period: 23/06/2024 – 28/06/2024
