Small, Safe LLMs for In-Game Generation

Tony Veale, Paolo Burelli, Amy K Hoover, Antonios Liapis, Gwaredd Mountain, Hendrik Skubch

Publication: Conference article in proceedings or book/report chapter › Contribution to report › Research

Abstract

Scaling laws for large language models (LLMs) show that prediction accuracy and generative quality improve dramatically as the depth of their architectures (the number of layers and learnable parameters) and the breadth of their training datasets (ever larger subsets of the world-wide web) grow in size and ambition (Kaplan et al., 2020). This impressive scaling allows LLMs to be applied to tasks that seem to demand more than mere gap-filling or next-word prediction in text. However, the old truism popularized by Stan Lee of Marvel Comics, that "with great power comes great responsibility", seems increasingly apt as LLMs grow in generative power. Their ability to produce novel and imaginative responses to arbitrary prompts, one might even say "creative" responses, gives them the power to amuse and inform, but also to misinform and offend. With the subtle, contextualized generalizations derived from their large training sets and encoded in their large parameter sets, these models learn the best and the worst of the human condition. This duality, the capacity to be used for good or (perhaps unintentionally) for ill, gives us pause when considering the role that LLMs might play in a new generation of computer games.

This working group, convened on the topic of "smaller, safer language models for games", explored whether smaller language models (so-called SLMs), with smaller and more selective training sets, can mitigate some of the concerns foreseen in a games context. The findings of our group are briefly summarized in the following sub-sections.
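For reference, the scaling laws cited above (Kaplan et al., 2020) take the form of power laws in model size N (non-embedding parameters) and dataset size D (tokens), with cross-entropy test loss L falling predictably as each grows. A rough sketch, with exponent values as reported in that paper:

```latex
% Approximate power-law fits from Kaplan et al. (2020).
% N_c, D_c are fitted constants; exponents are empirical estimates.
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad \alpha_N \approx 0.076
```
```latex
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad \alpha_D \approx 0.095
```

The small exponents mean that each constant-factor reduction in loss demands a multiplicative increase in parameters or data, which is precisely why SLMs trade some raw generative quality for safety and controllability.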
Original language: English
Title: Report from Dagstuhl Seminar 24261
Publication date: 2024
Pages: 206-208
DOI
Status: Published - 2024
Event: Dagstuhl Seminar on Computational Creativity for Game Development - Schloss Dagstuhl, Germany
Duration: 23 Jun 2024 - 28 Jun 2024
http://www.dagstuhl.de/24261

Conference

Conference: Dagstuhl Seminar on Computational Creativity for Game Development
Location: Schloss Dagstuhl
Country/Territory: Germany
Period: 23/06/2024 - 28/06/2024
Internet address
