Abstract
Dynamic Difficulty Adjustment studies how games can adapt content to
their users’ skill level, aiming to keep them in flow. Most of these methods
maximize engagement or minimize churn by adapting factors like the opponent
AI or the availability of resources. However, such methods do not
maintain a model of the player, and use technologies that are highly specific
to the games in which they are tested (e.g. requiring forward models
for enemy AIs based on planning agents). Designers may also want to target
content that is deliberately more difficult or easier, and current methods
do not allow for such targeting.
This thesis proposes and tests a framework for adapting game content to
users based on Bayesian Optimization, giving designers flexibility when
choosing which skill level to target. Starting with a design space, a metric
to be measured, a prior over this metric, and a target value, our framework
quickly searches possible levels/tasks for one with ideal difficulty (i.e. close
to the specified target). In the process, our framework maintains a simple
data-driven model of the player, which could be used for further decision-making
and analysis.
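To make this concrete, below is a minimal sketch of such target-directed search, assuming a one-dimensional Sudoku design space (number of pre-filled cells), a Gaussian process surrogate from scikit-learn, and a hypothetical `play_and_measure` routine that deploys a puzzle and records the metric. The target value, acquisition rule, and all constants are illustrative assumptions, not the exact method of the thesis.

```python
# Minimal sketch of difficulty targeting with Bayesian Optimization.
# Assumptions: a 1-D design space (pre-filled Sudoku cells), a scalar
# difficulty metric (solving time in seconds), and the hypothetical
# names TARGET and play_and_measure.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

TARGET = 300.0                                   # desired solving time (assumed)
candidates = np.arange(17, 81).reshape(-1, 1)    # pre-filled cells per puzzle

def play_and_measure(x):
    """Hypothetical stand-in for deploying a puzzle and timing the player."""
    return 2000.0 / x[0] + np.random.normal(scale=5.0)

X, y = [], []
x_next = candidates[len(candidates) // 2]        # start near the middle
for _ in range(7):                               # ~7 iterations, as reported in the thesis
    X.append(x_next)
    y.append(play_and_measure(x_next))
    gp = GaussianProcessRegressor(RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(np.array(X), np.array(y))
    mu, sigma = gp.predict(candidates, return_std=True)
    # Acquisition: prefer candidates whose predicted metric is close to the
    # target, breaking ties toward uncertain regions (one of many choices).
    score = -np.abs(mu - TARGET) + 0.1 * sigma
    x_next = candidates[np.argmax(score)]

print("suggested puzzle size:", x_next[0])
```

The acquisition here simply prefers the candidate whose posterior mean lies closest to the target, with a small bonus for uncertainty; other targeting acquisitions would serve the same purpose.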
We test this framework in two settings: adapting content to planning agents
based on search algorithms like Monte Carlo Tree Search and Rolling Horizon
Evolution in a dungeon crawler-type game, and adapting both Sudoku
puzzles and dungeon crawler levels to players. Our framework successfully
adapts content to planning agents as long as their skill level is not extreme,
and takes roughly 7 iterations to find an appropriate Sudoku puzzle.
Additionally, instead of relying on designers to specify a real-valued encoding
of the content (e.g. the number of pre-filled cells in a Sudoku puzzle),
we investigate learning this encoding automatically using Deep Generative
Models. In other words, we explore design spaces learned as latent spaces
of Variational Autoencoders using tile-based representations of games like
Super Mario Bros and The Legend of Zelda.
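As an illustration of what such a learned encoding can look like, here is a minimal PyTorch sketch of a Variational Autoencoder over grids of integer tile ids. The grid size, tile vocabulary, latent dimensionality, and layer widths are assumed values chosen for brevity and do not reproduce the architectures used in the thesis.

```python
# Minimal sketch of a tile-based level VAE. Levels are assumed to be
# H x W grids of integer tile ids, fed to the encoder as one-hot tensors.
import torch
import torch.nn as nn
import torch.nn.functional as F

H, W, N_TILES, LATENT = 14, 14, 11, 2   # assumed grid size, vocabulary, latent dim

class LevelVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(H * W * N_TILES, 256), nn.ReLU())
        self.mu = nn.Linear(256, LATENT)
        self.logvar = nn.Linear(256, LATENT)
        self.dec = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(),
                                 nn.Linear(256, H * W * N_TILES))

    def forward(self, x_onehot):                       # x_onehot: (B, N_TILES, H, W)
        h = self.enc(x_onehot)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        logits = self.dec(z).view(-1, N_TILES, H, W)              # per-cell tile logits
        return logits, mu, logvar

def loss_fn(logits, targets, mu, logvar):
    # Categorical reconstruction loss per tile plus the usual KL term.
    rec = F.cross_entropy(logits, targets, reduction="sum")       # targets: (B, H, W) long
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl
```

Once trained, the latent space of such a model can stand in for the hand-specified design space in the optimization loop above.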
Our final contribution is a novel way of interpolating, sampling, and optimizing
in the playable regions of the latent spaces of Variational Autoencoders,
addressing the challenge that generative models are not always guaranteed
to decode playable content. This contribution, based on differential
geometry, is inspired by recent advancements in domains like robotics and
protein modeling. We combine these ideas of safe generation with content
optimization and propose a restricted version of Bayesian Optimization,
which optimizes content inside playable regions. We see a clear trade-off:
restricting the latent space to playable regions decreases the diversity of
the generated content, as well as the quality of the optimal values in the
optimization.
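A heavily simplified sketch of this restriction idea follows: sample latent points, keep only those whose decoded level passes a playability check, and run the same target-directed acquisition over that subset. The `decode` and `is_playable` callables are hypothetical placeholders, and the sketch ignores the differential-geometry machinery the thesis actually uses to delimit playable regions.

```python
# Minimal sketch of restricting Bayesian Optimization to playable latent points.
# `decode` and `is_playable` are hypothetical user-supplied callables (e.g. a
# VAE decoder and an agent that tries to finish the decoded level); `gp` is a
# fitted GaussianProcessRegressor as in the earlier sketch.
import numpy as np

def playable_candidates(decode, is_playable, n_samples=500, latent_dim=2):
    """Sample latent points and keep only those that decode to playable content."""
    zs = np.random.randn(n_samples, latent_dim)
    mask = np.array([is_playable(decode(z)) for z in zs], dtype=bool)
    return zs[mask]

def suggest_next(gp, zs, target):
    """Apply the same target-directed acquisition, but only over playable points."""
    mu, sigma = gp.predict(zs, return_std=True)
    score = -np.abs(mu - target) + 0.1 * sigma
    return zs[np.argmax(score)]
```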
In summary, this thesis studies applications of Bayesian Optimization and
Deep Generative Models to the problem of creating and adapting game
content to users. We develop a framework that quickly finds relevant levels
in settings varying from corpora of levels to the latent spaces of generative
models, and we show in experiments involving both human and artificial
players that this framework finds appropriate game content in a few iterations.
This framework is readily applicable, and could be used to create
games that learn and adapt to their players.
| Original language | Danish |
|---|---|
| Place of publication | Copenhagen |
| Publisher | IT-Universitetet i København |
| Volume | 1 |
| Edition | 1 |
| Number of pages | 152 |
| ISBN (Print) | 978-87-7949-403-9 |
| Status | Published - 1 Aug 2023 |

| Name | ITU-DS |
|---|---|
| ISSN | 1602-3536 |