Strategies for Using Proximal Policy Optimization in Mobile Puzzle Games

Publication: Conference article in proceedings · Research · peer-reviewed

Abstract

While traditionally a labour-intensive task, the testing of game content is progressively becoming more automated.
Among the many directions this automation is taking, automatic play-testing is one of the most promising, thanks in part to advances in supervised and reinforcement learning (RL) algorithms.
However, these types of algorithms, while extremely powerful, often suffer in production environments from issues with reliability and transparency in their training and usage.

In this work we investigate and evaluate strategies for applying the popular RL method Proximal Policy Optimization (PPO) to a casual mobile puzzle game, with a specific focus on improving its reliability during training and its generalization during game playing.

We have implemented and tested a number of different strategies against a real-world mobile puzzle game (Lily's Garden from Tactile Games).
We isolated the conditions that lead to failures in either training or generalization during testing, and identified a few strategies that ensure more stable behaviour of the algorithm in this game genre.
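For context, the PPO method named in the abstract is built around a clipped surrogate objective (Schulman et al., 2017). The sketch below is a minimal scalar illustration of that standard objective only; the function name and formulation are our own, and it does not reproduce the paper's game-specific strategies.

```python
def ppo_clipped_objective(ratio, advantage, epsilon=0.2):
    """Per-sample PPO objective: min(r * A, clip(r, 1 - eps, 1 + eps) * A).

    ratio:     pi_new(a|s) / pi_old(a|s), the policy probability ratio
    advantage: estimated advantage A(s, a)
    epsilon:   clipping range (0.2 is the value suggested in the PPO paper)
    """
    clipped_ratio = max(1.0 - epsilon, min(ratio, 1.0 + epsilon))
    # Taking the minimum removes the incentive to move the policy
    # far from the old one in a single update, stabilizing training.
    return min(ratio * advantage, clipped_ratio * advantage)


# Positive advantage with a large ratio: the update is capped at 1 + epsilon.
print(ppo_clipped_objective(1.5, 1.0))
# Negative advantage with a small ratio: the ratio is clipped to 1 - epsilon.
print(ppo_clipped_objective(0.5, -1.0))
```

This pessimistic lower bound on the policy-gradient objective is what makes PPO comparatively stable, which is also why reliability issues in production settings (as studied in this paper) tend to come from the environment and training setup rather than the objective itself.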
Original language: English
Title: Proceedings of the International Conference on the Foundations of Digital Games
Publisher: Association for Computing Machinery
Publication date: 2020
ISBN (electronic): 9781450388078
DOI
Status: Published - 2020
Event: Foundations of Digital Games - Malta, Malta
Duration: 16 Sep 2020 - 18 Sep 2020
Conference number: 2020

Conference

Conference: Foundations of Digital Games
Number: 2020
Location: Malta
Country/Territory: Malta
Period: 16/09/2020 - 18/09/2020
