Strategies for Using Proximal Policy Optimization in Mobile Puzzle Games

Jeppe Theiss Kristensen, Paolo Burelli

Publication: Conference article in proceedings · Research · peer-reviewed


While traditionally a labour-intensive task, the testing of game content is progressively becoming more automated.
Among the many directions in which this automation is taking shape, automatic play-testing is one of the most promising, thanks in part to advances in supervised and reinforcement learning (RL) algorithms.
However, these types of algorithms, while extremely powerful, often struggle in production environments due to issues with reliability and transparency in their training and usage.

In this work, we investigate and evaluate strategies for applying the popular RL method Proximal Policy Optimization (PPO) to a casual mobile puzzle game, with a specific focus on improving its reliability during training and its generalisation during game playing.

We have implemented and tested a number of different strategies against a real-world mobile puzzle game (Lily's Garden from Tactile Games).
We isolated the conditions that lead to failures in either training or generalisation during testing, and we identified several strategies that ensure more stable behaviour of the algorithm in this game genre.
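The page itself contains no code, but the PPO method the paper builds on centres on a clipped surrogate objective that keeps policy updates conservative. As a minimal illustrative sketch (the function name and sample values below are our own, not from the paper):

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """Clipped surrogate loss from PPO (Schulman et al., 2017).

    ratio:     pi_new(a|s) / pi_old(a|s) for each sampled action
    advantage: estimated advantage for each sampled action
    eps:       clipping range; limits how far one update can move the policy
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    # PPO maximises the pessimistic (element-wise minimum) bound;
    # we negate it so it can be minimised by a standard optimiser.
    return -np.mean(np.minimum(unclipped, clipped))

# Illustrative values: the first ratio (1.5) exceeds 1 + eps, so its
# contribution is clipped to 1.2 * 2.0; the second lies inside the range.
ratio = np.array([1.5, 0.9])
adv = np.array([2.0, -1.0])
loss = ppo_clip_loss(ratio, adv)  # -mean([2.4, -0.9]) = -0.75
```

The clipping is one reason PPO is attractive for production play-testing agents: oversized policy updates, a common source of training instability, are bounded by `eps`.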
Title: Proceedings of the International Conference on the Foundations of Digital Games
Publisher: Association for Computing Machinery
ISBN (electronic): 9781450388078
Status: Published - 2020
Event: FDG 2020: Foundations of Digital Games - Malta, Malta
Duration: 16 Sep 2020 - 18 Sep 2020
Conference number: 2020
