Estimating Player Completion Rate in Mobile Puzzle Games Using Reinforcement Learning

Research output: Conference Article in Proceeding or Book/Report chapter › Article in proceedings › Research › peer-review

Standard

Estimating Player Completion Rate in Mobile Puzzle Games Using Reinforcement Learning. / Kristensen, Jeppe Theiss; Valdivia, Arturo; Burelli, Paolo.

2020 IEEE Conference on Games (CoG). IEEE, 2020.

Research output: Conference Article in Proceeding or Book/Report chapter › Article in proceedings › Research › peer-review

Harvard

Kristensen, JT, Valdivia, A & Burelli, P 2020, Estimating Player Completion Rate in Mobile Puzzle Games Using Reinforcement Learning. in 2020 IEEE Conference on Games (CoG). IEEE, IEEE Conference on Games 2020, 24/08/2020. https://doi.org/10.1109/CoG47356.2020.9231581

Bibtex

@inproceedings{00e4b270e1e64b80975230103614f7b4,
title = "Estimating Player Completion Rate in Mobile Puzzle Games Using Reinforcement Learning",
abstract = "In this work we investigate whether it is plausibleto use the performance of a reinforcement learning (RL) agentto estimate the difficulty measured as the player completion rateof different levels in the mobile puzzle game Lily{\textquoteright}s Garden.For this purpose we train an RL agent and measure thenumber of moves required to complete a level. This is thencompared to the level completion rate of a large sample of realplayers.We find that the strongest predictor of player completion ratefor a level is the number of moves taken to complete a level of the∼5% best runs of the agent on a given level. A very interestingobservation is that, while in absolute terms, the agent is unable toreach human-level performance across all levels, the differencesin terms of behaviour between levels are highly correlated to thedifferences in human behaviour. Thus, despite performing sub-par, it is still possible to use the performance of the agent toestimate, and perhaps further model, player metrics",
author = "Kristensen, {Jeppe Theiss} and Arturo Valdivia and Paolo Burelli",
year = "2020",
doi = "10.1109/CoG47356.2020.9231581",
language = "English",
isbn = "978-1-7281-4534-1",
booktitle = "2020 IEEE Conference on Games (CoG)",
publisher = "IEEE",
address = "United States",
note = "IEEE Conference on Games 2020, CoG ; Conference date: 24-08-2020 Through 27-11-2020",
url = "https://ieee-cog.org/2020/",

}

RIS

TY - GEN

T1 - Estimating Player Completion Rate in Mobile Puzzle Games Using Reinforcement Learning

AU - Kristensen, Jeppe Theiss

AU - Valdivia, Arturo

AU - Burelli, Paolo

PY - 2020

Y1 - 2020

N2 - In this work we investigate whether it is plausible to use the performance of a reinforcement learning (RL) agent to estimate the difficulty, measured as the player completion rate, of different levels in the mobile puzzle game Lily’s Garden. For this purpose we train an RL agent and measure the number of moves required to complete a level. This is then compared to the level completion rate of a large sample of real players. We find that the strongest predictor of player completion rate for a level is the number of moves taken to complete a level of the ∼5% best runs of the agent on a given level. A very interesting observation is that, while in absolute terms the agent is unable to reach human-level performance across all levels, the differences in terms of behaviour between levels are highly correlated to the differences in human behaviour. Thus, despite performing sub-par, it is still possible to use the performance of the agent to estimate, and perhaps further model, player metrics.

AB - In this work we investigate whether it is plausible to use the performance of a reinforcement learning (RL) agent to estimate the difficulty, measured as the player completion rate, of different levels in the mobile puzzle game Lily’s Garden. For this purpose we train an RL agent and measure the number of moves required to complete a level. This is then compared to the level completion rate of a large sample of real players. We find that the strongest predictor of player completion rate for a level is the number of moves taken to complete a level of the ∼5% best runs of the agent on a given level. A very interesting observation is that, while in absolute terms the agent is unable to reach human-level performance across all levels, the differences in terms of behaviour between levels are highly correlated to the differences in human behaviour. Thus, despite performing sub-par, it is still possible to use the performance of the agent to estimate, and perhaps further model, player metrics.

U2 - 10.1109/CoG47356.2020.9231581

DO - 10.1109/CoG47356.2020.9231581

M3 - Article in proceedings

SN - 978-1-7281-4534-1

BT - 2020 IEEE Conference on Games (CoG)

PB - IEEE

T2 - IEEE Conference on Games 2020

Y2 - 24 August 2020 through 27 August 2020

ER -

ID: 85512747
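
The abstract's key result, that the move count of roughly the best 5% of agent runs per level is the strongest predictor of player completion rate, could be computed along the lines of the sketch below. This is only an illustration under assumed data layouts and placeholder numbers; the function name best_run_moves, the per-level dictionaries, and the choice of a Pearson correlation are assumptions for demonstration, not the authors' implementation.

import numpy as np

def best_run_moves(agent_moves_per_level, quantile=0.05):
    # For each level, average the move counts of the best ~5% of agent runs
    # (the runs that finished in the fewest moves). The data layout
    # (level id -> array of per-run move counts) is an illustrative assumption.
    summary = {}
    for level, moves in agent_moves_per_level.items():
        moves = np.sort(np.asarray(moves))
        k = max(1, int(np.ceil(quantile * len(moves))))
        summary[level] = moves[:k].mean()
    return summary

# Hypothetical per-level data: simulated agent move counts and observed
# player completion rates (placeholders, not taken from the paper).
rng = np.random.default_rng(0)
agent_moves = {1: rng.poisson(20, 500), 2: rng.poisson(35, 500), 3: rng.poisson(28, 500)}
player_completion = {1: 0.92, 2: 0.71, 3: 0.80}

levels = sorted(agent_moves)
x = np.array([best_run_moves(agent_moves)[lvl] for lvl in levels])
y = np.array([player_completion[lvl] for lvl in levels])
print(np.corrcoef(x, y)[0, 1])  # correlation between the agent statistic and player completion rate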