Abstract
Evaluation is an open problem in procedural content generation
research. The field is now in a state where there
is a glut of content generators, each serving different purposes
and using a variety of techniques. It is difficult to
understand, quantitatively or qualitatively, what makes one
generator different from another in terms of its output. To
remedy this, we have conducted a large-scale comparative
evaluation of level generators for the Mario AI Benchmark,
a research-friendly clone of the classic platform game Super
Mario Bros. In all, we compare the output of seven different
level generators from the literature, based on different
algorithmic methods, plus the levels from the original Super
Mario Bros game. To compare them, we have defined six
expressivity metrics, of which two are novel contributions in
this paper. These metrics are shown to provide interestingly
different characterizations of the level generators. The results
presented in this paper, and the accompanying source
code, are meant to become a benchmark against which to test
new level generators and expressivity metrics.
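The abstract does not spell out the individual metrics, but the general shape of an expressivity metric in this line of work is a function that maps a generated level to a scalar, which is then aggregated over many sampled levels to characterize a generator's output space. The Java sketch below illustrates that idea only; the `Level` representation and the linearity-style measure are assumptions made for illustration, not the paper's actual metrics or implementation.

```java
import java.util.List;

/**
 * Illustrative sketch of an expressivity metric for tile-based platformer
 * levels. The Level class and the linearity-style score (how well the
 * terrain profile fits a straight line) are hypothetical stand-ins, not
 * code from the paper or the Mario AI Benchmark.
 */
public class ExpressivityMetricSketch {

    /** Minimal stand-in for a generated level: solid[x][y], y = 0 is the top row. */
    public static class Level {
        final boolean[][] solid;
        Level(boolean[][] solid) { this.solid = solid; }
        int width()  { return solid.length; }
        int height() { return solid[0].length; }
    }

    /**
     * Linearity-style metric: fit a least-squares line to the highest solid
     * tile in each column and return the mean absolute residual, normalized
     * by level height. 0 means a perfectly linear terrain profile.
     */
    public static double linearity(Level level) {
        int w = level.width(), h = level.height();
        double[] profile = new double[w];
        for (int x = 0; x < w; x++) {
            int top = h; // default if the column has no solid tile
            for (int y = 0; y < h; y++) {
                if (level.solid[x][y]) { top = y; break; }
            }
            profile[x] = top;
        }
        // Least-squares fit profile[x] ~ a * x + b.
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int x = 0; x < w; x++) {
            sx += x; sy += profile[x]; sxx += (double) x * x; sxy += x * profile[x];
        }
        double a = (w * sxy - sx * sy) / (w * sxx - sx * sx);
        double b = (sy - a * sx) / w;
        double residual = 0;
        for (int x = 0; x < w; x++) {
            residual += Math.abs(profile[x] - (a * x + b));
        }
        return (residual / w) / h; // roughly in [0, 1]
    }

    /** Aggregate the metric over a batch of levels sampled from one generator. */
    public static double mean(List<Level> levels) {
        return levels.stream()
                     .mapToDouble(ExpressivityMetricSketch::linearity)
                     .average()
                     .orElse(0.0);
    }
}
```

Comparing generators then amounts to comparing the distributions of such scores over large samples of each generator's output, which is the kind of comparison the abstract describes.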
Original language | English |
---|---|
Publication date | 2014 |
Number of pages | 8 |
Publication status | Published - 2014 |
Event | International Conference on the Foundations of Digital Games - Sailing from Ft. Lauderdale, FL, United States. Duration: 3 Apr 2014 → 7 Apr 2014. Conference number: 9. http://www.fdg2014.org/ |
Conference
Conference | International Conference on the Foundations of Digital Games |
---|---|
Number | 9 |
Location | Sailing from Ft. Lauderdale, FL |
Country/Territory | United States |
Period | 03/04/2014 → 07/04/2014 |
Internet address | http://www.fdg2014.org/ |
Keywords
- Procedural Content Generation
- Level Generators
- Mario AI Benchmark
- Expressivity Metrics
- Comparative Evaluation