Surprise Benchmarking: The Why, What, and How

Lawrence Benson, Carsten Binnig, Jan-Micha Bodensohn, Federico Lorenzi, Jigao Luo, Danica Porobic, Tilmann Rabl, Anupam Sanghi, Russell Sears, Pinar Tözün, Tobias Ziegler

Research output: Article in proceedings › Research › peer-review

Abstract

Standardized benchmarks are crucial to ensure a fair comparison of performance across systems. While extremely valuable, these benchmarks all use a setup where the workload is well-defined and known in advance. Unfortunately, this has led to over-tuning data management systems for particular benchmark workloads such as TPC-H or TPC-C. As a result, benchmarking results frequently do not reflect the behavior of these systems in many real-world settings, since real-world workloads often differ significantly from the "known" benchmarking workloads. To address this issue, we present surprise benchmarking, a complementary approach to current standardized benchmarking in which "unknown" queries are exercised during the evaluation.
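To make the idea concrete, here is a minimal illustrative sketch of what exercising "unknown" queries could look like; it is not the paper's actual methodology. It varies a TPC-H-Q6-style query template at evaluation time, so the exact query text cannot be known (and tuned for) in advance. The template, column names, and parameter ranges are hypothetical placeholders.

```python
# Minimal sketch (assumption, not from the paper): sample "surprise" variants
# of a known benchmark query at evaluation time. The template mimics TPC-H Q6
# over LINEITEM; the holes are what a surprise benchmark would randomize.
import random

TEMPLATE = (
    "SELECT {agg}(l_extendedprice * (1 - l_discount)) AS revenue "
    "FROM lineitem "
    "WHERE l_shipdate >= DATE '{start}' "
    "AND l_shipdate < DATE '{start}' + INTERVAL '{months}' MONTH "
    "AND l_discount BETWEEN {disc} - 0.01 AND {disc} + 0.01"
)

def surprise_query(rng: random.Random) -> str:
    """Sample one 'unknown' query variant from the template's parameter space."""
    return TEMPLATE.format(
        agg=rng.choice(["SUM", "AVG", "MAX"]),
        start=f"199{rng.randint(2, 8)}-01-01",
        months=rng.choice([1, 3, 6, 12]),
        disc=round(rng.uniform(0.02, 0.09), 2),
    )

if __name__ == "__main__":
    rng = random.Random()  # deliberately unseeded: each run sees different queries
    for _ in range(3):
        print(surprise_query(rng))
```

A system tuned only for the fixed, published parameter values would be evaluated here on variants it has never seen, which is the complementary perspective the abstract describes.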
Original language: English
Title of host publication: Proceedings of the Tenth International Workshop on Testing Database Systems, DBTest 2024, Santiago, Chile, 9 June 2024
Number of pages: 8
Publisher: Association for Computing Machinery
Publication date: 2024
Pages: 1-8
Publication status: Published - 2024

Keywords

  • Benchmarking
  • Database
