
Surprise Benchmarking: The Why, What, and How

  • Lawrence Benson
  • Carsten Binnig
  • Jan-Micha Bodensohn
  • Federico Lorenzi
  • Jigao Luo
  • Danica Porobic
  • Tilmann Rabl
  • Anupam Sanghi
  • Russell Sears
  • Pinar Tözün
  • Tobias Ziegler

Research output: Conference article in proceedings › Research › peer-review

Abstract

Standardized benchmarks are crucial for fair performance comparisons across systems. While extremely valuable, these benchmarks all use a setup in which the workload is well defined and known in advance. Unfortunately, this has led to data management systems being over-tuned for particular benchmark workloads such as TPC-H or TPC-C. As a result, benchmarking results frequently do not reflect the behavior of these systems in many real-world settings, since real workloads often vary significantly from the “known” benchmarking workloads. To address this issue, we present surprise benchmarking, a complementary approach to current standardized benchmarking in which “unknown” queries are exercised during the evaluation.
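To make the idea concrete, one way "unknown" queries can arise at evaluation time is to instantiate a known benchmark template with parameters drawn only when the evaluation runs, so a system cannot be pre-tuned for the exact queries. The sketch below is hypothetical and not the authors' harness; the template, function name, and parameter range are illustrative assumptions loosely modeled on a TPC-H-style query.

```python
import random

# Hypothetical sketch (not the paper's actual harness): a TPC-H-style
# query template whose date predicate is fixed only at evaluation time.
QUERY_TEMPLATE = (
    "SELECT l_returnflag, SUM(l_extendedprice) "
    "FROM lineitem "
    "WHERE l_shipdate <= DATE '1998-12-01' - INTERVAL '{delta}' DAY "
    "GROUP BY l_returnflag"
)

def surprise_queries(template, n, seed=None):
    """Instantiate n query variants with parameters drawn at run time.

    Because the concrete parameter values are unknown before the
    evaluation starts, a system under test cannot be over-tuned
    for these specific query instances.
    """
    rng = random.Random(seed)
    return [template.format(delta=rng.randint(60, 120)) for _ in range(n)]

# Example: produce five surprise variants of the template.
variants = surprise_queries(QUERY_TEMPLATE, 5, seed=42)
```

Here the "surprise" is limited to parameter randomization for brevity; the paper's broader point covers unknown query shapes as well, not just unknown constants.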
Original language: English
Title of host publication: Proceedings of the Tenth International Workshop on Testing Database Systems, DBTest 2024, Santiago, Chile, 9 June 2024
Number of pages: 8
Publisher: Association for Computing Machinery
Publication date: 9 Jun 2024
Pages: 1-8
ISBN (Print): 9798400706691
DOIs
Publication status: Published - 9 Jun 2024
Event: International Workshop on Testing Database Systems - Santiago, Chile
Duration: 9 Jun 2024 → …
Conference number: 10
https://dbtest-workshop.github.io/2024/index.html

Workshop

Workshop: International Workshop on Testing Database Systems
Number: 10
Country/Territory: Chile
City: Santiago
Period: 09/06/2024 → …
Internet address: https://dbtest-workshop.github.io/2024/index.html

Keywords

  • Benchmarking
  • Database
