Surprise Benchmarking: The Why, What, and How

  • Lawrence Benson
  • Carsten Binnig
  • Jan-Micha Bodensohn
  • Federico Lorenzi
  • Jigao Luo
  • Danica Porobic
  • Tilmann Rabl
  • Anupam Sanghi
  • Russell Sears
  • Pinar Tözün
  • Tobias Ziegler

Publication: Conference article in proceedings or book/report chapter › Conference contribution in proceedings › Research › peer review

Abstract

Standardized benchmarks are crucial for ensuring fair performance comparisons across systems. While extremely valuable, these benchmarks all use a setup in which the workload is well-defined and known in advance. Unfortunately, this has led to data management systems being over-tuned for particular benchmark workloads such as TPC-H or TPC-C. As a result, benchmarking results frequently do not reflect the behavior of these systems in many real-world settings, since real workloads often vary significantly from the “known” benchmarking workloads. To address this issue, we present surprise benchmarking, a complementary approach to current standardized benchmarking in which “unknown” queries are exercised during the evaluation.
Original language: English
Title: Proceedings of the Tenth International Workshop on Testing Database Systems, DBTest 2024, Santiago, Chile, 9 June 2024
Number of pages: 8
Publisher: Association for Computing Machinery
Publication date: 9 Jun 2024
Pages: 1-8
ISBN (Print): 9798400706691
DOI
Status: Published - 9 Jun 2024
Event: International Workshop on Testing Database Systems - Santiago, Chile
Duration: 9 Jun 2024 → …
Conference number: 10
https://dbtest-workshop.github.io/2024/index.html


Keywords

  • Benchmarking
  • Database

