Surprise Benchmarking: The Why, What, and How

Abstract
Standardized benchmarks are crucial for a fair comparison of performance across systems. While extremely valuable, these benchmarks all use a setup in which the workload is well defined and known in advance. Unfortunately, this has led to data management systems being over-tuned for particular benchmark workloads such as TPC-H or TPC-C. As a result, benchmarking results frequently do not reflect the behavior of these systems in many real-world settings, since real workloads often vary significantly from the "known" benchmarking workloads. To address this issue, we present surprise benchmarking, a complementary approach to current standardized benchmarking in which "unknown" queries are exercised during the evaluation.
| Original language | English |
|---|---|
| Title of host publication | Proceedings of the Tenth International Workshop on Testing Database Systems, DBTest 2024, Santiago, Chile, 9 June 2024 |
| Number of pages | 8 |
| Publisher | Association for Computing Machinery |
| Publication date | 9 Jun 2024 |
| Pages | 1-8 |
| ISBN (Print) | 9798400706691 |
| DOIs | |
| Publication status | Published - 9 Jun 2024 |
| Event | International Workshop on Testing Database Systems (number 10), Santiago, Chile. Duration: 9 Jun 2024 → … https://dbtest-workshop.github.io/2024/index.html |
Workshop
| Workshop | International Workshop on Testing Database Systems |
|---|---|
| Number | 10 |
| Country/Territory | Chile |
| City | Santiago |
| Period | 09/06/2024 → … |
| Internet address | https://dbtest-workshop.github.io/2024/index.html |
Keywords
- Benchmarking
- Database